# Workflows and Analyses

A **workflow** is a container that organizes multiple executable components (apps or applets) along with their configuration settings. Think of a workflow as a pipeline that connects multiple tools together.

For example, a DNA sequencing workflow might include three apps in sequence:

* Mapping
* Variant calling
* Variant annotation

You can configure outputs from one component to feed directly into the next. Each component in a workflow, along with its settings and input/output parameters, is called a **stage**. You cannot use workflows as stages within other workflows.

An **analysis** is what happens when you run a workflow—similar to how a job runs when you execute an app. Both jobs and analyses can be referred to as **runs** of their respective executables.

To create a new workflow, use the [`/workflow/new`](#api-method-workflow-new) API method. You can modify the workflow with specific API methods that support different types of edits. When ready, run the workflow with the [`/workflow-xxxx/run`](#api-method-workflow-xxxx-run) API method. This creates an **analysis** object that tracks all the jobs and any analyses created during execution.

After creating an analysis, you can recreate the original workflow by calling [`/workflow/new`](#api-method-workflow-new) with the `initializeFrom` parameter set to the analysis ID.

For information on building or managing Nextflow workflows, see [Running Nextflow Pipelines](https://documentation.dnanexus.com/user/running-apps-and-workflows/running-nextflow-pipelines).

## Editing a Workflow

A workflow object has an **edit version number**, which can be retrieved using the API call [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe). This version number must be provided with every API call that edits the workflow, and it must match the current stored value for the call to succeed. A successful edit returns the new edit version number.
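The edit version acts like an optimistic concurrency check. The sketch below illustrates the idea; the in-memory `workflow` dict and the `update_title` helper are hypothetical stand-ins for the stored workflow object and an edit API call, not platform functions:

```python
# Illustrative sketch of the edit-version check (optimistic concurrency).
# `workflow` and `update_title` are hypothetical stand-ins, not platform APIs.

workflow = {"id": "workflow-xxxx", "title": "Old title", "editVersion": 0}

def update_title(wf, edit_version, new_title):
    """Apply an edit only if the caller's editVersion matches the stored one."""
    if edit_version != wf["editVersion"]:
        raise RuntimeError("InvalidState: editVersion does not match")
    wf["title"] = new_title
    wf["editVersion"] += 1    # a successful edit bumps the version...
    return wf["editVersion"]  # ...and returns the new value

new_version = update_title(workflow, 0, "New title")  # succeeds, returns 1
try:
    update_title(workflow, 0, "Stale edit")           # stale version: rejected
except RuntimeError as err:
    print(err)
```

A client that loses this race would re-describe the workflow to fetch the current edit version before retrying.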

### Managing Stages

You can specify what stages should be run in your workflows:

* When creating a workflow, use the [`/workflow/new`](#api-method-workflow-new) API method to define the initial set of stages. For each stage, you must specify both the executable to run and a unique stage ID.
* After creating a workflow, use the [`/workflow-xxxx/addStage`](#api-method-workflow-xxxx-addstage) API method to add additional stages. You must specify the executable to run. The stage ID is optional. If you do not provide a stage ID, a unique one is generated for you. For more information, see [Stage ID and Name](#stage-id-and-name).

A stage's ID is unique within the workflow. You must provide it when making further changes to the stage or when cross-linking the outputs and inputs of stages.

Besides the executable that it runs, each stage can also have the following metadata:

* Name
* [Output folder](#customizing-output-folders)
* [Default input values](#binding-input)
* [IO spec modifications](#customizing-io-specifications) that are exported for the workflow
* [Execution Policy](#execution-policy)
* [System Requirements](#system-requirements)

Most of the above options can also be set when the stage is created and can always be modified afterwards via the [`/workflow-xxxx/update`](#api-method-workflow-xxxx-update) method.

Stages can be reordered or removed using the [`/workflow-xxxx/moveStage`](#api-method-workflow-xxxx-movestage) and [`/workflow-xxxx/removeStage`](#api-method-workflow-xxxx-removestage) API methods. As mentioned previously, both the stage ID and the workflow's edit version need to be provided to modify them.

Replacing the executable of a stage in place, while keeping all other metadata associated with the stage (such as its name, output folder, bound inputs, and configuration settings), can only be done using the [`/workflow-xxxx/updateStageExecutable`](#api-method-workflow-xxxx-updatestageexecutable) API method. If the previous executable is still accessible, this method verifies that the replacement candidate's input and output specifications are fully compatible with it. If they are not fully compatible, you can still force the update by setting the `force` flag to true; in that case, the workflow is also updated to remove any outdated links between stages and other stale metadata.

#### Stage ID and Name

A **stage ID** uniquely identifies the stage within a workflow and allows inputs and outputs of different stages to be linked to each other. When adding a stage (either in [`/workflow/new`](#api-method-workflow-new) or [`/workflow-xxxx/addStage`](#api-method-workflow-xxxx-addstage)), you supply a unique ID to identify it. In [`/workflow-xxxx/addStage`](#api-method-workflow-xxxx-addstage) the ID is optional; if you do not supply one, an arbitrary unique ID is generated on your behalf.

Stage IDs must match the regular expression `^[a-zA-Z_][0-9a-zA-Z_-]{0,255}$` (letters, numbers, underscores, and dashes; at least one character; must not start with a number or dash; maximum length 256 characters).
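The stage-ID rule can be checked directly against the regular expression; here is a quick sketch using Python's `re` module (the example IDs are made up):

```python
import re

# The stage-ID pattern from the text: one leading letter or underscore,
# then up to 255 letters, digits, underscores, or dashes (256 chars total).
STAGE_ID_RE = re.compile(r"^[a-zA-Z_][0-9a-zA-Z_-]{0,255}$")

def is_valid_stage_id(stage_id):
    return STAGE_ID_RE.match(stage_id) is not None

print(is_valid_stage_id("map_reads"))   # True
print(is_valid_stage_id("_stage-1"))    # True
print(is_valid_stage_id("1st_stage"))   # False: starts with a digit
print(is_valid_stage_id("-stage"))      # False: starts with a dash
print(is_valid_stage_id("a" * 257))     # False: longer than 256 characters
```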

The stage name is a non-unique label used for display purposes. It allows you to provide a descriptive identifier for the stage that is shown in the UI in the workflow view. If not provided, the executable's name is displayed instead.

### Customizing Output Folders

The workflow can have a default *output folder*, set by its `outputFolder` field (either at workflow creation time or through the [`/workflow-xxxx/update`](#api-method-workflow-xxxx-update) method). This value can be overridden at runtime using the `folder` field. If no value for the output folder is found in either the API call or the workflow, the system default of `"/"` is used.

#### Stage Output Folders

Each stage can also specify its default output folder. This can be defined relative to the workflow's output folder, or as an absolute path. This field can be set in the [`/workflow-xxxx/addStage`](#api-method-workflow-xxxx-addstage) method and further updated using the [`/workflow-xxxx/update`](#api-method-workflow-xxxx-update) method.

If the value set for the stage's `folder` field starts with the character `"/"`, then this is interpreted as an absolute path that is used for the stage's outputs, regardless of what is provided as `folder` in the [`/workflow-xxxx/run`](#api-method-workflow-xxxx-run) method.

If, however, the value set for the field does *not* start with the character `"/"`, then it is interpreted as a path relative to the field `folder` provided to [`/workflow-xxxx/run`](#api-method-workflow-xxxx-run) method.

The following table shows some examples for where a stage's output goes for different values of the stage's `folder` field, under the condition that the workflow's output folder is `"/foo"`:

| Stage's `folder` Value | Stage Output Folder |
| ---------------------- | ------------------- |
| `null` (no value)      | `"/foo"`            |
| `"bar/baz"`            | `"/foo/bar/baz"`    |
| `"/quux"`              | `"/quux"`           |

### Workflow Input and Output (Locked Workflows)

#### Workflow Input

It is possible to define an explicit input to the workflow by specifying `inputs` for the [`/workflow/new`](#api-method-workflow-new) method, for example:

```json
{
  "inputs": [
    {
      "name": "reference_genome",
      "class": "file"
    }
  ]
}
```

One consequence of defining a workflow with an explicit input is that, once the workflow is created, all input values must be provided to the workflow's inputs rather than to individual stages. By linking stage inputs to workflow inputs at build time, all values provided to a workflow-level input (here `reference_genome`) are passed at execution time to the stage-level inputs linked to it.

Defining `inputs` for the workflow creates a **locked workflow**. If the workflow creator defines this property, the inputs listed in this array *can* be set by the user when they run the workflow, and all other inputs are locked. When the `inputs` property is undefined or `null` the workflow is fully unlocked and acts like any other regular workflow where all the inputs can be provided or overridden by the user that runs the workflow. When the `inputs` property is set to an empty array, there are no unlocked fields so the workflow is fully locked.
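The three cases for the `inputs` property can be summarized with a tiny helper (an illustrative sketch; `lock_state` is not a platform API):

```python
def lock_state(inputs):
    """Classify a workflow's lock state from its `inputs` property."""
    if inputs is None:
        return "unlocked"      # no `inputs`: all stage inputs are settable
    if len(inputs) == 0:
        return "fully locked"  # empty array: no user-settable fields
    return "locked"            # only the listed workflow inputs are settable

print(lock_state(None))                                             # unlocked
print(lock_state([]))                                               # fully locked
print(lock_state([{"name": "reference_genome", "class": "file"}]))  # locked
```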

#### Workflow Output

The outputs of stages can be exposed as the output of the workflow. To do that, pass the `outputs` field to [`/workflow/new`](#api-method-workflow-new), using `outputSource` to reference stage outputs. For example, if you want the workflow output to be the field `output_field_of_stage_xxxx` from stage `stage-xxxx`, and outputs from other stages are not needed, define it as follows:

```json
{
  "outputs": [
    {
      "name": "pipeline_output",
      "class": "array:file",
      "outputSource": {
        "$dnanexus_link": {
          "stage": "stage-xxxx",
          "outputField": "output_field_of_stage_xxxx"
        }
      }
    }
  ]
}
```

### Binding Input

When adding an executable as a stage or modifying it using the [`/workflow-xxxx/update`](#api-method-workflow-xxxx-update) API method, you can choose to specify values for the stage inputs. These bound inputs can be overridden when the workflow is actually run. The syntax for providing bound input is the same as when providing an input hash to run the executable directly. For example, you can set the input for a stage with the hash:

```json
{ "input_field": "input_value" }
```

You can also use **stage references** as values to link an input to the input or output of another stage. These references are *hashes* with a single key `$dnanexus_link` whose value is a hash containing the key:

* `stage` *string* the ID of another stage whose input or output is used

exactly one of the following keys:

* `outputField` *string* the output field name of the stage's executable to be used
* `inputField` *string* the input field name of the stage's executable to be used

and, optionally:

* `index` *integer* the index into the array that is the output or input of the linked stage. This is 0-indexed, so a value of 0 indicates the first element.
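A small helper makes the reference shape concrete (a hypothetical convenience function; the platform only sees the resulting hash):

```python
def stage_reference(stage, output_field=None, input_field=None, index=None):
    """Build a stage reference hash as described above.

    Exactly one of `output_field` or `input_field` must be given; `index`
    optionally selects one element of an array-valued field (0-indexed).
    """
    if (output_field is None) == (input_field is None):
        raise ValueError("provide exactly one of output_field or input_field")
    inner = {"stage": stage}
    if output_field is not None:
        inner["outputField"] = output_field
    else:
        inner["inputField"] = input_field
    if index is not None:
        inner["index"] = index
    return {"$dnanexus_link": inner}

# First element of an array-valued output of stage-xxxx (field name made up):
ref = stage_reference("stage-xxxx", output_field="mappings", index=0)
print(ref)
```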

If the workflow has defined `inputs`, you can use **workflow input references** to link stage inputs to the workflow level inputs. These references are *hashes* with a single key `$dnanexus_link` whose value is a hash with exactly one key/value:

* `workflowInputField`: *string* the input field name of the current workflow

#### Linking input to other stage output

Using the `outputField` option is useful for chaining the output of a stage to the input of another stage to make an analysis pipeline. For example, a first stage (`stage-xxxx`) could map reads to a reference genome and then pass those mappings on to a second stage (`stage-yyyy`) that calls variants on those mappings. Do this by setting the following input for the second stage:

```json
{
  "mappings_input_field_of_stage_yyyy": {
    "$dnanexus_link": {
      "stage": "stage-xxxx",
      "outputField": "mappings_output_field_of_stage_xxxx"
    }
  }
}
```

When the workflow is run, the second stage receives the mappings input once the first stage has finished.

#### Linking input to other stage input

Linking input fields together can also be useful. For example, if two stages require the same reference genome, link the input of one (`stage-xxxx`) to the other (`stage-yyyy`) by setting the input of the first as follows:

```json
{
  "reference_genome_field_of_stage_xxxx": {
    "$dnanexus_link": {
      "stage": "stage-yyyy",
      "inputField": "reference_genome_field_of_stage_yyyy"
    }
  }
}
```

When running the workflow, the reference genome input only needs to be provided once to the input of `stage-yyyy`, and the other stage `stage-xxxx` inherits the same value.

#### Linking workflow input to stage input

It is possible to link stage input to the input of the current workflow. For example, if `stage-xxxx` requires a reference genome, link the input of `stage-xxxx` to the input of the workflow as follows:

```json
{
  "reference_genome_field_of_stage_xxxx": {
    "$dnanexus_link": {
      "workflowInputField": "reference_genome"
    }
  }
}
```

The workflow `inputs` field should then be defined for the workflow, for example:

```json
{
  "inputs": [
    {
      "name": "reference_genome",
      "class": "file"
    }
  ]
}
```

At runtime, the stage inputs consume the input values provided at the workflow level. That is, the value passed to the field `reference_genome` is used by `reference_genome_field_of_stage_xxxx`.

See the section on [Workflow input and output](#workflow-input-and-output-locked-workflows) for more information.

### Customizing IO Specifications

The [`/workflow-xxxx/update`](#api-method-workflow-xxxx-update) API method can also be used to modify how an input or output to a stage can be represented as an input or output of the workflow. For example, a particular input parameter can be hidden so that it does not appear in the `inputSpec` field when describing the workflow. Or, it can be given a name (unique in the workflow) so that its stage does not have to be specified when providing input to the workflow. Its label or help can also be changed to document how it may interact with other stages in the workflow.

{% hint style="info" %}
Hiding an output for a stage means the output is treated as intermediate output and is deleted after the analysis has finished running.
{% endhint %}

### Execution Policy

Each stage can have an `executionPolicy` field specifying the execution policy to apply when the stage is run (see the `executionPolicy` field in the [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) of apps and applets for the accepted options).

These stored execution policies can also change the failure propagation behavior. By default, if a stage fails, the entire analysis enters the `partially_failed` state, and other stages are allowed to finish successfully if they do not depend on the failed stage. To propagate failure to all other stages instead, set the `onNonRestartableFailure` flag in an individual stage's `executionPolicy` field to `"failAllStages"`. These stage-specific options can also be overridden at runtime by providing a single value to be used by all stages in the [`/workflow-xxxx/run`](#api-method-workflow-xxxx-run) call.

### System Requirements

Each stage of the workflow can have a `systemRequirements` field to request certain instance types by default when the workflow is run. This field uses the same syntax as used in the [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) for applets and apps. This value can be set when the stage is added or modified afterwards with the [`/workflow-xxxx/update`](#api-method-workflow-xxxx-update) API method.

These stored defaults can be further overridden (in part or in full) at runtime by providing some combination of `systemRequirementsByExecutable`, `systemRequirements`, and `stageSystemRequirements` fields in [`/workflow-xxxx/run`](#api-method-workflow-xxxx-run). A stage's *stored* value for `systemRequirements` remains active for a specific entry point unless explicitly overridden with a new value for that entry point, or implicitly overridden via a value for the `"*"` entry point. Refer to the information in [Requesting Instance Types](https://documentation.dnanexus.com/developer/api/applets-and-entry-points#requesting-instance-types) for more details.

## Reuse of Previous Results

{% hint style="info" %}
A license is required to use the Smart Reuse feature. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

When running a workflow, if the Smart Reuse feature has been enabled, the system attempts to reuse previously computed results by looking up analyses that have been created for the workflow. To find out which stages have cached results on hand without running the workflow, you can call the [`/workflow-xxxx/dryRun`](#api-method-workflow-xxxx-dryrun) method or call [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe) with `getRerunInfo` set to true. To turn off this automatic behavior, you can request that certain stages be forcibly rerun using `rerunStages` in the [`/workflow-xxxx/run`](#api-method-workflow-xxxx-run) method.

[See this documentation](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-reuse) for more on this feature.

## Analysis Input

When specifying input for `/workflow-xxxx/run`, the input field names for an analysis are automatically generated to have the form `<stage ID>.<input field name>` if the input is provided to a stage directly, or `<input field name>` if it is the input defined for the workflow.

For example, if the first stage has ID `stage-xxxx` and runs an executable that takes an input named `reads`, you would use the key `stage-xxxx.reads` in the input hash to provide a value for this parameter. These field names can be changed via the API method [`/workflow-xxxx/update`](#api-method-workflow-xxxx-update) using the `stages.stage-xxxx.inputSpecMods` field.
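The naming rule can be sketched as follows (`analysis_input_key` is a hypothetical helper, and the field names and file IDs are made up for illustration):

```python
def analysis_input_key(field_name, stage_id=None):
    """Build an analysis input key: '<stage ID>.<field>' for input provided
    directly to a stage, or just '<field>' for a workflow-level input."""
    return f"{stage_id}.{field_name}" if stage_id else field_name

# A run-input hash mixing a stage-level and a workflow-level input:
run_input = {
    analysis_input_key("reads", "stage-xxxx"): {"$dnanexus_link": "file-aaaa"},
    analysis_input_key("reference_genome"): {"$dnanexus_link": "file-bbbb"},
}
print(sorted(run_input))  # ['reference_genome', 'stage-xxxx.reads']
```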

Connecting the input to the input or output of another stage in the workflow is also possible. In such a situation, a **workflow stage reference** should be used. To reference the input of another stage, say of stage `stage-xxxx` with input `reference_genome`, you would provide the value:

```json
{ "$dnanexus_link": {
    "stage": "stage-xxxx",
    "inputField": "reference_genome"
  }
}
```

When the workflow is run, this is translated into whatever value is given as input for `reference_genome` for the stage `stage-xxxx` in the workflow.

If the key `outputField` is used in place of `inputField`, then the value represents the output of that stage instead. When the workflow is run and an analysis created, the workflow stage reference is translated into an **analysis stage reference**:

```json
{ "$dnanexus_link": {
    "analysis": "analysis-xxxx",
    "stage": "stage-xxxx",
    "field": "reference_genome"
  }
}
```

which is resolved when the stage `stage-xxxx` finishes running in analysis `analysis-xxxx`.
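The translation from a workflow stage reference to an analysis stage reference can be sketched like this (an illustration of the two shapes above; `translate_reference` is not a platform API):

```python
def translate_reference(workflow_ref, analysis_id):
    """Turn an output-type workflow stage reference into an analysis stage
    reference, as the system does when the workflow is run."""
    inner = workflow_ref["$dnanexus_link"]
    return {"$dnanexus_link": {
        "analysis": analysis_id,            # the newly created analysis
        "stage": inner["stage"],
        "field": inner["outputField"],      # `outputField` becomes `field`
    }}

wf_ref = {"$dnanexus_link": {"stage": "stage-xxxx",
                             "outputField": "reference_genome"}}
print(translate_reference(wf_ref, "analysis-xxxx"))
```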

## Workflow API Method Specifications

### API method: `/workflow/new`

#### Specification

Creates a new workflow data object which can be used to execute a series of apps, applets, and/or workflows.

#### Inputs

* `project` **string** (required) ID of the project or container to which the workflow should belong, such as `project-xxxx`.
* `name` **string** (optional) The name of the object. Defaults to the new ID.
* `title` **string** (optional, nullable) Title of the workflow, for example, "Micro Map Pipeline". If `null`, then the name of the workflow is used as the title. Defaults to `null`.
* `summary` **string** (optional) A short description of the workflow. Defaults to `""`.
* `description` **string** (optional) A longer description about the workflow. Defaults to `""`.
* `outputFolder` **string** (optional) The default output folder for the workflow. See the [Customizing Output Folders](#customizing-output-folders) section above for more details on how it interacts with stages' output folders.
* `tags` **array of strings** (optional) Tags to associate with the object.
* `types` **array of strings** (optional) Types to associate with the object.
* `hidden` **boolean** (optional) Whether the object should be hidden. Defaults to `false`.
* `properties` **mapping** (optional) Properties to associate with the object.
  * **key** — Property name.
  * **value** **string** — Property value.
* `details` **mapping or array** (optional) JSON object or array that is to be associated with the object. See the [Object Details](https://documentation.dnanexus.com/developer/api/data-object-lifecycle/details-and-links) section for details on valid input. Defaults to `{}`.
* `folder` **string** (optional) Full path of the folder that is to contain the new object. Defaults to `"/"`.
* `parents` **boolean** (optional) Whether all folders in the path provided in `folder` should be created if they do not exist. Defaults to `false`.
* `inputs` **array of mappings** (optional) An input specification of the workflow as described in the [Input Specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#input-specification) section.
* `outputs` **array of mappings** (optional) An output specification of the workflow as described in the [Output Specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#output-specification) section with an additional field specifying `outputSource`. See the [Workflow output](#workflow-input-and-output-locked-workflows) section for details.
* `initializeFrom` **mapping** (optional) Indicate an existing workflow or analysis from which to use the metadata as default values for all fields that are not given.
  * `id` **string** (required) ID of the workflow or analysis from which to retrieve workflow metadata.
  * `project` **string** (optional) ID of the project in which the workflow specified in `id` should be found.
    * Required when `id` is a workflow ID. Ignored otherwise.
* `stages` **array of mappings** (optional) Stages to add to the workflow. If not supplied, the workflow that is created is empty.
  * `id` **string** (required) ID that uniquely identifies the stage. See the section on [Stage ID and Name](#stage-id-and-name) for more information.
  * `executable` **string** (required) ID of app or applet to be run in this stage.
  * `name` **string** (optional) Name (display label) for the stage.
  * `folder` **string** (optional, nullable) The output folder into which outputs should be cloned for the stage. See the [Customizing Output Folders](#customizing-output-folders) section above for more details. Defaults to `null`.
  * `input` **mapping** (optional) The inputs to this stage to be bound. See the section on [Binding Input](#binding-input) for more information.
    * **key** — Input field name.
    * **value** — Input field value.
  * `executionPolicy` **mapping** (optional) A collection of options that govern automatic job restart on certain types of failures. This can only be set at the user-level API call (jobs cannot override this for their subjobs). Contents of this field override any of the corresponding keys in the `executionPolicy` mapping found in the executable's [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) (if present).
    * `restartOn` **mapping** (optional) Indicate a job restart policy.
      * **key** — A restartable failure reason (`ExecutionError`, `UnresponsiveWorker`, `JMInternalError`, `AppInternalError`, `AppInsufficientResourceError`, `JobTimeoutExceeded`, or `SpotInstanceInterruption`) or `*` to indicate all restartable failure reasons that are otherwise not present as keys.
      * **value** **integer** — Maximum number of restarts for the failure reason.
    * `maxRestarts` **integer** (optional) Non-negative integer less than 10, indicating the maximum number of times that the job is restarted. Defaults to `9`.
    * `onNonRestartableFailure` **string** (optional) Indicates whether the failure of this stage (when run as part of an analysis) should force all other non-terminal stages in the analysis to also fail if a non-restartable failure occurs, even if those stages do not have any dependencies on this stage. Stages that have dependencies on this stage still fail irrespective of this setting. Defaults to `"failStage"`.
      * Must be one of `"failStage"` or `"failAllStages"`.
  * `systemRequirements` **mapping** (optional) Request specific resources for the stage's executable. See the [Requesting Instance Types](https://documentation.dnanexus.com/developer/api/applets-and-entry-points#requesting-instance-types) section for more details.
* `ignoreReuse` **array of strings** (optional) Specifies IDs of workflow stages (or `"*"` for all stages) that ignore job reuse. If a specified stage points to a nested sub-workflow, reuse is ignored recursively by the whole nested sub-workflow. Overrides the `ignoreReuse` setting in stage executables.
* `nonce` **string** (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single workflow is created. For more information, see [Nonces](https://documentation.dnanexus.com/developer/api/nonces).
* `treeTurnaroundTimeThreshold` **integer** (optional) The turnaround time threshold (in seconds) for trees (specifically, root executions) that run this executable. See [Job Notifications](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-notifications) for more information about turnaround time and managing job notifications.
  * Defaults to the `treeTurnaroundTimeThreshold` field of the `initializeFrom` workflow, if `initializeFrom` is provided and the `billTo` of the project has `jobNotifications` enabled; otherwise, no default threshold is set.

{% hint style="info" %}
A license is required to use the `jobNotifications` feature. Contact [DNAnexus Sales](mailto:sales@dnanexus.com) to enable `jobNotifications`.
{% endhint %}

#### Outputs

* `id` **string** ID of the created workflow object, such as "workflow-xxxx".
* `editVersion` **integer** The initial edit version number of the workflow object.

#### Errors

* InvalidInput
  * A reserved linking string (`$dnanexus_link`) appears as a key in a hash in `details` but is not the only key in the hash
  * A reserved linking string (`$dnanexus_link`) appears as the only key in a hash in `details` but has a value other than a string
  * The `id` given under `initializeFrom` is not a valid workflow or analysis ID
  * `project` is missing if the `id` given under `initializeFrom` is a workflow ID
  * For each property key-value pair, the size, encoded in UTF-8, of the property key may not exceed 100 bytes and the property value may not exceed 700 bytes
  * A `nonce` was reused in a request but other inputs had changed signifying a new and different request
  * A `nonce` may not exceed 128 bytes
  * `instanceTypeSelector` keyword is not allowed when building workflows
* InvalidType
  * `project` is not a valid project ID
* SpendingLimitExceeded
  * The `billTo` has reached its spending limit
* OrgExpired
  * The `billTo` organization has expired
* PermissionDenied
  * CONTRIBUTE access is required; VIEW access is required for the project specified under `initializeFrom` if a workflow or analysis was specified
* ResourceNotFound
  * The specified project is not found
  * The path in `folder` does not exist while `parents` is false; a project, workflow, or analysis ID specified in `initializeFrom` is not found; or a stage in `ignoreReuse` is not found

### API method: `/workflow-xxxx/overwrite`

#### Specification

Overwrites the workflow with the workflow-specific metadata (other than the `editVersion`) from another workflow or analysis. The workflow's name, tags, properties, types, visibility, and details are left unchanged.

#### Inputs

* `editVersion` **integer** (required) The edit version number that was last observed, either via [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe) or as output from an API call that changed the workflow. This value must match the current version stored in the workflow object for the API call to succeed.
* `from` **mapping** (required) Indicate the existing workflow or analysis from which to use the metadata.
  * `id` **string** (required) ID of the workflow or analysis from which to retrieve workflow metadata.
  * `project` **string** (optional) ID of the project in which the workflow specified in `id` should be found.
    * Required when `id` is a workflow ID. Ignored otherwise.

#### Outputs

* `id` **string** ID of the manipulated workflow.
* `editVersion` **integer** The new edit version number.

#### Errors

* InvalidInput
  * Input is not a hash
  * `editVersion` is not an integer
  * `from` is not a hash
  * `from.id` is not a string
  * `from.project` is not a string if `from.id` is a workflow ID
* ResourceNotFound
  * The specified workflow does not exist
  * The workflow or analysis specified in `from` cannot be found
* InvalidState
  * Workflow is not in the "open" state
  * `editVersion` provided does not match the current stored value
* PermissionDenied
  * User does not have CONTRIBUTE access to the workflow's project
  * User does not have VIEW access to the project containing the workflow or analysis represented in `from`

### API method: `/workflow-xxxx/addStage`

#### Specification

Adds a stage to the workflow.

#### Inputs

* `editVersion` **integer** (required) The edit version number that was last observed, either via [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe) or as output from an API call that changed the workflow. This value must match the current version stored in the workflow object for the API call to succeed.
* `id` **string** (optional) ID that uniquely identifies the stage. If not provided, a system-generated stage ID is set. See the section on [Stage ID and Name](#stage-id-and-name) for more information.
* `executable` **string** (required) App or applet ID.
* `name` **string** (optional, nullable) Name (display label) for the stage, or `null` to indicate no name. Defaults to `null`.
* `folder` **string** (optional, nullable) The output folder into which outputs should be cloned for the stage. See the [Customizing Output Folders](#customizing-output-folders) section above for more details. Defaults to `null`.
* `input` **mapping** (optional) A subset of the inputs to this stage to be bound. See the section on [Binding Input](#binding-input) for more information.
  * **key** — Input field name.
  * **value** — Input field value.
* `executionPolicy` **mapping** (optional) A collection of options that govern automatic job restart on certain types of failures. This can only be set at the user-level API call (jobs cannot override this for their subjobs). Contents of this field override any of the corresponding keys in the `executionPolicy` mapping found in the executable's [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) (if present).
  * `restartOn` **mapping** (optional) Indicate a job restart policy.
    * **key** — A restartable failure reason (`ExecutionError`, `UnresponsiveWorker`, `JMInternalError`, `AppInternalError`, `AppInsufficientResourceError`, `JobTimeoutExceeded`, or `SpotInstanceInterruption`) or `*` to indicate all restartable failure reasons that are otherwise not present as keys.
    * **value** **integer** — Maximum number of restarts for the failure reason.
  * `maxRestarts` **integer** (optional) Non-negative integer less than 10, indicating the maximum number of times that the job is restarted. Defaults to `9`.
  * `onNonRestartableFailure` **string** (optional) Indicates whether the failure of this stage (when run as part of an analysis) should force all other non-terminal stages in the analysis to also fail if a non-restartable failure occurs, even if those stages do not have any dependencies on this stage. Stages that have dependencies on this stage still fail irrespective of this setting. Defaults to `"failStage"`.
    * Must be one of `"failStage"` or `"failAllStages"`.
* `systemRequirements` **mapping** (optional) Request specific resources for the stage's executable. See the [Requesting Instance Types](https://documentation.dnanexus.com/developer/api/applets-and-entry-points#requesting-instance-types) section for more details.

#### Outputs

* `id` **string** ID of the manipulated workflow.
* `editVersion` **integer** The new edit version number.
* `stage` **string** ID of the new stage.

#### Errors

* InvalidInput
  * Input is not a hash
  * `editVersion` is not an integer
  * `executable` is not a string
  * `name` if provided is not a string
  * `folder` if provided is not a valid folder path
  * `input` if provided is not a hash or is not valid input for the specified executable
  * `executionPolicy` if provided is not a hash
  * `executionPolicy.restartOn` if provided is not a hash, contains a failure reason key that cannot be restarted, or contains a value which is not an integer between 0 and 9
  * `executionPolicy.onNonRestartableFailure` is not one of the allowed values
  * `instanceTypeSelector` keyword is not allowed when building workflows
* ResourceNotFound
  * The specified workflow does not exist
  * The specified executable does not exist
  * A provided input value in `input` could not be found
* InvalidState
  * Workflow is not in the "open" state
  * `editVersion` provided does not match the current stored value
* PermissionDenied
  * User does not have CONTRIBUTE access to the workflow's project
  * An accessible copy of the executable could not be found

### API method: `/workflow-xxxx/removeStage`

#### Specification

Removes a stage from the workflow.

#### Inputs

* `editVersion` **integer** (required) The edit version number that was last observed, either via [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe) or as output from an API call that changed the workflow. This value must match the current version stored in the workflow object for the API call to succeed.
* `stage` **string** (required) ID of the stage to remove.
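The `editVersion` requirement implies an optimistic-concurrency pattern: read the current edit version, attempt the edit, and re-read and retry if another editor changed the workflow in between. The sketch below illustrates that pattern generically; `describe` and `remove_stage` are stand-ins for whatever client you use to issue the two API calls, and `EditVersionMismatch` stands in for the InvalidState error:

```python
class EditVersionMismatch(Exception):
    """Stand-in for the InvalidState error raised when editVersion
    no longer matches the value stored in the workflow."""

def remove_stage_with_retry(describe, remove_stage, stage_id, max_attempts=3):
    """Re-read the current editVersion and retry when a concurrent
    edit invalidates the version we observed."""
    for _ in range(max_attempts):
        edit_version = describe()["editVersion"]
        try:
            return remove_stage({"editVersion": edit_version, "stage": stage_id})
        except EditVersionMismatch:
            continue  # someone else edited the workflow; re-read and retry
    raise RuntimeError("could not remove stage after %d attempts" % max_attempts)
```

The same loop applies to any of the workflow-editing methods, since they all take and return `editVersion`.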

#### Outputs

* `id` **string** ID of the manipulated workflow.
* `editVersion` **integer** The new edit version number.

#### Errors

* InvalidInput
  * Input is not a hash
  * `editVersion` is not an integer
  * `stage` is not a string
* ResourceNotFound
  * The specified workflow does not exist
  * The specified stage does not exist in the workflow
* InvalidState
  * Workflow is not in the "open" state
  * `editVersion` provided does not match the current stored value
* PermissionDenied
  * User does not have CONTRIBUTE access to the workflow's project

### API method: `/workflow-xxxx/moveStage`

#### Specification

Reorders the stages by moving a specified stage to a new index or position in the workflow. This does not affect how the stages are run; it only changes the order in which the stages are listed, for organizational purposes.

#### Inputs

* `editVersion` **integer** (required) The edit version number that was last observed, either via [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe) or as output from an API call that changed the workflow. This value must match the current version stored in the workflow object for the API call to succeed.
* `stage` **string** (required) ID of the stage to move.
* `newIndex` **integer** (required) The index or key that the stage has after the move. All other stages are moved to accommodate this change. Must be in \[0, n), where n is the total number of stages.
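The reordering this performs can be modeled as removing the stage from its current position and reinserting it at `newIndex`, shifting the stages in between. A small sketch with made-up stage IDs:

```python
def move_stage(stages, stage_id, new_index):
    """Model of moveStage: stages is an ordered list of stage IDs;
    the named stage is reinserted at new_index, which must be in [0, n)."""
    if not (0 <= new_index < len(stages)):
        raise ValueError("newIndex must be in [0, n)")
    reordered = [s for s in stages if s != stage_id]
    reordered.insert(new_index, stage_id)
    return reordered
```

For example, moving the last of three stages to index 0 yields it first, with the other two shifted down: `move_stage(["a", "b", "c"], "c", 0)` returns `["c", "a", "b"]`.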

#### Outputs

* `id` **string** ID of the manipulated workflow.
* `editVersion` **integer** The new edit version number.

#### Errors

* InvalidInput
  * Input is not a hash
  * `editVersion` is not an integer
  * `stage` is not a string
  * `newIndex` is not in the range \[0, n) where n is the number of stages in the workflow
* ResourceNotFound
  * The specified workflow does not exist
  * The specified stage does not exist in the workflow
* InvalidState
  * Workflow is not in the "open" state
  * `editVersion` provided does not match the current stored value
* PermissionDenied
  * User does not have CONTRIBUTE access to the workflow's project

### API method: `/workflow-xxxx/update`

#### Specification

Update the workflow with any fields that are provided.

For workflows in projects associated with a [TRE](https://documentation.dnanexus.com/developer/api/trusted-research-environments), updates must remain compatible with TRE execution restrictions that are enforced when launching runs from the workflow.

#### Inputs

* `editVersion` **integer** (required) The edit version number that was last observed, either via [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe) or as output from an API call that changed the workflow. This value must match the current version stored in the workflow object for the API call to succeed.
* `title` **string** (optional, nullable) The workflow's title, for example, "Micro Map Pipeline". If `null`, the name of the workflow is used as the title.
* `summary` **string** (optional) A short description of the workflow.
* `description` **string** (optional) A longer description about the workflow.
* `outputFolder` **string** (optional, nullable) The default output folder for the workflow, or `null` to unset. See the [Customizing Output Folders](#customizing-output-folders) section above for more details on how it interacts with stages' output folders.
* `inputs` **array of mappings** (optional, nullable) An input specification of the workflow as described in the [Input Specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#input-specification) section.
* `outputs` **array of mappings** (optional, nullable) An output specification of the workflow as described in the [Output Specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#output-specification) section with an additional field specifying `outputSource`. See the [Workflow output](#workflow-input-and-output-locked-workflows) section for details.
* `stages` **mapping** (optional) Updates for one or more of the workflow's stages.
  * **key** — ID of the stage to update.
  * **value** **mapping** — Updates to make to the stage.
    * `name` **string** (optional, nullable) New name for the stage. Use `null` to unset the name.
    * `folder` **string** (optional, nullable) The output folder into which outputs should be cloned for the stage. See the [Customizing Output Folders](#customizing-output-folders) section above for more details. Use `null` to unset the folder.
    * `input` **mapping** (optional) A subset of the inputs to this stage to be bound or unbound (using `null` to unset a previously-bound input). See the section on [Binding Input](#binding-input) for more information.
      * **key** — Input field name from this stage's executable.
      * **value** — Input field value, or `null` to unset.
    * `executionPolicy` **mapping** (optional) Set the default execution policy for this stage. Use the empty mapping `{}` to unset.
    * `stageSystemRequirements` **mapping** (optional) Request specific resources for the stage's executable. See the [Requesting Instance Types](https://documentation.dnanexus.com/developer/api/applets-and-entry-points#requesting-instance-types) section for more details. Use the empty mapping `{}` to unset.
    * `inputSpecMods` **mapping** (optional) Updates to how the stage input specification is exported for the workflow. Any subset can be provided.
      * **key** — Input field name from this stage's executable.
      * **value** **mapping** — Updates for the specified stage input field name.
        * `name` **string** (optional, nullable) The canonical name by which a stage's input can be addressed when running the workflow has the form "\<stage ID>.\<original field name>". Providing a different string here overrides the name shown in the `inputSpec` of the workflow, and the new name can be used when giving input to run the workflow. The canonical name can still be used to refer to this input, but both names cannot be used simultaneously. If `null` is provided, any previously-set name is dropped and only the canonical name can be used.
        * `label` **string** (optional, nullable) A replacement label for the input parameter. If `null` is provided, any previously-set label is dropped and the original executable's label is used.
        * `help` **string** (optional, nullable) A replacement help string for the input parameter. If `null` is provided, then any previously-set help string is dropped and the original executable's help string is used.
        * `group` **string** (optional, nullable) A replacement group for the input parameter. The default group for a stage's input is the stage's ID (if it had no group in the executable), or the string "\<stage ID>:\<group name>" (if it was part of a group in the executable). Providing a different string here overrides the group in which the input parameter appears in the `inputSpec` of the workflow. If `null` is provided, any previously-set group value is dropped and the canonical group name is used. If the empty string is provided, the parameter is not in any group.
        * `hidden` **boolean** (optional) Whether to hide the input parameter from the `inputSpec` of the workflow. The input can still be provided and overridden by its name "\<stage ID>.\<original field name>".
    * `outputSpecMods` **mapping** (optional) Updates to how the stage output specification is exported for the workflow. Any subset can be provided. This field follows the same syntax as for `inputSpecMods` defined above and behaves the same but modifies `outputSpec` instead. The exception in behavior occurs for the `hidden` field. If an output has `hidden` set to `true`, its data object value (if applicable) is not cloned into the parent container when the stage or analysis is done. This may be a useful feature if a stage in your analysis produces many intermediate outputs that are not relevant to the analysis or are not useful once the analysis has finished.
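As a sketch of how the pieces above combine, an update request might rename a stage input so the workflow exposes it under a friendlier name and hide an intermediate output so it is not cloned into the project. Every ID, field name, and value below is a hypothetical placeholder:

```python
# Hypothetical update request: stage-0 and its field names are made up.
update_request = {
    "editVersion": 7,
    "title": "Micro Map Pipeline",
    "stages": {
        "stage-0": {
            "folder": "/mappings",
            "inputSpecMods": {
                # Expose "stage-0.fastq" as plain "reads" in the
                # workflow's inputSpec.
                "fastq": {"name": "reads", "label": "Input reads"},
            },
            "outputSpecMods": {
                # Hidden outputs are not cloned into the parent
                # container when the analysis finishes.
                "intermediate_bam": {"hidden": True},
            },
        }
    },
}
```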

#### Outputs

* `id` **string** ID of the manipulated workflow.
* `editVersion` **integer** The new edit version number.

#### Errors

* InvalidInput
  * Input is not a hash
  * `editVersion` is not an integer
  * `title` if provided is not a string nor `null`
  * `summary` if provided is not a string
  * `description` if provided is not a string
  * `stages` if provided is not a hash
  * A key in `stages` is not a stage ID string
  * `name` if provided in a stage hash is not a string
  * `folder` if provided in a stage hash is not a valid folder path
  * `input` if provided in a stage hash is not a hash or is not valid input for the specified executable
  * `inputSpecMods` or `outputSpecMods` if provided in a stage hash is not a hash or contains a key which does not abide by the syntax specification above
  * `instanceTypeSelector` keyword is not allowed when building workflows
* ResourceNotFound
  * The specified workflow does not exist
  * One of the specified stage IDs could not be found in the workflow
  * A provided input value in an `input` hash in a stage's hash could not be found
* InvalidState
  * Workflow is not in the "open" state
  * `editVersion` provided does not match the current stored value
* PermissionDenied
  * User does not have CONTRIBUTE access to the workflow's project

### API method: `/workflow-xxxx/isStageCompatible`

#### Specification

Checks whether a proposed replacement executable for a stage would be a fully compatible replacement for the stage's current executable.

#### Inputs

* `editVersion` **integer** (required) The edit version number that was last observed, either via [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe) or as output from an API call that changed the workflow. This value must match the current version stored in the workflow object for the API call to succeed.
* `stage` **string** (required) ID of the stage to check for compatibility.
* `executable` **string** (required) ID of the executable that would be used as a replacement.

#### Outputs

* `id` **string** ID of the workflow that was checked for compatibility.
* `compatible` **boolean** The value `true` if the executable is a compatible replacement and `false` otherwise.

If `compatible` is false, the following key is also present:

* `incompatibilities` **array of strings** A list of reasons for which the two executables are not compatible.
  * Only present when `compatible` is false.
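A caller can branch on `compatible` and surface the reasons when the check fails. The response mappings in this sketch are fabricated for illustration:

```python
def report_compatibility(response):
    """Summarize an isStageCompatible response (shape per the docs:
    `incompatibilities` is present only when `compatible` is false)."""
    if response["compatible"]:
        return "compatible"
    return "incompatible: " + "; ".join(response["incompatibilities"])
```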

#### Errors

* InvalidInput
  * Input is not a hash
  * `editVersion` is not an integer
  * `stage` is not a string
  * `executable` is not a string
  * The given executable is missing an input or output specification
* ResourceNotFound
  * The specified workflow does not exist
  * The specified stage does not exist in the workflow
  * The specified executable does not exist
* InvalidState
  * Workflow is not in the "open" state
  * `editVersion` provided does not match the current stored value
* PermissionDenied
  * User does not have the required VIEW access to the workflow's project
  * An accessible copy of the executable could not be found

### API method: `/workflow-xxxx/updateStageExecutable`

#### Specification

Update the executable to be run in one of the workflow's stages.

#### Inputs

* `editVersion` **integer** (required) The edit version number that was last observed, either via [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe) or as output from an API call that changed the workflow. This value must match the current version stored in the workflow object for the API call to succeed.
* `stage` **string** (required) ID of the stage to update with the executable.
* `executable` **string** (required) ID of the executable to use for the stage.
* `force` **boolean** (optional) Whether to update the executable even if the one specified in `executable` is incompatible with the one that is in use for the stage. Defaults to `false`.

#### Outputs

* `id` **string** ID of the workflow.
* `editVersion` **integer** The new edit version number.
* `compatible` **boolean** Whether `executable` was compatible. If false, further action (such as setting new inputs) may be needed before the workflow can be run.

If `compatible` is false, the following is also present:

* `incompatibilities` **array of strings** A list of reasons for which the two executables are not compatible.
  * Only present when `compatible` is false.
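A sketch of a forced swap and the follow-up check it implies; the request values are hypothetical placeholders:

```python
# Hypothetical updateStageExecutable request: force the swap even if
# the new executable's input/output specifications do not line up.
update_executable_request = {
    "editVersion": 9,
    "stage": "stage-1",
    "executable": "applet-yyyy",
    "force": True,
}

def needs_followup(response):
    """True when the forced swap was incompatible, meaning stage
    inputs may need to be rebound before the workflow can run."""
    return not response["compatible"]
```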

#### Errors

* InvalidInput
  * Input is not a hash
  * `editVersion` is not an integer
  * `stage` is not a string
  * `executable` is not a string
  * The given executable is missing an input or output specification
  * `force` is not a boolean
* ResourceNotFound
  * The specified workflow does not exist
  * The specified stage does not exist in the workflow
  * The specified executable does not exist
* InvalidState
  * Workflow is not in the "open" state
  * `editVersion` provided does not match the current stored value
  * The requested executable is not compatible with the previous executable and `force` was not set to `true`
* PermissionDenied
  * User does not have CONTRIBUTE access to the workflow's project
  * An accessible copy of the executable could not be found

### API method: `/workflow-xxxx/describe`

#### Specification

Describes the specified workflow object.

Alternatively, you can use the [`/system/describeDataObjects`](https://documentation.dnanexus.com/developer/system-methods#api-method-system-describedataobjects) method to describe many data objects at once.

#### Inputs

* `project` **string** (optional) Project or container ID to be used as a hint for finding an accessible copy of the object.
* `defaultFields` **boolean** (optional) Whether to include the default set of fields in the output (the default fields are described in the "Outputs" section below). The selections are overridden by any fields explicitly named in `fields`.
  * Defaults to `false` if `fields` is supplied, `true` otherwise.
* `fields` **mapping** (optional) Include or exclude the specified fields from the output. These selections override the settings in `defaultFields`.
  * **key** — Output field to include or exclude. See the "Outputs" section below for valid values here.
  * **value** **boolean** — Whether to include the field.
* `includeHidden` **boolean** (optional) Whether hidden input and output parameters should appear in the `inputSpec` and `outputSpec` fields. Defaults to `false`.
* `getRerunInfo` **boolean** (optional) Whether rerun information should be returned for each stage. Defaults to `false`.
* `rerunStages` **array of strings** (optional) Applicable only if `getRerunInfo` is set to true. A set of stage IDs to treat as forcibly rerun when computing the rerun information.
* `rerunProject` **string** (optional) Project ID to use for retrieving rerun information. Defaults to the value of the `project` field in the output.

The following options are deprecated (and are not respected if `fields` is present):

* `properties` **boolean** (optional, deprecated) Whether the properties should be returned. Defaults to `false`.
* `details` **boolean** (optional, deprecated) Whether the details should also be returned. Defaults to `false`.
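The interaction between `defaultFields` and `fields` can be summarized as: explicit entries in `fields` always win, and `defaultFields` itself defaults to `false` when `fields` is supplied and `true` otherwise. A sketch of that rule, with a hypothetical request:

```python
def effective_default_fields(request):
    """Model of the documented defaulting rule for defaultFields."""
    if "defaultFields" in request:
        return request["defaultFields"]
    return "fields" not in request

# Because `fields` is supplied and defaultFields is omitted, this
# hypothetical request returns only `id` plus the two named fields.
describe_request = {"fields": {"stages": True, "editVersion": True}}
```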

#### Outputs

* `id` **string** The object ID, such as "workflow-xxxx".

The following fields are included by default (but can be disabled using `fields` or `defaultFields`):

* `project` **string** ID of the project or container in which the object was found.
* `class` **string** The value "workflow".
* `types` **array of strings** Types associated with the object.
* `created` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which this object was created.
* `state` **string** Either "open" or "closed".
* `hidden` **boolean** Whether the object is hidden or not.
* `links` **array of strings** The object IDs that are pointed to from this object.
* `name` **string** The name of the object.
* `folder` **string** The full path to the folder containing the object.
* `sponsored` **boolean** Whether the object is sponsored by DNAnexus.
* `tags` **array of strings** Tags associated with the object.
* `modified` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which the user-provided metadata of the object was last modified.
* `createdBy` **mapping** How the object was created.
  * `user` **string** ID of the user who created the object or launched an execution which created the object.
  * `job` **string** ID of the job that created the object.
    * Only present when a job created the object.
  * `executable` **string** ID of the app or applet that the job was running.
    * Only present when a job created the object.
* `title` **string** The workflow's effective title (equals `name` if no title has been set).
* `summary` **string** The workflow's summary.
* `description` **string** The workflow's description.
* `outputFolder` **string** (nullable) The default output folder for the workflow, or `null` if unset. See the [Customizing Output Folders](#customizing-output-folders) section above for more details on how it interacts with stages' output folders.
* `inputSpec` **array of mappings** (nullable) The value is `null` for inaccessible stage executables. Otherwise, the value is the effective input specification for the workflow. This is generated automatically, taking into account the stages' input specifications and any modifications that have been made to them in the context of the workflow (see the field `inputSpecMods` under the specification for the [`/workflow-xxxx/update`](#api-method-workflow-xxxx-update) API method). If not otherwise modified via the API, the group name of an input field is transformed to include a prefix using its stage ID. Hidden parameters are not included unless requested via `includeHidden`, in which case they have a flag `hidden` set to `true`. Bound inputs always show up as `default` values for the respective input fields.
* `outputSpec` **array of mappings** (nullable) The value is `null` if a stage's executable is inaccessible. Otherwise, the value is the effective output specification for the workflow. This is generated automatically, taking into account the stages' output specifications and any modifications that have been made to them in the context of the workflow (see the field `outputSpecMods` under the specification for the [`/workflow-xxxx/update`](#api-method-workflow-xxxx-update) API method). Hidden parameters are not included unless requested via `includeHidden`, in which case they have a flag `hidden` set to `true`.
* `inputs` **array of mappings** (nullable) Input specification of the workflow (not the input of particular stages, which is returned in `inputSpec`).
* `outputs` **array of mappings** (nullable) Output specification of the workflow (not the output of stages, which is returned in `outputSpec`).
* `editVersion` **integer** The current edit version of the workflow. This value must be provided with any of the workflow-editing API methods to ensure that simultaneous edits are not occurring.
* `ignoreReuse` **array of strings** (nullable) Workflow stage IDs that are configured to ignore job reuse.
* `stages` **array of mappings** List of metadata for each stage.

  * `id` **string** Stage ID.
  * `executable` **string** App or applet ID.
  * `name` **string** (nullable) Name of the stage, or `null` if not set.
  * `folder` **string** (nullable) The output folder into which outputs should be cloned for the stage. See the [Customizing Output Folders](#customizing-output-folders) section above for more details. Returns `null` if not set.
  * `input` **mapping** Input (possibly partial) to the stage's executable that has been bound.
  * `accessible` **boolean** Whether the executable is accessible.
  * `executionPolicy` **mapping** The default execution policy for this stage.
  * `systemRequirements` **mapping** The requested `systemRequirements` value for the stage.
  * `inputSpecMods` **mapping** Modifications for the stage's input parameters when represented in the workflow's input specification.
    * **key** — Input parameter name from this stage's executable.
    * **value** **mapping** — Modifications for the input parameter.
      * `name` **string** Replacement name of the input parameter. This is guaranteed to be unique in the workflow's input specification.
        * Only present when set.
      * `label` **string** Replacement label for the input parameter.
        * Only present when set.
      * `help` **string** Replacement help string for the input parameter.
        * Only present when set.
      * `group` **string** The group to which the input parameter belongs (the empty string indicates no group).
      * `hidden` **boolean** Whether the input field is hidden from the workflow's input specification.
        * Only present when true.
  * `outputSpecMods` **mapping** Modifications for the stage's output parameters when represented in the workflow's output specification, including restrictions on which outputs are kept.
    * **key** — Output parameter name from this stage's executable.
    * **value** **mapping** — Modifications for the output parameter with any number of the same key/values that are also present in `inputSpecMods`. If an output has `hidden` set to `true`, its data object value (if applicable) is *not* cloned into the parent container when the stage or analysis is done and is *deleted immediately* on completion or failure of the analysis if `delayWorkspaceDestruction` is not set to `true`.

  When `getRerunInfo` is `true`, the following additional keys are present for each stage:

  * `wouldBeRerun` **boolean** Whether the stage would be rerun if the workflow were to be run (taking into account the value given for `rerunStages`, if applicable).
  * `cachedExecution` **string** The job ID from which the outputs would be used.
    * Only present when `wouldBeRerun` is false.
  * `cachedOutput` **mapping** (nullable) The output from the cached execution if available, or `null` if the execution has not finished yet.
    * Only present when `wouldBeRerun` is false.
* `initializedFrom` **mapping** Basic metadata recording how this workflow was created.
  * Only present when the workflow was created using the `initializedFrom` option.
  * `editVersion` **integer** The `editVersion` of the original workflow at the time of creation.
    * Only present when `id` is a workflow ID.
* `treeTurnaroundTimeThreshold` **integer** (nullable) The turnaround time threshold (in seconds) for trees (specifically, root executions) that run this executable. See [Job Notifications](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-notifications) for more information about turnaround time and managing job notifications.
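When rerun information is requested, the stages array can be split into stages that would execute again and stages whose cached results would be reused. A sketch over a fabricated describe response:

```python
def partition_rerun(stages):
    """Split a describe(getRerunInfo=true) stages array into stage IDs
    that would be rerun and (stage ID, cached job ID) pairs that would
    reuse previous results."""
    rerun, cached = [], []
    for stage in stages:
        if stage["wouldBeRerun"]:
            rerun.append(stage["id"])
        else:
            # cachedExecution is only present when wouldBeRerun is false
            cached.append((stage["id"], stage["cachedExecution"]))
    return rerun, cached
```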

{% hint style="info" %}
A license is required to use the `jobNotifications` feature. Contact [DNAnexus Sales](mailto:sales@dnanexus.com) to enable `jobNotifications`.
{% endhint %}

The following field (included by default) is available if the object is sponsored by a third party:

* `sponsoredUntil` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Indicates the expiration time of data sponsorship. If set, the specified time is always in the future.
  * Only present when the object is sponsored.

The following fields are only returned if the corresponding field in the `fields` input is set to `true`:

* `properties` **mapping** Properties associated with the object.
  * **key** — Property name.
  * **value** **string** — Property value.
* `details` **mapping or array** Contents of the object's details.

#### Errors

* ResourceNotFound
  * The specified object does not exist or the specified project does not exist
* InvalidInput
  * The input is not a hash
  * `project` (if present) is not a string
  * The value of `properties` (if present) is not a boolean
  * `includeHidden` if present is not a boolean
  * `getRerunInfo` if present is not a boolean
  * `rerunStages` if present is not an array of nonempty strings
* InvalidType
  * `rerunProject` (if present) is not a project ID
* PermissionDenied
  * VIEW access is required for the `project` input if provided
  * VIEW access is required for some project containing the specified object, which may be different from the `project` input provided

### API method: `/workflow-xxxx/run`

#### Specification

Runs the workflow, creating an analysis. All required inputs must be provided, either as inputs bound in the workflow or in the `input` field of this call.

Intermediate results are output for the stages and outputs specified.

If any stages have been previously run with the same executable and the same inputs, then the previous results may be used.

#### Inputs

* `name` **string** (optional) Name for the resulting analysis. Defaults to the workflow name.
* `input` **mapping** (required) Input with which the analysis is launched.
  * **key** — Input field name. See the `inputSpec` and `inputs` fields in the output of [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe) for what the names of the inputs are.
  * **value** — Input field value.
* `project` **string** (optional) The ID of the project in which this workflow runs, also known as the *project context*. If invoked with the `detach: true` option, then the detached analysis runs under the provided `project` (if provided), otherwise project context is inherited from that of the invoking job. If invoked by a user or run as detached, all output objects are cloned into the project context. Otherwise, all output objects are cloned into the temporary workspace of the invoking job. For more information, see [The Project Context and Temporary Workspace](https://documentation.dnanexus.com/developer/api/running-analyses/..#project-context-and-temporary-workspace).
  * Required if invoked by a user. Optional if invoked from a job with `detach: true` option. Prohibited when invoked from a job with `detach: false`.

{% hint style="info" %}
A license is required to launch detached executions. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

* `folder` **string** (optional) The folder into which objects output by the analysis are placed. If the folder does not exist when the job is complete, the folder is created, along with any parent folders necessary. See the [Customizing Output Folders](#customizing-output-folders) section above for more details on how it interacts with stages' output folders. If no value is provided here and the workflow does not have `outputFolder` set, then the default value is "/".
* `stageFolders` **mapping** (optional) Override any stored options for the workflow stages' `folder` fields. See the [Customizing Output Folders](#customizing-output-folders) section for more details.
  * **key** — Stage ID or `"*"` to indicate that the value should be applied to all stages not otherwise mentioned.
  * **value** **null or string** — Value to replace the stored default.
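As an illustration of the `"*"` key above, a sketch of how an effective per-stage folder could be resolved. The precedence shown (explicit stage entry, then wildcard, then the stage's stored folder) is an interpretation of the override behavior described here, not a normative algorithm:

```python
def effective_folder(stage_id, stored_folder, stage_folders):
    """Resolve a stage's output folder from a stageFolders mapping,
    falling back to the stage's stored folder when no override applies."""
    if stage_id in stage_folders:
        return stage_folders[stage_id]
    if "*" in stage_folders:
        return stage_folders["*"]
    return stored_folder
```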
* `details` **array or mapping** (optional) Any conformant JSON, defined as a JSON object or array per RFC 4627. This is stored with the created analysis. Defaults to `{}`.
* `delayWorkspaceDestruction` **boolean** (optional) If not given, the value defaults to false for root executions (launched by a user or detached from another job), or to the parent's `delayWorkspaceDestruction` setting. If set to true, the temporary workspace created for the resulting execution is preserved for 3 days after the job either succeeds or fails.
* `rerunStages` **array of strings** (optional) A list of stage IDs that should be forcibly rerun. The system automatically identifies stages requiring rerun, and this parameter adds specific stages to that list. If the list includes the string `"*"`, then all stages are rerun.
* `executionPolicy` **mapping** (optional) A collection of options that govern automatic job restart on certain types of failures. This can only be set at the user-level API call (jobs cannot override this for their subjobs). Contents of this field override any of the corresponding keys in the `executionPolicy` mapping found in individual stages and their executables' [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) (if present).
  * `restartOn` **mapping** (optional) Indicates the job restart policy.
    * **key** — A restartable failure reason (`ExecutionError`, `UnresponsiveWorker`, `JMInternalError`, `AppInternalError`, `AppInsufficientResourceError`, `JobTimeoutExceeded`, or `SpotInstanceInterruption`) or `*` to indicate all restartable failure reasons that are otherwise not present as keys.
    * **value** **integer** — Maximum number of restarts for the failure reason.
  * `maxRestarts` **integer** (optional) Non-negative integer less than 10, indicating the maximum number of times that the job is restarted. Defaults to `9`.
  * `onNonRestartableFailure` **string** (optional) If unset, allows the stages to govern their failure propagation behavior. If set, indicates whether the failure of any stage should propagate failure to all other non-terminal stages in the analysis, even if those stages do not have any dependencies on the failed stage. Stages that have dependencies on the stage that failed still fail irrespective of this setting.
    * Must be one of `"failStage"` or `"failAllStages"`.
* `systemRequirements` **mapping** (optional) Request specific resources for all stages not explicitly specified in `stageSystemRequirements`. Values are merged with stages' stored values as described in the [System Requirements](#system-requirements) section. See the [Requesting Instance Types](https://documentation.dnanexus.com/developer/api/applets-and-entry-points#requesting-instance-types) section for more details.
* `stageSystemRequirements` **mapping** (optional) Request specific resources by stage. Values are merged with stages' stored values as described in the [System Requirements](#system-requirements) section.
  * **key** — Stage ID.
  * **value** **mapping** — Value to override or merge with the stage's `systemRequirements` value.
* `systemRequirementsByExecutable` **mapping** (optional) Request system requirements for all jobs in the resulting execution tree, configurable by executable and by entry point, described in more detail in the [Requesting Instance Types](https://documentation.dnanexus.com/developer/api/applets-and-entry-points#requesting-instance-types) section.
* `timeoutPolicyByExecutable` **mapping** (optional) The timeout policies for jobs in the resulting job execution tree, configurable by executable and the entry point within that executable. See the `timeoutPolicyByExecutable` field in [`/applet-xxxx/run`](https://documentation.dnanexus.com/developer/api/applets-and-entry-points#api-method-applet-xxxx-run) for more details.
* `allowSSH` **array of strings** (optional) Array of IP addresses or [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) blocks (up to /16) from which SSH access is allowed to the user by the worker running this job. Array may also include `"*"` which is interpreted as the IP address of the client issuing this API call as seen by the API server. Defaults to `[]`.
* `debug` **mapping** (optional) Specify debugging options for running the executable. This field is only accepted when this call is made by a user (and not a job). Defaults to `{}`.
  * `debugOn` **array of strings** (optional) Array of job errors after which the job's worker should be kept running for debugging purposes, offering a chance to SSH into the worker before worker termination (assuming SSH has been enabled). This option applies to all jobs in the execution tree. Jobs in this state for longer than 2 days are automatically terminated but can be terminated earlier. Allowed entries include `ExecutionError`, `AppError`, `AppInternalError`, and `AppInsufficientResourceError`. For a description of each error type, see [Types of Errors](https://documentation.dnanexus.com/developer/apps/error-information). Defaults to `[]`.
* `editVersion` **integer** (optional) If provided, run the workflow only if the current version matches the provided value and throw an error if it does not match. If not provided, the current version is run.
* `properties` **mapping** (optional) Properties to associate with the resulting analysis.
  * **key** — Property name.
  * **value** **string** — Property value.
* `tags` **array of strings** (optional) Tags to associate with the resulting analysis.
* `singleContext` **boolean** (optional) If true, the resulting jobs and their descendants are only allowed to use the authentication token given to them at the onset. Use of any other authentication token results in an error. This option offers extra security to ensure data cannot leak out of your given context. In restricted projects, the user-specified value is ignored and `singleContext: true` is used instead.
* `ignoreReuse` **array of strings** (optional) Specifies IDs of workflow stages (or `"*"` for all stages) that ignore job reuse. If a specified stage points to a nested sub-workflow, reuse is ignored recursively by the whole nested sub-workflow. Overrides the `ignoreReuse` setting in the workflow and in stage executables.
* `nonce` **string** (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single analysis is created. For more information, see [Nonces](https://documentation.dnanexus.com/developer/api/nonces).
* `detach` **boolean** (optional) This option has no impact when the API is invoked by a user. If invoked from a job with `detach` set to true, the new analysis is detached from the creator job and appears as a typical root execution. A failure in the detached analysis does not cause termination of the job from which it was created, and vice versa. A detached analysis inherits neither access to the workspace of its creator job nor the creator job's priority. The detached analysis's access permissions are the intersection (most restricted) of the access permissions of the creator job and the permissions requested by the executables of the jobs in the detached analysis. To launch a detached analysis, the creator job must have CONTRIBUTE or higher access to the project in which the detached analysis is launched. The `billTo` of the project in which the creator job is running must have a license to launch detached executions.

{% hint style="info" %}
A license is required to be able to launch detached executions. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

* `rank` **integer** (optional) An integer between -1024 and 1023, inclusive. The rank indicates the priority with which the executions generated from this executable are processed; the higher the rank, the higher the priority. If no rank is provided, the executions default to a rank of zero. If the execution is not a root execution, it inherits its parent's rank. If a rank is provided, all executions relating to the workflow stages also inherit the rank.

{% hint style="info" %}
A license is required to use the Job Ranking feature. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

* `costLimit` **number** (optional) The limit of the cost that this execution tree should accrue before termination. This field is ignored if this is not a root execution.
* `preserveJobOutputs` **mapping** (optional, nullable) Preserves all cloneable outputs of every completed, non-jobReused job in the execution tree launched by this API call in the root execution project, even if root execution ends up failing. Preserving the job outputs in the project trades off higher costs of storage for the possibility of subsequent job reuse. Defaults to `null`.

  \
  When a non-jobReused job in the root execution tree launched with non-null `preserveJobOutputs` enters "done" state, all cloneable objects referenced by the `$dnanexus_link` in the job's `output` field are cloned to the project folder described by `preserveJobOutputs.folder`. This happens unless the output objects already appear elsewhere in the project. Cloneable objects include files, records, applets, and closed workflows, but not databases. If the folder specified by `preserveJobOutputs.folder` does not exist in the project, the system creates the folder and its parents.\
  \
  As the root job or root analysis' stages complete, the regular outputs of the root execution are moved from `preserveJobOutputs.folder` to the regular output folders of the root execution. When you run your root execution without the `preserveJobOutputs` option to completion, some root execution outputs appear in the project in the root execution's output folders. If you had run the same execution with `preserveJobOutputs.folder` set to `"/pjo_folder"`, the same set of outputs would appear in the same set of root execution folders as in the first case at completion of the root execution. Some additional job outputs that are not outputs of the root execution would appear in `"/pjo_folder"`.\
  \
  The `preserveJobOutputs` argument can be specified only when starting a root execution or a detached job.\
  \
  The `preserveJobOutputs` value, if not **null**, should be a mapping that may contain the following:

  * `folder` **string** (optional) Specifies a folder in the root execution project where the outputs of jobs that are part of the launched execution are stored. A value starting with `/` is interpreted as an absolute folder path in the project the job is running in. A value not starting with `/` is interpreted as a path relative to the root execution's `folder` field. An empty string value (`""`) preserves job outputs in the folder described by the root execution's `folder` field.\
    If the `preserveJobOutputs` mapping does not have a `folder` key, the system uses the default folder value of `"intermediateJobOutputs"`. For example, `"preserveJobOutputs": {}` is equivalent to `"preserveJobOutputs": {"folder": "intermediateJobOutputs"}`.

    It is recommended to place `preserveJobOutputs` outputs for different root executions into different folders, so as not to create a single folder with a large (>450K) number of files.

{% hint style="info" %}
A license is required to use `preserveJobOutputs`. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}
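
The folder-resolution rules above can be summarized in a short sketch. This is illustrative logic only, not DNAnexus code; `resolve_pjo_folder` is a hypothetical helper:

```python
# Sketch of how a preserveJobOutputs folder value resolves against the
# root execution's output folder, per the rules described above.
def resolve_pjo_folder(preserve_job_outputs, root_folder):
    if preserve_job_outputs is None:
        return None                                # feature disabled (default)
    folder = preserve_job_outputs.get("folder", "intermediateJobOutputs")
    if folder == "":
        return root_folder                         # empty string: root execution folder
    if folder.startswith("/"):
        return folder                              # absolute path in the project
    return root_folder.rstrip("/") + "/" + folder  # relative to root execution folder

assert resolve_pjo_folder(None, "/out") is None
assert resolve_pjo_folder({}, "/out") == "/out/intermediateJobOutputs"
assert resolve_pjo_folder({"folder": ""}, "/out") == "/out"
assert resolve_pjo_folder({"folder": "/pjo_folder"}, "/out") == "/pjo_folder"
```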

* `detailedJobMetrics` **boolean** (optional) Requests detailed metrics collection for jobs if set to true. The default value for this flag is project `billTo`'s `detailedJobMetricsCollectDefault` policy setting or false if org default is not set. This flag can be specified for root executions and applies to all jobs in the root execution. The list of detailed metrics collected every 60 seconds and viewable for 15 days from the start of a job is available using [`dx watch --metrics`](https://documentation.dnanexus.com/user/helpstrings-of-sdk-command-line-utilities#watch-metrics-help).

{% hint style="info" %}
A license is required to use `detailedJobMetrics`. Contact [DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}
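
To make the field descriptions above concrete, the following sketch assembles an illustrative `/workflow-xxxx/run` request payload combining several of the optional inputs. All IDs and values are hypothetical placeholders, and only a subset of the fields is shown:

```python
# Illustrative /workflow-xxxx/run payload; all IDs and values are hypothetical.
payload = {
    "project": "project-xxxx",            # project context for the analysis
    "input": {"stage_0.reads": {"$dnanexus_link": "file-xxxx"}},
    "editVersion": 5,                     # run only if this matches the stored version
    "tags": ["benchmark"],
    "properties": {"run-group": "batch-a"},
    "ignoreReuse": ["*"],                 # ignore job reuse for all stages
    "preserveJobOutputs": {"folder": "/pjo_folder"},
    "rank": 100,                          # higher rank, higher priority
    "nonce": "a-unique-string-under-128-bytes",
}

# Sanity checks mirroring constraints stated above.
assert -1024 <= payload["rank"] <= 1023
assert len(payload["nonce"].encode()) <= 128
```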

#### Outputs

* `id` **string** ID of the created analysis object, such as "analysis-xxxx".
* `stages` **array of strings** List of job IDs that are created for each stage, as ordered in the workflow.

#### Errors

* ResourceNotFound
  * The specified workflow object, any referenced apps or applets, or project context does not exist
* PermissionDenied
  * VIEW access to the workflow and to any applets is required, and any apps must be installed
  * CONTRIBUTE access to the project context required unless called by a job
  * When specifying `allowSSH` or `debug` options, the user must have developer access to all apps in the workflow, or the apps must have the `openSource` field set to true
  * If `preserveJobOutputs` is not **null** and the `billTo` of the project where execution is attempted does not have a `preserveJobOutputs` license.
  * Setting `detailedJobMetrics` to true requires the project's `billTo` to have the `detailedJobMetrics` license feature set to true.
  * `app{let}-xxxx` cannot run in `project-xxxx` because the executable's `httpsApp.shared_access` must be `NONE` to run with isolated browsing.
    * This check applies to all workflow stages.
  * The project is associated with a [TRE](https://documentation.dnanexus.com/developer/api/trusted-research-environments) and `allowSSH` was specified. SSH access is not allowed in TRE projects. See [Execution Restrictions](https://documentation.dnanexus.com/developer/trusted-research-environments#execution-restrictions).
  * The project is associated with a TRE and one or more stage executables have `httpsApp` settings that are not allowed. In TRE projects, only allowlisted HTTPS apps may expose HTTPS, and only on port `443`. See [Execution Restrictions](https://documentation.dnanexus.com/developer/trusted-research-environments#execution-restrictions).
  * The project is associated with a TRE and one or more stage executables are not in the project's `allowedExecutables` list, if the TRE policy restricts allowed executables.
* InvalidInput
  * The workflow spec is not complete
  * The project context must be in the same region as this workflow
  * All data object inputs that are specified directly must be in the same region as the project context.
  * All inputs that are job-based object references must refer to a job that was run in the same region as the project context.
  * `allowSSH` accepts only IP addresses or [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) blocks up to /16
  * A `nonce` was reused in a request but other inputs had changed signifying a new and different request
  * A `nonce` may not exceed 128 bytes
  * The `billTo` of the job's project must be licensed to start detached executions when invoked from the job with `detach: true` argument
  * `preserveJobOutputs` is specified when launching a non-detached execution from a job.
  * `preserveJobOutputs.folder` value is a syntactically invalid path to a folder.
  * `detailedJobMetrics` cannot be specified when launching a non-detached execution from a job.
  * `timeoutPolicyByExecutable` for all executables should not be `null`
  * `timeoutPolicyByExecutable` for all entry points of all executables should not be `null`
  * `timeoutPolicyByExecutable` for all entry points of all executables should not exceed 30 days
  * Expected key `timeoutPolicyByExecutable.*` of input to match `/^(app|applet)-[0-9A-Za-z]{24}$/`
  * `systemRequirements.*.instanceTypeSelector` keyword is not allowed at runtime
  * `systemRequirementsByExecutable.*.*.instanceTypeSelector` keyword is not allowed at runtime
  * `stageSystemRequirements.*.*.instanceTypeSelector` keyword is not allowed at runtime
* InvalidState
  * `editVersion` was provided and does not match the current stored value.
* SpendingLimitExceeded
  * The `billTo` has reached its spending limit.
* OrgExpired
  * The `billTo` organization has expired.

For InvalidInput errors that result from a mismatch with an applet or app's input specification, an additional field is provided in the error JSON; see the documentation for [`/applet-xxxx/run`](https://documentation.dnanexus.com/developer/api/applets-and-entry-points#api-method-applet-xxxx-run) for more details.

### API method: `/workflow-xxxx/dryRun`

#### Specification

Perform a dry run of the [`/workflow-xxxx/run`](#api-method-workflow-xxxx-run) API method.

**No new jobs or analyses** are created by this method. Any analysis and job IDs returned in the response (except for cached execution IDs) are placeholders and do not represent actual entities in the system.

This method can be used to determine which stages have previous results that would be used. In particular, a stage that would reuse a cached result has a `parentAnalysis` field (found at `stages.N.execution.parentAnalysis` where N is the index of the stage) that refers to a preexisting analysis and therefore does not match the top-level field `id` in the response.
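
For example, the stages that would reuse cached results can be picked out of a dry-run response as follows. The function and the response fragment are hypothetical sketches, assuming only the response shape described above:

```python
# Sketch: find stages whose execution.parentAnalysis refers to a preexisting
# analysis (i.e. differs from the response's top-level id), indicating that
# a cached result would be reused for that stage.
def cached_stage_ids(dry_run_response):
    top_id = dry_run_response["id"]
    return [
        s["id"]
        for s in dry_run_response["stages"]
        if s["execution"].get("parentAnalysis") not in (None, top_id)
    ]

# Hypothetical response fragment:
response = {
    "id": "analysis-placeholder",
    "stages": [
        {"id": "stage-1", "execution": {"parentAnalysis": "analysis-placeholder"}},
        {"id": "stage-2", "execution": {"parentAnalysis": "analysis-old"}},
    ],
}
assert cached_stage_ids(response) == ["stage-2"]
```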

#### Inputs

* Same as would be provided to [`/workflow-xxxx/run`](#api-method-workflow-xxxx-run)

#### Outputs

* Same as the output if the resulting analysis had been described (see [`/analysis-xxxx/describe`](#api-method-analysis-xxxx-describe))

#### Errors

* Same as would be thrown if [`/workflow-xxxx/run`](#api-method-workflow-xxxx-run) had been called with the same input

### API method: `/workflow-xxxx/validateBatch`

#### Specification

This API call verifies that a set of input values for a particular workflow can be used to launch a batch of jobs in parallel.

Batch and common inputs:

`batchInput`: mapping of inputs corresponding to batches. Each key maps to an array of values, and the value at each position in the array corresponds to the execution of the workflow at that position. Including a `null` value in an array at a given position means that the corresponding workflow input field is optional and the default value, if defined, should be used. E.g.:

```json
{
  "stage_0.a": [{"$dnanexus_link": "file-xxxx"}, {"$dnanexus_link": "file-yyyy"}, ...],
  "stage_1.b": [1, null, ...]
}
```

`commonInput`: mapping of non-batch, constant inputs common to all batch jobs, e.g.:

```json
{
  "stage_0.c": "foo"
}
```

File references:

`files`: list of files (passed as `$dnanexus_link` references); must be a superset of the files included in `batchInput` and/or `commonInput`, e.g.:

```json
[
  {"$dnanexus_link": "file-xxxx"},
  {"$dnanexus_link": "file-yyyy"}
]
```

Output: a list of mappings, where each mapping corresponds to an expanded batch call and contains the input values for the corresponding execution of the workflow, based on its position in the list. E.g.:

```json
[
  {"stage_0.a": {"$dnanexus_link": "file-xxxx"}, "stage_1.b": 1, "stage_0.c": "foo"},
  {"stage_0.a": {"$dnanexus_link": "file-yyyy"}, "stage_1.b": null, "stage_0.c": "foo"}
]
```

This method performs the following validations:

* the input types match the expected workflow input field types,
* provided inputs are sufficient to run the workflow,
* `null` values appear only among values for inputs that are optional or have a default value,
* all arrays of `batchInput` are of equal size,
* every file referred to in `batchInput` exists in the `files` input.

If the workflow is locked, that is, workflow-level `inputs` are specified for the workflow, this `inputs` specification is used in place of the stage-level `inputSpec`s, because for locked workflows input values can only be passed to workflow-level inputs. For locked workflows, refer to input fields in `batchInput` and `commonInput` by the names defined in `inputs`. To refer to a specific field in a stage of a non-locked workflow, use the `<stage id>.<input field name defined in inputSpec>` format.
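
The expansion described above can be sketched locally as follows. This is an illustrative reimplementation with simplified checks, not the service's own validation:

```python
# Sketch: zip each position of the batchInput arrays together and merge in
# the constant commonInput values, producing one input mapping per execution.
def expand_batch(batch_input, common_input):
    lengths = {len(v) for v in batch_input.values()}
    if len(lengths) != 1:
        raise ValueError("all batchInput arrays must be of equal length")
    n = lengths.pop()
    return [
        {**common_input, **{k: v[i] for k, v in batch_input.items()}}
        for i in range(n)
    ]

expanded = expand_batch(
    {"stage_0.a": ["file-xxxx", "file-yyyy"], "stage_1.b": [1, None]},
    {"stage_0.c": "foo"},
)
assert expanded[0] == {"stage_0.a": "file-xxxx", "stage_1.b": 1, "stage_0.c": "foo"}
assert expanded[1] == {"stage_0.a": "file-yyyy", "stage_1.b": None, "stage_0.c": "foo"}
```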

#### Inputs

* `batchInput` **mapping** (required) Batch inputs that the workflow is launched with.
  * **key** — Input field name. It must be one of the names of the inputs defined in the workflow input specification.
  * **value** — Input field values. It must be an array of values, one per batch execution.
* `commonInput` **mapping** (optional) Constant inputs common to all batch executions.
  * **key** — Input field name. It must be one of the names of the inputs defined in the workflow input specification.
  * **value** — Input field value shared by all batch executions.
* `files` **array of mappings** (optional) Files needed to run the batch jobs, provided as `$dnanexus_link` references. They must include all the files referenced in `commonInput` or `batchInput`.

#### Outputs

* `expandedBatch` **array of mappings** Each mapping contains the input values for one execution of the workflow in batch mode.

#### Errors

* InvalidInput
  * Input specification must be specified for the workflow
  * Expected `batchInput` to be a JSON object
  * Expected `commonInput` to be a JSON object
  * Expected `files` to be an array of `$dnanexus_link` references to files
  * The `batchInput` field is required but an empty array was provided
  * Expected the value of `batchInput` for a workflow input field to be an array
  * Expected the length of all arrays in `batchInput` to be equal
  * The workflow input field value must be specified in `batchInput`
  * The workflow input field is not defined in the input specification of the workflow
  * All the values of a specific `batchInput` field must be provided (cannot be `null`) since the field is required and has no default value
  * Expected all the files in `batchInput` and `commonInput` to be referenced in the `files` input array

## Analysis API Method Specifications

### API method: `/analysis-xxxx/describe`

#### Specification

Describe the specified analysis object.

If the results from previously run jobs are used for any of the stages, they are still listed here. However, the stages' `parentAnalysis` field still reflects the original analyses in which they were run.

The description of an analysis may not be available until upstream analyses have finished running. Reorg apps that rely on describing a running analysis should check the output field `dependsOn` before the full analysis description becomes available, using `dx describe analysis-xxxx --json | jq -r .dependsOn` or the equivalent `dxpy` bindings. An empty array `[]` means the analysis no longer depends on anything (indicating a state such as "done"), which is the signal to proceed. If it contains (sub)analysis IDs, the analysis is not ready, and the reorg script should wait.
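
Such a wait loop can be sketched as follows; `describe_analysis` is a hypothetical stand-in for `dx describe analysis-xxxx --json` or the equivalent `dxpy` describe call:

```python
import time

# Sketch: poll the dependsOn field until it is empty, then return the
# full describe output so a reorg script can safely proceed.
def wait_until_describable(describe_analysis, analysis_id, interval=30, timeout=3600):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        desc = describe_analysis(analysis_id)
        if desc.get("dependsOn") == []:
            return desc              # no remaining dependencies; safe to proceed
        time.sleep(interval)         # still waiting on (sub)analyses
    raise TimeoutError(f"{analysis_id} still has dependencies after {timeout}s")
```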

#### Inputs

* `defaultFields` **boolean** (optional) Specifies whether to include the default set of fields in the output (the default fields are described in the "Outputs" section below). The selections are overridden by any fields explicitly named in `fields`.
  * Defaults to `false` if `fields` is supplied, `true` otherwise.
* `fields` **mapping** (optional) Include or exclude the specified fields from the output. These selections override the settings in `defaultFields`.
  * **key** — Output field to include or exclude. See the "Outputs" section below for valid values here.
  * **value** **boolean** — Whether to include the field.
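
The interaction between `defaultFields` and `fields` can be sketched as follows. This is illustrative logic only; `include_field` is a hypothetical helper, and note that `id` is always returned regardless of these settings:

```python
# Sketch of the field-selection rule: an explicit entry in `fields` wins,
# and defaultFields defaults to false when `fields` is supplied, true otherwise.
def include_field(name, default_fields=None, fields=None):
    fields = fields or {}
    if name in fields:
        return fields[name]              # explicit selection wins
    if default_fields is None:
        default_fields = not fields      # default depends on whether `fields` is given
    return default_fields

assert include_field("state") is True                          # all defaults included
assert include_field("state", fields={"name": True}) is False  # defaultFields -> false
assert include_field("workflow", default_fields=True,
                     fields={"workflow": False}) is False      # explicit exclusion
```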

#### Outputs

* `id` **string** The object ID, such as "analysis-xxxx".

The following fields are included by default (but can be disabled by setting `defaultFields` to `false` or by using the `fields` input):

* `class` **string** The value "analysis".
* `name` **string** Name of the analysis (either specified at creation time or given automatically by the system).
* `executable` **string** ID of the workflow or the global workflow that was run.
* `executableName` **string** Name of the workflow or the global workflow that was run.
* `created` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which this object was created.
* `modified` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which this analysis was last updated.
* `billTo` **string** ID of the account to which any costs associated with this analysis are billed.
* `project` **string** ID of the project in which this analysis was run.
* `folder` **string** The output folder in which the outputs of this analysis are placed.
* `rootExecution` **string** ID of the job or analysis at the root of the execution tree (the job or analysis created by a user's API call rather than called by a job or as a stage in an analysis).
* `parentJob` **string** (nullable) ID of the job which created this analysis, or `null` if this analysis was not created by a job.
* `parentJobTry` **non-negative integer** (nullable) Returns `null` if this analysis was not created by a job, or if the parent job had a `null` `try` attribute. Otherwise, this analysis was created from the `parentJobTry` try of the `parentJob`.
* `parentAnalysis` **string** (nullable) If this is an analysis that was run as a stage in an analysis, then this is the ID of that analysis, or `null` otherwise.
* `detachedFrom` **string** (nullable) The ID of the job this analysis was detached from via the `detach` option, or `null` if not detached.
* `detachedFromTry` **non-negative integer** (nullable) If this analysis was detached from a job, `detachedFrom` and `detachedFromTry` describe the specific try of the job this analysis was detached from. Returns `null` if this analysis was not detached from another job or if the `detachedFrom` had a `null` `try` attribute.
* `analysis` **string** (nullable) The ID of the analysis this analysis is part of, or `null` if this analysis was not run as part of a stage in an analysis.
* `stage` **string** (nullable) The ID of the stage this analysis is part of, or `null` if this job was not run as part of a stage in an analysis.
* `workflow` **mapping** Metadata of the workflow that was run, including at least the following fields (analyses created after 8/2014 include the full describe output at the time that the analysis was created).
  * `id` **string** ID of the workflow.
  * `name` **string** Name of the workflow.
  * `inputs` **array of mappings** Input specification of the workflow.
  * `outputs` **array of mappings** Output specification of the workflow.
  * `stages` **array of mappings** List of metadata for each stage. See description in [`/workflow-xxxx/describe`](#api-method-workflow-xxxx-describe) for more details on what may be returned in each element of the list.
  * `editVersion` **integer** Edit version at the time of running the workflow.
  * `initializedFrom` **mapping** If applicable, the `initializedFrom` mapping from the workflow.
* `stages` **array of mappings** List of metadata for each of the stages' executions.
  * `id` **string** Stage ID.
  * `execution` **mapping** With key `id` and value of the execution ID. Additional keys are present if the describe hash of the origin job or analysis of the stage has been requested and is available (the fields returned here can be limited by setting `fields.stages` in the input to the hash one would give to describe the execution).
* `state` **string** The analysis state.
  * Must be one of `"in_progress"`, `"partially_failed"`, `"done"`, `"failed"`, `"terminating"`, or `"terminated"`.
* `workspace` **string** ID of the temporary workspace assigned to the analysis, such as "container-xxxx".
* `launchedBy` **string** ID of the user who launched `rootExecution`. This is propagated to all jobs launched by the analysis.
* `tags` **array of strings** Tags associated with the analysis.
* `properties` **mapping** Properties associated with the analysis.
  * **key** — Property name.
  * **value** **string** — Property value.
* `details` **array or mapping** The JSON details that were stored with this analysis.
* `runInput` **mapping** The value given as `input` in the API call to run the workflow.
* `originalInput` **mapping** The effective input of the analysis, including all defaults as bound in the stages of the workflow, overridden with any values present in `runInput`, and all input field names are translated to their canonical names, such as "\<stage ID>.\<field name>".
* `input` **mapping** The same as `originalInput`.
* `output` **mapping** (nullable) Contains key/value pairs for all outputs that are available (final only when `state` is one of `done`, `terminated`, and `failed`), or `null` if no stages have finished.
* `delayWorkspaceDestruction` **boolean** Whether the analysis's temporary workspace is kept around for 3 days after the analysis either succeeds or fails.
* `ignoreReuse` **array of strings** (nullable) Analysis stage ids (or `"*"` for all stages) that were configured to ignore job reuse.
* `preserveJobOutputs` **mapping** (nullable) The `preserveJobOutputs` setting, with `preserveJobOutputs.folder` expanded to start with `"/"`.
* `detailedJobMetrics` **boolean** Set to true only if the detailed job metrics collection was enabled for this analysis.
* `costLimit` **number** If this analysis is a root execution and a root execution cost limit was set, the cost limit for the root execution.
* `rank` **integer** The rank of the analysis, in the range \[-1024, 1023].

If this analysis is a root execution, the following fields are included by default (but can be disabled using `fields`):

* `selectedTreeTurnaroundTimeThreshold` **integer** (nullable) The selected turnaround time threshold (in seconds) for this root execution. When `treeTurnaroundTime` reaches the `selectedTreeTurnaroundTimeThreshold`, the system sends an email about this root execution to the `launchedBy` user and the `billTo` profile.
* `selectedTreeTurnaroundTimeThresholdFrom` **string** (nullable) Where `selectedTreeTurnaroundTimeThreshold` is from. `executable` means that `selectedTreeTurnaroundTimeThreshold` is from this root execution's executable's `treeTurnaroundTimeThreshold`. `system` means that `selectedTreeTurnaroundTimeThreshold` is from the system's default threshold.
* `treeTurnaroundTime` **integer** The turnaround time (in seconds) of this root execution, which is the time between its creation time and its terminal-state time (or the current time if it is not in a terminal state; terminal states for an execution include done, terminated, and failed, as described in [Job Lifecycle](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle)). If this root execution can be retried, the turnaround time begins at the creation time of the root execution's first try, so it includes the turnaround times of all tries.

{% hint style="info" %}
A license is required to use the `jobNotifications` feature. Contact [DNAnexus Sales](mailto:sales@dnanexus.com) to enable `jobNotifications`.
{% endhint %}

If the requesting user has permissions to view the pricing model of the `billTo` of the analysis, and the price for the analysis has been finalized:

* `currency` **mapping** Information about currency settings, such as `dxCode`, `code`, `symbol`, `symbolPosition`, `decimalSymbol`, and `groupingSymbol`.
* `totalPrice` **number** Price (in `currency`) for how much this analysis (along with all its jobs) costs.
* `priceComputedAt` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which `totalPrice` was computed. For billing purposes, the cost of the analysis accrues to the invoice of the month that contains `priceComputedAt` (in UTC).
* `totalEgress` **mapping** The amount of data (in bytes) that this analysis (along with all its jobs) has egressed.
  * `regionLocalEgress` **integer** Amount in bytes of data transferred between IPs in the same cloud region.
  * `internetEgress` **integer** Amount in bytes of data transferred to IPs outside of the cloud provider.
  * `interRegionEgress` **integer** Amount in bytes of data transferred to IPs in other regions of the cloud provider.
* `egressComputedAt` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which `totalEgress` was computed. For billing purposes, the cost of the analysis accrues to the invoice of the month that contains `egressComputedAt` (in UTC).

The following field is only returned if the corresponding field in the `fields` input is set to `true`, the requesting user has permissions to view the pricing model of the `billTo` of the job, and the job is a root execution:

* `subtotalPriceInfo` **mapping** Information about the current costs associated with all jobs in the tree rooted at this analysis.
  * `subtotalPrice` **number** Current cost (in `currency`) of the job tree rooted at this analysis.
  * `priceComputedAt` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which `subtotalPrice` was computed.
* `subtotalEgressInfo` **mapping** Information about the aggregated egress amount in bytes associated with all jobs in the tree rooted at this analysis.
  * `subtotalRegionLocalEgress` **integer** Amount in bytes of data transferred between IPs in the same cloud region.
  * `subtotalInternetEgress` **integer** Amount in bytes of data transferred to IPs outside of the cloud provider.
  * `subtotalInterRegionEgress` **integer** Amount in bytes of data transferred to IPs in other regions of the cloud provider.
  * `egressComputedAt` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which the subtotal egress amounts were computed.

The following fields are returned if the corresponding field in the `fields` input is set to `true`:

* `runSystemRequirements` **mapping** (nullable) A mapping with the `systemRequirements` values that were passed explicitly to [`/globalworkflow-xxxx/run`](https://documentation.dnanexus.com/developer/api/global-workflows#api-method-globalworkflow-xxxx-yyyy-run) or [`/workflow-xxxx/run`](#api-method-workflow-xxxx-run) when this analysis was created, or `null` if the `systemRequirements` input was not supplied in the API call that created this analysis.
* `runStageSystemRequirements` **mapping** (nullable) Similar to `runSystemRequirements` but for `stageSystemRequirements`.
* `runSystemRequirementsByExecutable` **mapping** (nullable) Similar to `runSystemRequirements` but for `systemRequirementsByExecutable`.
* `mergedSystemRequirementsByExecutable` **mapping** (nullable) A mapping with values of `systemRequirementsByExecutable` supplied to all the ancestors of this analysis and the value supplied to create this analysis, merged as described in the [Requesting Instance Types](https://documentation.dnanexus.com/developer/api/applets-and-entry-points#requesting-instance-types) section. If neither the ancestors of this analysis nor this analysis itself were created with the `systemRequirementsByExecutable` input, returns `null`.

#### Errors

* ResourceNotFound
  * The specified object does not exist
* PermissionDenied
  * User does not have VIEW access to the analysis's project context
* InvalidInput
  * Input is not a hash
  * `fields` (if present) is not a hash or has a non-boolean value for a key other than `stages`
  * `fields` has the key `stages` whose value is neither a boolean nor a hash

### API method: `/analysis-xxxx/addTags`

#### Specification

Adds the specified tags to the specified analysis. If any of the tags are already present, no action is taken for those tags.

#### Inputs

* `tags` **array of strings** (required) Tags to be added.

#### Outputs

* `id` **string** ID of the manipulated analysis.

#### Errors

* InvalidInput
  * The input is not a hash
  * The key `tags` is missing, or its value is not an array, or the array contains at least one invalid (not a string of nonzero length) tag
* ResourceNotFound
  * The specified analysis does not exist
* PermissionDenied
  * CONTRIBUTE access is required for the analysis's project context. Otherwise, the request can also be made by jobs sharing the same workspace as the parent job of the specified analysis
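
The idempotent tag semantics can be illustrated with a minimal sketch; `add_tags` here is a hypothetical local helper that models the behavior described above, not a platform call:

```python
def add_tags(existing_tags, tags):
    """Apply /analysis-xxxx/addTags semantics to a local list of tags."""
    if not isinstance(tags, list):
        raise ValueError("InvalidInput: `tags` must be an array")
    for tag in tags:
        if not isinstance(tag, str) or len(tag) == 0:
            raise ValueError("InvalidInput: tags must be strings of nonzero length")
    result = list(existing_tags)
    for tag in tags:
        if tag not in result:  # already-present tags are no-ops
            result.append(tag)
    return result

print(add_tags(["qc-passed"], ["qc-passed", "batch-7"]))  # ['qc-passed', 'batch-7']
```
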

### API method: `/analysis-xxxx/removeTags`

#### Specification

Removes the specified tags from the specified analysis, ensuring that they are no longer present. If any of the tags are already absent, no action is taken for those tags.

#### Inputs

* `tags` **array of strings** (required) Tags to be removed.

#### Outputs

* `id` **string** ID of the manipulated analysis.

#### Errors

* InvalidInput
  * The input is not a hash
  * The key `tags` is missing, or its value is not an array, or the array contains at least one invalid (not a string of nonzero length) tag
* ResourceNotFound
  * The specified analysis does not exist
* PermissionDenied
  * CONTRIBUTE access is required for the analysis's project context. Otherwise, the request can also be made by jobs sharing the same workspace as the parent job of the specified analysis

### API method: `/analysis-xxxx/setProperties`

#### Specification

Sets properties on the specified analysis. To remove a property altogether, set its value to the JSON `null` (instead of a string). This call updates the properties of the analysis by merging the previously existing properties with those provided in the input; when the same key appears in both, the new value takes precedence.

To reset all properties, first issue a describe call to retrieve the names of the existing properties, then issue a `setProperties` request that sets each of those properties to `null`; replacement properties can be supplied in the same or a subsequent request.

#### Inputs

* `properties` **mapping** (required) Properties to modify.
  * **key** — Name of property to modify.
  * **value** **string or null** — Either a new string value for the property, or `null` to unset the property.

#### Outputs

* `id` **string** ID of the manipulated analysis.

#### Errors

* InvalidInput
  * There exists at least one value in `properties` which is neither a string nor the JSON `null`
  * There exists at least one property key whose UTF-8 encoded size exceeds 100 bytes, or at least one property value whose UTF-8 encoded size exceeds 700 bytes
* ResourceNotFound
  * The specified analysis does not exist
* PermissionDenied
  * CONTRIBUTE access is required for the analysis's project context. Otherwise, the request can also be made by jobs sharing the same workspace as the parent job of the specified analysis
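
The merge-and-unset semantics described above can be sketched locally; `set_properties` is a hypothetical helper that only models the documented behavior, including the documented size limits:

```python
def set_properties(existing, updates):
    """Merge `updates` into `existing` following the documented semantics."""
    merged = dict(existing)
    for key, value in updates.items():
        if len(key.encode("utf-8")) > 100:
            raise ValueError("InvalidInput: property key exceeds 100 bytes")
        if value is None:
            merged.pop(key, None)  # JSON null unsets the property
        elif isinstance(value, str):
            if len(value.encode("utf-8")) > 700:
                raise ValueError("InvalidInput: property value exceeds 700 bytes")
            merged[key] = value    # newer value wins on key collision
        else:
            raise ValueError("InvalidInput: value must be a string or null")
    return merged

print(set_properties({"a": "1", "b": "2"}, {"b": None, "c": "3"}))
# {'a': '1', 'c': '3'}
```
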

### API method: `/analysis-xxxx/terminate`

#### Specification

Terminates an analysis and the stages' origin jobs and/or analyses. This call is only valid from outside the platform.

An analysis can be terminated only by the user who launched it (who must have at least CONTRIBUTE access to the project context) or by any user with ADMINISTER access to the project context.

#### Inputs

* None

#### Outputs

* `id` **string** ID of the terminated analysis, such as "analysis-xxxx".

#### Errors

* ResourceNotFound
  * The specified object does not exist
* PermissionDenied
  * ADMINISTER access required to the project context of the job or else the user must match the `launchedBy` entry of the analysis object
* InvalidState
  * The analysis is not in a state from which it can be terminated, for example, it is in a terminal state

### API method: `/analysis-xxxx/update`

#### Specification

Updates an analysis and its stages' jobs and/or analyses. This call is only valid from outside the platform. You can only update the rank of root analyses.

A valid rank field must be provided. To update rank, the organization associated with this analysis must have the license feature `executionRankEnabled` active. The user must also be either the original launcher of the analysis or an administrator of the organization.

When supplying `rank`, the job or analysis being updated must be a `rootExecution`, and must be in a state capable of creating more jobs. `rank` cannot be supplied for terminal states like `terminated`, `done`, `failed`, or `debug_hold`.

#### Inputs

* `rank` **integer** (required) The rank to set the analysis and its children executions to.

{% hint style="info" %}
A license is required to use the Job Ranking feature. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

#### Outputs

* `id` **string** ID of the updated analysis, such as "analysis-xxxx".

#### Errors

* InvalidInput
  * Input is not a hash
  * Expected input to have property `rank`
  * Expected key `rank` of input to be an integer
  * Expected key `rank` of input to be in range \[-1024, 1023]
  * Not a root execution
* PermissionDenied
  * `billTo` does not have license feature executionRankEnabled
  * Not allowed to change rank
* ResourceNotFound
  * The specified object does not exist
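
The InvalidInput conditions above can be modeled with a small sketch; `validate_update_input` is a hypothetical helper, not part of the API:

```python
def validate_update_input(payload):
    """Mirror the documented InvalidInput checks for /analysis-xxxx/update."""
    if not isinstance(payload, dict):
        raise ValueError("InvalidInput: input is not a hash")
    if "rank" not in payload:
        raise ValueError("InvalidInput: expected input to have property `rank`")
    rank = payload["rank"]
    # Reject booleans explicitly: in Python, bool is a subclass of int
    if isinstance(rank, bool) or not isinstance(rank, int):
        raise ValueError("InvalidInput: expected key `rank` to be an integer")
    if not -1024 <= rank <= 1023:
        raise ValueError("InvalidInput: expected `rank` in range [-1024, 1023]")
    return rank

print(validate_update_input({"rank": 100}))  # 100
```
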
