# Applets and Entry Points

## About Applets and Entry Points

### Applets

Applets are executable data objects that exist inside projects and can be cloned between projects. Like other data objects on the DNAnexus Platform, applets have VIEW permissions into the projects from which they run. You can use applets to create private, customized scripts for specialized needs, or to test and develop more general apps that may interest the broader community.

When creating an applet, you must include a [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) so the system knows how to execute it. While [input and output specifications](https://documentation.dnanexus.com/developer/api/running-analyses/io-and-run-specifications) are optional, we recommend including them because they provide important benefits:

* The system can validate input arguments when launching an applet
* The system can validate outputs when an applet completes
* The DNAnexus website can automatically generate configuration forms for users
* Other developers can understand how to launch your applet programmatically

If you don't include I/O specifications, users can launch the applet with any input (the system only validates [DNAnexus links](https://documentation.dnanexus.com/developer/api/job-input-and-output#special-values) and [job-based object references](https://documentation.dnanexus.com/developer/api/job-input-and-output#analysis-and-job-based-object-references)). In this case, you need to handle:

* Documenting what inputs your applet expects
* Explaining what outputs your applet produces
* Providing a user interface for configuring the applet's input

This flexibility allows you to build powerful applets with variable, polymorphic, or complex inputs and outputs.
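To make the benefit concrete, here is a sketch of what an I/O specification might look like, expressed as Python data structures. The applet, field names such as `mappings`, and labels are hypothetical; the `name`/`class`/`optional` keys follow the input and output specification format linked above.

```python
# Hypothetical I/O specification for an applet that counts reads in a BAM file.
input_spec = [
    {"name": "mappings", "class": "file", "label": "Input BAM file"},
    {"name": "min_quality", "class": "int", "optional": True,
     "label": "Minimum mapping quality"},
]
output_spec = [
    {"name": "read_count", "class": "int", "label": "Number of reads counted"},
]

# With a specification like this in place, the platform can reject a launch
# that omits the required "mappings" input before any worker is started.
required = [f["name"] for f in input_spec if not f.get("optional", False)]
print(required)
```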

#### Running an applet: Project context

Running an applet is slightly different depending on whether the applet is launched by a user from outside of the DNAnexus Platform, or by another running job.

Launching an applet outside of the platform requires associating a project with it. As mentioned earlier, this **project context** is important for the following reasons:

* Any charges related to the execution of the applet are associated with that project.
* Jobs (such as the job created by launching the applet, as well as any other jobs created by the applet itself while running) are given VIEW access to that project.
* Any objects output by the applet are placed into that project.

When launching an applet from another running job, this parent job is already associated with a project. This project is carried forward to the launched master job.

* Any charges related to the execution of the master job are associated with that project.
* Jobs (such as the job created by launching the master job, as well as any other descendant jobs of the master job) are given VIEW access to that project.
* Any objects output by the master job are placed into the workspace of the parent job.

When running an applet, the applet object does not need to be inside the project context. The user running the applet must have VIEW access to the applet object and CONTRIBUTE access to the project context. The generated job, however, has only VIEW access to the project context and CONTRIBUTE access to its workspace.

The system operates with the permissions of that job when accessing any references to file objects required by the applet specification. This includes assets mentioned under `bundledDepends`, resource files, or dependency packages. These objects must be accessible by the job, which is only possible if they are located in the project context or in a public project.

If an applet's dependencies are in neither the project context nor a public project, running it eventually fails because the system cannot fetch the associated files. For this reason, when an applet is cloned, all objects linked to in the `bundledDepends` of its [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) are cloned with it.

### Running Entry Points

Any job can call the [`/job/new`](#api-method-job-new) API to create a subjob that runs a specific **entry point**. An entry point is like a function in your code that executes on its own worker in the cloud.

Because each job runs on a separate worker, jobs under the same master job must communicate through:

* Their inputs and outputs
* Stored data objects in their shared temporary workspace

When you first run an applet, it automatically executes the "main" entry point. For more information on creating entry points in your code based on your chosen interpreter, see [Code Interpreters](https://documentation.dnanexus.com/apps/execution-environment#code-interpreters).
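As an illustration, the shape of a `/job/new` request that a running job might issue to launch a subjob at another entry point can be sketched as a plain mapping. The entry point name `process` and its input are hypothetical; the `function` and `input` field names follow the `/job/new` API.

```python
# Sketch of the payload a running job would POST to /job/new to launch a
# subjob at a hypothetical "process" entry point.
new_job_request = {
    "function": "process",        # entry point to execute on a fresh worker
    "input": {"chunk_index": 3},  # inputs passed to that entry point
}

# Subjobs cannot share memory with the parent: anything the parent wants the
# child to see must travel through "input" or through data objects stored in
# the shared temporary workspace.
print(new_job_request["function"])
```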

### Requesting Instance Types

Different instance types and other system resources can be requested for different entry points of an applet using the `systemRequirements` field. This can be specified in an applet's [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) and partially (or fully) overridden at runtime using the `systemRequirementsByExecutable` or `systemRequirements` runtime arguments. The `systemRequirements` runtime argument uses the same syntax as in the [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) for applets and apps. The syntax of the `systemRequirementsByExecutable` argument is described below.

When computing the effective value of `systemRequirements`, the keys are retrieved in the following order (using the first one found):

1. The `systemRequirementsByExecutable` values provided when launching the job's ancestor jobs and the job itself. Child job `systemRequirementsByExecutable` settings merge with parent job settings without overriding the parent values.
2. The `systemRequirements` value requested for the job's master job.
   * If the job was run via [`/app-xxxx/run`](https://documentation.dnanexus.com/developer/api/apps#api-method-app-xxxx-yyyy-run) or [`/applet-xxxx/run`](#api-method-applet-xxxx-run), then this is the value of `systemRequirements` in the API call.
   * If the job was run as a stage in a workflow, then the effective value is a combination of fields provided in [`/workflow-xxxx/run`](https://documentation.dnanexus.com/developer/api/workflows-and-analyses#api-method-workflow-xxxx-run) and of any [default values](https://documentation.dnanexus.com/developer/api/running-analyses/workflows-and-analyses) stored in the workflow itself.
3. The `systemRequirements` field provided to the [`/job/new`](#api-method-job-new) method.
4. The `runSpec.systemRequirements` field in the applet or app.
5. If none of these values are present (or contain a relevant entry point), system-wide defaults are used.
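The precedence chain above can be sketched as a first-match lookup. This is an illustrative simplification, not the platform's implementation; the source mappings and instance type names below are hypothetical stand-ins for the real values the system would consult.

```python
# Illustrative resolution of effective system requirements for one entry
# point, following the precedence order above: the first source that mentions
# the entry point (or "*") wins.
def effective_system_requirements(entry_point, *sources):
    for source in sources:
        if source is None:
            continue
        if entry_point in source:
            return source[entry_point]
        if "*" in source:
            return source["*"]
    return {"instanceType": "system-default"}  # step 5: system-wide default

by_executable = {"main": {"instanceType": "mem2_ssd1_v2_x4"}}  # step 1
master_job    = {"*": {"instanceType": "mem1_ssd1_v2_x2"}}     # step 2
run_spec      = {"main": {"instanceType": "mem1_ssd1_v2_x8"}}  # step 4

# systemRequirementsByExecutable wins for "main"; "other" falls back to step 2.
print(effective_system_requirements("main", by_executable, master_job, run_spec))
print(effective_system_requirements("other", by_executable, master_job, run_spec))
```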

#### Using the `systemRequirements` Argument

The `"*"` entry point in the `systemRequirements` argument applies to all entry points not explicitly named in `systemRequirements`. For example, if an applet's run specification sets instance type X for the "main" entry point, but you run the applet with `systemRequirements` that sets `"*"` to instance type Y, the "main" entry point uses instance type Y. This happens because the runtime `"*"` specification overrides the run specification for all entry points, including "main".
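The scenario above can be sketched in a few lines. The instance type names `X` and `Y` are placeholders; the point is only that the runtime `"*"` entry shadows the run specification for every entry point, including `"main"`.

```python
run_spec_reqs = {"main": {"instanceType": "X"}}  # applet's run specification
runtime_reqs  = {"*":    {"instanceType": "Y"}}  # systemRequirements at runtime

def resolve(entry_point, runtime, run_spec):
    # The runtime argument is consulted first; its "*" matches any entry
    # point not explicitly named there, so the run specification never applies.
    source = runtime if (entry_point in runtime or "*" in runtime) else run_spec
    return source.get(entry_point, source.get("*"))

print(resolve("main", runtime_reqs, run_spec_reqs)["instanceType"])
```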

#### Using the `systemRequirementsByExecutable` Argument

The `systemRequirementsByExecutable` argument to [`/app-xxxx/run`](https://documentation.dnanexus.com/developer/api/apps#api-method-app-xxxx-yyyy-run), [`/applet-xxxx/run`](#api-method-applet-xxxx-run), [`/workflow-xxxx/run`](https://documentation.dnanexus.com/developer/api/workflows-and-analyses#api-method-workflow-xxxx-run), [`/globalworkflow-xxxx/run`](https://documentation.dnanexus.com/developer/api/global-workflows#api-method-globalworkflow-xxxx-yyyy-run), and [`/job/new`](#api-method-job-new) allows users to specify `instanceType`, `fpgaDriver`, `nvidiaDriver` and `clusterSpec.initialInstanceCount` fields for all jobs in the resulting execution tree, configurable by executable ID and then by entry point. If present, it includes at least one of the following key-value pairs:

* **key** — Executable id (`applet-xxxx` or `app-xxxx`) or `"*"` to indicate all executables not explicitly specified in this mapping.
* **value** **mapping** — System requirement for the corresponding executable. It includes at least one of the following key-value pairs:
  * **key** — Entry point name or `"*"` to indicate all entry points not explicitly specified in this mapping.
  * **value** **mapping** — Requested resources for the entry point:
    * `instanceType` **string** (optional) A string specifying the instance type used to execute jobs running the specified entry point of the specified executable. See [Instance Types](https://documentation.dnanexus.com/developer/api/running-analyses/instance-types) for a list of possible values.
    * `clusterSpec` **mapping** (optional) If specified, must contain:
      * `initialInstanceCount` **integer** (required) The number of nodes in the cluster including the driver node. Value of 1 indicates a cluster with no worker nodes.
    * `fpgaDriver` **string** (optional) Specifies the FPGA driver to install on the FPGA-enabled cloud host instance before app's code execution. Accepted values depend on instance type:
      * `mem3_ssd2_fpga1_x24`, `mem3_ssd2_fpga2_x48`, `mem3_ssd2_fpga8_x192`: `edico-1.4.9.2` (default),
      * `mem3_ssd2_fpga1_x8`, `mem3_ssd2_fpga1_x16`, `mem3_ssd2_fpga1_x64`: `edico-1.4.2` (default), `edico-1.4.5`, and `edico-1.4.7`.
    * `nvidiaDriver` **string** (optional) Specifies the NVIDIA driver to install on the GPU-enabled cloud host instance before app's code execution. Accepted values are:
      * `R470` (default) uses the driver version [470.256.02](https://docs.nvidia.com/datacenter/tesla/tesla-release-notes-470-256-02/index.html) and supports CUDA 11.4.
      * `R535` uses the driver version [535.247.01](https://docs.nvidia.com/datacenter/tesla/tesla-release-notes-535-247-01/index.html) and supports CUDA 12.2.

The `systemRequirementsByExecutable` argument applies to the entire resulting job execution tree and merges with all downstream runtime inputs, with explicit upstream inputs taking precedence.

Second-level `systemRequirementsByExecutable` keys (`instanceType`, `fpgaDriver`, `nvidiaDriver`, `clusterSpec`) are resolved independently from each other.
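The following sketch shows a `systemRequirementsByExecutable` value that exercises the nesting described above. The executable IDs, entry points, instance types, and driver version are hypothetical placeholders.

```python
# Hypothetical systemRequirementsByExecutable value. Keys at the first level
# are executable IDs (or "*"); at the second level, entry points (or "*").
system_requirements_by_executable = {
    "applet-1": {
        "main": {"instanceType": "mem2_ssd1_gpu_x16",
                 "nvidiaDriver": "R535"},
        "*":    {"instanceType": "mem2_ssd1_v2_x4"},
    },
    "*": {
        "*": {"clusterSpec": {"initialInstanceCount": 3}},
    },
}

# instanceType, fpgaDriver, nvidiaDriver and clusterSpec.initialInstanceCount
# are resolved independently: pinning instanceType for applet-1/main says
# nothing about its clusterSpec.
print(sorted(system_requirements_by_executable["applet-1"]["main"]))
```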

For example, calling

```shell
/executable-xxxx/run(
      systemRequirementsByExecutable = {
        "applet-1":{
           "*":{
             "instanceType":"mem2_ssd1_v2_x2"}}})
```

forces `mem2_ssd1_v2_x2` for all jobs running all entry points of `applet-1`, but allows overrides of `fpgaDriver`, `nvidiaDriver` and `clusterSpec.initialInstanceCount` for `applet-1` via downstream specification of `systemRequirementsByExecutable`, `systemRequirements` / `stageSystemRequirements` specified at runtime, system requirements embedded in workflow stages, and `systemRequirements` supplied to `/executable/new`.

`systemRequirementsByExecutable` specified at the root level cannot be overridden by later specifications within the execution tree. Children's `systemRequirementsByExecutable` values are merged into the parent's without overriding parental values. Specifying `"*"` at a higher level (for either the executable key or the entry point key) precludes overrides at lower levels of the execution subtree. `"*"` at either level refers to "everything else": it does not take precedence over sibling fields in the `systemRequirementsByExecutable` object that name specific executable IDs or entry points, but it does preclude overrides at lower levels:

* If a parent has `"*"`, any key the child specifies in that mapping is ignored, since honoring it (whether `"*"` or a specific key) would override the parental specification.
* If a parent does not have `"*"`, the child's specification of `"*"` applies to all keys not specified at the parent level or by an explicit non-`"*"` key inside the child. The child's non-`"*"` keys that are already mentioned in the parent are ignored, while non-`"*"` keys that are not mentioned in any of the parents take effect.
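These two merge rules can be sketched as a small function. This is an illustrative simplification operating at a single level of the mapping (executable IDs or entry points), with plain strings standing in for the nested requirement mappings.

```python
def merge_by_executable(parent, child):
    """Parent-wins merge, per the rules above, at one level of the mapping."""
    if "*" in parent:
        # A parent "*" covers "everything else", so every key the child
        # specifies at this level is ignored.
        return dict(parent)
    merged = dict(child)   # child keys fill the gaps...
    merged.update(parent)  # ...but parent keys always win.
    return merged

parent = {"applet-1": "x2"}
child  = {"applet-1": "x8", "*": "x4"}
# applet-1 keeps the parent's value; the child's "*" covers everything else.
print(merge_by_executable(parent, child))
```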

Invoking [`/executable-xxxx/run`](https://documentation.dnanexus.com/developer/api/api-directory) with the `"detach": true` argument causes the new detached execution tree to disregard the `detachedFrom` execution's `systemRequirementsByExecutable`. A detached execution tree is treated like a new root execution: it does not inherit the `detachedFrom` setting, but it honors an optional `systemRequirementsByExecutable` argument `S` if one is supplied to the `/executable-xxxx/run(detached=true, systemRequirementsByExecutable=S)` call that created it.

Examples:

Force all jobs in the execution tree to use `mem2_ssd1_v2_x2`. This cannot be overridden at either the executable or entry point level of any downstream job in the resulting execution tree, whether by `systemRequirementsByExecutable` or by `systemRequirements` specifications.

```shell
/executable-xxxx/run({"systemRequirementsByExecutable":
      {"*":
          {"*": {"instanceType": "mem2_ssd1_v2_x2"}}}})
```

Force all jobs in the execution tree executing `applet-1` to run on `mem2_ssd1_v2_x2`. Downstream jobs in the resulting execution tree can override `instanceType` for executables other than `applet-1`, but cannot override `applet-1`'s instance types for a specific entry point because of the `"*"` entry point in the below specification.

```shell
/executable-xxxx/run( {"systemRequirementsByExecutable":
      {"applet-1":
          {"*": {"instanceType": "mem2_ssd1_v2_x2"}}}})
```

Force all jobs in the execution tree that are executing the `applet-1` `main` entry point to run on `mem2_ssd1_v2_x2`. Downstream jobs can override `instanceType` for executables other than `applet-1` and for entry points of `applet-1` other than `main`.

```shell
/executable-xxxx/run( {"systemRequirementsByExecutable":
      {"applet-1":
          {"main": {"instanceType": "mem2_ssd1_v2_x2"}}}})
```

Force all jobs in the execution tree that are executing the `applet-1` `main` entry point to run on `mem2_ssd1_v2_x2` and jobs executing all other entry points of `applet-1` to run on `mem2_ssd1_v2_x4`. Also force all jobs in the execution tree executing the `main` entry point of `applet-2` to run on `mem2_ssd1_v2_x8`. Downstream jobs can override `instanceType` for executables other than `applet-1`, except for the `main` entry point of `applet-2`, which is also fixed.

```shell
/executable-xxxx/run( {"systemRequirementsByExecutable":
      {"applet-1":
          {"main": {"instanceType": "mem2_ssd1_v2_x2"},
           "*":    {"instanceType": "mem2_ssd1_v2_x4"}},
       "applet-2":
          {"main": {"instanceType": "mem2_ssd1_v2_x8"}}}})
```

For more examples of how system requirements are resolved, see `dx run --instance-type-help`.

### Specifying Job Timeouts

You can configure job timeout policies to set time limits for jobs running specific apps or applets at specific entry points. When a job exceeds its timeout, the system either terminates or restarts it (based on the job's restart policy).

You can specify timeout policies in two ways:

1. **When creating the executable**: Set timeouts in the `timeoutPolicy` field of the executable's [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification)
2. **At runtime**: Override timeouts when launching jobs using:
   * [`/app-xxxx/run`](https://documentation.dnanexus.com/developer/api/apps#api-method-app-xxxx-yyyy-run)
   * [`/applet-xxxx/run`](#api-method-applet-xxxx-run)
   * [`/workflow-xxxx/run`](https://documentation.dnanexus.com/developer/api/workflows-and-analyses#api-method-workflow-xxxx-run)
   * [`/globalworkflow-xxxx/run`](https://documentation.dnanexus.com/developer/api/global-workflows#api-method-globalworkflow-xxxx-yyyy-run)
   * [`/job/new`](#api-method-job-new)

The system determines the effective timeout for a job by checking these sources in order of priority:

* The runtime input `timeoutPolicyByExecutable.<executable_id>.<entry_point>` field. This field overrides both user-specified and system default timeout policies, and it propagates down the entire resulting job execution tree and merges with all downstream runtime inputs, with explicit upstream inputs taking precedence.
* The `timeoutPolicy.<entry_point>` field provided in the executable's [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) on executable creation. This field serves as the user-specified default timeout policy and overrides the system default timeout policy.
* The system default of 30 days. This limit is enforced for all jobs billed to orgs that do not have the `allowJobsWithoutTimeout` license.

The `'*'` entry point refers to all other entry points that are not named in `timeoutPolicyByExecutable.<executable_id>` or the `timeoutPolicy` field of the executable's [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification).

Setting the timeout of a specific executable at a specific entry point to 0 has the same effect as not setting a timeout for that executable at that entry point at all, as if no runtime or run specification entry existed for it.

Example: Here's how timeout policies work in practice:

1. An applet with ID `<applet_id>` has a [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) that sets a timeout of 5 hours for entry point `'*'` (meaning all entry points by default).
2. When you run this applet, you provide a runtime policy that sets a 2-hour timeout specifically for the `'main'` entry point of `<applet_id>`.
3. The main job runs with a 2-hour timeout because the runtime policy overrides the applet's default setting.
4. The main job then creates a subjob that runs at entry point `'foo'` with no additional runtime policies specified.
5. The subjob uses the 5-hour timeout from the original applet specification because no runtime override applies to the `'foo'` entry point.
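The walkthrough above can be sketched as a first-match lookup with the 0-means-unset rule. This is an illustrative simplification, not the platform's implementation; timeouts are expressed in hours for readability.

```python
THIRTY_DAYS_H = 30 * 24  # system default, expressed in hours

def effective_timeout(entry_point, runtime_policy, runspec_policy):
    """Resolve a job's timeout per the priority order above. Policies map an
    entry point (or '*') to a timeout in hours; 0 behaves as if the entry
    were absent."""
    for policy in (runtime_policy, runspec_policy):
        value = policy.get(entry_point, policy.get("*", 0))
        if value:
            return value
    return THIRTY_DAYS_H

runspec = {"*": 5}    # run specification: 5 hours for all entry points
runtime = {"main": 2} # runtime override for 'main' only

print(effective_timeout("main", runtime, runspec))  # runtime override wins
print(effective_timeout("foo", runtime, runspec))   # falls back to the spec
```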

## Applet API Method Specifications

### API Method: `/applet/new`

#### Specification

Creates a new applet object with the given applet specification.

Links specified in `bundledDepends` contribute to the "links" array returned by a describe call and are always cloned together with the applet regardless of their visibility.

An applet that is not written in a supported interpreted language can still run if you create a separate file object that contains the compiled code. To bundle the compiled code with the applet, create an applet object that links to the file object in `bundledDepends` and includes code that specifically runs the compiled binary. You do not need to mark the file object as `hidden` for it to be bundled with the applet object, but hiding it may be more useful if it is unlikely to be used in isolation. See the [Execution Environment Reference](https://documentation.dnanexus.com/developer/apps/execution-environment) for more details on how the contents of `bundledDepends` are handled.

The applet object does not receive special permissions for any referenced data objects (such as `id` in `bundledDepends`). These entries are accessed every time the applet is run, with the same permissions as the job. This means that if some referenced file is later deleted or is not present in the project context, the applet cannot run. However, the system does not automatically "invalidate" the applet object for any broken links. If the referenced file is reinstated from a copy existing in another project, the applet can then be run.
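As an orientation aid before the field-by-field reference that follows, here is a hypothetical minimal `/applet/new` payload expressed as a Python mapping. The project ID, name, and `runSpec` contents are placeholders; only the field names follow the API.

```python
# Illustrative /applet/new request body. "project", "runSpec", and "dxapi"
# are the required fields; everything else here is optional.
applet_new_request = {
    "project": "project-xxxx",
    "name": "hello-world",
    "dxapi": "1.0.0",
    "runSpec": {
        "interpreter": "bash",
        "distribution": "Ubuntu",   # placeholder runtime environment
        "release": "24.04",
        "code": "main() { echo 'hello world'; }",
    },
}
print(applet_new_request["name"])
```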

#### Inputs

* `project` **string** (required) ID of the project or container to which the applet should belong, such as "project-xxxx".
* `name` **string** (optional) The name of the object. Defaults to the new ID.
* `title` **string** (optional) Title of the applet, for example, "Micro Map". Defaults to `""`.
* `summary` **string** (optional) A short description of the applet. Defaults to `""`.
* `description` **string** (optional) A longer description about the applet. Defaults to `""`.
* `developerNotes` **string** (optional) More detailed notes about the applet. Defaults to `""`.
* `tags` **array of strings** (optional) Tags to associate with the object.
* `types` **array of strings** (optional) Types to associate with the object.
* `hidden` **boolean** (optional) Whether the object should be hidden. Defaults to `false`.
* `properties` **mapping** (optional) Properties to associate with the object.
  * **key** — Property name.
  * **value** **string** — Property value.
* `details` **mapping or array** (optional) JSON object or array that is to be associated with the object. See the [Object Details](https://documentation.dnanexus.com/developer/api/data-object-lifecycle/details-and-links) section for details on valid input. Defaults to `{}`.
* `folder` **string** (optional) Full path of the folder that is to contain the new object. Defaults to `"/"`.
* `parents` **boolean** (optional) Whether all folders in the path provided in `folder` should be created if they do not exist. Defaults to `false`.
* `ignoreReuse` **boolean** (optional) If true, [Smart Reuse](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-reuse) is disabled for this applet. When false or omitted, the applet allows Smart Reuse provided the organization has the feature enabled. Defaults to `false`.
* `inputSpec` **array of mappings** (optional) An input specification as described in the [Input Specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#input-specification) section.
* `outputSpec` **array of mappings** (optional) An output specification as described in the [Output Specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#output-specification) section.
* `runSpec` **mapping** (required) A run specification as described in the [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) section.
* `dxapi` **string** (required) The version of the API that the applet was developed with, for example, "1.0.0".
* `access` **mapping** (optional) Access requirements as described in the [I/O and Run Specifications](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#access-requirements).
* `httpsApp` **mapping** (optional) HTTPS app configuration.
  * `ports` **array of integers** (required) Array of ports open for inbound access.
    * Allowed ports are `443`, `8080`, and `8081`.
  * `shared_access` **string** (required) HTTPS access restriction for jobs run from this executable. The `VIEW`, `CONTRIBUTE`, and `ADMINISTER` settings grant access to users with at least that permission level on the project in which the job executes. The most restrictive setting, `NONE`, limits access to only the user who launched the job.
    * Must be one of `"VIEW"`, `"CONTRIBUTE"`, `"ADMINISTER"`, or `"NONE"`.
  * `dns` **mapping** (optional) DNS configuration for the job.
    * `hostname` **string** (optional) If specified, the URL to access this job in the browser is `https://hostname.dnanexus.cloud` instead of the default `https://job-xxxx.dnanexus.cloud`. The hostname must consist of lower case alphanumeric characters and a hyphen (`-`) character and must match `/^[a-z][a-z0-9]{2,}-[a-z][a-z0-9-]{2,}[a-z0-9]$/` regular expression. A user may run an app with custom hostname subject to these additional restrictions:

      * The effective `billTo` of the running app must be an organization.
      * The user must be an admin of that organization.
      * If the ID of the `billTo` organization is `org-some_name` then the hostname must resemble `hostprefix-some-name`. This means the hostname must end in the `handle` part of the org ID with `_` and `.` replaced by hyphen (`-`) and start with a prefix that begins with a lowercase letter (in this case `hostprefix`).

      For an org with ID of `org-some_name`, valid hostnames include `myprefix-some-name` and `ab5-some-name`. The prefix must start with a lowercase letter and be at least 3 characters. The suffix derived from the org handle must be at least 4 characters to satisfy the hostname regex. For example, `myprefix-some-other-org` is not a valid hostname for `org-some_name` because the suffix does not match the org handle.

      When two jobs attempt to use the same URL, the newer job takes over the hostname from an already running job. We recommend having a single job with any given URL to avoid the risk of the URL being reassigned to another job if that job restarts.
* `nonce` **string** (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single applet is created. For more information, see [Nonces](https://documentation.dnanexus.com/developer/api/nonces).
* `treeTurnaroundTimeThreshold` **integer** (optional) The turnaround time threshold (in seconds) for trees, specifically, root executions, that run this executable. See [Job Notifications](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-notifications) for more information about turnaround time and managing job notifications.

{% hint style="info" %}
A license is required to use the Job Notifications feature. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}
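The hostname pattern quoted for the `dns.hostname` field above can be checked directly against the examples from the text. Only the regular expression is exercised here; the org-handle suffix rule is a separate platform-side check.

```python
import re

# The regular expression given for dns.hostname, verbatim.
HOSTNAME_RE = re.compile(r"^[a-z][a-z0-9]{2,}-[a-z][a-z0-9-]{2,}[a-z0-9]$")

print(bool(HOSTNAME_RE.match("myprefix-some-name")))  # valid example
print(bool(HOSTNAME_RE.match("ab5-some-name")))       # valid example
print(bool(HOSTNAME_RE.match("x-some-name")))         # prefix too short
```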

#### Outputs

* `id` **string** ID of the created applet object, such as "applet-xxxx".

#### Errors

* InvalidInput
  * A reserved linking string (`$dnanexus_link`) appears as a key in a mapping in `details` but is not the only key in the hash.
  * A reserved linking string (`$dnanexus_link`) appears as the only key in a hash in `details` but has value other than a string.
  * The spec is invalid.
  * All specified bundled dependencies must be in the same region as the specified project.
  * A `nonce` was reused in a request but other inputs had changed, signifying a new and different request.
  * A `nonce` may not exceed 128 bytes.
  * `treeTurnaroundTimeThreshold` must be a non-negative integer less than 2592000.
  * `timeoutPolicy` for all entry points should not exceed 30 days.
  * `instanceType` and `instanceTypeSelector` keywords are mutually exclusive.
  * `instanceTypeSelector` and `clusterSpec` keywords are mutually exclusive.
  * The requested instance types are unavailable (no price has been contractually set for the `billTo` organization).
* SpendingLimitExceeded
  * The `billTo` has reached its spending limit.
* OrgExpired
  * The `billTo` organization has expired.
* PermissionDenied
  * UPLOAD access is required to the specified project.
  * The `billTo` of the project is not licensed to use `jobNotifications`. Contact [DNAnexus Sales](mailto:sales@dnanexus.com) to enable `jobNotifications`.
  * `instanceTypeSelector` keyword requires the `billTo` to have the `instanceTypeSelector` license feature.
* ResourceNotFound
  * The path in `folder` does not exist while `parents` is false.

### API Method: `/applet-xxxx/describe`

#### Specification

Describes an applet object.

Alternatively, you can use the [`/system/describeDataObjects`](https://documentation.dnanexus.com/developer/system-methods#api-method-system-describedataobjects) method to describe many data objects at once.

#### Inputs

* `project` **string** (optional) Project or container ID to be used as a hint for finding the object in an accessible project.
* `defaultFields` **boolean** (optional) Whether to include the default set of fields in the output (the default fields are described in the "Outputs" section below). The selections are overridden by any fields explicitly named in `fields`.
  * Defaults to `false` if `fields` is supplied, `true` otherwise.
* `fields` **mapping** (optional) Include or exclude the specified fields from the output. These selections override the settings in `defaultFields`.
  * **key** — Desired output field. See the "Outputs" section below for valid values.
  * **value** **boolean** — Whether to include the field.

The following options are deprecated (and are not respected if `fields` is present):

* `properties` **boolean** (optional) Whether the properties should be returned. Defaults to `false`.
* `details` **boolean** (optional) Whether the details should also be returned. Defaults to `false`.
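The interaction between `defaultFields` and `fields` can be sketched as follows. This is an illustrative simplification; the field names in `defaults` are a stand-in for the real default set listed under "Outputs" below.

```python
def select_fields(default_set, fields=None, default_fields=None):
    """Choose output fields per the rules above: defaultFields defaults to
    True only when fields is absent, and explicit fields entries override it."""
    if default_fields is None:
        default_fields = fields is None
    selected = set(default_set) if default_fields else set()
    for name, wanted in (fields or {}).items():
        (selected.add if wanted else selected.discard)(name)
    return selected

defaults = {"id", "name", "project"}
# Asking only for "details" suppresses the default set entirely.
print(sorted(select_fields(defaults, fields={"details": True})))
```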

#### Outputs

* `id` **string** The object ID, such as "applet-xxxx".

The following fields are included by default (but can be disabled using `fields` or `defaultFields`):

* `project` **string** ID of the project or container in which the object was found.
* `class` **string** The value "applet".
* `types` **array of strings** Types associated with the object.
* `created` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which this object was created.
* `state` **string** The current state of the object.
  * Must be one of `"open"` or `"closed"`.
* `hidden` **boolean** Whether the object is hidden or not.
* `links` **array of strings** The object IDs that are pointed to from this object, including links found in both the `details` and in `bundledDepends` (if it exists) of the applet.
* `name` **string** The name of the object.
* `folder` **string** The full path to the folder containing the object.
* `sponsored` **boolean** Whether the object is sponsored by DNAnexus.
* `tags` **array of strings** Tags associated with the object.
* `modified` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which the user-provided metadata of the object was last modified.
* `createdBy` **mapping** How the object was created.
  * `user` **string** ID of the user who created the object or launched an execution which created the object.
  * `job` **string** ID of the job that created the object.
    * Only present when a job created the object.
  * `executable` **string** ID of the app or applet that the job was running.
    * Only present when a job created the object.
* `runSpec` **mapping** The run specification of the applet but without the `code` field (use the [`/applet-xxxx/get`](#api-method-applet-xxxx-get) method for obtaining the source code).
* `dxapi` **string** The version of the API used.
* `access` **mapping** The access requirements of the applet.
* `title` **string** The title of the applet.
* `summary` **string** The summary of the applet.
* `description` **string** The description of the applet.
* `developerNotes` **string** The developer notes of the applet.
* `ignoreReuse` **boolean** Whether job reuse is disabled for this applet.
* `httpsApp` **mapping** HTTPS app configuration.
  * `shared_access` **string** HTTPS access restriction for this job.
  * `ports` **array of integers** Ports that are open for inbound access.
  * `dns` **mapping** DNS configuration for the job.
    * `hostname` **string** The URL to access this job in the browser is `https://hostname.dnanexus.cloud` instead of the default `https://job-xxxx.dnanexus.cloud`.
* `treeTurnaroundTimeThreshold` **integer** (nullable) The turnaround time threshold (in seconds) for trees (specifically, root executions) that run this executable. See [Job Notifications](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-notifications) for more information about turnaround time and managing job notifications.

{% hint style="info" %}
A license is required to use the `jobNotifications` feature. Contact [DNAnexus Sales](mailto:sales@dnanexus.com) to enable `jobNotifications`.
{% endhint %}

The following field (included by default) is available if an input specification is specified for the applet:

* `inputSpec` **array of mappings** The input specification of the applet.

The following field (included by default) is available if the applet has an output specification:

* `outputSpec` **array of mappings** The output specification of the applet.

The following field (included by default) is available if the object is sponsored by a third party:

* `sponsoredUntil` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Indicates the expiration time of data sponsorship (this field is only set if the object is sponsored, and if set, the specified time is always in the future).

The following fields are only returned if the corresponding field in the `fields` input is set to `true`:

* `properties` **mapping** Properties associated with the object.
  * **key** — Property name.
  * **value** **string** — Property value.
* `details` **mapping or array** Contents of the object's details.

#### Errors

* ResourceNotFound
  * the specified object does not exist or the specified project does not exist
* InvalidInput
  * the input is not a hash, `project` (if supplied) is not a string, or the value of `properties` (if supplied) is not a boolean
* PermissionDenied
  * VIEW access required for the `project` provided (if any), and VIEW access required for some project containing the specified object (not necessarily the same as the hint provided)

### API Method: `/applet-xxxx/get`

#### Specification

Returns the full specification of the applet, that is, the same output as [`/applet-xxxx/describe`](#api-method-applet-xxxx-describe) but with the `runSpec` field left intact.
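The relationship between `describe` and `get` can be illustrated with a short sketch; the helper below is hypothetical and not part of the API:

```python
import copy

def describe_from_full_spec(full_spec):
    """Derive the /applet-xxxx/describe output from the /applet-xxxx/get
    output: the two are identical except that describe omits runSpec.code."""
    desc = copy.deepcopy(full_spec)
    desc.get("runSpec", {}).pop("code", None)  # drop only the source code
    return desc

full = {"id": "applet-xxxx",
        "runSpec": {"interpreter": "bash", "code": "main() { echo hi; }"}}
desc = describe_from_full_spec(full)
assert "code" not in desc["runSpec"]          # describe hides the code
assert desc["runSpec"]["interpreter"] == "bash"  # other runSpec fields remain
```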

#### Inputs

* None

#### Outputs

* `project` **string** ID of the project or container in which the object was found.
* `id` **string** The object ID, such as "applet-xxxx".
* `class` **string** The value "applet".
* `types` **array of strings** Types associated with the object.
* `created` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which this object was created.
* `state` **string** The current state of the object.
  * Must be one of `"open"` or `"closed"`.
* `hidden` **boolean** Whether the object is hidden or not.
* `links` **array of strings** The object IDs that are pointed to from this object, including links found in both the `details` and in `bundledDepends` (if it exists) of the applet.
* `name` **string** The name of the object.
* `folder` **string** The full path to the folder containing the object.
* `sponsored` **boolean** Whether the object is sponsored by DNAnexus.
* `tags` **array of strings** Tags associated with the object.
* `modified` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which the user-provided metadata of the object was last modified.
* `createdBy` **mapping** How the object was created.
  * `user` **string** ID of the user who created the object or launched an execution which created the object.
  * `job` **string** ID of the job that created the object.
    * Only present when a job created the object.
  * `executable` **string** ID of the app or applet that the job was running.
    * Only present when a job created the object.
* `runSpec` **mapping** The run specification of the applet.
* `dxapi` **string** The version of the API used.
* `access` **mapping** The access requirements of the applet.
* `title` **string** The title of the applet.
* `summary` **string** The summary of the applet.
* `description` **string** The description of the applet.
* `developerNotes` **string** The developer notes of the applet.

If the applet has an input specification:

* `inputSpec` **array of mappings** The input specification of the applet.

If the applet has an output specification:

* `outputSpec` **array of mappings** The output specification of the applet.

#### Errors

* ResourceNotFound
  * the specified object does not exist
* PermissionDenied
  * VIEW access required

### API Method: `/applet-xxxx/run`

#### Specification

Creates a new job that executes the code of this applet. The default entry point for the applet's interpreter (given in the `runSpec.interpreter` field of the applet spec) is called.

| Interpreter | Entry point                                                                         |
| ----------- | ----------------------------------------------------------------------------------- |
| bash        | `main()` in top level scope with no args, if it exists. Also, `$1` is set to `main` |
| python3     | Any function decorated with `@dxpy.entry_point("main")`, called with no args        |

If constraints on inputs are specified in the applet spec and the given inputs do not satisfy those constraints at the time the API call is performed, an InvalidInput error results. This error also occurs if the names of the given inputs do not exactly match the inputs listed in the applet object, or if an input is omitted and no default is listed in the applet object. For an input given as a [job-based object reference](https://documentation.dnanexus.com/developer/api/job-input-and-output#analysis-and-job-based-object-references), an equivalent error may result at job dispatch time, in which case the job fails.

The job might fail for the following reasons (this list is non-exhaustive):

* A reference such as one mentioned in `bundledDepends` could not be accessed using the job's credentials (VIEW access to project context, CONTRIBUTE access to a workspace, VIEW access to public projects)
* A [job-based object reference](https://documentation.dnanexus.com/developer/api/job-input-and-output#analysis-and-job-based-object-references) did not resolve successfully (invalid job, job ID not found, job not in that project, job is in failed state, field does not exist, field does not contain a valid object link).
* An input object does not exist.
* Permission denied accessing an input object.
* An input object is not a data object (things like users, projects, or jobs are not data objects)
* An input object does not satisfy the class constraints.
* An input object does not satisfy the type constraints.
* An input object is not in the "closed" state.
* Insufficient credits.
* The user has too many jobs that are in a non-terminal state.

#### Inputs

* `name` **string** (optional) Name for the resulting job.
  * Defaults to the applet's title if set, otherwise the applet's name.
* `input` **mapping** (required) Input that the applet is launched with.
  * **key** — Input field name. If the applet has an input specification, it must be one of the names of the inputs. Otherwise, it can be any valid input field name.
  * **value** — Input field value.
* `dependsOn` **array of strings** (optional) List of job, analysis, and/or data object IDs. The applet does not begin running any of its entry points until all executions listed have transitioned to the "done" state and all data objects listed are in the "closed" state.
* `project` **string** (required if invoked by a user; optional if invoked from a job with the `detach: true` option; prohibited when invoked from a job with `detach: false`) The ID of the project in which this applet runs, also known as the *project context*. If invoked with `detach: true`, the detached job runs under the given `project` if provided; otherwise, the project context is inherited from the invoking job. If invoked by a user or run as detached, all output objects are cloned into the project context. Otherwise, all output objects are cloned into the temporary workspace of the invoking job. For more information on project context on the DNAnexus Platform, see [project context and temporary workspace](https://documentation.dnanexus.com/developer/api/running-analyses/..#project-context-and-temporary-workspace).
* `folder` **string** (optional) The folder into which objects output by the job are placed. If the folder does not exist when the job completes, the folder (and any parent folders necessary) are created. The folder structure that output objects reside in is replicated within the target folder, for example, if `folder` is set to "/myJobOutput" and the job outputs an object which is in the folder "/mappings/mouse" in the workspace, the object is placed into "/myJobOutput/mappings/mouse". Defaults to `"/"`.
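The folder replication described above can be sketched as a simple path computation; the helper is hypothetical and shown for illustration only:

```python
import posixpath

def resolve_output_path(target_folder, workspace_folder):
    """Compose the `folder` input with an output object's workspace folder:
    the workspace folder structure is replicated under the target folder."""
    # Strip the leading "/" so the workspace folder is treated as a
    # relative subtree of the target folder.
    return posixpath.join(target_folder, workspace_folder.lstrip("/"))

# The example from the text: folder "/myJobOutput", object in "/mappings/mouse".
assert resolve_output_path("/myJobOutput", "/mappings/mouse") == "/myJobOutput/mappings/mouse"
# With the default folder "/", objects keep their workspace paths.
assert resolve_output_path("/", "/mappings/mouse") == "/mappings/mouse"
```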
* `tags` **array of strings** (optional) Tags to associate with the resulting job.
* `properties` **mapping** (optional) Properties to associate with the resulting job.
  * **key** — Property name.
  * **value** **string** — Property value.
* `details` **mapping or array** (optional) JSON object or array that is to be associated with the job. Defaults to `{}`.
* `systemRequirements` **mapping** (optional) Request specific resources for each of the executable's entry points. See the [Requesting Instance Types](#requesting-instance-types) section above for more details.
* `systemRequirementsByExecutable` **mapping** (optional) Request system requirements for all jobs in the resulting execution tree, configurable by executable and by entry point, described in more detail in the [Requesting Instance Types](#requesting-instance-types) section.
* `timeoutPolicyByExecutable` **mapping** (optional) The timeout policies for jobs in the resulting job execution tree, configurable by executable and by entry point within that executable. If unspecified, all jobs in the resulting job execution tree have the default timeout policies present in the [run specifications](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) of their executables. `timeoutPolicyByExecutable` (keyed first by app or applet ID and then by entry point name) propagates down the entire job execution tree; explicitly specified upstream policies always take precedence. If present, includes at least one of the following key-value pairs:
  * **key** — App or applet ID. If an executable is not explicitly specified in `timeoutPolicyByExecutable`, then any job in the resulting job execution tree that runs that executable has the default timeout policy present in the run specification of that executable.
  * **value** **mapping** — Timeout policy for the corresponding executable. Includes at least one of the following key-value pairs:
    * **key** — Entry point name or `"*"` to indicate all entry points not explicitly specified in this mapping. If an entry point name is not explicitly specified and `"*"` is not present, then any job in the resulting job execution tree that runs the corresponding executable at that entry point has the default timeout policy present in the run specification of the corresponding executable.
    * **value** **mapping** — Timeout for a job running the corresponding executable at the corresponding entry point. Includes at least one of the following key-value pairs:
      * **key** — Unit of time.
        * Must be one of `"days"`, `"hours"`, or `"minutes"`.
      * **value** **number** — Amount of time for the corresponding time unit. Must be non-negative. The effective timeout is the sum of the units of time represented in this mapping.
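The effective-timeout rule above (the sum of the time units in the mapping) can be sketched as follows; the applet ID and the helper are illustrative, not part of the API:

```python
def effective_timeout_seconds(timeout):
    """Compute the effective timeout as the sum of the time units in the
    mapping, e.g. {"hours": 2, "minutes": 30}. Amounts must be non-negative."""
    seconds_per_unit = {"days": 86400, "hours": 3600, "minutes": 60}
    for unit, amount in timeout.items():
        if unit not in seconds_per_unit or amount < 0:
            raise ValueError(f"invalid timeout entry: {unit}={amount}")
    return sum(seconds_per_unit[u] * a for u, a in timeout.items())

# A policy keyed by executable and entry point, as described above
# ("*" covers all entry points not listed explicitly).
policy = {"applet-xxxx": {"*": {"hours": 2, "minutes": 30}}}
assert effective_timeout_seconds(policy["applet-xxxx"]["*"]) == 9000
```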
* `executionPolicy` **mapping** (optional) A collection of options that govern automatic job restart on certain types of failures. The format of this field is identical to that of the `executionPolicy` field in the [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) supplied to [`/applet/new`](#api-method-applet-new). It can partially or fully override the `executionPolicy` found in the applet's run specification (if present).
* `delayWorkspaceDestruction` **boolean** (optional) If set to true, the temporary workspace created for the resulting job is preserved for 3 days after the job either succeeds or fails.
  * Defaults to `false` for root executions (launched by a user or detached from another job); otherwise, defaults to the parent job's `delayWorkspaceDestruction` setting.
* `allowSSH` **array of strings** (optional) Array of IP addresses or [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) blocks (up to /16) from which SSH access is allowed to the user by the worker running this job. Array may also include `"*"` which is interpreted as the IP address of the client issuing this API call as seen by the API server. See [Connecting to Jobs](https://documentation.dnanexus.com/developer/apps/execution-environment/connecting-to-jobs) for more information. Defaults to `[]`.
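Client-side validation of `allowSSH` entries might look like the following sketch. The helper is hypothetical; the server performs the authoritative check, and this simplified version enforces the /16 limit for IPv4 entries only:

```python
import ipaddress

def validate_allow_ssh(entries):
    """Check that each allowSSH entry is "*", an IP address, or a
    CIDR block no wider than /16 (simplified, IPv4-only check)."""
    for entry in entries:
        if entry == "*":
            continue  # interpreted server-side as the caller's IP
        network = ipaddress.ip_network(entry, strict=False)  # ValueError if malformed
        if isinstance(network, ipaddress.IPv4Network) and network.prefixlen < 16:
            raise ValueError(f"{entry}: CIDR blocks wider than /16 are not allowed")

validate_allow_ssh(["203.0.113.7", "198.51.100.0/24", "*"])  # accepted
try:
    validate_allow_ssh(["10.0.0.0/8"])  # wider than /16: rejected
except ValueError:
    pass
```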
* `debug` **mapping** (optional) Specify debugging options for running the executable. This field is only accepted when this call is made by a user (and not a job). Defaults to `{}`.
  * `debugOn` **array of strings** (optional) Array of job errors after which the job's worker should be kept running for debugging purposes, offering a chance to SSH into the worker before worker termination (assuming SSH has been enabled). This option applies to all jobs in the execution tree. Jobs in this state for longer than 2 days are automatically terminated but can be terminated earlier. Defaults to `[]`.
    * Must be one of `"ExecutionError"`, `"AppError"`, `"AppInternalError"`, or `"AppInsufficientResourceError"`. For a description of each error type, see [Types of Errors](https://documentation.dnanexus.com/developer/apps/error-information).
* `singleContext` **boolean** (optional) If true, the resulting job and its descendants can only use the authentication token given to it at the onset. Use of any other authentication token results in an error. This option offers extra security, ensuring data cannot leak out of the given context. In restricted projects, the user-specified value is ignored and `singleContext: true` is used instead.
* `ignoreReuse` **boolean** (optional) If true, [Smart Reuse](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-reuse) is disabled for this execution. If false, Smart Reuse is enabled for this execution (if the organization has the feature enabled). Takes precedence over the value supplied for `applet-xxxx/new`.
* `nonce` **string** (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single job is created. For more information, see [Nonces](https://documentation.dnanexus.com/developer/api/nonces).
* `detach` **boolean** (optional) This option has no effect when the API is invoked by a user. If invoked from a job with `detach` set to true, the new job is detached from the creator job and appears as a typical root execution. A failure in the detached job does not terminate the creator job, and vice versa. A detached job inherits neither access to the workspace of its creator job nor the creator job's priority. A detached job's access permissions are the intersection (most restrictive) of the creator job's access permissions and the permissions requested by the detached job's executable. To launch a detached job, the creator job must have CONTRIBUTE or higher access to the project in which the detached job is launched. The `billTo` of the project in which the creator job is running must have a license to launch detached executions.

{% hint style="info" %}
For more information on a license that supports launching detached executions, [contact DNAnexus Sales](mailto:sales@dnanexus.com).
{% endhint %}

* `rank` **integer** (optional) The rank indicates the priority in which the executions generated from this executable are processed. The higher the rank, the more prioritized it is. If the execution is not a root execution, it inherits its parent's rank. Defaults to `0`. Must be between -1024 and 1023.

{% hint style="info" %}
A license is required to use the Job Ranking feature. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

* `costLimit` **number** (optional) The limit of the cost that this execution tree should accrue before termination. This field is ignored if this is not a root execution.
* `headJobOnDemand` **boolean** (optional) If true, the resulting master job is allocated to an on-demand instance, regardless of its scheduling priority. Its descendant jobs (if any) inherit its scheduling priority, and their instance allocations are independent of this option. This option overrides the applet's `headJobOnDemand` setting (if any).
* `preserveJobOutputs` **mapping** (optional, nullable) Preserves all cloneable outputs of every completed, non-jobReused job in the execution tree launched by this API call in the root execution project, even if the root execution ends up failing. Preserving job outputs in the project trades higher storage costs for the possibility of subsequent job reuse. Defaults to `null`.

  When a non-jobReused job in a root execution tree launched with non-null `preserveJobOutputs` enters the "done" state, all cloneable objects referenced by `$dnanexus_link` in the job's `output` field are cloned to the project folder described by `preserveJobOutputs.folder`, unless the output objects already appear elsewhere in the project. Cloneable objects include files, records, applets, and closed workflows, but not databases. If the folder specified by `preserveJobOutputs.folder` does not exist in the project, the system creates the folder and its parents.

  As the root job or the root analysis' stages complete, the regular outputs of the root execution are moved from `preserveJobOutputs.folder` to the regular output folders of the root execution. For example, if you run a root execution to completion without `preserveJobOutputs`, its outputs appear in the project in the root execution's output folders. Running the same execution with `preserveJobOutputs.folder` set to `"/pjo_folder"` produces the same outputs in the same folders, plus any job outputs that are not outputs of the root execution in `"/pjo_folder"`.

  The `preserveJobOutputs` argument can be specified only when starting a root execution or a detached job.

  The `preserveJobOutputs` value, if not **null**, must be a mapping that may contain the following:

  * **key** — `"folder"` **string** (optional).
  * **value** **string** — `path_to_folder`. Specifies a folder in the root execution project where the outputs of jobs that are part of the launched execution are stored. A `path_to_folder` starting with `/` is interpreted as an absolute folder path in the project the job is running in; a `path_to_folder` not starting with `/` is interpreted as a path relative to the root execution's `folder` field. An empty `path_to_folder` (`""`) preserves job outputs in the folder described by the root execution's `folder` field. If the `preserveJobOutputs` mapping has no `folder` key, the system uses the default value `"intermediateJobOutputs"`; for example, `"preserveJobOutputs": {}` is equivalent to `"preserveJobOutputs": {"folder": "intermediateJobOutputs"}`.

    It is recommended to place `preserveJobOutputs` outputs for different root executions into different folders, so as not to create a single folder with a large (>450K) number of files.
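The folder-resolution rules for `preserveJobOutputs.folder` can be sketched as follows; the helper is hypothetical and shown for illustration only:

```python
import posixpath

def resolve_preserve_folder(preserve_job_outputs, root_folder):
    """Resolve where preserved job outputs land. `root_folder` is the
    root execution's `folder` field."""
    if preserve_job_outputs is None:
        return None  # feature disabled (the default)
    path = preserve_job_outputs.get("folder", "intermediateJobOutputs")
    if path == "":
        return root_folder            # empty string: use the root folder itself
    if path.startswith("/"):
        return path                   # absolute path in the project
    return posixpath.join(root_folder, path)  # relative to the root folder

assert resolve_preserve_folder({}, "/") == "/intermediateJobOutputs"   # default key
assert resolve_preserve_folder({"folder": "/pjo_folder"}, "/out") == "/pjo_folder"
assert resolve_preserve_folder({"folder": "sub"}, "/out") == "/out/sub"
assert resolve_preserve_folder(None, "/") is None
```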

{% hint style="info" %}
A license is required to use preserveJobOutputs. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

* `detailedJobMetrics` **boolean** (optional) Requests detailed metrics collection for jobs if set to true. This flag can be specified for root executions and applies to all jobs in the root execution. The list of detailed metrics collected every 60 seconds and viewable for 15 days from the start of a job is available using [`dx watch --metrics`](https://documentation.dnanexus.com/user/helpstrings-of-sdk-command-line-utilities#watch-metrics-help).
  * Defaults to the project `billTo`'s `detailedJobMetricsCollectDefault` policy setting or `false` if org default is not set.

{% hint style="info" %}
A license is required to use `detailedJobMetrics`. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

#### Outputs

* `id` **string** ID of the created job, which is a string in the form `job-xxxx`.

#### Errors

* ResourceNotFound
  * The specified applet object or project context does not exist.
  * One of the IDs listed in `dependsOn` does not exist.
* PermissionDenied
  * The requesting user must have VIEW access to all objects listed in `dependsOn`, and to all project contexts of all jobs listed in `dependsOn`.
  * The requesting user must have VIEW access to the applet object.
  * If invoked by a user, then the requesting user must have CONTRIBUTE access to the project context.
  * The requesting user must be able to describe all jobs used in a [job-based object reference](https://documentation.dnanexus.com/developer/api/job-input-and-output#analysis-and-job-based-object-references) -- see [`/job-xxxx/describe`](#api-method-job-xxxx-describe).
  * The requesting user has too many non-terminal jobs (65536, by default) and must wait for some to finish before creating more. Non-terminal jobs include those in `running` or `runnable` states.
  * The `billTo` of the job's project must be licensed to start detached executions when invoked from the job with `detach: true` argument.
  * If `rank` is provided and the `billTo` does not have the license feature `executionRankEnabled` set to true.
  * If `preserveJobOutputs` is not **null** and the `billTo` of the project where execution is attempted does not have the `preserveJobOutputs` license.
  * If `detailedJobMetrics` is set to true and the project's `billTo` does not have the `detailedJobMetrics` license feature set to true.
  * `app{let}-xxxx` cannot run in `project-xxxx` because the executable's `httpsApp.shared_access` must be `NONE` to run with isolated browsing.
  * The project is associated with a [TRE](https://documentation.dnanexus.com/developer/api/trusted-research-environments) and `allowSSH` was specified. SSH access is not allowed in TRE projects. See [Execution Restrictions](https://documentation.dnanexus.com/developer/trusted-research-environments#execution-restrictions).
  * The project is associated with a TRE and the executable's `httpsApp` settings are not permitted. In TRE projects, only allowlisted HTTPS apps may expose HTTPS, and only on port `443`. See [Execution Restrictions](https://documentation.dnanexus.com/developer/trusted-research-environments#execution-restrictions).
  * The project is associated with a TRE and the executable is not in the project's `allowedExecutables` list, if the TRE policy restricts allowed executables.
* InvalidInput
  * `input` does not satisfy the input specification of this applet. An additional `details` field is provided in the error JSON, for example:

    ```json
    {
      "error": {
        "type": "InvalidInput",
        "message": "i/o value for fieldname is not int",
        "details": {
          "field": "fieldname",
          "reason": "class",
          "expected": "int"
        }
      }
    }
    ```

    The possible error reasons and the meaning of each field are described below.
  * If invoked by a user, then `project` must be specified.
  * If invoked by a job, then `project` must not be specified.
  * The project context must be in the same region as this applet.
  * All data object inputs that are specified directly must be in the same region as this applet.
  * All inputs that are job-based object references must refer to a job that was run in the same region as this applet.
  * `allowSSH` accepts only IP addresses or [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) blocks up to /16.
  * A `nonce` was reused in a request but other inputs had changed signifying a new and different request.
  * A `nonce` may not exceed 128 bytes.
  * `preserveJobOutputs.folder` value is a syntactically invalid path to a folder.
  * `preserveJobOutputs` is specified when launching a non-detached execution from a job.
  * `detailedJobMetrics` cannot be specified when launching a non-detached execution from a job.
  * The `timeoutPolicyByExecutable` entry for an executable must not be `null`
  * The `timeoutPolicyByExecutable` entry for an entry point of an executable must not be `null`
  * The effective timeout for any entry point of any executable in `timeoutPolicyByExecutable` must not exceed 30 days
  * Expected key `timeoutPolicyByExecutable.*` of input to match `/^(app|applet)-[0-9A-Za-z]{24}$/`
  * The requested instance types are unavailable (no price has been contractually set for the `billTo` organization)
  * `systemRequirements.*.instanceTypeSelector` keyword is not allowed at runtime
  * `systemRequirementsByExecutable.*.*.instanceTypeSelector` keyword is not allowed at runtime
* InvalidState
  * Some specified input is not in the `closed` state.
  * Some job in `dependsOn` has failed or has been terminated.

The following list describes the possible error reasons and what the fields mean:

* `class`: the specified `field` was expected to have class `expected`. If the input spec required an array but it was not an array, the value for `expected` is `array`. If the input spec required an array but an element was of the wrong class, then the value for `expected` is the actual class the entry was expected to be, for example, `record`.
* `type`: the specified `field` either needs to have the type `expected` or does not satisfy the or-condition in `expected`
* `missing`: the specified `field` was not provided but is required in the input specification
* `unrecognized`: the given `field` is not present in the input specification
* `malformedLink`: incorrect syntax was given either for a [job-based object reference](https://documentation.dnanexus.com/developer/api/job-input-and-output#analysis-and-job-based-object-references) or for a link to a data object. Possible values for `expected` include:
  * `key field`: the key `field` was missing in a job-based object reference
  * `only two keys`: exactly two keys were expected in the hash for the job-based object reference
  * `key $dnanexus_link`: the key `$dnanexus_link` was missing in a link for specifying a data object
* `choices`: the specified `field` must be one of the values in `expected`
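A client might translate these error details into a human-readable message, as in this sketch; the helper and its message templates are illustrative, not part of the API:

```python
def explain_invalid_input(error):
    """Turn the `details` of an InvalidInput error (as in the JSON example
    above) into a one-line explanation."""
    details = error["error"]["details"]
    field, reason = details["field"], details["reason"]
    templates = {
        "class": "field '{f}' must have class '{e}'",
        "type": "field '{f}' must satisfy type constraint '{e}'",
        "missing": "required field '{f}' was not provided",
        "unrecognized": "field '{f}' is not in the input specification",
        "choices": "field '{f}' must be one of {e}",
    }
    template = templates.get(reason, "field '{f}' failed check '{r}'")
    return template.format(f=field, e=details.get("expected"), r=reason)

err = {"error": {"type": "InvalidInput",
                 "message": "i/o value for fieldname is not int",
                 "details": {"field": "fieldname", "reason": "class",
                             "expected": "int"}}}
assert explain_invalid_input(err) == "field 'fieldname' must have class 'int'"
```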

### API Method: `/applet-xxxx/validateBatch`

#### Specification

This API call verifies that a set of input values for a particular applet can be used to launch a batch of jobs in parallel. The applet must have an input specification defined.

Batch and common inputs:

`batchInput`: mapping of inputs corresponding to batches. The nth value of each array corresponds to the nth execution of the applet. Including a `null` value at a given position means that the corresponding applet input field is optional and that its default value, if defined, should be used. For example:

```json
{
  "a": [{"$dnanexus_link": "file-xxxx"}, {"$dnanexus_link": "file-yyyy"}, ...],
  "b": [1, null, ...]
}
```

`commonInput`: mapping of non-batch, constant inputs common to all batch jobs, for example:

```json
{
  "c": "foo"
}
```

File references:

`files`: list of files (passed as `$dnanexus_link` references); it must be a superset of the files included in `batchInput` and/or `commonInput`, for example:

```json
[
  {"$dnanexus_link": "file-xxxx"},
  {"$dnanexus_link": "file-yyyy"}
]
```

Output: list of mappings, where each mapping corresponds to one expanded batch call. The nth mapping contains the input values with which the nth execution of the applet runs, for example:

```json
[
  {"a": {"$dnanexus_link": "file-xxxx"}, "b": 1, "c": "foo"},
  {"a": {"$dnanexus_link": "file-yyyy"}, "b": null, "c": "foo"}
]
```

The call performs the following validation:

* the input types match the expected applet input field types,
* the provided inputs are sufficient to run the applet,
* `null` values appear only among values for inputs that are optional or have default values specified,
* all arrays in `batchInput` are of equal size,
* every file referred to in `batchInput` exists in the `files` input.
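These checks (aside from the file-reference check) and the batch expansion can be sketched as follows; the simplified `input_spec` shape is an assumption for illustration, not the real input-specification format:

```python
def validate_and_expand_batch(batch_input, common_input, input_spec):
    """Sketch of the checks /applet-xxxx/validateBatch performs (file-reference
    checks omitted). `input_spec` maps field name -> {"optional": bool,
    "hasDefault": bool}, a simplified stand-in for the real input spec."""
    lengths = {len(v) for v in batch_input.values()}
    if len(lengths) != 1:
        raise ValueError("all batchInput arrays must have equal length")
    for name, values in batch_input.items():
        if name not in input_spec:
            raise ValueError(f"{name} is not in the input specification")
        spec = input_spec[name]
        if None in values and not (spec["optional"] or spec["hasDefault"]):
            raise ValueError(f"{name} is required and has no default; null not allowed")
    n = lengths.pop()
    # The nth expanded mapping merges the common inputs with the nth batch values.
    return [dict(common_input, **{k: batch_input[k][i] for k in batch_input})
            for i in range(n)]

spec = {"a": {"optional": False, "hasDefault": False},
        "b": {"optional": True, "hasDefault": False}}
expanded = validate_and_expand_batch({"a": ["x1", "x2"], "b": [1, None]},
                                     {"c": "foo"}, spec)
assert expanded == [{"a": "x1", "b": 1, "c": "foo"},
                    {"a": "x2", "b": None, "c": "foo"}]
```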

#### Inputs

* `batchInput` **mapping** (required) Batch inputs that the applet is launched with.
  * **key** — Input field name. It must be one of the names of the inputs defined in the applet's input specification.
  * **value** — **array** of input field values, one per batch execution.
* `commonInput` **mapping** (optional) Constant inputs common to all batch jobs.
  * **key** — Input field name. It must be one of the names of the inputs defined in the applet's input specification.
  * **value** — Input field value, shared by all batch executions.
* `files` **list** (optional) Files that are needed to run the batch jobs, provided as `$dnanexus_link` references. The list must include all files referenced in `batchInput` or `commonInput`.

#### Outputs

* `expandedBatch` **list of mappings** Each mapping contains the input values for one execution of the applet in batch mode.

#### Errors

* InvalidInput
  * `inputSpec` must be specified for the applet
  * Expected `batchInput` to be a JSON object
  * Expected `commonInput` to be a JSON object
  * Expected `files` to be an array of `$dnanexus_link` references to files
  * The `batchInput` field is required but empty array was provided
  * Expected the value of `batchInput` for an applet input field to be an array
  * Expected the length of all arrays in `batchInput` to be equal
  * The applet input field value must be specified in `batchInput`
  * The applet input field is not defined in the input specification of the applet
  * All the values of a specific `batchInput` field must be provided (cannot be `null`) since the field is required and has no default value
  * Expected all the files in `batchInput` and `commonInput` to be referenced in the `files` input array

## Job API Method Specifications

### API Method: `/job/new`

#### Specification

This API call may only be made from within an executing job. This call creates a new job which executes a particular function (from the same applet as the one the current job is running) with a particular input. The input is checked for links, and any [job-based object reference](https://documentation.dnanexus.com/developer/api/job-input-and-output#analysis-and-job-based-object-references) is honored. However, the input is not checked against the applet spec. Since this is done from inside another job, the new job inherits the same workspace and project context -- no objects are cloned, and no other modification takes place in the workspace.

The entry point for the job's execution is determined as follows, where `f` is the string given in the `function` parameter in the input:

| Interpreter | Entry point                                                                   |
| ----------- | ----------------------------------------------------------------------------- |
| bash        | `f()` in top level scope with no args, if it exists. Also, `$1` is set to `f` |
| python3     | Any function decorated with `@dxpy.entry_point("f")`                          |

See [Execution Environment Reference](https://documentation.dnanexus.com/developer/apps/execution-environment) for more info.

This call fails if the specified OAuth2 token does not internally represent an executing job.

The system tracks the parent-child relationship between jobs. When you create a new job, the system records its parent job, which is visible in the `parent` field when describing the new job. This relationship affects job state progression. A parent job that has finished its execution remains in the `waiting_on_output` state until all its child jobs reach the `done` state. For more information about job states and transitions, see [Job Lifecycle](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle).

The new job may fail for at least the following reasons:

* A reference such as one mentioned in `bundledDepends` could not be accessed using the job's credentials (VIEW access to project context, CONTRIBUTE access to workspace, VIEW access to public projects)
* A [job-based object reference](https://documentation.dnanexus.com/developer/api/job-input-and-output#analysis-and-job-based-object-references) did not resolve successfully (invalid job, job ID not found, job not in that project, job is in failed state, field does not exist, field does not contain a valid object link).
* Insufficient credits.

#### Inputs

* `name` **string** (optional) Name for the resulting job. Defaults to `"<parent job's name>:<function>"`.
* `input` **mapping** (required) Input that the job is launched with. No syntax checking occurs, but the mapping is checked for links, and dependencies are created on any open data objects or unfinished jobs accordingly.
  * **key** — Input field name.
  * **value** — Input field value.
* `dependsOn` **array of strings** (optional) List of job, analysis and/or data object IDs. The newly created job does not run until all executions listed in `dependsOn` have transitioned to the `done` state, and all data objects listed are in the `closed` state.
* `function` **string** (required) The name of the entry point or function of the applet's code that is executed.
* `tags` **array of strings** (optional) Tags to associate with the resulting job.
* `properties` **mapping** (optional) Properties to associate with the resulting job.
  * **key** — Property name.
  * **value** **string** — Property value.
* `details` **mapping or array** (optional) JSON object or array that is to be associated with the job. Defaults to `{}`.
* `systemRequirements` **mapping** (optional) Request specific resources for each of the executable's entry points. See the [Requesting Instance Types](#requesting-instance-types) section above for more details.
* `systemRequirementsByExecutable` **mapping** (optional) Request system requirements for all jobs in the resulting execution subtree, configurable by executable and by entry point, described in more detail in the [Requesting Instance Types](#requesting-instance-types) section.
* `timeoutPolicyByExecutable` **mapping** (optional) Similar to the `timeoutPolicyByExecutable` field supplied to [`/applet-xxxx/run`](#api-method-applet-xxxx-run).
* `ignoreReuse` **boolean** (optional) If true, [Smart Reuse](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-reuse) is disabled for this execution. If false, Smart Reuse is enabled for this execution (if the organization has the feature enabled). Takes precedence over the value supplied for `applet-xxxx/new`.
* `singleContext` **boolean** (optional) If true, the resulting job and its descendants are only allowed to use the authentication token given to them at the onset; use of any other authentication token results in an error. This option offers extra security to ensure data cannot leak out of your given context. In restricted projects, the user-specified value is ignored and `singleContext: true` is used instead.
* `nonce` **string** (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single job is created. For more information, see [Nonces](https://documentation.dnanexus.com/developer/api/nonces).
* `headJobOnDemand` **boolean** (optional) If true, then the resulting root job is allocated to an on-demand instance, regardless of its scheduling priority. Its descendent jobs (if any) inherit its scheduling priority, and their instance allocations are independent from this option. This option overrides the settings in the app's `headJobOnDemand` (if any).
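
As a sketch under the field definitions above, a `/job/new` request body might be assembled like this; only `function` and `input` are required, and the IDs used here are hypothetical placeholders:

```python
# Hedged sketch: building a /job/new request body from the fields documented
# above. Only `function` and `input` are required; all IDs are placeholders.
def new_job_request(function, job_input, name=None, depends_on=None,
                    tags=None, properties=None, nonce=None):
    body = {"function": function, "input": job_input}
    if name is not None:
        body["name"] = name
    if depends_on:
        body["dependsOn"] = depends_on  # job/analysis/data object IDs to wait on
    if tags:
        body["tags"] = tags
    if properties:
        body["properties"] = properties
    if nonce is not None:
        body["nonce"] = nonce  # idempotency key, at most 128 bytes
    return body

body = new_job_request(
    "postprocess",
    {"reads": {"$dnanexus_link": "file-xxxx"}},  # a DNAnexus link (placeholder ID)
    depends_on=["job-xxxx"],                     # placeholder job ID
    tags=["scatter"],
)
```

The resulting mapping would be sent as the JSON body of the `/job/new` call; through `dxpy`, the equivalent convenience wrapper is `dxpy.new_dxjob`.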

#### Outputs

* `id` **string** ID of the created job, which is a string in the form `job-xxxx`.

#### Errors

* InvalidAuthentication (the usual reasons InvalidAuthentication is thrown, or the auth token used is not a token issued to a job)
* ResourceNotFound (one of the IDs listed in `dependsOn` does not exist)
* PermissionDenied (VIEW access is required for any objects listed in `dependsOn` and for the project contexts of any jobs listed in `dependsOn`, as well as the ability to describe any job used in a job-based object reference)
* InvalidInput
  * The input is not a hash
  * `input` is missing or is not a hash
  * an invalid link syntax appears in the `input`
  * `dependsOn`, if given, is not an array of strings
  * `details`, if given, is not a conformant JSON object
  * A property key exceeds 100 bytes or a property value exceeds 700 bytes (sizes measured in their UTF-8 encoding)
  * A `nonce` was reused in a request, but other inputs had changed, signifying a new and different request
  * A `nonce` exceeds 128 bytes
  * The requested instance types are unavailable (no price has been contractually set for the `billTo` organization)
  * `runSpec.systemRequirements.*.instanceTypeSelector` keyword is not allowed at runtime
  * `runSpec.systemRequirementsByExecutable.*.*.instanceTypeSelector` keyword is not allowed at runtime
* InvalidState
  * some job in `dependsOn` has already failed or been terminated

### API Method: `/job-xxxx/describe`

#### Specification

Describes a job object. Jobs are created when you run an applet or app, or when a running job launches another job using the `new` job class method. Every job is associated with an applet or app and runs within a project context (shown in the `project` field). The `launchedBy` field identifies the user who started the execution and is inherited by all child jobs, including any executables they launch. Jobs always have a specific state. See the [Job Lifecycle](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle) section for details on possible states.

If you're developing reorganization apps that need to check the status of the running job, you can examine the `dependsOn` field before the complete analysis description becomes available. Use `dx describe analysis-xxx --json | jq -r .dependsOn` or equivalent `dxpy` code. An empty array `[]` indicates the job no longer depends on anything. This typically means it has reached `done` status and you can proceed. If the output contains job or subanalysis IDs, the job is not ready and your script should wait.
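
A minimal sketch of that readiness check, operating on the describe result as a plain mapping (field names as documented below; the polling loop and the actual API call are assumed to be handled elsewhere):

```python
# Sketch of the readiness check described above: an empty `dependsOn` together
# with a terminal `done` state means the execution is ready to be consumed.
def is_ready(desc):
    return desc.get("dependsOn") == [] and desc.get("state") == "done"

print(is_ready({"dependsOn": [], "state": "done"}))               # True
print(is_ready({"dependsOn": ["job-xxxx"], "state": "running"}))  # False
```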

#### Inputs

* `defaultFields` **boolean** (optional) Whether to include the default set of fields in the output (default fields are described in the "Outputs" section below). These selections are overridden by any fields explicitly named in `fields`.
  * Defaults to `false` if `fields` is supplied, `true` otherwise.
* `fields` **mapping** (optional) Include or exclude the specified fields from the output. These selections override the settings in `defaultFields`.
  * **key** — Desired output field. See the "Outputs" section below for valid values.
  * **value** **boolean** — Whether to include the field.
* `try` **non-negative integer** (optional) Specifies a particular try of a restarted job. Value of 0 refers to the first try. Defaults to the latest try for the specified job ID. This is the try with the largest `try` attribute. See [Restartable Jobs](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle#restartable-jobs) section for details.

The following option is deprecated (and is not respected if `fields` is present):

* `io` **boolean** (optional) Whether the input and output fields (`runInput`, `originalInput`, `input`, and `output`) should be returned. Defaults to `true`.
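
For example, a describe call that asks for only a handful of fields could use an input like this sketch; with `fields` present, `defaultFields` defaults to `false`, so only the named fields (plus `id`) are returned:

```python
# Sketch: a /job-xxxx/describe input requesting only selected fields
# for a specific try of a restarted job.
describe_input = {
    "fields": {
        "state": True,
        "dependsOn": True,
        "failureReason": True,
    },
    "try": 0,  # describe the first try; omit to get the latest try
}
```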

#### Outputs

* `id` **string** The job ID, which is the string `job-xxxx`.

The following fields are included by default (but can be disabled using `fields`):

* `try` **non-negative integer** (nullable) Returns the try for this job, with 0 corresponding to the first try, 1 corresponding to the second try for restarted jobs and so on. Returns `null` for jobs belonging to root executions launched before July 12, 2023 00:13 UTC, in which case information for the latest job try is returned.
* `class` **string** The value `job`.
* `name` **string** The name of the job try.
* `executableName` **string** The name of the executable (applet or app) that the job was created to run.
* `created` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which this job was created. All tries of this job have the same `created` value corresponding to creation time of the first job try.
* `tryCreated` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) (nullable) Time at which this job's `try` was created, or `null` for jobs belonging to root executions launched before July 12, 2023 00:13 UTC. For job try 0, this field has the same value as the `created` field.
* `modified` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which this job try was last updated.
* `startedRunning` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which this job try transitioned into the `running` state (see [Job Lifecycle](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle)).
  * Only present when the transition has occurred.
* `stoppedRunning` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which this job try transitioned out of the `running` state (see [Job Lifecycle](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle)).
  * Only present when the transition has occurred.
* `egressReport` **mapping or undefined** A mapping detailing the total bytes of egress for a particular job try.
  * `regionLocalEgress` **integer** Amount in bytes of data transferred between IP addresses in the same cloud region.
  * `internetEgress` **integer** Amount in bytes of data transferred to IP addresses outside of the cloud provider.
  * `interRegionEgress` **integer** Amount in bytes of data transferred to IP addresses in other regions of the cloud provider.
* `billTo` **string** ID of the account to which any costs associated with this job are billed.
* `project` **string** The project context associated with this job.
* `folder` **string** The output folder in which the outputs of this job's master job are placed.
* `rootExecution` **string** ID of the top-level job or analysis in the execution tree.
* `parentJob` **string** (nullable) ID of the job which created this job, or `null` if this job is an origin job.
* `parentJobTry` **non-negative integer** (nullable) Returns `null` if the job try being described had no parent, or if the parent itself had a `null` `try` attribute. Otherwise, the described try of this job was launched from try `parentJobTry` of `parentJob`.
* `originJob` **string** The closest ancestor job whose `parentJob` is `null`, either because it was run by a user directly or was run as a stage in an analysis.
* `detachedFrom` **string** (nullable) The ID of the job this job was detached from via the `detach` option, or `null` if not detached.
* `detachedFromTry` **non-negative integer** (nullable) If this job was detached from a job, `detachedFrom` and `detachedFromTry` describe the specific try of the job this job was detached from. Returns `null` if this job was not detached from another job or if the `detachedFrom` had a `null` `try` attribute.
* `parentAnalysis` **string** (nullable) ID of the analysis, present for an origin job that is run as a stage in an analysis, or `null` otherwise.
* `analysis` **string** (nullable) ID of the nearest ancestor analysis in the execution tree, if one exists, or `null` otherwise.
* `stage` **string** (nullable) The ID of the stage this job is part of, or `null` if this job was not run as part of a stage in an analysis.
* `state` **string** The job state. See [Job Lifecycle](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle) for more details on job states.
  * Must be one of `"idle"`, `"waiting_on_input"`, `"runnable"`, `"running"`, `"waiting_on_output"`, `"done"`, `"debug_hold"`, `"restartable"`, `"failed"`, `"terminating"`, or `"terminated"`.
* `stateTransitions` **array of mappings** Each element in the list indicates a time at which the state of the job try changed. The initial state of a job try is always `idle` when it is created and is not included in the list.
  * `newState` **string** The new state, for example `runnable`.
  * `setAt` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which the new state was set for the job try.
* `workspace` **string** ID of the temporary workspace, such as `container-xxxx`, assigned to the job try.
  * Only present when the workspace has been allocated.
* `launchedBy` **string** ID of the user who launched the original job.
* `function` **string** Name of the function, or entry point, that this job is running.
* `tags` **array of strings** Tags associated with the job try.
* `properties` **mapping** Properties associated with the job try.
  * **key** — Property name.
  * **value** **string** — Property value.
* `finalPriority` **string** The final priority this job try was run at. If the job try was run on an on-demand instance, `finalPriority` is set to `high` regardless of the original priority setting.
* `rank` **integer** The rank assigned to the job and its child executions, in the range \[-1024, 1023].

{% hint style="info" %}
A license is required to use the Job Ranking feature. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}
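
The `stateTransitions` array above lends itself to simple post-processing. For instance, this hedged sketch derives how long a job try spent in each state, assuming timestamps expressed as milliseconds since the epoch:

```python
# Sketch: time spent in each state, computed from `stateTransitions`.
# Timestamps are assumed to be milliseconds since the epoch.
def state_durations(transitions, now_ms):
    """Return {state: milliseconds spent} for each state entered."""
    durations = {}
    for i, t in enumerate(transitions):
        # A state lasts until the next transition, or until `now_ms` for the last one.
        end = transitions[i + 1]["setAt"] if i + 1 < len(transitions) else now_ms
        durations[t["newState"]] = durations.get(t["newState"], 0) + (end - t["setAt"])
    return durations

transitions = [
    {"newState": "runnable", "setAt": 1_000},
    {"newState": "running", "setAt": 5_000},
    {"newState": "done", "setAt": 65_000},
]
print(state_durations(transitions, now_ms=65_000))
# {'runnable': 4000, 'running': 60000, 'done': 0}
```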

* `details` **mapping or array** JSON details that were stored with this job.
* `systemRequirements` **mapping** Resolved resources requested for each of the executable's entry points based on `mergedSystemRequirementsByExecutable`, `runStageSystemRequirements`, `runSystemRequirements`, system requirements embedded in workflow stages, and `systemRequirements` supplied to `/executable/new`. See the [Requesting Instance Types](#requesting-instance-types) section above for more details.
* `executionPolicy` **mapping** Options specified for this job to perform automatic restart on certain types of failures. The format of this field is identical to that of the `executionPolicy` field in the [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification) supplied to [`/applet/new`](#api-method-applet-new). This hash is computed from the following sources, in order of precedence:
  * Values given to [`/app-xxxx/run`](https://documentation.dnanexus.com/developer/api/apps#api-method-app-xxxx-yyyy-run), [`/applet-xxxx/run`](#api-method-applet-xxxx-run), or [`/workflow-xxxx/run`](https://documentation.dnanexus.com/developer/api/workflows-and-analyses#api-method-workflow-xxxx-run)
  * Values given to [`/workflow-xxxx/addStage`](https://documentation.dnanexus.com/developer/api/workflows-and-analyses#api-method-workflow-xxxx-addstage)
  * Values given to [`/applet/new`](#api-method-applet-new) in the [run specification](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#run-specification)
* `timeout` **integer** The effective timeout, in milliseconds, for this job. This number is at most 30 days (2592000000 milliseconds) for most jobs. Jobs launched before December 19, 2024, and jobs billed to orgs with the `allowJobsWithoutTimeout` license, may have this field set to `undefined`, indicating that no timeout policy was specified for the executable being run by this job, or that the effective timeout resolved to 0 ms.
* `instanceType` **string** Instance type for this job try, computed from the `systemRequirements` specification or the system-wide default if no instance type was requested.
* `instanceTypeTransitions` **array of mappings** (nullable) Timestamped history of instance type selections for this job try, starting with the earliest. Returns `null` for jobs created before 2026-01-09. Each element in the array describes a step in the instance selection process with the following fields:
  * `instanceType` **string** The instance type that the system attempted to assign to the job at this step.
  * `spot` **boolean** Whether the system attempted to provision a spot instance (`true`) or an on-demand instance (`false`).
  * `setAt` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which the job became assignable with this instance type.
  * `reason` **string** Explains why this instance type selection was made.
    * `single` — Instance type selected from `instanceType` keyword.
    * `selector` — Instance type selected from `instanceTypeSelector` keyword in the executable specification.
    * `singleSpotTimeout` — Instance type selected after spot provisioning timeout when using `instanceType` keyword.
    * `selectorSpotTimeout` — Instance type selected after spot provisioning timeout when using `instanceTypeSelector` keyword.
* `networkAccess` **array of strings** The computed network access list that is available to the job. This may be a subset of the requested list in the [access requirements](https://documentation.dnanexus.com/developer/api/io-and-run-specifications#access-requirements) of the executable if an ancestor job had access to a more restricted list of domains.
* `delayWorkspaceDestruction` **boolean** Whether the job's temporary workspace is kept around for 3 days after the job either succeeds or fails.
  * Only present for origin and master jobs.
* `dependsOn` **array of strings** If the job is in a waiting state (`waiting_on_input` or `waiting_on_output`), an array of job IDs and data object IDs that must transition to the `done` or `closed` state for the job to transition out of the waiting state to either `runnable` or `done`.
* `failureReason` **string** A short string describing where the error occurred, for example `AppError` for errors thrown in the execution of the applet or app. When a job fails with an `AppError` or `AppInternalError` caused by insufficient resources (for example, out of memory or out of storage), the platform remaps the failure reason to `AppInsufficientResourceError`.
  * Only present when the job try is in a failing, failed, or restarted state.
  * When `failureReason` is `AppInsufficientResourceError`, the organization has the `allowInstanceUpgradeOnJobRestart` policy enabled, and the job's execution policy is configured to restart on `AppInsufficientResourceError` (or `"*"`), then when a job restart is triggered the platform automatically retries the job on an instance with increased memory or storage capacity by upgrading one step within the same instance family.
* `failureMessage` **string** A more detailed message describing why the error occurred.
  * Only present when the job try is in a failing, failed, or restarted state.
* `failureFrom` **mapping** Metadata describing the job try which caused the failure of this job try (which may be the same job and job try as the one being described).
  * Only present when the job try is in a failing, failed, or restarted state.
  * `id` **string** ID of the failed job.
  * `try` **non-negative integer** or **undefined** `try` of the failure-causing job for this `job-xxxx`'s `try`.
    * Only present when the failure-causing job had the try attribute.
  * `name` **string** Name of the job.
  * `executable` **string** ID (of the form `applet-xxxx` or `app-xxxx`) of the executable the job was running.
  * `executableName` **string** Name of the executable the job was running.
  * `function` **string** Name of the function, or entry point, the job was running.
  * `failureReason` **string** `failureReason` of the failed job.
  * `failureMessage` **string** `failureMessage` of the failed job.
* `failureReports` **array of mappings** Each item in the list has the following fields.
  * Only present when this job failure was reported through a support mechanism.
  * `to` **string** Email address the failure was reported to.
  * `by` **string** ID of the user who reported the failure.
  * `at` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which the report was made.
* `failureCounts` **mapping** A mapping from failure types to the number of times that type occurred and caused the job to be restarted before the job try being described. Failure types include categories such as `AppError`. See [Types of Errors](https://documentation.dnanexus.com/apps/error-information#types-of-errors) for a complete list.
* `runInput` **mapping** The `input` field that was provided when launching this job.
* `originalInput` **mapping** The same as `runInput` but with default values filled in for any optional inputs that were not provided.
* `input` **mapping** The same as `originalInput`, except that if any job-based references have since been resolved, they are replaced with the resulting object IDs. Once the job's state has transitioned to `runnable`, this represents exactly the input that is given to the job.
* `output` **mapping** (nullable) The output that this job has generated, or `null` if the job is not in the `done` state. The output may contain unresolved job-based object references.
* `region` **string** The region in which the job is running, such as `aws:us-east-1`.
* `singleContext` **boolean** Whether the job was specified, at run time, to be locked down to only issue requests from its own job token.
* `ignoreReuse` **boolean** Whether job reuse was disabled for this job.
* `httpsApp` **mapping** HTTPS app configuration.
  * `enabled` **boolean** Whether HTTPS app configuration is enabled for this job.
  * `shared_access` **string** HTTPS access restriction for this job.
  * `ports` **array of integers** If `enabled` is true, which ports are open for inbound access.
  * `dns` **mapping** DNS configuration for the job.
    * `url` **string** Defaults to `"https://job-xxxx.dnanexus.cloud"` unless overridden by `hostname` in the executable's `dxapp.json`.
  * `isolatedBrowsing` **boolean** Whether httpsApp access to this job is wrapped in [Isolated Browsing](https://documentation.dnanexus.com/developer/apps/https-applications/isolated-browsing-for-https-apps).
  * `isolatedBrowsingOptions` **mapping** A mapping with [Isolated Browsing](https://documentation.dnanexus.com/developer/apps/https-applications/isolated-browsing-for-https-apps) options.
    * `pasteFromLocalClipboardMaxBytes` **integer** An integer in the \[0-262144] range specifying the maximum size of locally copied text that can be pasted into the remote browser. A value of `0` disables pasting of any local text into the remote browser and is the default when `pasteFromLocalClipboardMaxBytes` is not explicitly specified.
* `preserveJobOutputs` **mapping** (nullable) The `preserveJobOutputs` setting, with `preserveJobOutputs.folder` expanded to start with `"/"`.
* `detailedJobMetrics` **boolean** Set to true only if the detailed job metrics collection was enabled for this job.
* `clusterSpec` **mapping** A copy of the `clusterSpec` (if present) in the executable used to launch this job.
* `clusterID` **string** Unique ID used to identify the cluster of workers running this job try.
  * Only present for jobs with a `clusterSpec`.
* `costLimit` **number** If the job is a root execution and a cost limit was set for it, the cost limit for the root execution.

If this job is a root execution, the following fields are included by default (but can be disabled using `fields`):

* `selectedTreeTurnaroundTimeThreshold` **integer** (nullable) The selected turnaround time threshold (in seconds) for this root execution. When `treeTurnaroundTime` reaches `selectedTreeTurnaroundTimeThreshold`, the system sends an email about this root execution to the `launchedBy` user and the `billTo` profile.
* `selectedTreeTurnaroundTimeThresholdFrom` **string** (nullable) Where `selectedTreeTurnaroundTimeThreshold` is from. `executable` means that `selectedTreeTurnaroundTimeThreshold` is from this root execution's executable's `treeTurnaroundTimeThreshold`. `system` means that `selectedTreeTurnaroundTimeThreshold` is from the system's default threshold.
* `treeTurnaroundTime` **integer** The turnaround time (in seconds) of this root execution: the time between its creation and its reaching a terminal state, or the current time if it is not yet in a terminal state. Terminal states for an execution are `done`, `terminated`, and `failed`; see [Job Lifecycle](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle) for more information. If this root execution can be retried, the turnaround time begins at the creation time of the root execution's first try, so it includes the turnaround times of all tries.

{% hint style="info" %}
A license is required to use the `jobNotifications` feature. Contact [DNAnexus Sales](mailto:sales@dnanexus.com) to enable `jobNotifications`.
{% endhint %}

The following fields (included by default) are only available if the requesting user has permissions to view debug options for the job (either the user launched the job, or the user has ADMINISTER permissions in the job's project context. See [Connecting to Jobs](https://documentation.dnanexus.com/developer/apps/execution-environment/connecting-to-jobs) for more information):

* `debugOn` **array of strings** Array of error types on which the job is held for debugging, such as `AppError`. See [Types of Errors](https://documentation.dnanexus.com/apps/error-information#types-of-errors) for a complete list.

The following fields (included by default) are only available if the requesting user has permissions to view the pricing model of the `billTo` of the job, the job is the last try of an origin job, and the job's price has been computed:

* `isFree` **boolean** Whether this job is free of charge to the `billTo` (set to true if the job has failed for reasons indicative of a system error rather than a user error).
* `currency` **mapping** Information about currency settings, such as `dxCode`, `code`, `symbol`, `symbolPosition`, `decimalSymbol` and `groupingSymbol`.
* `totalPrice` **number** Price (in `currency`) for how much this job (along with all its subjobs) costs (or would cost if `isFree` is true).
* `priceComputedAt` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which `totalPrice` was computed. For billing purposes, the cost of the job accrues to the invoice of the month that contains `priceComputedAt` (in UTC).
* `totalEgress` **mapping** Total amount of data, in bytes, that this job (along with all its subjobs) has egressed.
  * `regionLocalEgress` **integer** Amount in bytes of data transferred between IP addresses in the same cloud region.
  * `internetEgress` **integer** Amount in bytes of data transferred to IP addresses outside of the cloud provider.
  * `interRegionEgress` **integer** Amount in bytes of data transferred to IP addresses in other regions of the cloud provider.
* `egressComputedAt` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which `totalEgress` was computed. For billing purposes, the egress cost of the job accrues to the invoice of the month that contains `egressComputedAt` (in UTC).

The following fields (included by default) are only available if the requesting user has permissions to view worker information for the job (either the user launched the job, or the user has CONTRIBUTE permissions in the job's project context):

* `allowSSH` **array of strings** Array of IP addresses or [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) blocks from which SSH access is allowed to the user by the worker running this job try.

The following fields (included by default) are only available if the requesting user has permissions to view worker information for the job (either the user launched the job, or the user has CONTRIBUTE permissions in the job's project context) and a worker has started running the job try:

* `sshHostKey` **string** The worker's SSH host key.
* `host` **string** The worker's public FQDN or IP address.
* `sshPort` **string** TCP port that can be used to connect to the SSH daemon for monitoring or debugging the job.
* `clusterSlaves` **array of mappings** For jobs running on a cluster of instances, an array describing the slave nodes.
  * `host` **string** The public hostname of the worker; can be used to SSH to the node if `allowSSH` was enabled for the job.
  * `sshPort` **string** TCP port to use when connecting via SSH to this node.
  * `internalIp` **string** The private IP address for this node, only accessible from other hosts in this cluster.

The following fields (included by default) are only available if this job try is running an applet:

* `applet` **string** ID of the applet from which the job was run.

The following fields (included by default) are only available if this job try is running an app:

* `app` **string** ID of the app from which the job was run.
* `resources` **string** ID of the app's resources container.
* `projectCache` **string** ID of the project cache.

The following fields are only returned if the corresponding fields in the `fields` input are set to `true`:

* `headJobOnDemand` **boolean** The value of `headJobOnDemand` that the job was started with.
* `runSystemRequirements` **mapping** (nullable) A mapping with the `systemRequirements` values that were passed explicitly to [`/executable-xxxx/run`](https://documentation.dnanexus.com/developer/api/api-directory) or [`/job/new`](#api-method-job-new) when the job was created, or `null` if the `systemRequirements` input was not supplied to the API call that created the job.
* `runSystemRequirementsByExecutable` **mapping** (nullable) Similar to `runSystemRequirements` but for `systemRequirementsByExecutable`.
* `mergedSystemRequirementsByExecutable` **mapping** (nullable) A mapping with values of `systemRequirementsByExecutable` supplied to all the ancestors of this job and the value supplied to create this job, merged as described in the [Requesting Instance Types](#requesting-instance-types) section. If neither the ancestors of this job nor this job were created with the `systemRequirementsByExecutable` input, `mergedSystemRequirementsByExecutable` value of `null` is returned.

The following field is only returned if the corresponding field in the `fields` input is set to `true` and the caller meets all these requirements:

1. The caller must be an ADMIN of the org that the project of `job-xxxx` is billed to
2. The caller must have access to the project, as required by `/job-xxxx/describe`
3. The `billTo` of the project in which the job ran (at the time of the `/job-xxxx/describe` call) must be licensed to collect and view job's internet usage IPs

If any of these conditions are not met, `/job-xxxx/describe` omits `internetUsageIPs` from its output, while returning other valid requested output fields.

{% hint style="info" %}
A license is required to use the `Internet Usage IPs` feature. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.\
**This information should only be used by org admins for forensic investigation and fraud prevention purposes.**
{% endhint %}

* `internetUsageIPs` **array of strings** Unique string-encoded IP addresses that the job code communicated with, in no specific order, subject to the following conditions.
  * httpsApp and ssh-to-worker connections are included
  * IP traffic involving Application Execution Environment (AEE)
    1. IP traffic passing through DNAnexus proxies running on the worker is excluded
    2. All other IP traffic involving AEE is included
  * `internetUsageIPs` includes IPs that were communicated with over multiple IP protocols, not only TCP. This includes protocols such as UDP and ICMP.
  * IP addresses accessed by a restarted job are rolled up into the `internetUsageIPs` field of the restarted job's closest visible ancestor job for jobs whose root execution was created before July 12, 2023 00:13 UTC. `internetUsageIPs` for restarted jobs in root executions created after July 12, 2023 00:13 UTC can be described using the `try` input argument to this API method.
  * `internetUsageIPs` for a cluster job includes IP addresses that were communicated with from the cluster worker nodes as well as the main node, including from restarted cluster nodes.
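
To request this field, set it explicitly in the `fields` input of `/job-xxxx/describe`. The sketch below builds a hypothetical request body (the `try` value of 0 is illustrative); it only constructs the JSON payload and does not call the API.

```python
import json

# Hypothetical request body for /job-xxxx/describe asking only for
# internetUsageIPs on the first try (try 0) of a restarted job.
payload = {
    "fields": {"internetUsageIPs": True},
    "try": 0,
}

print(json.dumps(payload))
```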

The following field is only returned if the corresponding field in the `fields` input is set to `true`, the requesting user has permissions to view the pricing model of the `billTo` of the job, and the job is the last try of a root execution:

* `subtotalPriceInfo` **mapping** Information about the current costs associated with all jobs in the tree rooted at this job.
  * `subtotalPrice` **number** Current cost (in `currency`) of the job tree rooted at this job.
  * `priceComputedAt` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which `subtotalPrice` was computed.
* `subtotalEgressInfo` **mapping** Information about the aggregated egress amount in bytes associated with all jobs in the tree rooted at this job.
  * `subtotalRegionLocalEgress` **integer** Amount in bytes of data transfer between IP in the same cloud region.
  * `subtotalInternetEgress` **integer** Amount in bytes of data transfer to IP outside of the cloud provider.
  * `subtotalInterRegionEgress` **integer** Amount in bytes of data transfer to IP in other regions of the cloud provider.
  * `egressComputedAt` [**timestamp**](https://documentation.dnanexus.com/developer/api/..#data-types) Time at which the egress subtotals were computed.
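
The three egress components are disjoint, so a job tree's total egress is their sum. The sketch below uses a fabricated `subtotalEgressInfo` mapping purely for illustration.

```python
# Sketch: aggregating the three egress components from a hypothetical
# subtotalEgressInfo mapping returned by /job-xxxx/describe.
subtotal_egress_info = {
    "subtotalRegionLocalEgress": 1_000,  # bytes within the same cloud region
    "subtotalInternetEgress": 2_500,     # bytes to IPs outside the cloud provider
    "subtotalInterRegionEgress": 500,    # bytes to other regions of the provider
    "egressComputedAt": 1700000000000,   # millisecond timestamp
}

total_egress_bytes = (
    subtotal_egress_info["subtotalRegionLocalEgress"]
    + subtotal_egress_info["subtotalInternetEgress"]
    + subtotal_egress_info["subtotalInterRegionEgress"]
)
print(total_egress_bytes)  # 4000
```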

The following field is included only if explicitly requested in the `fields` input, by a user with VIEW access to a job that is billed to an org with the job logs forwarding feature enabled:

* `jobLogsForwardingStatus` **mapping** (nullable) Information on the status of job logs for the job, or `null` if `jobLogsForwarding` has not been configured for this job or if `jobLogsForwardingStatus` has not been updated yet.
  * `linesDropped` **integer** The number of job log lines whose delivery to Splunk failed from the start of this job's try.

{% hint style="info" %}
A license is required to use the [Forwarding Job Logs to Customer's Splunk feature](https://documentation.dnanexus.com/admin/org-management#forwarding-job-logs-to-customers-splunk). [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

#### Errors

* ResourceNotFound
  * The specified object does not exist
  * `try` input `T` is specified, but there is no try `T` for `job-xxxx`. Also returned if the `try` input was specified for jobs in root executions created before July 12, 2023 00:13 UTC.
* PermissionDenied
  * Requires either VIEW access to the job's temporary workspace, or VIEW access to the parent job's temporary workspace
* InvalidInput
  * Input is not a hash, or `fields`, if present, is not a hash or has a non-boolean value
  * `try` input should be a non-negative integer.

### API Method: `/job-xxxx/update`

#### Specification

Updates a job. Most runtime options for jobs are immutable for reproducibility reasons.

{% hint style="info" %}
A license is required to use the Job Ranking feature. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

If a `rank` field is present, a valid rank must be provided. Two requirements must be met to update `rank`:

1. The organization associated with this job must have the license feature `executionRankEnabled` active
2. The user must be either the original launcher of the analysis or an administrator of the organization

When supplying `rank`, the job or analysis being updated must be a `rootExecution`, and must be in a state capable of creating more jobs. `rank` cannot be supplied for terminal states like `terminated`, `done`, `failed`, or `debug_hold`.

#### Inputs

* `allowSSH` **array of strings** (optional) Array of IP addresses or [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) blocks (up to /16) from which SSH access is allowed to the user by the worker running this job. Array may also include `"*"` which is interpreted as the IP address of the client issuing this API call as seen by the API server. See [Connecting to Jobs](https://documentation.dnanexus.com/developer/apps/execution-environment/connecting-to-jobs) for more information. Changing this value after a job has started running may not be taken into account for the running job, but it may be passed on to new child jobs created after the update. Defaults to `[]`.
* `rank` **integer** (optional) The rank to set the job and its children executions to.
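
As an illustration, a hypothetical `/job-xxxx/update` request body that sets a rank and allows SSH from the caller's own IP might look like this; the values are examples, and the sketch only constructs the payload.

```python
import json

# Hypothetical /job-xxxx/update request body: set the rank of a root
# execution and allow SSH from the API client's own IP address.
update_input = {
    "rank": 100,        # must lie in the range [-1024, 1023]
    "allowSSH": ["*"],  # "*" resolves to the caller's IP as seen by the API server
}

assert -1024 <= update_input["rank"] <= 1023
print(json.dumps(update_input))
```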

#### Outputs

* `id` **string** ID of the updated job, which is the string `job-xxxx`.

#### Errors

* InvalidInput
  * Input is not a hash
  * Expected key `rank` of input to be an integer
  * Expected key `rank` of input to be in range \[-1024, 1023]
  * Not a root execution
  * allowSSH accepts only IP addresses or [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) blocks up to /16
* ResourceNotFound
  * The specified job does not exist
* PermissionDenied
  * billTo does not have license feature executionRankEnabled
  * Not permitted to change rank
  * The supplied token does not belong to the user who started the job or to a user with ADMINISTER access to the job's parent project
  * The project is associated with a [TRE](https://documentation.dnanexus.com/developer/api/trusted-research-environments) and `allowSSH` was specified. SSH access is not allowed in TRE projects. See [Execution Restrictions](https://documentation.dnanexus.com/developer/trusted-research-environments#execution-restrictions).

### API Method: `/job-xxxx/addTags`

#### Specification

Adds the specified tags to the specified job. If any of the tags are already present, no action is taken for those tags.

#### Inputs

* `tags` **array of strings** (required) Tags to be added.
* `try` **non-negative integer** (optional) Specifies a particular try of a restarted job. Value of 0 refers to the first try. Defaults to the latest try for the specified job ID. This is the try with the largest `try` attribute. See [Restartable Jobs](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle#restartable-jobs) section for details.
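
A request body combining both inputs might look like the following sketch; the tag values are fabricated for illustration, and the snippet only builds the payload.

```python
import json

# Hypothetical /job-xxxx/addTags request body tagging the first try (try 0).
add_tags_input = {
    "tags": ["qc-passed", "rerun-candidate"],  # each tag must be a nonempty string
    "try": 0,
}

print(json.dumps(add_tags_input))
```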

#### Outputs

* `id` **string** ID of the manipulated job.

#### Errors

* InvalidInput
  * The input is not a hash, the key `tags` is missing, or its value is not an array, or the array contains at least one invalid (not a string of nonzero length) tag.
  * `try` input should be a non-negative integer
* ResourceNotFound
  * The specified job does not exist
  * `try` input `T` is specified, but there is no try `T` for `job-xxxx`. Also returned if the `try` input was specified for jobs in root executions created before July 12, 2023 00:13 UTC.
* PermissionDenied
  * CONTRIBUTE access is required for the job's project context. Otherwise, the request can also be made by jobs sharing the same workspace as the specified job or the same workspace as the parent job of the specified job.

### API Method: `/job-xxxx/removeTags`

#### Specification

Removes the specified tags from the specified job, ensuring that none of them remain on the job; if any of the tags are already missing, no action is taken for those tags.

#### Inputs

* `tags` **array of strings** (required) Tags to be removed.
* `try` **non-negative integer** (optional) Specifies a particular try of a restarted job. Value of 0 refers to the first try. Defaults to the latest try for the specified job ID. This is the try with the largest `try` attribute. See [Restartable Jobs](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle#restartable-jobs) section for details.

#### Outputs

* `id` **string** ID of the manipulated job.

#### Errors

* InvalidInput
  * The input is not a hash, or the key `tags` is missing, or its value is not an array, or the array contains at least one invalid (not a string of nonzero length) tag
  * `try` input should be a non-negative integer
* ResourceNotFound
  * The specified job does not exist
  * `try` input `T` is specified, but there is no try `T` for `job-xxxx`. Also returned if the `try` input was specified for jobs in root executions created before July 12, 2023 00:13 UTC.
* PermissionDenied
  * CONTRIBUTE access is required for the job's project context. Otherwise, the request can also be made by jobs sharing the same workspace as the specified job or the same workspace as the parent job of the specified job

### API Method: `/job-xxxx/setProperties`

#### Specification

Sets properties on the specified job. To remove a property altogether, set its value to the JSON `null` (instead of a string). This call updates the job's properties by merging the previously existing ones with those provided in the input; when the same key appears in both, the new value takes precedence.

To clear all properties, first issue a describe call to obtain the names of all existing properties, then issue a `setProperties` request setting each of those properties to `null`.
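
The describe-then-null procedure can be sketched as follows. The `api_call` helper is a hypothetical stand-in for an API client (e.g. an HTTP POST to the platform); the two-step shape is what matters, and the demonstration uses a fake in-memory client.

```python
def reset_properties(job_id, api_call):
    """Clear all properties on a job by setting each existing key to JSON null.

    `api_call(route, body)` is a hypothetical stand-in for a platform client;
    it returns the parsed JSON reply.
    """
    # Step 1: describe the job to learn the names of its current properties.
    desc = api_call(f"/{job_id}/describe", {"fields": {"properties": True}})
    # Step 2: set every existing property to None (JSON null) to unset it.
    nulls = {key: None for key in desc.get("properties", {})}
    if nulls:
        api_call(f"/{job_id}/setProperties", {"properties": nulls})
    return nulls

# Demonstration with a fake in-memory client:
store = {"properties": {"stage": "qc", "owner": "alice"}}

def fake_api(route, body):
    if route.endswith("/describe"):
        return {"properties": dict(store["properties"])}
    for key, value in body["properties"].items():
        if value is None:
            store["properties"].pop(key, None)
    return {}

reset_properties("job-xxxx", fake_api)
print(store["properties"])  # {}
```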

#### Inputs

* `properties` **mapping** (required) Properties to modify.
  * **key** — Name of property to modify.
  * **value** **string or null** — Either a new string value for the property, or `null` to unset the property.
* `try` **non-negative integer** (optional) Specifies a particular try of a restarted job. Value of 0 refers to the first try. Defaults to the latest try for the specified job ID. This is the try with the largest `try` attribute. See [Restartable Jobs](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle#restartable-jobs) section for details.

#### Outputs

* `id` **string** ID of the manipulated job.

#### Errors

* InvalidInput
  * The input is not a hash, `properties` is missing or is not a hash, or there exists at least one value in `properties` which is neither a string nor the JSON `null`.
  * `try` input should be a non-negative integer.
* ResourceNotFound
  * The specified job does not exist
  * `try` input `T` is specified, but there is no try `T` for `job-xxxx`. Also returned if the `try` input was specified for jobs in root executions created before July 12, 2023 00:13 UTC.
* PermissionDenied
  * CONTRIBUTE access is required for the job's project context. Otherwise, the request can also be made by jobs sharing the same workspace as the specified job or the same workspace as the parent job of the specified job.

### API Method: `/job-xxxx/terminate`

#### Specification

Terminates a job and its job tree. If the job is already in a terminal state such as `terminated`, `failed`, or `done`, no action is taken. Otherwise, all jobs in the tree that have not yet reached a terminal state are eventually put into the `terminated` state with failure reason `Terminated`. Any authentication tokens generated for this execution are invalidated, and any running jobs are stopped. See [Job Lifecycle](https://documentation.dnanexus.com/user/running-apps-and-workflows/job-lifecycle) for more details on job states.

Jobs can only be terminated by the user who launched the job or by any user with ADMINISTER access to the project context.

#### Inputs

* None

#### Outputs

* `id` **string** ID of the terminated job, which is the string `job-xxxx`.

#### Errors

* ResourceNotFound
  * The specified object does not exist
* PermissionDenied
  * Either (1) the user must match the `launchedBy` entry of the job object, and CONTRIBUTE access is required to the project context of the job, or (2) ADMINISTER access is required to the project context of the job

### API Method: `/job-xxxx/getIdentityToken`

#### Specification

Get a signed DNAnexus JSON Web Token (JWT) that establishes a security-hardened, verifiable identity linked to a specific DNAnexus job. This job identity token can be provided to a 3rd party service, such as AWS, which validates it and exchanges it for temporary access credentials. These credentials (such as an AWS token) allow job code to securely access specific 3rd party resources, including external storage buckets, external lambda functions, external databases, or external secrets vaults. See [Job Identity Tokens for Access to Clouds and 3rd-Party Services](https://documentation.dnanexus.com/developer/apps/job-identity-tokens-for-access-to-clouds-and-third-party-services) for details.

This API method must be called from a DNAnexus job with the DNAnexus job token corresponding to the job.

Internally, DNAnexus uses the `RSASSA_PSS_SHA_256` algorithm to sign the retrieved JWT. The algorithm follows this specification:

| Algorithm            | Algorithm description                                                         |
| -------------------- | ----------------------------------------------------------------------------- |
| `RSASSA_PSS_SHA_256` | PKCS #1 v2.2, Section 8.1, RSA signature with PSS padding using SHA-256       |

#### Inputs

* `audience` **string** (required) The intended audience claim from the token. This value corresponds to the audience IdP setting in the 3rd party service, such as AWS.
  * Must be between 1 and 255 characters containing only alphanumeric characters and `.` (period), `_` (underscore), and `-` (dash) characters.
* `subject_claims` **array of strings** (optional) An array of unique, valid DNAnexus claims that are joined together to overwrite the default `"sub"` claim. Defaults to `["launched_by", "job_worker_ipv4"]`.
  * Must be one of `"job_id"`, `"root_execution_id"`, `"root_executable_id"`, `"root_executable_name"`, `"root_executable_version"`, `"executable_id"`, `"app_name"`, `"app_version"`, `"project_id"`, `"bill_to"`, `"launched_by"`, `"region"`, `"job_worker_ipv4"`, or `"job_try"`.

#### Outputs

This method returns a JSON object with a `Token` field containing a signed [JSON Web Token](https://jwt.io/introduction) (JWT), with standard and custom claims, represented as a JSON string.

```json
{ "Token": "<JSON Web Token String>" }
```

#### Errors

* InvalidInput
  * `job-xxxx/getIdentityToken` is missing parameter `audience`
  * audience input expected to be a non-empty string between 1 and 255 characters containing only alphanumeric or `'.', '-', '_'` characters
  * Expected `subject_claims` to be a nonempty array of strings
  * The `subject_claims` array contains at least one invalid claim
  * Claim `'<input_claim>'` is expected to only occur at most once in `subject_claims`
* PermissionDenied
  * job-xxxx/getIdentityToken can only be called with a job token
    * This error is returned if this API method is called with a non-job DNAnexus token
* InvalidAuthentication
  * The token could not be found
    * This error is returned if this API method is called with an invalid DNAnexus token
  * The token cannot be used from this IP address
    * This error is returned if this API method is called with a DNAnexus job token corresponding to a different DNAnexus job.
