Applets and Entry Points
Applets are executable data objects. Like other data objects, applets exist inside projects and can be cloned between projects. By default, applets also have VIEW permissions into the projects from which they are run. Applets can be used to create private, customized scripts for specialized needs, or for testing and developing more general apps that may be of interest to the community at large.
Applets must be created with a run specification so that the system knows how to run them, but they can be created with or without input/output (I/O) specifications. Providing I/O specifications is encouraged because it allows the system to validate the input arguments when an applet is launched, and to validate the produced output when an applet finishes. Also, when an applet has an I/O specification, the DNAnexus website can automatically render a configuration form for users who want to launch the applet (since the system is aware of the names and types of inputs that the applet requires). Finally, other developers who want to invoke the applet from their applets can look at the I/O specification for help on how to launch the applet programmatically (i.e., what to give as inputs, or what to expect from the outputs).
If I/O specifications are not provided, users can launch the applet with any input (which the system will not validate), and the applet can produce any output. Developers of such applets are responsible for documenting what their applet expects in the input and what outputs it can produce. Likewise, they are responsible for providing a user interface for configuring the applet's input on the DNAnexus website. This allows for building powerful applets whose input or output might be hard to describe formally, perhaps because it is variable or polymorphic, or would otherwise be constrained by what DNAnexus allows in its input/output definition grammar.
Running an applet is slightly different depending on whether the applet is launched by a user from outside of the DNAnexus platform, or by another running job.
Launching an applet outside of the platform requires associating a project with it. As mentioned earlier, this project context is important for the following reasons:
Any charges related to the execution of the applet are associated with that project.
Jobs (such as the job created by launching the applet, as well as any other jobs created by the applet itself while running) will be given VIEW access to that project.
Any objects output by the applet will be placed into that project.
When launching an applet from another running job, this parent job is already associated with a project. This project is carried forward to the launched master job; more specifically:
Any charges related to the execution of the master job are associated with that project.
Jobs (such as the job created by launching the master job, as well as any other descendant jobs of the master job) will be given VIEW access to that project.
Any objects output by the master job will be placed into the workspace of the parent job.
When computing the effective value of systemRequirements, the keys are retrieved in the following order (using the first one found):
1. The systemRequirementsByExecutable values provided at launch of the job's ancestors and the job itself. Note that a child job's systemRequirementsByExecutable is merged into the parent's systemRequirementsByExecutable without overriding parental values.
2. The systemRequirements value requested for the job's master job.
3. The runSpec.systemRequirements field in the applet or app.
If none of these values are present (or contain a relevant entry point), system-wide defaults are used.
The "*" entry point in the systemRequirements argument refers to all entry points that are not already named in systemRequirements. Note that if an applet has a run specification that includes an instance type X for entry point "main", and the applet is run with systemRequirements specifying that "*" should have a different instance type Y, the "main" entry point will run on instance type Y, because the "main" key has been implicitly overridden by the "*" entry point specified at runtime.
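As an illustrative sketch (the instance type name is a placeholder standing in for Y above), such a runtime override could look like:

```json
{
  "systemRequirements": {
    "*": { "instanceType": "mem2_ssd1_v2_x4" }
  }
}
```

Because "main" is not named explicitly at runtime, it falls under "*" and runs on the "*" instance type rather than the one in the run specification.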
key Executable ID (applet-xxxx or app-xxxx) or "*" to indicate all executables not explicitly specified in this mapping.
value mapping System requirements for the corresponding executable. It includes at least one of the following key-value pairs:
key Entry point name or "*" to indicate all entry points not explicitly specified in this mapping.
value mapping Requested resources for the entry point:
instanceType
string (optional): The instance type on which to run jobs for this entry point.
clusterSpec
mapping (optional). If specified, must contain:
initialInstanceCount
positive integer: The number of nodes in the cluster, including the driver node. A value of 1 indicates a cluster with no worker nodes.
fpgaDriver
string (optional): Specifies the FPGA driver that will be installed on the FPGA-enabled cloud host instance prior to the app's code execution. Accepted values are "edico-1.4.2" (installed on FPGA-enabled instances by default), "edico-1.4.5", and "edico-1.4.7".
nvidiaDriver
string (optional): Specifies the NVIDIA driver to install on the GPU-enabled cloud host instance prior to the app's code execution. Accepted values are "R470" (470.256.02, the default value) and "R535" (535.183.01).
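Putting the schema together, a hedged sketch of a systemRequirementsByExecutable value (the executable ID, instance types, and driver choice are illustrative):

```json
{
  "systemRequirementsByExecutable": {
    "applet-xxxx": {
      "main": {
        "instanceType": "mem2_ssd1_v2_x4",
        "clusterSpec": { "initialInstanceCount": 3 },
        "nvidiaDriver": "R535"
      },
      "*": { "instanceType": "mem2_ssd1_v2_x2" }
    }
  }
}
```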
The systemRequirementsByExecutable argument applies to the entire resulting job execution tree and merges with all downstream runtime inputs, with explicit upstream inputs taking precedence.
Second-level systemRequirementsByExecutable keys (instanceType, fpgaDriver, nvidiaDriver, clusterSpec) are resolved independently from each other.
For example, calling the applet with a systemRequirementsByExecutable entry that sets mem2_ssd1_v2_x2 for all entry points of applet-1 forces that instance type for all jobs running any entry point of applet-1, but allows overrides of fpgaDriver, nvidiaDriver, and clusterSpec.initialInstanceCount for applet-1 via downstream specification of systemRequirementsByExecutable, systemRequirements / stageSystemRequirements specified at runtime, system requirements embedded in workflow stages, and systemRequirements supplied to /executable/new.
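A hedged sketch of such a launch input (applet-1 stands in for a full applet ID; the project and input values are illustrative):

```json
{
  "project": "project-xxxx",
  "input": {},
  "systemRequirementsByExecutable": {
    "applet-1": {
      "*": { "instanceType": "mem2_ssd1_v2_x2" }
    }
  }
}
```

Posted as the body of a run call, this pins the instance type for every entry point of applet-1 across the execution tree while leaving the other second-level keys overridable downstream.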
systemRequirementsByExecutable specified at the root level can not be overridden by later specifications within the execution tree. Children's systemRequirementsByExecutable are merged into the parent's systemRequirementsByExecutable without overriding parental values. Specification of "*" at a higher level (for either the executable key or the entry point key) precludes overrides at lower levels of the execution subtree. "*" at either the executable or the entry point level refers to "everything else": it does not take precedence over any sibling fields in the systemRequirementsByExecutable object that mention specific executable IDs or entry points, but it does preclude overrides at lower levels:
If a parent has "*", the child's specification of any key in that mapping is ignored, because otherwise the child's key (either "*" or some specific key) would override the parental specification.
If a parent does not have "*", the child's specification of "*" applies to all keys not specified at the parent level or by an explicit non-"*" key inside the child. The child's non-"*" keys that are already mentioned in the parent are ignored, while non-"*" keys that are not mentioned in any of the parents take effect.
Examples:
Force all jobs in the execution tree to use mem2_ssd1_v2_x2. This can not be overridden at either the executable or the entry point level of any downstream job in the resulting execution tree, whether by systemRequirementsByExecutable specifications or by systemRequirements specifications.
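A hedged sketch of this specification, using "*" for both the executable and entry point keys:

```json
{
  "systemRequirementsByExecutable": {
    "*": {
      "*": { "instanceType": "mem2_ssd1_v2_x2" }
    }
  }
}
```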
Force all jobs in the execution tree executing applet-1 to run on mem2_ssd1_v2_x2. Downstream jobs in the resulting execution tree will be able to override instanceType for executables other than applet-1, but will not be able to override applet-1's instance types for a specific entry point, because the "*" entry point in this specification covers all of applet-1's entry points.
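A hedged sketch of this specification (applet-1 stands in for a full applet ID):

```json
{
  "systemRequirementsByExecutable": {
    "applet-1": {
      "*": { "instanceType": "mem2_ssd1_v2_x2" }
    }
  }
}
```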
Force all jobs in the execution tree that are executing the applet-1 main entry point to run on mem2_ssd1_v2_x2. Downstream jobs will be able to override instanceType for executables other than applet-1, and for entry points of applet-1 other than main.
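A hedged sketch of this specification (applet-1 stands in for a full applet ID):

```json
{
  "systemRequirementsByExecutable": {
    "applet-1": {
      "main": { "instanceType": "mem2_ssd1_v2_x2" }
    }
  }
}
```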
Force all jobs in the execution tree that are executing the applet-1 main entry point to run on mem2_ssd1_v2_x2, and jobs executing all other entry points of applet-1 to run on mem2_ssd1_v2_x4. Also force all jobs in the execution tree executing the main entry point of applet-2 to run on mem2_ssd1_v2_x8. Downstream jobs will be able to override instanceType for executables other than applet-1, except for the main entry point of applet-2.
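A hedged sketch of this combined specification (applet-1 and applet-2 stand in for full applet IDs):

```json
{
  "systemRequirementsByExecutable": {
    "applet-1": {
      "main": { "instanceType": "mem2_ssd1_v2_x2" },
      "*": { "instanceType": "mem2_ssd1_v2_x4" }
    },
    "applet-2": {
      "main": { "instanceType": "mem2_ssd1_v2_x8" }
    }
  }
}
```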
For more examples of how system requirements are resolved, see the related documentation on system requirements resolution.
The runtime input timeoutPolicyByExecutable.<executable_id>.<entry_point> field. Note that this field overrides both user-specified and system default timeout policies, and it propagates down the entire resulting job execution tree and merges with all downstream runtime inputs, with explicit upstream inputs taking precedence.
The system default of 30 days. This limit is enforced for all jobs billed to orgs that do not have the allowJobsWithoutTimeout license.
Note that setting the timeout of a specific executable at a specific entry point to 0 is the same as not setting a timeout for that executable at that entry point at all (i.e. there exists no runtime or default run specification entry for that executable at that entry point).
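As an illustrative sketch (applet-1 stands in for a full applet ID), the following runtime input gives the main entry point a 12-hour timeout, while the 0 value for "*" behaves as if no timeout were set for the remaining entry points, per the note above:

```json
{
  "timeoutPolicyByExecutable": {
    "applet-1": {
      "main": { "hours": 12 },
      "*": { "days": 0 }
    }
  }
}
```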
/applet/new
Specification
Creates a new applet object with the given applet specification.
Links specified in "bundledDepends" contribute to the "links" array returned by a describe call and are always cloned together with the applet regardless of their visibility.
The applet object does not receive special permissions for any referenced data objects (such as "id" in "bundledDepends"); these entries are accessed every time the applet is run, with the same permissions as the job. This means that if some referenced file is later deleted or is not present in the project context, the applet will not be able to run. However, the system will not automatically "invalidate" the applet object for any broken links; if the referenced file is reinstated from a copy existing in another project, the applet can then be run.
Inputs
project
string ID of the project or container to which the applet should belong (e.g. the string "project-xxxx")
name
string (optional, default is the new ID) The name of the object
title
string (optional, default "") Title of the applet, e.g. "Micro Map"
summary
string (optional, default "") A short description of the applet
description
string (optional, default "") A longer description about the applet
developerNotes
string (optional, default "") More detailed notes about the applet
tags
array of strings (optional) Tags to associate with the object
types
array of strings (optional) Types to associate with the object
hidden
boolean (optional, default false) Whether the object should be hidden
properties
mapping (optional) Properties to associate with the object
key Property name
value string Property value
folder
string (optional, default "/") Full path of the folder that is to contain the new object
parents
boolean (optional, default false) Whether all folders in the path provided in folder
should be created if they do not exist
ignoreReuse
boolean (optional, default false) If true then no job reuse will occur by default for this applet.
dxapi
string The version of the API that the applet was developed with, for example, "1.0.0"
httpsApp
mapping (optional) HTTPS app configuration
ports
array of integers Array of ports open for inbound access. Allowed ports are 443, 8080, and 8081
shared_access
string HTTPS access restriction for jobs run from this executable. Allowed values are "VIEW", "CONTRIBUTE", "ADMINISTER", and "NONE". VIEW, CONTRIBUTE, and ADMINISTER require the specified permission level or greater for the project in which the job executes. The most restrictive setting, NONE, limits access to only the user who launched the job.
dns
mapping DNS configuration for the job
The effective billTo of the running app must be an organization.
The user must be an admin of that organization.
If the id of the billTo organization is org-some_name, then the hostname must resemble hostprefix-some-name: it must end in the handle portion of the org id with _ and . replaced by hyphens (-), and it must start with an alphanumeric "prefix" (in this case hostprefix) that begins with a letter and is at least two characters long.
For an org with id org-some_name, valid hostnames include myprefix-some-name and ab5-some-name; invalid hostnames include a-some-name (prefix too short), a-some_name (underscore present), 4abc-some-name (starts with a numerical character), and myprefix-someotherorgname (does not end with -some-name).
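A hedged sketch of an httpsApp configuration tying these pieces together (the key inside dns is an assumption, since the exact dns sub-fields are not specified here; the hostname follows the org-some_name example above):

```json
{
  "httpsApp": {
    "ports": [443],
    "shared_access": "VIEW",
    "dns": { "hostname": "myprefix-some-name" }
  }
}
```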
When two jobs attempt to use the same URL, the newer job takes over the hostname from the already running job. We recommend running only a single job with any given URL; otherwise, if one job restarts due to its restart policy, the URL may be re-assigned away from the other job using it.
Outputs
id
string ID of the created applet object (i.e. a string in the form "applet-xxxx")
Errors
InvalidInput
A reserved linking string ("$dnanexus_link") appears as a key in a mapping in details
but is not the only key in the hash.
A reserved linking string ("$dnanexus_link") appears as the only key in a hash in details
but has value other than a string.
The spec is invalid.
All specified bundled dependencies must be in the same region as the specified project.
A nonce was reused in a request but some of the other inputs had changed, signifying a new and different request
A nonce may not exceed 128 bytes
treeTurnaroundTimeThreshold must be a non-negative integer less than 2592000
timeoutPolicy for all entry points should not exceed 30 days
PermissionDenied
UPLOAD access is required to the specified project.
ResourceNotFound
The route in folder
does not exist while parents
is false.
/applet-xxxx/describe
Specification
Describes an applet object.
Inputs
project
string (optional) Project or container ID to be used as a hint for finding the object in an accessible project
defaultFields
boolean (optional, default false if fields is supplied, true otherwise) Whether to include the default set of fields in the output (the default fields are described in the "Outputs" section below). These selections are overridden by any fields explicitly named in fields.
fields
mapping (optional) include or exclude the specified fields from the output. These selections override the settings in defaultFields
.
key Desired output field; see the "Outputs" section below for valid values here
value boolean whether to include the field
The following options are deprecated (and will not be respected if fields
is present):
properties
boolean (optional, default false) Whether the properties should be returned
details
boolean (optional, default false) Whether the details should also be returned
Outputs
id
string The object ID (i.e. the string "applet-xxxx")
The following fields are included by default (but can be disabled using fields
or defaultFields
):
project
string ID of the project or container in which the object was found
class
string The value "applet"
types
array of strings Types associated with the object
created
timestamp Time at which this object was created
state
string Either "open" or "closed"
hidden
boolean Whether the object is hidden or not
links
array of strings The object IDs that are pointed to from this object, including links found in both the details
and in bundledDepends
(if it exists) of the applet
name
string The name of the object
folder
string The full path to the folder containing the object
sponsored
boolean Whether the object is sponsored by DNAnexus
tags
array of strings Tags associated with the object
modified
timestamp Time at which the user-provided metadata of the object was last modified
createdBy
mapping How the object was created
user
string ID of the user who created the object or launched an execution which created the object
job
string (present if a job created the object) ID of the job that created the object
executable
string (present if a job created the object) ID of the app or applet that the job was running
dxapi
string The version of the API used
access
mapping The access requirements of the applet
title
string The title of the applet
summary
string The summary of the applet
description
string The description of the applet
developerNotes
string The developer notes of the applet
ignoreReuse
boolean Whether job reuse is disabled for this applet
httpsApp
mapping HTTPS app configuration
shared_access
string HTTPS access restriction for this job
ports
array of integers Ports that are open for inbound access
dns
mapping DNS configuration for the job
The following field (included by default) is available if an input specification is specified for the applet:
inputSpec
array of mappings The input specification of the applet
The following field (included by default) is available if an output specification is specified for the applet:
outputSpec
array of mappings The output specification of the applet
The following field (included by default) is available if the object is sponsored by a third party:
sponsoredUntil
timestamp Indicates the expiration time of data sponsorship (this field is only set if the object is currently sponsored, and if set, the specified time is always in the future)
The following fields are only returned if the corresponding field in the fields
input is set to true
:
properties
mapping Properties associated with the object
key Property name
value string Property value
details
mapping or array Contents of the object’s details
Errors
ResourceNotFound
the specified object does not exist or the specified project does not exist
InvalidInput
the input is not a hash, project (if supplied) is not a string, or the value of properties (if supplied) is not a boolean
PermissionDenied
VIEW access is required for the project provided (if any), and VIEW access is required for some project containing the specified object (not necessarily the same as the hint provided)
/applet-xxxx/get
Specification
Inputs
None
Outputs
project
string ID of the project or container in which the object was found
id
string The object ID (i.e. the string "applet-xxxx")
class
string The value "applet"
types
array of strings Types associated with the object
created
timestamp Time at which this object was created
state
string Either "open" or "closed"
hidden
boolean Whether the object is hidden or not
links
array of strings The object IDs that are pointed to from this object, including links found in both the details
and in bundledDepends
(if it exists) of the applet
name
string The name of the object
folder
string The full path to the folder containing the object
sponsored
boolean Whether the object is sponsored by DNAnexus
tags
array of strings Tags associated with the object
modified
timestamp Time at which the user-provided metadata of the object was last modified
createdBy
mapping How the object was created
user
string ID of the user who created the object or launched an execution which created the object
job
string (present if a job created the object) ID of the job that created the object
executable
string (present if a job created the object) ID of the app or applet that the job was running
runSpec
mapping The run specification of the applet
dxapi
string The version of the API used
access
mapping The access requirements of the applet
title
string The title of the applet
summary
string The summary of the applet
description
string The description of the applet
developerNotes
string The developer notes of the applet
If an input specification is specified for the applet:
inputSpec
array of mappings The input specification of the applet
If an output specification is specified for the applet:
outputSpec
array of mappings The output specification of the applet
Errors
ResourceNotFound
the specified object does not exist
PermissionDenied
VIEW access required
/applet-xxxx/run
Specification
Creates a new job which will execute the code of this applet. The default entry point for the applet's interpreter (given in the runSpec.interpreter field of the applet spec) will be called:
bash: main() in top-level scope with no args, if it exists. Also, $1 is set to "main".
python3: any function decorated with @dxpy.app_entry(func="main"), called with no args.
The job might fail for the following reasons (this list is non-exhaustive):
A reference such as one mentioned in "bundledDepends" could not be accessed using the job’s credentials (VIEW access to project context, CONTRIBUTE access to a workspace, VIEW access to public projects)
An input object does not exist.
Permission denied accessing an input object.
An input object is not a data object (things like users, projects, or jobs are not data objects)
An input object does not satisfy the class constraints.
An input object does not satisfy the type constraints.
An input object is not in the "closed" state.
Insufficient credits.
The user has too many jobs that are in a nonterminal state.
Inputs
name
string (optional, default is the applet's title if set, and otherwise the applet's name) Name for the resulting job
input
mapping Input that the applet is launched with
key Input field name. If the applet has an input specification, it must be one of the names of the inputs; otherwise, it can be any valid input field name.
value Input field value
dependsOn
array of strings (optional) List of job, analysis and/or data object IDs; the applet will not begin running any of its entry points until all jobs listed have transitioned to the "done" state, and all data objects listed are in the "closed" state
folder
string (optional, default "/") The folder into which objects output by the job will be placed. If the folder does not exist when the job completes, the folder (and any parent folders necessary) will be created. The folder structure that output objects reside in is replicated within the target folder, e.g. if folder
is set to "/myJobOutput" and the job outputs an object which is in the folder "/mappings/mouse" in the workspace, the object is placed into "/myJobOutput/mappings/mouse".
tags
array of strings (optional) Tags to associate with the resulting job
properties
mapping (optional) Properties to associate with the resulting job
key Property name
value string Property value
details
mapping or array (optional, default { }) JSON object or array that is to be associated with the job
timeoutPolicyByExecutable
mapping (optional) Timeout policies for executables run in the resulting execution tree:
key App or applet ID. If an executable is not explicitly specified in timeoutPolicyByExecutable, then any job in the resulting job execution tree that runs that executable will have the default timeout policy present in the run specification of that executable.
value mapping Timeout policy for the corresponding executable. Includes at least one of the following key-value pairs:
key Entry point name or "*"
to indicate all entry points not explicitly specified in this mapping. If an entry point name is not explicitly specified and "*"
is not present, then any job in the resulting job execution tree that runs the corresponding executable at that entry point will have the default timeout policy present in the run specification of the corresponding executable.
value mapping Timeout for a job running the corresponding executable at the corresponding entry point. Includes at least one of the following key-value pairs:
key Unit of time; one of "days", "hours", or "minutes".
value number Amount of time for the corresponding time unit; must be non-negative. The effective timeout is the sum of the units of time represented in this mapping.
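As an illustrative sketch of the unit summation described above (applet-1 stands in for a full applet ID), the effective timeout for main below is 1 day plus 12 hours, i.e. 36 hours:

```json
{
  "timeoutPolicyByExecutable": {
    "applet-1": {
      "main": { "days": 1, "hours": 12 },
      "*": { "hours": 6 }
    }
  }
}
```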
delayWorkspaceDestruction
boolean (optional) If not given, the value defaults to false for root executions (launched by a user or detached from another job), or to the parent's delayWorkspaceDestruction
setting. If set to true, the temporary workspace created for the resulting job will be preserved for 3 days after the job either succeeds or fails.
debug
mapping (optional, default { }) Specify debugging options for running the executable; this field is only accepted when this call is made by a user (and not a job)
debugOn
array of strings (optional, default [ ]) Array of job errors after which the job's worker should be kept running for debugging purposes, offering a chance to SSH into the worker before worker termination (assuming SSH has been enabled). This option applies to all jobs in the execution tree. Jobs in this state for longer than 2 days will be automatically terminated but can be terminated earlier. Allowed entries include "ExecutionError", "AppError", and "AppInternalError".
singleContext
boolean (optional) If true then the resulting job and all of its descendants will only be allowed to use the authentication token given to it at the onset. Use of any other authentication token will result in an error. This option offers extra security to ensure data cannot leak out of your given context. In restricted projects user-specified value is ignored, and singleContext: true
setting is used instead.
ignoreReuse
boolean (optional) If true then no job reuse will occur for this execution. Takes precedence over value supplied for applet-xxxx/new
.
detach
boolean (optional) This option has no impact when the API is invoked by a user. If invoked from a job with detach set to true, the new job will be detached from the creator job and will appear as a typical root execution. A failure in the detached job will not cause termination of the job from which it was created, and vice versa. A detached job inherits neither access to the workspace of its creator job nor the creator job's priority. The detached job's access permissions will be the intersection (most restrictive) of the access permissions of the creator job and the permissions requested by the detached job's executable. To launch a detached job, the creator job must have CONTRIBUTE or higher access to the project in which the detached job is launched. Additionally, the billTo of the project in which the creator job is running must be licensed to launch detached executions.
rank
integer (optional) An integer between -1024 and 1023, inclusive. The rank indicates the priority in which the executions generated from this executable will be processed. The higher the rank, the more prioritized it will be. If no rank is provided, the executions default to a rank of zero. If the execution is not a root execution, it will inherit its parent's rank.
costLimit
float (optional) The limit of the cost that this execution tree should accrue before termination. This field will be ignored if this is not a root execution.
headJobOnDemand
boolean (optional) If true, then the resulting master job will be allocated to an on-demand instance, regardless of its scheduling priority. All of its descendant jobs (if any) inherit its scheduling priority, and their instance allocations are independent of this option. This option overrides the app's headJobOnDemand setting (if any).
preserveJobOutputs
mapping (optional, default null) Preserves all cloneable outputs of every completed, non-jobReused job in the execution tree launched by this API call in the root execution project, even if the root execution ends up failing. Preserving job outputs in the project trades higher storage costs for the possibility of subsequent job reuse.
When a non-jobReused job in the root execution tree launched with a non-null preserveJobOutputs enters the "done" state, all cloneable objects (e.g. files, records, applets, and closed workflows, but not databases) referenced by a $dnanexus_link in the job's output field are cloned to the project folder described by preserveJobOutputs.folder, unless the output objects already appear elsewhere in the project. If the folder specified by preserveJobOutputs.folder does not exist in the project, the system will create the folder and its parents.
As the root job or the root analysis' stages complete, the regular outputs of the root execution are moved from preserveJobOutputs.folder to the regular output folder(s) of the root execution. So if you [1] run your root execution to completion without the preserveJobOutputs option, some root execution outputs will appear in the project in the root execution's output folder(s). If you had run the same execution with preserveJobOutputs.folder set to "/pjo_folder", the same set of outputs would appear in the same set of root execution folders as in [1] at completion of the root execution, while some additional job outputs that are not outputs of the root execution would appear in "/pjo_folder".
The preserveJobOutputs argument can be specified only when starting a root execution or a detached job.
The preserveJobOutputs value, if not null, should be a mapping that may contain the following:
key "folder" (optional)
value path_to_folder string (required if the "folder" key is specified). Specifies a folder in the root execution project where the outputs of jobs that are part of the launched execution will be stored. A path_to_folder starting with "/" is interpreted as an absolute folder path in the project the job is running in. A path_to_folder not starting with "/" is interpreted as a path relative to the root execution's folder field. A path_to_folder value of "" (i.e. the empty string) preserves job outputs in the folder described by the root execution's folder field.
If the preserveJobOutputs mapping does not have a folder key, the system will use the default folder value of "intermediateJobOutputs" (i.e. "preserveJobOutputs": {} is equivalent to "preserveJobOutputs": {"folder": "intermediateJobOutputs"}).
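A hedged sketch of a run input using this option (the folder path is illustrative and matches the "/pjo_folder" example above):

```json
{
  "preserveJobOutputs": { "folder": "/pjo_folder" }
}
```

Passing {} instead would preserve outputs under the default "intermediateJobOutputs" folder.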
It is recommended to place preserveJobOutputs outputs for different root executions into different folders so as not to create a single folder with a very large (>450K) number of files.
Outputs
id
string ID of the created job (i.e. a string in the form "job-xxxx")
Errors
ResourceNotFound
The specified applet object or project context does not exist.
One of the IDs listed in dependsOn
does not exist.
PermissionDenied
The requesting user must have VIEW access to all objects listed in dependsOn
, and to all project contexts of all jobs listed in dependsOn
.
The requesting user must have VIEW access to the applet object.
If invoked by a user, then the requesting user must have CONTRIBUTE access to the project context.
The requesting user has too many (65536, by default) nonterminal (e.g., running, runnable) jobs and must wait for some to finish before creating more.
The billTo of the job's project must be licensed to start detached executions when invoked from a job with the detach: true argument.
If rank is provided and the billTo does not have the license feature executionRankEnabled set to true.
If preserveJobOutputs is not null and the billTo of the project where execution is attempted does not have the preserveJobOutputs license.
A detailedJobMetrics setting of true requires the project's billTo to have the detailedJobMetrics license feature set to true.
app{let}-xxxx can not run in project-xxxx because the executable's httpsApp.shared_access should be NONE to run with isolated browsing.
InvalidInput
input
does not satisfy the input specification of this applet; an additional field is provided in the error JSON for this error, with the possible reasons and fields described under "Input Specification Errors" below
If invoked by a user, then project
must be specified.
If invoked by a job, then project
must not be specified.
The project context must be in the same region as this applet.
All data object inputs that are specified directly must be in the same region as this applet.
All inputs that are job-based object references must refer to a job that was run in the same region as this applet.
allowSSH accepts only IP addresses or CIDR blocks up to /16.
A nonce was reused in a request but some of the other inputs had changed, signifying a new and different request.
A nonce may not exceed 128 bytes.
preserveJobOutputs.folder
value is a syntactically invalid path to a folder.
preserveJobOutputs
is specified when launching a non-detached execution from a job.
detailedJobMetrics
cannot be specified when launching a non-detached execution from a job.
timeoutPolicyByExecutable
for all executables should not be null
timeoutPolicyByExecutable
for all entry points of all executables should not be null
timeoutPolicyByExecutable
for all entry points of all executables should not exceed 30 days
Expected key "timeoutPolicyByExecutable
.*" of input to match "/^(app|applet)-[0-9A-Za-z]{24}$/"
InvalidState
Some specified input is not in the "closed" state.
Some job in dependsOn
has failed or has been terminated.
Input Specification Errors
The following list describes the possible error reasons and what the fields mean:
"class": the specified "field" was expected to have class "expected". If the input spec required an array but it was not an array, the value for "expected" will be "array". If the input spec required an array but an element was of the wrong class, then the value for "expected" will be the actual class the entry was expected to be, e.g. "record".
"type": the specified "field" either needs to have the type "expected" or does not satisfy the or-condition in "expected"
"missing": the specified "field" was not provided but is required in the input specification
"unrecognized": the given "field" is not present in the input specification
"key "field"": the key "field" was missing in a job-based object reference
"only two keys": exactly two keys were expected in the hash for the job-based object reference
"key "$dnanexus_link"": the key "$dnanexus_link" was missing in a link for specifying a data object
"choices": the specified "field" must be one of the values in "expected"
/applet-xxxx/validateBatch
Specification
This API call verifies that a set of input values for a particular applet can be used to launch a batch of jobs in parallel. The applet must have an input specification defined.
Batch and common inputs:
batchInput
: mapping of inputs corresponding to batches. The nth value of each array corresponds to the nth execution of the applet. Including a null
value in an array at a given position means that the corresponding applet input field is optional and its default value, if defined, should be used. E.g.:
commonInput
: mapping of non-batch, constant inputs common to all batch jobs, e.g.:
File references:
files
: list of files (passed as $dnanexus_link references), which must be a superset of the files included in batchInput
and/or commonInput
, e.g.:
Output: list of mappings, where each mapping corresponds to an expanded batch call. The nth mapping contains the input values with which the nth execution of the applet will be run, e.g.:
It performs the following validation:
the input types match the expected applet input field types,
provided inputs are sufficient to run the applet,
null values are only among values for inputs that are optional or have no specified default values,
all arrays of batchInput
are of equal size,
every file referred to in batchInput
exists in the files
input.
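The expansion and checks described above can be sketched client-side; this is a simplified illustration with hypothetical input field names, not the server implementation:

```python
def expand_batch(batch_input, common_input):
    """Expand batchInput/commonInput into per-execution input mappings,
    mirroring the checks /applet-xxxx/validateBatch performs."""
    lengths = {len(v) for v in batch_input.values()}
    if len(lengths) > 1:
        raise ValueError("Expected the length of all arrays in batchInput to be equal")
    n = lengths.pop() if lengths else 0
    expanded = []
    for i in range(n):
        call = dict(common_input)  # constant inputs shared by every job
        for field, values in batch_input.items():
            if values[i] is not None:  # null means: use the field's default
                call[field] = values[i]
        expanded.append(call)
    return expanded

# Hypothetical applet inputs: one batched field, one common field
calls = expand_batch({"reads": ["file-A", "file-B"]}, {"quality": 30})
# calls == [{"quality": 30, "reads": "file-A"}, {"quality": 30, "reads": "file-B"}]
```

A null at position i simply omits that field from the ith call, matching the "use the default value" semantics above.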
Inputs
batchInput
mapping Input that the applet is launched with
key Input field name. It must be one of the names of the inputs defined in the applet input specification.
value Input field values. It must be an array of values, where the nth element is used for the nth execution.
commonInput
mapping (optional) Input that the applet is launched with
key Input field name. It must be one of the names of the inputs defined in the applet input specification.
value Input field value. It is a single value applied to every batch job.
files
list (optional) Files that are needed to run the batch jobs; they must be provided as $dnanexus_links
. They must correspond to all the files included in commonInput
or batchInput
.
Outputs
expandedBatch
list of mappings Each mapping contains the input values for one execution of the applet in batch mode.
Errors
InvalidInput
inputSpec
must be specified for the applet
Expected batchInput
to be a JSON object
Expected commonInput
to be a JSON object
Expected files
to be an array of $dnanexus_link
references to files
The batchInput
field is required but empty array was provided
Expected the value of batchInput
for an applet input field to be an array
Expected the length of all arrays in batchInput
to be equal
The applet input field value must be specified in batchInput
The applet input field is not defined in the input specification of the applet
All the values of a specific batchInput
field must be provided (cannot be null
) since the field is required and has no default value
Expected all the files in batchInput
and commonInput
to be referenced in the files
input array
/job/new
Specification
The entry point for the job’s execution will be determined as follows, where f is the string given in the function parameter in the input:
Interpreter
Entry point
bash
f() in top level scope with no args, if it exists. Also, $1 is set to f
python3
Any function decorated with @dxpy.entry_point("f")
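The dispatch rule above can be sketched as follows. This is a simplified stand-in for what @dxpy.entry_point does, not the dxpy implementation; the registry and runner here are hypothetical:

```python
ENTRY_POINTS = {}

def entry_point(name):
    """Register a function under an entry point name (imitating @dxpy.entry_point)."""
    def register(fn):
        ENTRY_POINTS[name] = fn
        return fn
    return register

@entry_point("main")
def main(**job_input):
    # A trivial entry point body for illustration
    return {"answer": job_input.get("x", 0) + 1}

def run(function, job_input):
    """Dispatch to the entry point named by the job's `function` parameter."""
    if function not in ENTRY_POINTS:
        raise KeyError(f"no entry point registered for {function!r}")
    return ENTRY_POINTS[function](**job_input)

result = run("main", {"x": 41})  # → {"answer": 42}
```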
This call will fail if the specified OAuth2 token does not internally represent a currently executing job.
The new job may fail for at least the following reasons:
A reference such as one mentioned in "bundledDepends" could not be accessed using the job’s credentials (VIEW access to project context, CONTRIBUTE access to workspace, VIEW access to public projects)
Insufficient credits.
Inputs
name
string (optional, default "<parent job's name>:<function
>") Name for the resulting job
input
mapping Input that the job is launched with; no syntax checking occurs, but the mapping will be checked for links and create dependencies on any open data objects or unfinished jobs accordingly
key Input field name
value Input field value
dependsOn
array of strings (optional) List of job, analysis and/or data object IDs; the newly created job will not run until all executions listed in dependsOn
have transitioned to the "done" state, and all data objects listed are in the "closed" state
function
string The name of the entry point or function of the applet’s code that will be executed
tags
array of strings (optional) Tags to associate with the resulting job
properties
mapping (optional) Properties to associate with the resulting job
key Property name
value string Property value
details
mapping or array (optional, default { }) JSON object or array that is to be associated with the job
ignoreReuse
boolean (optional) If true then no job reuse will occur for this execution. Takes precedence over value supplied for applet-xxxx/new
.
singleContext
boolean (optional) If true then the resulting job and all of its descendants will only be allowed to use the authentication token given to it at the onset. Use of any other authentication token will result in an error. This option offers extra security to ensure data cannot leak out of your given context. In restricted projects, the user-specified value is ignored, and the singleContext: true
setting is used instead.
headJobOnDemand
boolean (optional) If true, then the resulting root job will be allocated to an on-demand instance, regardless of its scheduling priority. All of its descendant jobs (if any) inherit its scheduling priority, and their instance allocations are independent of this option. This option overrides the settings in the app’s headJobOnDemand
(if any).
Outputs
id
string ID of the created job (i.e. a string in the form "job-xxxx").
Errors
InvalidAuthentication (the usual reasons InvalidAuthentication is thrown, or the auth token used is not a token issued to a job)
ResourceNotFound (one of the IDs listed in dependsOn
does not exist)
PermissionDenied (VIEW access required for any objects listed in dependsOn
and for the project contexts of any jobs listed in dependsOn
; ability to describe any job used in a job-based object reference)
InvalidInput
The input is not a hash
input
is missing or is not a hash
an invalid link syntax appears in the input
dependsOn
, if given, is not an array of strings
"details", if given, is not a conformant JSON object
For each property key-value pair, the size, encoded in UTF-8, of the property key may not exceed 100 bytes and the property value may not exceed 700 bytes
A nonce
was reused in a request but some of the other inputs had changed signifying a new and different request
A nonce
may not exceed 128 bytes
InvalidState
some job in dependsOn
has already failed or been terminated
/job-xxxx/describe
Specification
Users with reorg apps that rely on describing the currently running job may want to check the output field "dependsOn" before the full analysis description becomes available, using dx describe analysis-xxx --json | jq -r .dependsOn
or the equivalent dxpy
bindings. The output of the command will be an empty array "[]" if the analysis no longer depends on anything (e.g. status "done"), which is the signal to proceed; otherwise it will contain job or subanalysis IDs, meaning it is not ready, and the reorg script should wait.
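That readiness check can be sketched as follows, parsing the describe output (the JSON fragments are illustrative):

```python
import json

def ready_to_proceed(describe_json):
    """Return True once "dependsOn" is empty, i.e. the execution no
    longer depends on any job or subanalysis and reorg can proceed."""
    return json.loads(describe_json).get("dependsOn", []) == []

ready_to_proceed('{"id": "analysis-xxxx", "dependsOn": []}')            # → True
ready_to_proceed('{"id": "analysis-xxxx", "dependsOn": ["job-xxxx"]}')  # → False
```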
Inputs
defaultFields
boolean (optional, default false
if fields is supplied, true
otherwise) Whether to include the default set of fields in the output (default fields described in the "Outputs" section below). The selections are overridden by any fields explicitly named in fields.
fields
mapping (optional) Include or exclude the specified fields from the output. These selections override the settings in defaultFields
.
key Desired output field; see the "Outputs" section below for valid values here
value boolean Whether to include the field
The following option is deprecated (and will not be respected if fields
is present):
io
boolean (optional, default true) Whether the input and output fields (runInput
, originalInput
, input
, and output
) should be returned
Outputs
id
string The job ID (i.e. the string "job-xxxx")
The following fields are included by default (but can be disabled using fields
):
try
non-negative integer or null. Returns the try for this job, with 0 corresponding to the first try, 1 corresponding to the second try for restarted jobs, and so on. null
is returned for jobs belonging to root executions launched before July 12, 2023 00:13 UTC, in which case information for the latest job try is returned.
class
string The value "job"
name
string The name of the job try
executableName
string The name of the executable (applet or app) that the job was created to run
created
timestamp Time at which this job was created. All tries of this job have the same created
value corresponding to creation time of the first job try.
tryCreated
timestamp or null Time at which this job's try
was created. null
is returned for jobs belonging to root executions launched before July 12, 2023 00:13 UTC. For job try 0, this field has the same value as the created
field.
modified
timestamp Time at which this job try was last updated
egressReport
mapping or undefined A mapping detailing the total bytes of egress for a particular job try.
regionLocalEgress
int Amount in bytes of data transferred between IPs in the same cloud region.
internetEgress
int Amount in bytes of data transferred to IPs outside of the cloud provider.
interRegionEgress
int Amount in bytes of data transferred to IPs in other regions of the cloud provider.
billTo
string ID of the account to which any costs associated with this job will be billed
project
string The project context associated with this job
folder
string The output folder in which the outputs of this job’s master job will be placed
rootExecution
string ID of the top-level job or analysis in the execution tree
parentJob
string or null ID of the job which created this job, or null if this job is an origin job
parentJobTry
non-negative integer or null. null
is returned if the job try being described had no parent, or if the parent itself had a null try
attribute. Otherwise, the try of this job-xxxx
specified in the method's input was launched from the parentJobTry
try of the parentJob
.
originJob
string The closest ancestor job whose parentJob
is null, either because it was run by a user directly or was run as a stage in an analysis
detachedFrom
string or null The ID of the job this job was detached from via the detach
option, otherwise null
detachedFromTry
non-negative integer or null. If this job was detached from a job, detachedFrom
and detachedFromTry
describe the specific try of the job this job was detached from. null
is returned if this job was not detached from another job or if the detachedFrom
had a null
try
attribute.
parentAnalysis
string or null ID of the analysis, present for an origin job that is run as a stage in an analysis, otherwise null
analysis
string or null ID of the nearest ancestor analysis in the execution tree, if one exists, otherwise null
stage
string or null Null if this job was not run as part of a stage in an analysis; otherwise, the ID of the stage this job is part of
stateTransitions
array of mappings Each element in the list indicates a time at which the state of the job try changed; the initial state of a job try is always "idle" when it is created and is not included in the list. Each hash has the key/values:
newState
string The new state, e.g. "runnable"
setAt
timestamp Time at which the new state was set for the job try
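As one use of stateTransitions, the time spent in each state can be derived by walking the list; a sketch, assuming epoch-millisecond timestamps (state_durations is a hypothetical helper):

```python
def state_durations(transitions, created, now):
    """Compute milliseconds spent in each state from a job try's
    stateTransitions list (timestamps are epoch milliseconds)."""
    durations = {}
    state, since = "idle", created  # every job try starts in "idle"
    for t in sorted(transitions, key=lambda t: t["setAt"]):
        durations[state] = durations.get(state, 0) + (t["setAt"] - since)
        state, since = t["newState"], t["setAt"]
    durations[state] = durations.get(state, 0) + (now - since)
    return durations

state_durations(
    [{"newState": "runnable", "setAt": 1000}, {"newState": "running", "setAt": 4000}],
    created=0, now=10000)
# → {"idle": 1000, "runnable": 3000, "running": 6000}
```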
workspace
string (present once the workspace has been allocated) ID of the temporary workspace assigned to the job try (e.g. "container-xxxx")
launchedBy
string ID of the user who launched the original job
function
string Name of the function, or entry point, that this job is running
tags
array of strings Tags associated with the job try
properties
mapping Properties associated with the job try
key Property name
value string Property value
finalPriority
string The final priority this job try was run at. If the job try was run on an on-demand instance, finalPriority
will be set to "high" regardless of the original priority setting.
rank
integer The rank assigned to the job and all of its child executions. The range is [-1024, 1023].
details
mapping or array JSON details that were stored with this job
timeout
int The effective timeout, in milliseconds, for this job. This number will be less than or equal to 30 days (2592000000 milliseconds) for most jobs. Most jobs launched prior to December 19, 2024, or jobs billed to orgs with the allowJobsWithoutTimeout
license, may have this field set to "undefined", indicating that no timeout policy was specified for the executable being run by this job, or that the effective timeout resolved to 0 ms.
instanceType
string Instance type for this job try, computed from the systemRequirements
specification or the system-wide default if no instance type was requested.
delayWorkspaceDestruction
boolean (present for origin and master jobs only) Whether the job's temporary workspace will be kept around for 3 days after the job either succeeds or fails
dependsOn
array of strings If the job is in a waiting state ("waiting_on_input" or "waiting_on_output"), an array of job IDs and data object IDs that must transition to the done or closed state for the job to transition out of the waiting state to either "runnable" or "done"
failureReason
string (present if the job try is in a failing, failed or restarted state) A short string describing where the error occurred, e.g. "AppError" for errors thrown in the execution of the applet or app
failureMessage
string (present if the job try is in a failing, failed or restarted state) A more detailed message describing why the error occurred
failureFrom
mapping (present if the job try is in a failing, failed or restarted state) Metadata describing the job try which caused the failure of this job try (which may be the same job and job try as the one being described):
id
string ID of the failed job
try
non-negative integer or undefined. try
of the failure-causing job for this job-xxxx
’s try
. Present only if the failure-causing job had the try attribute.
name
string name of the job
executable
string ID (of the form "applet-xxxx" or "app-xxxx") of the executable the job was running
executableName
string Name of the executable the job was running
function
string name of the function, or entry point, the job was running
failureReason
string failureReason
of the failed job
failureMessage
string failureMessage
of the failed job
failureReports
array of mappings (present if this job failure was reported through a support mechanism) Each item in the list has the following fields:
to
string Email address the failure was reported to
by
string ID of the user who reported the failure
at
timestamp Time at which the report was made
failureCounts
mapping A mapping from failure types (e.g. "AppError") to the number of times that type occurred and caused the job to be restarted prior to the job try being described.
runInput
mapping The input
field that was provided when launching this job
originalInput
mapping The same as runInput
but with default values filled in for any optional inputs that were not provided
input
mapping The same as originalInput
, except that if any job-based references have since been resolved, they are replaced with the resulting object IDs; once the job's state has transitioned to "runnable", this represents exactly the input that will be given to the job
output
mapping or null If the job is in the "done" state, this is the output that this job has generated. Otherwise, this value is null. The output may contain unresolved job-based object references.
region
string The region (e.g., "aws:us-east-1") in which the job is running.
singleContext
boolean Whether the job was specified, at run time, to be locked down in order to only issue requests from its own job token.
ignoreReuse
boolean Whether job reuse was disabled for this job
httpsApp
mapping HTTPS app configuration
enabled
boolean Whether HTTPS app configuration is enabled for this job
shared_access
string HTTPS access restriction for this job
ports
array of integers If enabled
is true, which ports are open for inbound access
dns
mapping DNS configuration for the job
preserveJobOutputs
null or a mapping with preserveJobOutputs.folder
expanded to start with "/"
.
detailedJobMetrics
boolean Set to true only if the detailed job metrics collection was enabled for this job
clusterSpec
mapping A copy of the clusterSpec (if present) in the executable used to launch this job
clusterID
string (present only for jobs with a clusterSpec) Unique ID used to identify the cluster of workers running this job try
costLimit
float If the job is a root execution, and has the root execution cost limit, this is the cost limit for the root execution.
If this job is a root execution, the following fields are included by default (but can be disabled using fields
):
selectedTreeTurnaroundTimeThreshold
integer or null The selected turnaround time threshold (in seconds) for this root execution. When treeTurnaroundTime
reaches selectedTreeTurnaroundTimeThreshold
, the system sends an email about this root execution to the launchedBy
user and the billTo
profile.
selectedTreeTurnaroundTimeThresholdFrom
string or null Where selectedTreeTurnaroundTimeThreshold
is from. executable
means that selectedTreeTurnaroundTimeThreshold
is from this root execution's executable's treeTurnaroundTimeThreshold
. system
means that selectedTreeTurnaroundTimeThreshold
is from the system's default threshold.
debugOn
array of strings Array of error types (e.g. AppError) upon which the job will be held for debugging
The following fields (included by default) are only available if the requesting user has permissions to view the pricing model of the billTo
of the job, the job is the last try of an origin job, and the job's price has been computed:
isFree
boolean Whether this job will be charged to the billTo
(usually set to true if the job has failed for failure reasons which are usually indicative of some system error rather than user error)
currency
mapping Information about currency settings, such as dxCode, code, symbol, symbolPosition, decimalSymbol and groupingSymbol.
totalPrice
number Price (in currency
) for how much this job (along with all its subjobs) costs (or would cost if isFree
is true)
priceComputedAt
timestamp Time at which totalPrice
was computed. For billing purposes, the cost of the job accrues to the invoice of the month that contains priceComputedAt
(in UTC).
totalEgress
mapping Egress (in bytes) for how much data this job (along with all its subjobs) has egressed.
regionLocalEgress
int Amount in bytes of data transferred between IPs in the same cloud region.
internetEgress
int Amount in bytes of data transferred to IPs outside of the cloud provider.
interRegionEgress
int Amount in bytes of data transferred to IPs in other regions of the cloud provider.
egressComputedAt
timestamp Time at which totalEgress
was computed. For billing purposes, the cost of the job accrues to the invoice of the month that contains egressComputedAt (in UTC).
The following fields (included by default) are only available if the requesting user has permissions to view worker information for the job (either the user launched the job, or the user has CONTRIBUTE permissions in the job's project context project):
allowSSH
array of strings Array of IP addresses or CIDR blocks from which SSH access will be allowed to the user by the worker running this job try.
The following fields (included by default) are only available if the requesting user has permissions to view worker information for the job (either the user launched the job, or the user has CONTRIBUTE permissions in the job's project context project) and a worker has started running the job try:
sshHostKey
string The worker's SSH host key
host
string The worker's public FQDN or IP address
sshPort
string TCP port that can be used to connect to the SSH daemon for monitoring or debugging the job
clusterSlaves
array of mappings For jobs running on a cluster of instances, an array describing the slave nodes with keys:
host
string the public hostname of the worker, can be used to SSH to the node if allowSSH was enabled for the job
sshPort
string TCP port to use when connecting via SSH to this node
internalIp
string The private IP address for this node, only accessible from other hosts in this cluster
The following fields (included by default) are only available if this job try is running an applet:
applet
string ID of the applet from which the job was run
The following fields (included by default) are only available if this job try is running an app:
app
string ID of the app from which the job was run
resources
string ID of the app’s resources container
projectCache
string ID of the project cache
The following field is only returned if the corresponding field in the fields
input is set to true
:
headJobOnDemand
boolean The value of headJobOnDemand
that the job was started with
runSystemRequirements
mapping or null A mapping with the systemRequirements
values that were passed explicitly to /executable-xxxx/run
or /job/new
when the job was created, or null
if the systemRequirements
input was not supplied to the API call that created the job.
runSystemRequirementsByExecutable
mapping or null Similar to runSystemRequirements
but for systemRequirementsByExecutable
.
The following field is only returned if the corresponding field in the fields
input is set to true, and
the caller is an ADMIN of the org that the project of job-xxxx
is currently billed to (in addition to the caller having access to the project, as already required by /job-xxxx/describe
) and the billTo
of the project in which the job ran (at the time of the /job-xxxx/describe
call) is licensed to collect and view job's internet usage IPs. If any of these conditions are not met, /job-xxxx/describe
will omit internetUsageIPs
from its output, while returning other valid requested output fields.
internetUsageIPs
array of strings with unique string-encoded IP addresses that the job code communicated with, in no specific order, subject to the following conditions:
httpsApp and ssh-to-worker connections are included
IP traffic involving the Application Execution Environment (AEE):
IP traffic passing through DNAnexus proxies running on the worker is excluded
All other IP traffic involving the AEE is included
internetUsageIPs
includes IPs that were communicated with over various IP protocols, not just TCP (e.g. UDP, ICMP)
IP addresses accessed by a restarted job will be rolled up into internetUsageIPs field of the restarted job's closest visible ancestor job for jobs whose root execution was created before July 12, 2023 00:13 UTC. internetUsageIPs for
restarted jobs in root executions created after July 12, 2023 00:13 UTC can be described using the try
input argument to this API method.
internetUsageIPs
for a cluster job will include IP addresses that were communicated with from the cluster worker nodes as well as the main node, including from restarted cluster nodes.
The following field is only returned if the corresponding field in the fields
input is set to true
, the requesting user has permissions to view the pricing model of the billTo
of the job, and the job is the last try of a root execution:
subtotalPriceInfo
mapping Information about the current costs associated with all jobs in the tree rooted at this job
subtotalPrice
number Current cost (in currency
) of the job tree rooted at this job
priceComputedAt
timestamp Time at which subtotalPrice
was computed
subtotalEgressInfo
mapping Information about the aggregated egress amount in bytes associated with all jobs in the tree rooted at this job
subtotalRegionLocalEgress
int Amount in bytes of data transferred between IPs in the same cloud region.
subtotalInternetEgress
int Amount in bytes of data transferred to IPs outside of the cloud provider.
subtotalInterRegionEgress
int Amount in bytes of data transferred to IPs in other regions of the cloud provider.
egressComputedAt
timestamp Time at which subtotalEgress was computed
The following field is included only if explicitly requested in the fields input, by a user with VIEW access to a job that is billed to an org with the job logs forwarding feature enabled:
jobLogsForwardingStatus
mapping Information on the status of job logs for the job; or null
, if jobLogsForwarding
has not been configured for this job, or if jobLogsForwardingStatus
has not been updated yet. It has the following keys:
linesDropped
int The number of job log lines whose delivery to Splunk failed from the start of this job's try.
Errors
ResourceNotFound
The specified object does not exist
try
input T
is specified, but there is no try T
for job-xxxx.
Also returned if try
input was specified for jobs in root executions created before July 12, 2023 00:13 UTC.
PermissionDenied
Require either VIEW access to the job's temporary workspace, or VIEW access to the parent job's temporary workspace
InvalidInput
Input is not a hash, or fields
, if present, is not a hash or has a non-boolean value
try
input should be a non-negative integer.
/job-xxxx/update
Specification
Update a job. Note that most runtime options for jobs are immutable for reproducibility reasons.
If a rank field is present, a valid rank must be provided. The organization associated with this job must have the license feature executionRankEnabled
active in order to update rank. In addition, the user must be the original launcher of the execution or an administrator of the organization.
When supplying rank, the job/analysis being updated must be a rootExecution
, and the job/analysis being updated must be in a state that is capable of creating more jobs. Rank cannot be supplied for terminal states like "terminated", "done", "failed", "debug_hold".
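The constraints above can be pre-checked client-side before calling /job-xxxx/update; a sketch (validate_rank_update is a hypothetical helper, not a platform API):

```python
def validate_rank_update(rank, is_root_execution, state):
    """Pre-check a /job-xxxx/update rank change, mirroring the documented rules."""
    if not isinstance(rank, int) or isinstance(rank, bool):
        raise ValueError('Expected key "rank" of input to be an integer')
    if not -1024 <= rank <= 1023:
        raise ValueError('Expected key "rank" of input to be in range [-1024, 1023]')
    if not is_root_execution:
        raise ValueError("Not a root execution")
    if state in ("terminated", "done", "failed", "debug_hold"):
        raise ValueError(f"Cannot update rank in terminal state {state!r}")
    return {"rank": rank}

validate_rank_update(100, is_root_execution=True, state="running")  # → {"rank": 100}
```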
Inputs
rank
integer (optional) The rank to set the job and all of its children executions to.
Outputs
id
string ID of the updated job (i.e., the string "job-xxxx")
Errors
InvalidInput
Input is not a hash
Expected key "rank" of input to be an integer
Expected key "rank" of input to be in range [-1024, 1023]
Not a root execution
allowSSH accepts only IP addresses or CIDR blocks up to /16
ResourceNotFound
The specified job does not exist
PermissionDenied
billTo does not have license feature executionRankEnabled
Not permitted to change rank
The supplied token does not belong to the user who started the job or who has ADMINISTER access to the job's parent project.
/job-xxxx/addTags
Specification
Adds the specified tags to the specified job. If any of the tags are already present, no action is taken for those tags.
Inputs
tags
array of strings Tags to be added
Outputs
id
string ID of the manipulated job
Errors
InvalidInput
The input is not a hash, the key tags
is missing, or its value is not an array, or the array contains at least one invalid (not a string of nonzero length) tag.
try
input should be a non-negative integer
ResourceNotFound
The specified job does not exist
try
input T
is specified, but there is no try T
for job-xxxx.
Also returned if try
input was specified for jobs in root executions created before July 12, 2023 00:13 UTC.
PermissionDenied
CONTRIBUTE access is required for the job's project context; otherwise, the request can also be made by jobs sharing the same workspace as the specified job or the same workspace as the parent job of the specified job.
/job-xxxx/removeTags
Specification
Removes the specified tags from the specified job. Ensures that the specified tags are not part of the job -- if any of the tags are already missing, no action is taken for those tags.
Inputs
tags
array of strings Tags to be removed
Outputs
id
string ID of the manipulated job
Errors
InvalidInput
The input is not a hash, or the key tags
is missing, or its value is not an array, or the array contains at least one invalid (not a string of nonzero length) tag
try
input should be a non-negative integer
ResourceNotFound
The specified job does not exist
try
input T
is specified, but there is no try T
for job-xxxx.
Also returned if try
input was specified for jobs in root executions created before July 12, 2023 00:13 UTC
PermissionDenied
CONTRIBUTE access is required for the job's project context; otherwise, the request can also be made by jobs sharing the same workspace as the specified job or the same workspace as the parent job of the specified job
/job-xxxx/setProperties
Specification
Sets properties on the specified job. To remove a property altogether, its value needs to be set to the JSON null (instead of a string). This call updates the properties of the job by merging any old (previously existing) ones with what is provided in the input, the newer ones taking precedence when the same key appears in the old.
Best practices: to completely "reset" properties (i.e. remove all existing key/value pairs and replace them with new ones), issue a describe call to get the names of all properties, then issue a setProperties request to set the values of those properties to null.
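The merge and reset semantics can be sketched as follows (merge_properties is a hypothetical helper mirroring the documented behavior, not a platform API):

```python
def merge_properties(existing, updates):
    """Apply /job-xxxx/setProperties semantics: merge updates into existing
    properties; a null (None) value removes the key."""
    merged = dict(existing)
    for key, value in updates.items():
        if value is None:
            merged.pop(key, None)  # null unsets the property
        else:
            merged[key] = value    # newer value takes precedence
    return merged

# "Reset" pattern: null out everything described, then set new values
old = {"stage": "qc", "owner": "alice"}
cleared = merge_properties(old, {k: None for k in old})  # → {}
fresh = merge_properties(cleared, {"stage": "align"})    # → {"stage": "align"}
```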
Inputs
properties
mapping Properties to modify
key Name of property to modify
value string or null Either a new string value for the
property, or null to unset the property
Outputs
id
string ID of the manipulated job
Errors
InvalidInput
The input is not a hash, properties
is missing or is not a hash, or there exists at least one value in properties
which is neither a string nor the JSON null.
try
input should be a non-negative integer.
ResourceNotFound
The specified job does not exist
try
input T
is specified, but there is no try T
for job-xxxx.
Also returned if try
input was specified for jobs in root executions created before July 12, 2023 00:13 UTC.
PermissionDenied
CONTRIBUTE access is required for the job's project context; otherwise, the request can also be made by jobs sharing the same workspace as the specified job or the same workspace as the parent job of the specified job.
/job-xxxx/terminate
Specification
Jobs can only be terminated by the user who launched the job or by any user with ADMINISTER access to the project context.
Inputs
None
Outputs
id
string ID of the terminated job (i.e., the string "job-xxxx")
Errors
ResourceNotFound
The specified object does not exist
PermissionDenied
Either (1) the user must match the "launchedBy" entry of the job object, and CONTRIBUTE access is required to the project context of the job, or (2) ADMINISTER access is required to the project context of the job
/job-xxxx/getIdentityToken
Specification
This API method must be called from a DNAnexus job with the DNAnexus job token corresponding to the job.
Internally, DNAnexus uses the RSASSA_PSS_SHA_256 algorithm to sign the retrieved JWT. This algorithm follows this spec:
Algorithm
Algorithm description
RSASSA_PSS_SHA_256
PKCS #1 v2.2, Section 8.1, RSASSA-PSS signature with SHA-256 and MGF1 with SHA-256
Inputs
audience
string of between 1 and 255 characters containing only alphanumeric, . (period), _ (underscore), and - (dash) characters. Specifies the intended audience claim of the token. This value corresponds to the audience IdP setting in the third-party service, such as AWS.
subject_claims
non-empty array of strings An array of unique, valid DNAnexus claims that will be joined together to overwrite the default "sub" claim. The default value is ["launched_by", "job_worker_ipv4"].
Valid claims:
'job_id'
'root_execution_id'
'root_executable_id'
'root_executable_name'
'root_executable_version'
'executable_id'
'app_name'
'app_version'
'project_id'
'bill_to'
'launched_by'
'region'
'job_worker_ipv4'
'job_try'
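The constraints that the errors below describe for subject_claims (non-empty array of unique, valid claim strings) can be sketched as a local validation function. The claim names come from the list above; the function itself is illustrative, not part of any DNAnexus SDK:

```python
VALID_CLAIMS = {
    "job_id", "root_execution_id", "root_executable_id",
    "root_executable_name", "root_executable_version", "executable_id",
    "app_name", "app_version", "project_id", "bill_to",
    "launched_by", "region", "job_worker_ipv4", "job_try",
}

def validate_subject_claims(subject_claims):
    """Check a subject_claims input the way the API errors suggest."""
    if not isinstance(subject_claims, list) or not subject_claims:
        raise ValueError("Expected subject_claims to be a non-empty array of strings")
    seen = set()
    for claim in subject_claims:
        if not isinstance(claim, str) or claim not in VALID_CLAIMS:
            raise ValueError(f"Invalid claim: {claim!r}")
        if claim in seen:
            raise ValueError(f"Claim {claim!r} may occur at most once")
        seen.add(claim)
    return subject_claims

# The default "sub" claim is built by joining these two values:
validate_subject_claims(["launched_by", "job_worker_ipv4"])
```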
Outputs
Errors
InvalidInput
job-xxxx/getIdentityToken is missing parameter audience
audience input is expected to be a non-empty string of between 1 and 255 characters containing only alphanumeric, '.', '-', or '_' characters
Expected subject_claims to be a non-empty array of strings
The subject_claims array contains at least one invalid claim
Claim '<input_claim>' is expected to occur at most once in subject_claims
PermissionDenied
job-xxxx/getIdentityToken can only be called with a job token
This error is returned if this API method is called with a non-job DNAnexus token
InvalidAuthentication
The token could not be found
This error is returned if this API method is called with an invalid DNAnexus token
The token cannot be used from this IP address
This error is returned if this API method is called with a DNAnexus job token corresponding to a different DNAnexus job.
When running an applet, the applet object does not need to be inside the project context. Whoever runs the applet needs VIEW access to the applet object and CONTRIBUTE access to the project context. However, the generated job will have VIEW access to the project context and CONTRIBUTE access to the workspace only, and the system "acts as that job" for the purposes of accessing any file objects referenced by the applet spec (such as assets mentioned under bundledDepends); therefore, these objects need to be accessible by the job, which is only possible if they happen to be in the project context or in a public project. Thus, if an applet object's dependencies are neither in the project context nor in a public project, running it will eventually fail, because the system will not be able to fetch the associated files. For this reason, when an applet is cloned, all objects linked to in the bundledDepends of its run specification are also cloned with it.
Any job can issue the API call to create a subjob that will run a particular entry point. Running an entry point is like calling a function in your code that will then execute on its own worker in the cloud. Because each job runs on a separate worker, any communication between jobs under the same master job must be via their input, output, and/or stored data objects in their shared temporary workspace. When an applet is first run, it automatically runs the entry point called "main". See for more information on how to create entry points in your code given the interpreter of your choice.
Different instance types and other system resources can be requested for different entry points of an applet using the systemRequirements field. This can be specified in an applet's run specification and partially (or fully) overridden at runtime using the systemRequirementsByExecutable or systemRequirements runtime arguments. The systemRequirements runtime argument uses the same syntax as in the run specification for applets and apps. The systemRequirementsByExecutable argument syntax is described below.
If the job was run via or , then this is the value of systemRequirements
in the API call.
If the job was run as a stage in a workflow, then the effective value is a combination of fields provided in and of any stored in the workflow itself.
The systemRequirements
field provided to the method.
The systemRequirementsByExecutable
argument to , , , and allows users to specify instanceType
, fpgaDriver
, nvidiaDriver
and clusterSpec.initialInstanceCount
fields for all jobs in the resulting execution tree, configurable by executable ID and then by entry point. If present, it includes at least one of the following key-value pairs:
instanceType
string (optional): A string specifying the instance type used to execute jobs running the specified entry point of the specified executable. See for a list of possible values.
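As an illustration, a systemRequirementsByExecutable mapping is keyed first by executable ID and then by entry point. The executable IDs, entry-point names, and instance-type names below are placeholders, and the '*' wildcard and clusterSpec nesting are shown as an assumed shape, not a verified schema:

```json
{
  "applet-xxxx": {
    "main": { "instanceType": "mem2_ssd1_v2_x4" },
    "*":    { "instanceType": "mem1_ssd1_v2_x2" }
  },
  "app-yyyy": {
    "process": {
      "instanceType": "mem3_ssd1_v2_x8",
      "clusterSpec": { "initialInstanceCount": 3 }
    }
  }
}
```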
Invoking with the "detach": true argument causes the new detached execution tree to disregard detachedFrom's systemRequirementsByExecutable. In other words, a detached execution tree is treated like a new root execution: it does not inherit detachedFrom's systemRequirementsByExecutable setting, but it honors an optional systemRequirementsByExecutable argument S if one is supplied to the /executable-xxxx/run(detached=true, systemRequirementsByExecutable=S) call that created the detached execution tree.
Job timeout policies (configurable by executable ID and then by entry point) may be specified to request timeouts for jobs running specific executables at specific entry points. When a job times out, the job is either terminated or restarted (depending on the job restart policy). Job timeout policies may be specified upon creation of the executable (app or applet; configurable by entry point) in the executable's timeoutPolicy
field of the and may be overridden (in part or in full) at runtime (via an input to , , , , or ; configurable by executable ID and then by entry point). The effective timeout of a job running a specific executable at a specific entry point is determined from the following sources (in decreasing priority):
The timeoutPolicy.<entry_point> field provided in the executable's run specification upon executable creation. This field serves as the user-specified default timeout policy and overrides the system default timeout policy.
The '*' entry point refers to all other entry points that are not named in timeoutPolicyByExecutable.<executable_id> or in the timeoutPolicy field of the executable's run specification.
Example: Suppose that an applet with ID <applet_id> has a run specification that includes a timeout policy for entry point '*' specifying a timeout of 5 hours, and that applet is run with a runtime policy for <applet_id> for entry point 'main' specifying a timeout of 2 hours. The resulting job will have a timeout of 2 hours. Now, suppose that the resulting job runs a subjob with no runtime policies at entry point 'foo'. The resulting subjob will have a timeout of 5 hours.
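The worked example above can be modeled as a small lookup that applies the precedence rules: a runtime policy keyed by executable ID and entry point wins over the executable's stored timeoutPolicy, with '*' as the fallback entry point in each. This is a sketch of the resolution logic, not the platform's implementation:

```python
def effective_timeout(executable_id, entry_point, runtime_policy, executable_policy):
    """Resolve a job's timeout (in hours, for readability) from a
    timeoutPolicyByExecutable-style runtime mapping and the executable's
    own timeoutPolicy, honoring the '*' wildcard entry point."""
    for_exec = runtime_policy.get(executable_id, {})
    for key in (entry_point, "*"):
        if key in for_exec:
            return for_exec[key]           # runtime policy takes precedence
    for key in (entry_point, "*"):
        if key in executable_policy:
            return executable_policy[key]  # executable's stored default
    return None                            # system default timeout applies

# The example above: a 5-hour default for all entry points,
# overridden to 2 hours for 'main' at runtime.
executable_policy = {"*": 5}
runtime_policy = {"applet-xxxx": {"main": 2}}
```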
Note that an applet that is not written in a supported interpreted language can still be run if a separate file object is created containing the compiled code. It is suggested that the user mark this file object as "hidden", and create an applet object with a link to it in "bundledDepends" and with code that specifically runs the compiled binary. The file object does not need to be hidden for it to be bundled with the applet object, but it may be more useful to hide it if it is unlikely to be used in isolation. See the for more details on how the contents of "bundledDepends" are handled.
details
mapping or array (optional, default { }) JSON object or array that is to be associated with the object; see the section for details on valid input
inputSpec
array of mappings (optional) An input specification as described in the section
outputSpec
array of mappings (optional) An output specification as described in the section
runSpec
mapping A run specification as described in the section
access
mapping (optional) Access requirements as described in the section
hostname
string (optional) If specified, the URL to access this job in the browser will be "" instead of the default "". The hostname must consist of lowercase alphanumeric characters and the hyphen (-) character, and must match the /^[a-z][a-z0-9]{2,}-[a-z][a-z0-9-]{2,}[a-z0-9]$/ regular expression. A user may run an app with a custom hostname subject to these additional restrictions:
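The hostname constraint can be checked locally with the regular expression quoted above. Note one non-obvious consequence: each side of the hyphen needs at least three characters after its leading letter or digit boundary, so a name like "abc-def" is rejected:

```python
import re

# Regular expression quoted in the hostname description above.
HOSTNAME_RE = re.compile(r"^[a-z][a-z0-9]{2,}-[a-z][a-z0-9-]{2,}[a-z0-9]$")

def is_valid_hostname(hostname: str) -> bool:
    """True if the string satisfies the documented hostname pattern."""
    return HOSTNAME_RE.match(hostname) is not None
```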
nonce
string (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single applet is created. For more information, see .
treeTurnaroundTimeThreshold
integer (optional, default: N/A or not available.) The turnaround time threshold (in seconds) for trees, specifically, root executions, that run this executable. See for more information about turnaround time and managing job notifications.
A license is required to use the Job Notifications feature. for more information.
The billTo
of the project is not licensed to use jobNotifications
. Contact to enable jobNotifications
.
Alternatively, you can use the method to describe a large number of data objects at once.
runSpec
mapping The run specification of the applet but without the code
field (use the method for obtaining the source code)
hostname
string if specified, the URL to access this job in the browser will be "" instead of the default ""
treeTurnaroundTimeThreshold
integer or null The turnaround time threshold (in seconds) for trees (specifically, root executions) that run this executable. See for more information about turnaround time and managing job notifications.
A license is required to use the jobNotifications
feature. Contact to enable jobNotifications
.
Returns the full specification of the applet, i.e. the same output as but with the runSpec
field left intact.
If constraints on inputs are specified in the applet spec, and the given inputs do not satisfy those constraints at the time the API call is performed, or if the names of inputs given do not exactly match the inputs listed in the applet object, or if an input is omitted and no default is listed in the applet object, an InvalidInput error will result. For inputs given as , an equivalent error may result at job dispatch time, in which case the job will fail.
A job-based object reference did not resolve successfully (invalid job, job ID not found, job not in that project, job is in failed state, field does not exist, field does not contain a valid object link).
project
string (required if invoked by a user; optional if invoked from a job with detach: true
option; prohibited when invoked from a job with detach: false
) The ID of the project in which this applet will be run (i.e., the project context). If invoked with the detach: true
option, then the detached job will run under the provided project
(if provided), otherwise project context is inherited from that of the invoking job. If invoked by a user or run as detached, all output objects are cloned into the project context; otherwise, all output objects will be cloned into the temporary workspace of the invoking job. See for more information on project context on the DNAnexus Platform.
systemRequirements
mapping (optional) Request specific resources for each of the executable's entry points; see the section above for more details
systemRequirementsByExecutable
mapping (optional) Request system requirements for all jobs in the resulting execution tree, configurable by executable and by entry point, described in more detail in the section.
timeoutPolicyByExecutable
mapping (optional) The timeout policies for jobs in the resulting job execution tree, configurable by executable and the entry point within that executable. If unspecified, it indicates that all jobs in the resulting job execution tree will have the default timeout policies present in the of their executables. Note that timeoutPolicyByExecutable
(keyed first by app or applet ID and then entry point name) will propagate down the entire job execution tree, and that explicitly specified upstream policies always take precedence. If present, includes at least one of the following key-value pairs:
executionPolicy
mapping (optional) A collection of options that govern automatic job restart upon certain types of failures. The format of this field is identical to that of the executionPolicy
field in the supplied to and can override part or all of the executionPolicy
found in the applet's run specification (if present).
allowSSH
array of strings (optional, default [ ]) Array of IP addresses or CIDR blocks (up to /16) from which SSH access will be allowed to the user by the worker running this job. Array may also include '*' which is interpreted as the IP address of the client issuing this API call as seen by the API server. See for more information.
nonce
string (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single job is created. For more information, see .
For more information on a license that supports launching detached executions, .
A license is required to use the Job Ranking feature. for more information.
A license is required to use preserveJobOutputs. for more information.
detailedJobMetrics
boolean Requests detailed metrics collection for jobs if set to true. The default value for this flag is project billTo
's detailedJobMetricsCollectDefault
policy setting or false if org default is not set. This flag can be specified for root executions and will apply to all jobs in the root execution. The list of detailed metrics collected every 60 seconds and viewable for 15 days from the start of a job is .
A license is required to use detailedJobMetrics
. for more information.
The requesting user must be able to describe all jobs used in a -- see .
The possible error reasons are described in .
"malformedLink": incorrect syntax was given either for a job-based object reference or for a link to a data object. Possible values for "expected" include:
This API call may only be made from within an executing job. This call creates a new job which will execute a particular function (from the same applet as the one the current job is running) with a particular input. The input will be checked for links, and any will be honored. However, the input is not checked against the applet spec. Since this is done from inside another job, the new job will inherit the same workspace and project context -- no objects will be cloned, and no other modification will take place in the workspace.
See for more info.
The system takes note of the job that this new job is created from; this information is available in the "parent" field when describing the new job object. Moreover, the parent-child relationship is tracked by the system, and used to advance the job state of the parent. Specifically, a job which has finished executing remains in the "waiting_on_output" state until all of its child jobs have proceeded to the "done" state. See for more information.
A job-based object reference did not resolve successfully (invalid job, job ID not found, job not in that project, job is in failed state, field does not exist, field does not contain a valid object link).
systemRequirements
mapping (optional) Request specific resources for each of the executable's entry points; see the section above for more details
systemRequirementsByExecutable
mapping (optional) Request system requirements for all jobs in the resulting execution subtree, configurable by executable and by entry point, described in more detail in the section.
timeoutPolicyByExecutable
mapping (optional) Similar to the timeoutPolicyByExecutable
field supplied to
nonce
string (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single job is created. For more information, see .
Describes a job object. Each job is the result of either running an applet by calling "run" on an applet or app object, or the result of launching a job from an existing job by calling the "new" job class method. In all cases, the job is associated with either an applet or an app, and a project context reflected in the "project" field. Moreover, the ID of the user who launched the execution that this job is a part of is reflected in the "launchedBy" field (and is propagated to all child jobs as well, including any executables launched from those jobs). A job is always in a particular state; for information about job states, see the section.
try
non-negative integer (optional). Specifies a particular try of a restarted job. Value of 0 refers to the first try. Defaults to the latest try (i.e. the try with the largest try
attribute) for the specified job ID. See section for details.
startedRunning
timestamp (present once the transition has occurred) Time at which this job try transitioned into the "running" state (see )
stoppedRunning
timestamp (present once the transition has occurred) Time at which this job try transitioned out of the "running" state (see )
state
string The job state: one of "idle", "waiting_on_input", "runnable", "running", "waiting_on_output", "done", "debug_hold", "restartable", "failed", "terminating", and "terminated"; see for more details on job states
A license is required to use the Job Ranking feature. for more information.
systemRequirements
mapping Resolved resources requested for each of the executable's entry points based on mergedSystemRequirementsByExecutable
, runStageSystemRequirements
, runSystemRequirements
, system requirements embedded in workflow stages, and systemRequirements
supplied to /executable/new
. See the section above for more details.
executionPolicy
mapping Options specified for this job to perform automatic restart upon certain types of failures. The format of this field is identical to that of the executionPolicy
field in the supplied to . This hash is computed from the following sources, in order of precedence:
Values given to , , or
Values given to
Values given to in the
networkAccess
array of strings The computed network access list that is available to the job; this may be a subset of the requested list in the of the executable if an ancestor job had access to a more restricted list of domains.
url
string Defaults to "" unless overridden by hostname
in the executable's dxapp.json.
isolatedBrowsing
boolean Whether httpsApp access to this job is wrapped in .
treeTurnaroundTime
integer The turnaround time (in seconds) of this root execution, which is the time between its creation time and its terminal state time (or the current time if it is not yet in a terminal state). Terminal states for an execution include done, terminated, and failed; see for information on them. If this root execution can be retried, the turnaround time begins at the creation time of the root execution's first try, so it includes the turnaround times of all tries.
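The turnaround-time definition amounts to simple timestamp arithmetic. A sketch, assuming timestamps in milliseconds since the epoch (the unit used elsewhere in the API):

```python
def tree_turnaround_time(created_ms, terminal_ms=None, now_ms=None):
    """Seconds between a root execution's creation and its terminal
    state time, or the current time if no terminal state is reached."""
    end = terminal_ms if terminal_ms is not None else now_ms
    return (end - created_ms) // 1000

# A root execution created at t=0 that finished 7,200,000 ms (2 h) later
# has a turnaround time of 7200 seconds.
```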
A license is required to use the jobNotifications
feature. Contact to enable jobNotifications
.
The following fields (included by default) are only available if the requesting user has permissions to view debug options for the job (either the user launched the job, or the user has ADMINISTER permissions in the job's project context; see for more information):
mergedSystemRequirementsByExecutable
mapping or null A mapping with values of systemRequirementsByExecutable
supplied to all the ancestors of this job and the value supplied to create this job, merged as described in the section. If neither the ancestors of this job nor this job were created with the systemRequirementsByExecutable
input, mergedSystemRequirementsByExecutable
value of null
is returned.
A license is required to use the Internet Usage IPs
feature. for more information.
Note that this information should only be used by org admins for forensic investigation and fraud prevention purposes.
A license is required to use the . for more information
A license is required to use the Job Ranking feature. for more information.
allowSSH
array of strings (optional, default [ ]) Array of IP addresses or CIDR blocks (up to /16) from which SSH access will be allowed to the user by the worker running this job. Array may also include '*' which is interpreted as the IP address of the client issuing this API call as seen by the API server. See for more information. Changing this value after a job has started running may not be taken into account for the running job, but it may be passed on to new child jobs created after the update.
try
non-negative integer (optional) Specifies a particular try of a restarted job. Value of 0 refers to the first try. Defaults to the latest try (i.e. the try with the largest try
attribute) for the specified job ID. See section for details.
try
non-negative integer (optional). Specifies a particular try of a restarted job. Value of 0 refers to the first try. Defaults to the latest try (i.e. the try with the largest try
attribute) for the specified job ID. See section for details.
try
non-negative integer (optional). Specifies a particular try of a restarted job. Value of 0 refers to the first try. Defaults to the latest try (i.e. the try with the largest try
attribute) for the specified job ID. See section for details.
Terminates a job and its job tree. If the job is already in a terminal state such as "terminated", "failed", or "done", no action will be taken. Otherwise, all jobs left in the job tree that have not reached a terminal state will eventually be put into the "terminated" state with failure reason "Terminated". Any authentication tokens generated for this execution will be invalidated, and any running jobs will be stopped. See for more details on job states.
Get a signed DNAnexus JSON Web Token (JWT) that establishes a security-hardened, verifiable identity linked to a specific DNAnexus job. This job identity token can be provided to a third-party service (such as AWS), which validates the job identity token and exchanges it for temporary access credentials (e.g., an AWS token) that allow job code to securely access specific third-party resources, such as external storage buckets, external lambda functions, external databases, or external secrets vaults. See for details.
This method returns a JSON object with a Token field containing a signed JSON Web Token (JWT), with standard and custom claims, represented as a JSON string.
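A job identity token is a standard three-part JWT, so its claims can be inspected locally by base64url-decoding the payload segment. The sketch below builds a fake, unsigned token for illustration (the claim values are invented); real tokens must have their RSASSA-PSS signature, which corresponds to the JWT "PS256" algorithm, verified before the claims are trusted:

```python
import base64
import json

def b64url_encode(obj) -> str:
    """Serialize a dict to unpadded base64url, as JWT segments are."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_jwt_claims(token: str) -> dict:
    """Decode (without verifying!) the payload segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fake token shaped like header.payload.signature (illustrative only).
header = b64url_encode({"alg": "PS256", "typ": "JWT"})
claims = {"sub": "user-alice:10.0.0.1", "aud": "my.audience"}
token = f"{header}.{b64url_encode(claims)}.signature"
```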