Applets and Entry Points
Applets are executable data objects. Like other data objects, applets exist inside projects and can be cloned between projects. By default, applets also have VIEW permissions into the projects from which they are run. Applets can be used to create private, customized scripts for specialized needs, or for testing and developing more general apps that may be of interest to the community at large.
Applets must be created with a run specification so that the system knows how to run them, but they can be created with or without input and output specifications. Providing I/O specifications is encouraged because it allows the system to validate the input arguments when an applet is launched and to validate the produced output when an applet finishes. Also, when an applet has an I/O specification, the DNAnexus website can automatically render a configuration form for users who want to launch the applet (since the system is aware of the names and types of inputs that the applet requires). Finally, other developers who want to invoke the applet from their own applets can consult the I/O specification for help on how to launch the applet programmatically (i.e., what to give as inputs, or what to expect from the outputs).
If I/O specifications are not provided, users can launch the applet with any input (which the system will not validate, except for DNAnexus links and job-based object references), and the applet can produce any output. Developers of such applets are responsible for documenting what their applet expects as input and what outputs it can produce. Likewise, they are responsible for providing a user interface for configuring the applet's input on the DNAnexus website. This allows for building powerful applets whose input or output might be hard to describe formally, perhaps because it is variable or polymorphic, or would otherwise be constrained by what DNAnexus allows in its input/output definition grammar.
Running an applet is slightly different depending on whether the applet is launched by a user from outside of the DNAnexus platform, or by another running job.
Launching an applet outside of the platform requires associating a project with it. As mentioned earlier, this project context is important for the following reasons:
- Any charges related to the execution of the applet are associated with that project.
- Jobs (such as the job created by launching the applet, as well as any other jobs created by the applet itself while running) will be given VIEW access to that project.
- Any objects output by the applet will be placed into that project.
When launching an applet from another running job, this parent job is already associated with a project. This project is carried forward to the launched master job; more specifically:
- Any charges related to the execution of the master job are associated with that project.
- Jobs (such as the job created by launching the master job, as well as any other descendant jobs of the master job) will be given VIEW access to that project.
- Any objects output by the master job will be placed into the workspace of the parent job.
When running an applet, the applet object does not need to be inside the project context. Whoever runs the applet needs VIEW access to the applet object and CONTRIBUTE access to the project context. However, the generated job will have VIEW access to the project context and CONTRIBUTE access to the workspace only, and the system "acts as that job" when accessing any file objects referenced by the applet spec (such as assets mentioned under bundledDepends, etc.); therefore, these objects need to be accessible to the job, which is only possible if they happen to be in the project context or in a public project. Thus, if an applet object's dependencies are neither in the project context nor in a public project, running it will eventually fail because the system will not be able to fetch the associated files. As such, when an applet is cloned, all objects linked in the bundledDepends of its run specification will also be cloned with it.

Any job can issue the /job/new API call to create a subjob that will run a particular entry point. Running an entry point is like calling a function in your code that will then execute on its own worker in the cloud. Because each job runs on a separate worker, any communication between jobs under the same master job must happen via their input, output, and/or data objects stored in their shared temporary workspace. When an applet is first run, it automatically runs the entry point called "main". See Code Interpreters for more information on how to create entry points in your code given the interpreter of your choice.
Different instance types and other system resources can be requested for different entry points of an applet using the systemRequirements field. This can be specified in an applet's run specification and partially (or fully) overridden at runtime (using the same syntax as in the run specification). When computing the effective value of systemRequirements, the keys are retrieved from the following sources in order (using the first one found):

- The systemRequirements value requested for the job's master job:
  - If the job was run via /app-xxxx/run or /applet-xxxx/run, then this is the value of systemRequirements in the API call.
  - If the job was run as a stage in a workflow, then the effective value is a combination of fields provided in /workflow-xxxx/run and of any default values stored in the workflow itself.
- The runSpec.systemRequirements field in the applet or app.
- If none of these values are present (or contain a relevant entry point), system-wide defaults are used.
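The lookup order above, together with the "*" wildcard entry point described below, can be sketched as follows. This is an illustrative client-side model only, not the platform's actual implementation: the workflow-stage merging case is omitted, and the instance type names are placeholders.

```python
def effective_requirements(entry_point, runtime=None, run_spec=None, default=None):
    """Return the effective systemRequirements entry for one entry point.

    Sources are consulted in priority order: the runtime override (from
    /app-xxxx/run or /applet-xxxx/run), then runSpec.systemRequirements,
    then a system-wide default. Within a source, an exact entry-point key
    is used if present; otherwise the "*" wildcard applies.
    """
    for source in (runtime, run_spec):
        if not source:
            continue
        if entry_point in source:
            return source[entry_point]
        if "*" in source:
            return source["*"]
    return default

# Pitfall: a runtime "*" override shadows the run specification's
# explicit "main" entry, because the runtime source is consulted first.
run_spec = {"main": {"instanceType": "mem1_ssd1_v2_x4"}}   # instance type X
runtime = {"*": {"instanceType": "mem2_ssd1_v2_x8"}}       # instance type Y
effective_requirements("main", runtime=runtime, run_spec=run_spec)
# → {"instanceType": "mem2_ssd1_v2_x8"}  (Y wins)
```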
The "*" entry point refers to all other entry points that are not already named in systemRequirements. Note that if an applet has a run specification that includes an instance type X for entry point "main", and the applet is run with systemRequirements specifying that "*" should have a different instance type Y, the "main" entry point will be run on instance type Y, because the "main" key has been implicitly overridden by the "*" option.

Job timeout policies (configurable by executable ID and then by entry point) may be specified to request timeouts for jobs running specific executables at specific entry points. When a job times out, it is either terminated or restarted (depending on the job restart policy). Job timeout policies may be specified upon creation of the executable (app or applet; configurable by entry point) in the executable's run specification, and may be overridden (in part or in full) at runtime (via an input to /app-xxxx/run, /applet-xxxx/run, /workflow-xxxx/run, or /job/new; configurable by executable ID and then by entry point). The effective timeout of a job running a specific executable at a specific entry point is determined from the following sources (in decreasing priority):
- The runtime input timeoutPolicyByExecutable.<executable_id>.<entry_point> field. Note that this field overrides both user-specified and system default timeout policies; it propagates down the entire resulting job execution tree and merges with all downstream runtime inputs, with explicit upstream inputs taking precedence.
- The timeoutPolicy.<entry_point> field provided in the executable's run specification upon executable creation. This field serves as the user-specified default timeout policy and overrides the system default timeout policy.
- The system default policy, which is no timeout.
The '*' entry point refers to all other entry points that are not named in timeoutPolicyByExecutable.<executable_id> or the timeoutPolicy field of the executable's run specification. Note that setting the timeout of a specific executable at a specific entry point to 0 is the same as not setting a timeout for that executable at that entry point at all (i.e., as if there were no runtime or default run specification entry for that executable at that entry point).
Example: Suppose that an applet with ID <applet_id> has a run specification that includes a timeout policy for entry point '*' specifying a timeout of 5 hours, and suppose that the applet is run with a runtime policy for <applet_id> for entry point 'main' specifying a timeout of 2 hours. The resulting job will have a timeout of 2 hours. Now, suppose that the resulting job runs a subjob with no runtime policies at entry point 'foo'. The resulting subjob will have a timeout of 5 hours.

API method: /applet/new

Specification
Creates a new applet object with the given applet specification.
Links specified in "bundledDepends" contribute to the "links" array returned by a describe call and are always cloned together with the applet regardless of their visibility.
Note that an applet that is not written in a supported interpreted language can still be run if a separate file object is created containing the compiled code. It is suggested that the user mark this file object as "hidden", and create an applet object with a link to it in the "bundledDepends" and code which specifically runs the compiled binary. Note that the file object does not need to be hidden for it to be bundled with the applet object, but for this purpose, it may be more useful to hide the file object if it is unlikely to be used in isolation. See the Execution Environment Reference for more details on how the contents of "bundledDepends" are handled.
The applet object does not receive special permissions for any referenced data objects (such as "id" in "bundledDepends"); these entries are accessed every time the applet is run, with the same permissions as the job. This means that if some referenced file is later deleted or is not present in the project context, the applet will not be able to run. However, the system will not automatically "invalidate" the applet object for any broken links; if the referenced file is reinstated from a copy existing in another project, the applet can then be run.
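For example, a run specification bundling a compiled binary might look like the following sketch. The file ID and field values are placeholders, and a real run specification requires additional fields; see the Execution Environment Reference for how "bundledDepends" entries are handled.

```python
run_spec = {
    "interpreter": "bash",
    # The code simply invokes the compiled binary shipped in the bundle.
    "code": 'main() { ./mytool "$@"; }',
    "bundledDepends": [
        {
            "name": "mytool",                       # hypothetical (hidden) file object
            "id": {"$dnanexus_link": "file-xxxx"},  # placeholder file ID
        }
    ],
}
```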
Inputs
- project string ID of the project or container to which the applet should belong (e.g. the string "project-xxxx")
- name string (optional, default is the new ID) The name of the object
- title string (optional, default "") Title of the applet, e.g. "Micro Map"
- summary string (optional, default "") A short description of the applet
- description string (optional, default "") A longer description of the applet
- developerNotes string (optional, default "") More detailed notes about the applet
- tags array of strings (optional) Tags to associate with the object
- types array of strings (optional) Types to associate with the object
- hidden boolean (optional, default false) Whether the object should be hidden
- properties mapping (optional) Properties to associate with the object
  - key Property name
  - value string Property value
- details mapping or array (optional, default { }) JSON object or array that is to be associated with the object; see the Object Details section for details on valid input
- folder string (optional, default "/") Full path of the folder that is to contain the new object
- parents boolean (optional, default false) Whether all folders in the path provided in folder should be created if they do not exist
- ignoreReuse boolean (optional, default false) If true, no job reuse will occur by default for this applet
- inputSpec array of mappings (optional) An input specification as described in the Input Specification section
- outputSpec array of mappings (optional) An output specification as described in the Output Specification section
- dxapi string The version of the API that the applet was developed with, for example, "1.0.0"
- httpsApp mapping (optional) HTTPS app configuration
  - ports array of integers Array of ports open for inbound access. Allowed ports are 443, 8080, and 8081
  - shared_access string HTTPS access restriction for jobs run from this executable. Allowed values are "VIEW", "CONTRIBUTE", "ADMINISTER", and "NONE". VIEW, CONTRIBUTE, and ADMINISTER require the specified permission level or greater for the project in which the job executes. The most restrictive setting, NONE, limits access to only the user who launched the job.
  - dns mapping DNS configuration for the job
    - hostname string (optional) If specified, the URL to access this job in the browser will be "https://hostname.dnanexus.cloud" instead of the default "https://job-xxxx.dnanexus.cloud". The hostname must consist of lowercase alphanumeric characters and the hyphen (-) character, and must match the regular expression /^[a-z][a-z0-9]{2,}-[a-z][a-z0-9-]{2,}[a-z0-9]$/. A user may run an app with a custom hostname subject to these additional restrictions:
      - The effective billTo of the running app must be an organization.
      - The user must be an admin of that organization.
      - If the ID of the billTo organization is org-some_name, then the hostname must resemble hostprefix-some-name, i.e. it must end in the handle portion of the org ID with _ and . replaced by hyphens (-), and start with an alphanumeric "prefix" (in this case hostprefix).

For an org with the ID org-some_name, valid hostnames include myprefix-some-name and ab5-some-name; invalid hostnames include a-some-name (prefix too short), a-some_name (underscore present), 4abc-some-name (starts with a numerical character), and myprefix-someotherorgname (does not end with -some-name).

When two jobs attempt to use the same URL, the newer job will take over the hostname from the already running job. We recommend having only a single job running with any given URL, to avoid the risk of the URL being re-assigned to another job with the same URL if that job restarts due to its restart policy.
- nonce string (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single applet is created. For more information, see Nonces.
- treeTurnaroundTimeThreshold integer (optional, no default) The turnaround time threshold (in seconds) for trees (specifically, root executions) that run this executable. See Job Notifications for more information about turnaround time and managing job notifications.
A license is required to use the jobNotifications feature. Contact DNAnexus Sales to enable jobNotifications.
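Putting these inputs together, a minimal /applet/new request body might look like the following sketch. All IDs and the runSpec contents are placeholders, and a real runSpec requires additional fields. The hostname check mirrors the regular expression given for dns.hostname above; note that the regex alone enforces only the shape of the name, while the org-handle suffix rule is a separate check:

```python
import re

# Shape check for custom hostnames (the org-handle suffix rule is separate).
HOSTNAME_RE = re.compile(r"^[a-z][a-z0-9]{2,}-[a-z][a-z0-9-]{2,}[a-z0-9]$")

applet_new_input = {
    "project": "project-xxxx",        # placeholder project ID
    "name": "hello-applet",
    "folder": "/applets",
    "parents": True,                  # create /applets if it does not exist
    "dxapi": "1.0.0",
    "runSpec": {                      # illustrative; a real runSpec needs more fields
        "interpreter": "bash",
        "code": "main() { echo hello; }",
    },
    # inputSpec/outputSpec omitted: the system will not validate I/O
    "httpsApp": {
        "ports": [443],               # allowed ports: 443, 8080, 8081
        "shared_access": "VIEW",
        "dns": {"hostname": "myprefix-some-name"},  # valid shape for org-some_name
    },
}

hostname = applet_new_input["httpsApp"]["dns"]["hostname"]
assert HOSTNAME_RE.fullmatch(hostname)              # shape is valid
assert not HOSTNAME_RE.fullmatch("a-some-name")     # prefix too short
assert not HOSTNAME_RE.fullmatch("4abc-some-name")  # starts with a digit
```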
Outputs

- id string ID of the created applet object (i.e. a string in the form "applet-xxxx")
Errors
- InvalidInput
  - A reserved linking string ("$dnanexus_link") appears as a key in a mapping in details but is not the only key in the hash.
  - A reserved linking string ("$dnanexus_link") appears as the only key in a hash in details but has a value other than a string.
  - The spec is invalid.
  - All specified bundled dependencies must be in the same region as the specified project.
  - A nonce was reused in a request but some of the other inputs had changed, signifying a new and different request.
  - A nonce may not exceed 128 bytes.
  - treeTurnaroundTimeThreshold must be a non-negative integer less than 2592000.
- PermissionDenied
  - UPLOAD access is required to the specified project.
  - The billTo of the project is not licensed to use jobNotifications. Contact [email protected] to enable jobNotifications.
- ResourceNotFound
  - The route in folder does not exist while parents is false.
API method: /applet-xxxx/describe

Specification
Describes an applet object.
Alternatively, you can use the /system/describeDataObjects method to describe a large number of data objects at once.
Inputs
- project string (optional) Project or container ID to be used as a hint for finding the object in an accessible project
- defaultFields boolean (optional, default false if fields is supplied, true otherwise) Whether to include the default set of fields in the output (the default fields are described in the "Outputs" section below). These selections are overridden by any fields explicitly named in fields.
- fields mapping (optional) Include or exclude the specified fields from the output. These selections override the settings in defaultFields.
  - key Desired output field; see the "Outputs" section below for valid values here
  - value boolean Whether to include the field

The following options are deprecated (and will not be respected if fields is present):

- properties boolean (optional, default false) Whether the properties should be returned
- details boolean (optional, default false) Whether the details should also be returned
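For example, to describe an applet returning only its name, its input specification, and the non-default properties field, the request body could look like this (field names are taken from the "Outputs" section below):

```python
describe_input = {
    # Because "fields" is present, defaultFields implicitly defaults to
    # false, so only the fields named here are returned.
    "fields": {
        "name": True,
        "inputSpec": True,
        "properties": True,  # not in the default set; must be requested explicitly
    },
}
```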
Outputs
- id string The object ID (i.e. the string "applet-xxxx")

The following fields are included by default (but can be disabled using fields or defaultFields):

- project string ID of the project or container in which the object was found
- class string The value "applet"
- types array of strings Types associated with the object
- created timestamp Time at which this object was created
- state string Either "open" or "closed"
- hidden boolean Whether the object is hidden or not
- links array of strings The object IDs that are pointed to from this object, including links found in both the details and the bundledDepends (if it exists) of the applet
- name string The name of the object
- folder string The full path to the folder containing the object
- sponsored boolean Whether the object is sponsored by DNAnexus
- tags array of strings Tags associated with the object
- modified timestamp Time at which the user-provided metadata of the object was last modified
- createdBy mapping How the object was created
  - user string ID of the user who created the object or launched an execution which created the object
  - job string (present if a job created the object) ID of the job that created the object
  - executable string (present if a job created the object) ID of the app or applet that the job was running
- runSpec mapping The run specification of the applet but without the code field (use the /applet-xxxx/get method for obtaining the source code)
- dxapi string The version of the API used
- access mapping The access requirements of the applet
- title string The title of the applet
- summary string The summary of the applet
- description string The description of the applet
- developerNotes string The developer notes of the applet
- ignoreReuse boolean Whether job reuse is disabled for this applet
- httpsApp mapping HTTPS app configuration
  - shared_access string HTTPS access restriction for this job
  - ports array of integers Ports that are open for inbound access
  - dns mapping DNS configuration for the job
    - hostname string If specified, the URL to access this job in the browser will be "https://hostname.dnanexus.cloud" instead of the default "https://job-xxxx.dnanexus.cloud"
- treeTurnaroundTimeThreshold integer or null The turnaround time threshold (in seconds) for trees (specifically, root executions) that run this executable. See Job Notifications for more information about turnaround time and managing job notifications.
A license is required to use the jobNotifications feature. Contact DNAnexus Sales to enable jobNotifications.

The following field (included by default) is available if an input specification is specified for the applet:
- inputSpec array of mappings The input specification of the applet
The following field (included by default) is available if an output specification is specified for the applet:
- outputSpec array of mappings The output specification of the applet
The following field (included by default) is available if the object is sponsored by a third party:
- sponsoredUntil timestamp Indicates the expiration time of data sponsorship (this field is only set if the object is currently sponsored; if set, the specified time is always in the future)
The following fields are only returned if the corresponding field in the fields input is set to true:

- properties mapping Properties associated with the object
  - key Property name
  - value string Property value
- details mapping or array Contents of the object's details
Errors
- ResourceNotFound
  - The specified object does not exist, or the specified project does not exist.
- InvalidInput
  - The input is not a hash, project (if supplied) is not a string, or the value of properties (if supplied) is not a boolean.
- PermissionDenied
  - VIEW access is required for the project provided (if any), and VIEW access is required for some project containing the specified object (not necessarily the same as the hint provided).
API method: /applet-xxxx/get

Specification

Returns the full specification of the applet, i.e. the same output as /applet-xxxx/describe but with the runSpec field left intact.

Inputs
- None
Outputs
- project string ID of the project or container in which the object was found
- id string The object ID (i.e. the string "applet-xxxx")
- class string The value "applet"
- types array of strings Types associated with the object
- created timestamp Time at which this object was created
- state string Either "open" or "closed"
- hidden boolean Whether the object is hidden or not
- links array of strings The object IDs that are pointed to from this object, including links found in both the details and the bundledDepends (if it exists) of the applet
- name string The name of the object
- folder string The full path to the folder containing the object
- sponsored boolean Whether the object is sponsored by DNAnexus
- tags array of strings Tags associated with the object
- modified timestamp Time at which the user-provided metadata of the object was last modified
- createdBy mapping How the object was created
  - user string ID of the user who created the object or launched an execution which created the object
  - job string (present if a job created the object) ID of the job that created the object
  - executable string (present if a job created the object) ID of the app or applet that the job was running
- runSpec mapping The run specification of the applet
- dxapi string The version of the API used
- access mapping The access requirements of the applet
- title string The title of the applet
- summary string The summary of the applet
- description string The description of the applet
- developerNotes string The developer notes of the applet
If an input specification is specified for the applet:
- inputSpec array of mappings The input specification of the applet
If an output specification is specified for the applet:
- outputSpec array of mappings The output specification of the applet
Errors
- ResourceNotFound
  - The specified object does not exist.
- PermissionDenied
  - VIEW access is required.
API method: /applet-xxxx/run

Specification
Creates a new job which will execute the code of this applet. The default entry point for the applet’s interpreter (given in the runSpec.interpreter field of the applet spec) will be called.
| Interpreter | Entry point |
| --- | --- |
| bash | main() in top-level scope with no args, if it exists. Also, $1 is set to "main" |
| python3 | The function decorated with @dxpy.entry_point("main"), called with no args |
If constraints on inputs are specified in the applet spec and the given inputs do not satisfy those constraints at the time the API call is performed, or if the names of the given inputs do not exactly match the inputs listed in the applet object, or if an input is omitted and no default is listed in the applet object, an InvalidInput error will result. For inputs given as job-based object references, an equivalent error may result at job dispatch time, in which case the job will fail.
The job might fail for the following reasons (this list is non-exhaustive):
- A reference such as one mentioned in "bundledDepends" could not be accessed using the job's credentials (VIEW access to the project context, CONTRIBUTE access to the workspace, VIEW access to public projects).
- A job-based object reference did not resolve successfully (invalid job, job ID not found, job not in that project, job is in a failed state, field does not exist, field does not contain a valid object link).
- An input object does not exist.
- Permission denied accessing an input object.
- An input object is not a data object (things like users, projects, or jobs are not data objects).
- An input object does not satisfy the class constraints.
- An input object does not satisfy the type constraints.
- An input object is not in the "closed" state.
- Insufficient credits.
- The user has too many jobs in a nonterminal state.
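The name and default checks described above can be sketched client-side like this. This is a simplified, assumed model of the server's validation: real input specifications carry further constraints (class, types, state) that are omitted here.

```python
def check_input_names(applet_input, input_spec):
    """Raise if input names don't match the spec or a required input is missing.

    input_spec is a list of mappings, each with at least a "name" key and
    optionally "optional" / "default" keys (a simplified sketch).
    """
    spec_by_name = {field["name"]: field for field in input_spec}
    # Every supplied input must be named in the specification.
    for name in applet_input:
        if name not in spec_by_name:
            raise ValueError(f"InvalidInput: unrecognized input field {name!r}")
    # Every non-optional input without a default must be supplied.
    for name, field in spec_by_name.items():
        required = not field.get("optional", False) and "default" not in field
        if required and name not in applet_input:
            raise ValueError(f"InvalidInput: missing required input {name!r}")

spec = [{"name": "reads"}, {"name": "min_quality", "optional": True}]
check_input_names({"reads": {"$dnanexus_link": "file-xxxx"}}, spec)  # passes
```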
Inputs
- name string (optional, default is the applet's title if set, otherwise the applet's name) Name for the resulting job
- input mapping Input that the applet is launched with
  - key Input field name. If the applet has an input specification, it must be one of the names of the inputs; otherwise, it can be any valid input field name.
  - value Input field value
- dependsOn array of strings (optional) List of job, analysis, and/or data object IDs; the applet will not begin running any of its entry points until all jobs listed have transitioned to the "done" state and all data objects listed are in the "closed" state
- project string (required if invoked by a user; optional if invoked from a job with the detach: true option; prohibited when invoked from a job with detach: false) The ID of the project in which this applet will be run (i.e., the project context). If invoked with the detach: true option, the detached job will run under the provided project (if provided); otherwise, the project context is inherited from that of the invoking job. If invoked by a user or run as detached, all output objects are cloned into the project context; otherwise, all output objects are cloned into the temporary workspace of the invoking job. See The Project Context and Temporary Workspace for more information.
- folder string (optional, default "/") The folder into which objects output by the job will be placed. If the folder does not exist when the job completes, the folder (and any parent folders necessary) will be created. The folder structure that output objects reside in is replicated within the target folder, e.g. if folder is set to "/myJobOutput" and the job outputs an object which is in the folder "/mappings/mouse" in the workspace, the object is placed into "/myJobOutput/mappings/mouse".
- tags array of strings (optional) Tags to associate with the resulting job
- properties mapping (optional) Properties to associate with the resulting job
  - key Property name
  - value string Property value
- details mapping or array (optional, default { }) JSON object or array that is to be associated with the job
- systemRequirements mapping (optional) Request specific resources for each of the executable's entry points; see the Requesting Instance Types section above for more details
- executionPolicy mapping (optional) A collection of options that govern automatic job restart upon certain types of failures. The format of this field is identical to that of the executionPolicy field in the run specification supplied to /applet/new, and it can override part or all of the executionPolicy found in the applet's run specification (if present).
- timeoutPolicyByExecutable mapping (optional) The timeout policies for all jobs in the resulting job execution tree, configurable by executable. If unspecified, all jobs in the resulting job execution tree will have the default timeout policies present in the run specifications of their executables. If present, includes at least one of the following key-value pairs:
  - key Executable ID. If an executable is not explicitly specified in timeoutPolicyByExecutable, then any job in the resulting job execution tree that runs that executable will have the default timeout policy present in the run specification of that executable.
  - value mapping or null Timeout policy for the corresponding executable. A value of null overrides the default timeout policy present in the run specification of the corresponding executable and indicates that no job in the resulting job execution tree that runs the corresponding executable will have a timeout policy. If a mapping, includes at least one of the following key-value pairs:
    - key Entry point name, or "*" to indicate all entry points not explicitly specified in this mapping. If an entry point name is not explicitly specified and "*" is not present, then any job in the resulting job execution tree that runs the corresponding executable at that entry point will have the default timeout policy present in the run specification of the corresponding executable.
    - value mapping or null Timeout for a job running the corresponding executable at the corresponding entry point. A value of null indicates that no job in the resulting job execution tree that runs the corresponding executable at the corresponding entry point will have a timeout. Includes at least one of the following key-value pairs:
      - key Unit of time; one of "days", "hours", or "minutes".
      - value number Amount of time for the corresponding time unit; must be non-negative. The effective timeout is the sum of the units of time represented in this mapping. Note that setting the effective timeout to 0 is the same as specifying null for the corresponding executable at the corresponding entry point. Note that timeoutPolicyByExecutable (keyed by executable ID and then entry point name) propagates down the entire job execution tree, and explicitly specified upstream policies always take precedence.
- delayWorkspaceDestruction boolean (optional) If not given, the value defaults to false for root executions (launched by a user or detached from another job), or to the parent's delayWorkspaceDestruction setting. If set to true, the temporary workspace created for the resulting job will be preserved for 3 days after the job either succeeds or fails.
- allowSSH array of strings (optional, default [ ]) Array of IP addresses or CIDR blocks (up to /16) from which SSH access will be allowed to the user by the worker running this job. The array may also include '*', which is interpreted as the IP address of the client issuing this API call as seen by the API server. See Connecting to Jobs for more information.
- debug mapping (optional, default { }) Specify debugging options for running the executable; this field is only accepted when this call is made by a user (and not a job)
  - debugOn array of strings (optional, default [ ]) Array of job errors after which the job's worker should be kept running for debugging purposes, offering a chance to SSH into the worker before worker termination (assuming SSH has been enabled). This option applies to all jobs in the execution tree. Jobs in this state for longer than 2 days will be automatically terminated but can be terminated earlier. Allowed entries include "ExecutionError", "AppError", and "AppInternalError".
- singleContext boolean (optional) If true, the resulting job and all of its descendants will only be allowed to use the authentication token given to them at the onset. Use of any other authentication token will result in an error. This option offers extra security to ensure data cannot leak out of your given context. In restricted projects, the user-specified value is ignored and the singleContext: true setting is used instead.
- ignoreReuse boolean (optional) If true, no job reuse will occur for this execution. Takes precedence over the value supplied to /applet/new.
- nonce string (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single job is created. For more information, see Nonces.
- detach boolean (optional) This option has no effect when the API is invoked by a user. If invoked from a job with detach set to true, the new job will be detached from the creator job and will appear as a typical root execution. A failure in the detached job will not cause termination of the job from which it was created, and vice versa. A detached job inherits neither access to the workspace of its creator job nor the creator job's priority. A detached job's access permissions will be the intersection (most restricted) of the access permissions of the creator job and the permissions requested by the detached job's executable. To launch a detached job, the creator job must have CONTRIBUTE or higher access to the project in which the detached job is launched. Additionally, the billTo of the project in which the creator job is running must be licensed to launch detached executions.
For more information on a license that supports launching detached executions, contact DNAnexus Sales.
rank
integer (optional) An integer between -1024 and 1023, inclusive. The rank indicates the priority in which the executions generated from this executable will be processed. The higher the rank, the more prioritized it will be. If no rank is provided, the executions default to a rank of zero. If the execution is not a root execution, it will inherit its parent's rank.
costLimit
float (optional) The limit of the cost that this execution tree should accrue before termination. This field is ignored if this is not a root execution.
headJobOnDemand
boolean (optional) If true, then the resulting master job will be allocated to an on-demand instance, regardless of its scheduling priority. All of its descendant jobs (if any) inherit its scheduling priority, and their instance allocations are independent of this option. This option overrides the app's headJobOnDemand setting (if any).
preserveJobOutputs
mapping (optional, default null) Preserves all cloneable outputs of every completed, non-jobReused job in the execution tree launched by this API call in the root execution project, even if the root execution ends up failing. Preserving the job outputs in the project trades higher storage costs for the possibility of subsequent job reuse.
When a non-jobReused job in the root execution tree launched with a non-null preserveJobOutputs enters the "done" state, all cloneable objects (e.g. files, records, applets, and closed workflows, but not databases) referenced by $dnanexus_link in the job's output field will be cloned to the project folder described by preserveJobOutputs.folder, unless the output objects already appear elsewhere in the project. If the folder specified by preserveJobOutputs.folder does not exist in the project, the system will create the folder and its parents. As the root job or the root analysis' stages complete, the regular outputs of the root execution will be moved from preserveJobOutputs.folder to the regular output folder(s) of the root execution. So if you [1] run your root execution without the preserveJobOutputs option to completion, some root execution outputs will appear in the project in the root execution's output folder(s). If you had run the same execution with preserveJobOutputs.folder set to "/pjo_folder", the same set of outputs would appear in the same set of root execution folders as in [1] at completion of the root execution, while some additional job outputs that are not outputs of the root execution would appear in "/pjo_folder".
The preserveJobOutputs argument can be specified only when starting a root execution or a detached job. The preserveJobOutputs value, if not null, should be a mapping that may contain the following:
- key "folder" string (optional)
- value path_to_folder string (required if the "folder" key is specified) Specifies a folder in the root execution project where the outputs of jobs that are part of the launched execution will be stored. A path_to_folder starting with "/" is interpreted as an absolute folder path in the project the job is running in. A path_to_folder not starting with "/" is interpreted as a path relative to the root execution's folder field. A path_to_folder value of "" (i.e. the empty string) preserves job outputs in the folder described by the root execution's folder field. If the preserveJobOutputs mapping does not have a folder key, the system uses the default folder value of "intermediateJobOutputs" (i.e. "preserveJobOutputs": {} is equivalent to "preserveJobOutputs": {"folder": "intermediateJobOutputs"}).
It is recommended to place preserveJobOutputs outputs for different root executions into different folders so as not to create a single folder with a very large (>450K) number of files.
detailedJobMetrics
boolean (optional) Requests detailed metrics collection for jobs if set to true. The default value for this flag is the project billTo's detailedJobMetricsCollectDefault policy setting, or false if the org default is not set. This flag can be specified for root executions and will apply to all jobs in the root execution. The list of detailed metrics collected every 60 seconds and viewable for 15 days from the start of a job is here.
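The preserveJobOutputs.folder path rules above can be sketched as a small resolver. This is an illustrative reimplementation of the documented behavior, not platform code; in particular, treating the default "intermediateJobOutputs" as a path relative to the root execution's folder is an assumption based on the rules above.

```python
def resolve_pjo_folder(preserve_job_outputs, root_execution_folder):
    """Resolve where preserved job outputs land, following the documented
    preserveJobOutputs.folder rules (illustrative sketch only)."""
    if preserve_job_outputs is None:
        return None  # feature disabled
    # Missing "folder" key: the documented default is used.
    folder = preserve_job_outputs.get("folder", "intermediateJobOutputs")
    if folder.startswith("/"):
        return folder  # absolute path in the project
    if folder == "":
        return root_execution_folder  # empty string: the root execution's folder
    # Relative path: resolved against the root execution's folder field.
    return root_execution_folder.rstrip("/") + "/" + folder
```

For example, resolve_pjo_folder({}, "/out") yields "/out/intermediateJobOutputs", while resolve_pjo_folder({"folder": "/pjo_folder"}, "/out") yields "/pjo_folder".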
Outputs
id
string ID of the created job (i.e. a string in the form "job-xxxx")
Errors
- ResourceNotFound
  - The specified applet object or project context does not exist.
  - One of the IDs listed in dependsOn does not exist.
- PermissionDenied
  - The requesting user must have VIEW access to all objects listed in dependsOn, and to all project contexts of all jobs listed in dependsOn.
  - The requesting user must have VIEW access to the applet object.
  - If invoked by a user, then the requesting user must have CONTRIBUTE access to the project context.
  - The requesting user must be able to describe all jobs used in a job-based object reference; see /job-xxxx/describe.
  - The requesting user has too many (65536, by default) nonterminal (e.g. running, runnable) jobs and must wait for some to finish before creating more.
  - The billTo of the job's project must be licensed to start detached executions when invoked from a job with the detach: true argument.
  - If rank is provided, the billTo must have the license feature executionRankEnabled set to true.
  - If preserveJobOutputs is not null, the billTo of the project where the execution is attempted must have the preserveJobOutputs license.
  - A detailedJobMetrics setting of true requires the project's billTo to have the detailedJobMetrics license feature set to true.
- InvalidInput
  - input does not satisfy the input specification of this applet; an additional field is provided in the error JSON for this error that looks like:
    { "error": { "type": "InvalidInput", "message": "i/o value for fieldname is not int", "details": { "field": "fieldname", "reason": "class", "expected": "int" } } }
  - If invoked by a user, then project must be specified.
  - If invoked by a job, then project must not be specified.
  - The project context must be in the same region as this applet.
  - All data object inputs that are specified directly must be in the same region as this applet.
  - All inputs that are job-based object references must refer to a job that was run in the same region as this applet.
  - allowSSH accepts only IP addresses or CIDR blocks up to /16.
  - A nonce was reused in a request but some of the other inputs had changed, signifying a new and different request.
  - A nonce may not exceed 128 bytes.
  - The preserveJobOutputs.folder value is a syntactically invalid path to a folder.
  - preserveJobOutputs is specified when launching a non-detached execution from a job.
  - detailedJobMetrics cannot be specified when launching a non-detached execution from a job.
- InvalidState
  - Some specified input is not in the "closed" state.
  - Some job in dependsOn has failed or has been terminated.
Input Specification Errors
The following list describes the possible error reasons and what the fields mean:
- "class": the specified "field" was expected to have class "expected". If the input spec required an array but it was not an array, the value for "expected" will be "array". If the input spec required an array but an element was of the wrong class, then the value for "expected" will be the actual class the entry was expected to be, e.g. "record".
- "type": the specified "field" either needs to have the type "expected" or does not satisfy the or-condition in "expected"
- "missing": the specified "field" was not provided but is required in the input specification
- "unrecognized": the given "field" is not present in the input specification
- "malformedLink": incorrect syntax was given either for a job-based object reference or for a link to a data object. Possible values for "expected" include:
- "key "field"": the key "field" was missing in a job-based object reference
- "only two keys": exactly two keys were expected in the hash for the job-based object reference
- "key "$dnanexus_link"": the key "$dnanexus_link" was missing in a link for specifying a data object
- "choices": the specified "field" must be one of the values in "expected"
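A validator that emits these reason codes can be sketched as follows. The spec shape used here ({"class": ..., "array": ..., "optional": ..., "choices": ...}) is a simplified stand-in, not the platform's actual input-specification grammar, and this is not the platform's implementation.

```python
def check_input(input_spec, provided):
    """Produce error details shaped like the reasons above:
    "class", "missing", "unrecognized", and "choices" (illustrative only)."""
    errors = []
    classes = {"int": int, "float": (int, float), "string": str,
               "boolean": bool, "hash": dict}
    for field, spec in input_spec.items():
        if field not in provided:
            if not spec.get("optional", False):
                errors.append({"field": field, "reason": "missing"})
            continue
        value = provided[field]
        expected = spec["class"]
        if spec.get("array"):
            if not isinstance(value, list):
                # The whole value should have been an array.
                errors.append({"field": field, "reason": "class",
                               "expected": "array"})
            elif not all(isinstance(v, classes[expected]) for v in value):
                # An element of the array has the wrong class.
                errors.append({"field": field, "reason": "class",
                               "expected": expected})
            continue
        if not isinstance(value, classes[expected]):
            errors.append({"field": field, "reason": "class",
                           "expected": expected})
        elif "choices" in spec and value not in spec["choices"]:
            errors.append({"field": field, "reason": "choices",
                           "expected": spec["choices"]})
    for field in provided:
        if field not in input_spec:
            errors.append({"field": field, "reason": "unrecognized"})
    return errors
```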
Specification
This API call verifies that a set of input values for a particular applet can be used to launch a batch of jobs in parallel. The applet must have an input specification defined.
Batch and common inputs:
batchInput
: mapping of inputs corresponding to batches. The nth value of each array corresponds to the nth execution of the applet. Including a null value in an array at a given position means that the corresponding applet input field is optional and the default value, if defined, should be used. E.g.:
{
  "a": [{"$dnanexus_link": "file-xxxx"}, {"$dnanexus_link": "file-yyyy"}, ...],
  "b": [1, null, ...]
}
commonInput
: mapping of non-batch, constant inputs common to all batch jobs, e.g.:
{
  "c": "foo"
}
File references:
files
: list of files (passed as $dnanexus_link references); must be a superset of the files included in batchInput and/or commonInput, e.g.:
[
  {"$dnanexus_link": "file-xxxx"},
  {"$dnanexus_link": "file-yyyy"}
]
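The superset requirement can be checked client-side before calling the API. This is an illustrative helper, not part of any SDK; it walks input values recursively and collects every $dnanexus_link string it finds.

```python
def collect_file_links(value):
    """Recursively collect file IDs referenced via $dnanexus_link
    in batchInput / commonInput values (illustrative helper)."""
    found = set()
    if isinstance(value, dict):
        link = value.get("$dnanexus_link")
        if isinstance(link, str):
            found.add(link)
        else:
            for v in value.values():
                found |= collect_file_links(v)
    elif isinstance(value, list):
        for v in value:
            found |= collect_file_links(v)
    return found

def files_is_superset(batch_input, common_input, files):
    """True if the files list covers every file referenced in the inputs."""
    referenced = collect_file_links(batch_input) | collect_file_links(common_input)
    listed = {f["$dnanexus_link"] for f in files}
    return referenced <= listed
```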
Output: list of mappings, where each mapping corresponds to an expanded batch call. The nth mapping contains the input values with which the nth execution of the applet will be run, e.g.:
[
  {"a": {"$dnanexus_link": "file-xxxx"}, "b": 1, "c": "foo"},
  {"a": {"$dnanexus_link": "file-yyyy"}, "b": null, "c": "foo"}
]
It performs the following validation:
- the input types match the expected applet input field types,
- the provided inputs are sufficient to run the applet,
- null values appear only among values for inputs that are optional or have a default value,
- all arrays of batchInput are of equal size,
- every file referred to in batchInput exists in the files input.
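The expansion itself is straightforward to sketch: the nth result combines the nth element of every batchInput array with all commonInput values. This is an illustration of the documented behavior, not the API's implementation.

```python
def expand_batch(batch_input, common_input):
    """Expand batch and common inputs into one input mapping per execution,
    mirroring the expandedBatch output described above (illustrative)."""
    lengths = {len(v) for v in batch_input.values()}
    if len(lengths) != 1:
        raise ValueError("all batchInput arrays must be of equal length")
    n = lengths.pop()
    expanded = []
    for i in range(n):
        # nth value of each batch array, plus every common (constant) input
        call = {field: values[i] for field, values in batch_input.items()}
        call.update(common_input)
        expanded.append(call)
    return expanded
```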
Inputs
batchInput
mapping Input that the applet is launched with
- key Input field name. It must be one of the names of the inputs defined in the applet input specification.
- value Input field values. It must be an array of values.
commonInput
mapping (optional) Input that the applet is launched with
- key Input field name. It must be one of the names of the inputs defined in the applet input specification.
- value Input field value, common to all batch jobs.
files
list (optional) Files that are needed to run the batch jobs; they must be provided as $dnanexus_link references. They must correspond to all the files included in commonInput or batchInput.
Outputs
expandedBatch
list of mappings Each mapping contains the input values for one execution of the applet in batch mode.
Errors
- InvalidInput
  - inputSpec must be specified for the applet
  - Expected batchInput to be a JSON object
  - Expected commonInput to be a JSON object
  - Expected files to be an array of $dnanexus_link references to files
  - The batchInput field is required, but an empty array was provided
  - Expected the value of batchInput for an applet input field to be an array
  - Expected the lengths of all arrays in batchInput to be equal
  - The applet input field value must be specified in batchInput
  - The applet input field is not defined in the input specification of the applet
  - All the values of a specific batchInput field must be provided (cannot be null) since the field is required and has no default value
  - Expected all the files in batchInput and commonInput to be referenced in the files input array
Specification
This API call may only be made from within an executing job. This call creates a new job which will execute a particular function (from the same applet as the one the current job is running) with a particular input. The input will be checked for links, and any job-based object reference will be honored. However, the input is not checked against the applet spec. Since this is done from inside another job, the new job will inherit the same workspace and project context -- no objects will be cloned, and no other modification will take place in the workspace.
The entry point for the job’s execution will be determined as follows, where f is the string given in the function parameter in the input:
| Interpreter | Entry point |
| --- | --- |
| bash | f() in top level scope with no args, if it exists. Also, $1 is set to f |
| python3 | Any function decorated with @dxpy.entry_point("f") |
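The python3 dispatch convention can be pictured with a minimal registry. In real applets this is provided by dxpy (@dxpy.entry_point plus dxpy.run); the registry below only emulates the idea so the mapping from the function parameter to a callable is concrete.

```python
# Minimal emulation of entry-point dispatch; not the dxpy implementation.
_ENTRY_POINTS = {}

def entry_point(name):
    """Register a function under a name, like @dxpy.entry_point does."""
    def register(func):
        _ENTRY_POINTS[name] = func
        return func
    return register

@entry_point("main")
def main(**job_input):
    # A toy entry point: doubles its "x" input.
    return {"answer": job_input.get("x", 0) * 2}

def run(function, job_input):
    """Dispatch the job's "function" parameter to the registered callable."""
    if function not in _ENTRY_POINTS:
        raise KeyError(f"no entry point registered for {function!r}")
    return _ENTRY_POINTS[function](**job_input)
```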
This call will fail if the specified OAuth2 token does not internally represent a currently executing job.
The system takes note of the job that this new job is created from; this information is available in the "parent" field when describing the new job object. Moreover, the parent-child relationship is tracked by the system, and used to advance the job state of the parent. Specifically, a job which has finished executing remains in the "waiting_on_output" state until all of its child jobs have proceeded to the "done" state. See Job Lifecycle for more information.
The new job may fail for at least the following reasons:
- A reference such as one mentioned in "bundledDepends" could not be accessed using the job’s credentials (VIEW access to project context, CONTRIBUTE access to workspace, VIEW access to public projects)
- A job-based object reference did not resolve successfully (invalid job, job ID not found, job not in that project, job is in failed state, field does not exist, field does not contain a valid object link).
- Insufficient credits.
Inputs
name
string (optional, default "<parent job's name>:<function>") Name for the resulting job
input
mapping Input that the job is launched with; no syntax checking occurs, but the mapping will be checked for links and will create dependencies on any open data objects or unfinished jobs accordingly
- key Input field name
- value Input field value
dependsOn
array of strings (optional) List of job, analysis, and/or data object IDs; the newly created job will not run until all executions listed in dependsOn have transitioned to the "done" state, and all data objects listed are in the "closed" state
function
string The name of the entry point or function of the applet's code that will be executed
tags
array of strings (optional) Tags to associate with the resulting job
properties
mapping (optional) Properties to associate with the resulting job
- key Property name
- value string Property value
details
mapping or array (optional, default { }) JSON object or array that is to be associated with the job
systemRequirements
mapping (optional) Request specific resources for each of the executable's entry points; see the Requesting Instance Types section above for more details
timeoutPolicyByExecutable
mapping (optional) Similar to the timeoutPolicyByExecutable field supplied to /applet-xxxx/run
ignoreReuse
boolean (optional) If true, then no job reuse will occur for this execution. Takes precedence over the value supplied to /applet-xxxx/new.
singleContext
boolean (optional) If true, then the resulting job and all of its descendants will only be allowed to use the authentication token given to it at the onset. Use of any other authentication token will result in an error. This option offers extra security to ensure data cannot leak out of your given context. In restricted projects, the user-specified value is ignored and singleContext: true is used instead.
nonce
string (optional) Unique identifier for this request. Ensures that even if multiple requests fail and are retried, only a single job is created. For more information, see Nonces.
headJobOnDemand
boolean (optional) If true, then the resulting root job will be allocated to an on-demand instance, regardless of its scheduling priority. All of its descendant jobs (if any) inherit its scheduling priority, and their instance allocations are independent of this option. This option overrides the app's headJobOnDemand setting (if any).
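Assembling a request body for this call can be sketched as below. The helper and its argument names are hypothetical (the real call is made through the API or dxpy bindings), and the job IDs in the usage are placeholders; the point is that a fresh nonce makes retried requests idempotent, since the API creates at most one job per nonce.

```python
import uuid

def new_job_request(function, job_input, depends_on=None, name=None):
    """Build an input mapping for /job-xxxx/new (illustrative sketch)."""
    request = {
        "function": function,
        "input": job_input,
        # uuid4 hex is 32 characters, well under the 128-byte nonce limit.
        "nonce": uuid.uuid4().hex,
    }
    if depends_on:
        request["dependsOn"] = depends_on
    if name:
        request["name"] = name
    return request
```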
Outputs
id
string ID of the created job (i.e. a string in the form "job-xxxx").
Errors
- InvalidAuthentication (the usual reasons InvalidAuthentication is thrown, or the auth token used is not a token issued to a job)
- ResourceNotFound (one of the IDs listed in dependsOn does not exist)
- PermissionDenied (VIEW access is required for any objects listed in dependsOn and for the project contexts of any jobs listed in dependsOn; the ability to describe any job used in a job-based object reference)
- InvalidInput
  - The input is not a hash
  - input is missing or is not a hash
  - An invalid link syntax appears in the input
  - dependsOn, if given, is not an array of strings
  - "details", if given, is not a conformant JSON object
  - For each property key-value pair, the size, encoded in UTF-8, of the property key may not exceed 100 bytes and the property value may not exceed 700 bytes
  - A nonce was reused in a request but some of the other inputs had changed, signifying a new and different request
  - A nonce may not exceed 128 bytes
- InvalidState
  - Some job in dependsOn has already failed or been terminated
Specification
Describes a job object. Each job is the result of either running an applet by calling "run" on an applet or app object, or the result of launching a job from an existing job by calling the "new" job class method. In all cases, the job is associated with either an applet or an app, and a project context reflected in the "project" field. Moreover, the ID of the user who launched the execution that this job is a part of is reflected in the "launchedBy" field (and is propagated to all child jobs as well, including any executables launched from those jobs). A job is always in a particular state; for information about job states, see the Job Lifecycle section.
Users with reorg apps that rely on describing the currently running job may want to check the output field "dependsOn" before the full analysis description becomes available, using
dx describe analysis-xxx --json | jq -r .dependsOn
or equivalent dxpy bindings. The output of the command will be an empty array "[]" if the analysis no longer depends on anything (e.g. status "done"), which is the signal to proceed, or some job/subanalysis IDs if it is not ready, in which case the reorg script should wait.
Inputs
defaultFields
boolean (optional, default false if fields is supplied, true otherwise) Whether to include the default set of fields in the output (the default fields are described in the "Outputs" section below). The selections are overridden by any fields explicitly named in fields.
fields
mapping (optional) Include or exclude the specified fields from the output. These selections override the settings in defaultFields.
- key Desired output field; see the "Outputs" section below for valid values here
- value boolean Whether to include the field
try
non-negative integer (optional) Specifies a particular try of a restarted job. A value of 0 refers to the first try. Defaults to the latest try (i.e. the try with the largest try attribute) for the specified job ID. See the Restartable Jobs section for details.
The following option is deprecated (and will not be respected if fields is present):
io
boolean (optional, default true) Whether the input and output fields (runInput, originalInput, input, and output) should be returned
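The interaction between defaultFields and fields can be sketched as a small selection function. DEFAULT_FIELDS below is an abbreviated, hypothetical stand-in for the full default set described under "Outputs"; the logic is what matters.

```python
# Abbreviated stand-in for the default field set; see "Outputs" for the
# actual list returned by the API.
DEFAULT_FIELDS = {"id", "try", "class", "name", "executableName",
                  "created", "modified", "state"}

def selected_fields(default_fields=None, fields=None):
    """Mimic the documented selection rules: defaultFields defaults to
    false when fields is supplied and true otherwise, and explicit
    fields entries override the default set (illustrative only)."""
    if default_fields is None:
        default_fields = fields is None
    selected = set(DEFAULT_FIELDS) if default_fields else set()
    for field, include in (fields or {}).items():
        if include:
            selected.add(field)
        else:
            selected.discard(field)
    return selected
```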
Outputs
id
string The job ID (i.e. the string "job-xxxx")
The following fields are included by default (but can be disabled using fields):
try
non-negative integer or null Returns the try for this job, with 0 corresponding to the first try, 1 corresponding to the second try for restarted jobs, and so on. null is returned for jobs belonging to root executions launched before July 12, 2023 00:13 UTC, and information for the latest job try is returned.
class
string The value "job"
name
string The name of the job try
executableName
string The name of the executable (applet or app) that the job was created to run
created
timestamp Time at which this job was created. All tries of this job have the same created value, corresponding to the creation time of the first job try.
tryCreated
timestamp or null Time at which this job's try was created. null is returned for jobs belonging to root executions launched before July 12, 2023 00:13 UTC. For job try 0, this field has the same value as the created field.
modified
timestamp Time at which this job try was last updated
startedRunning
timestamp (present once the transition has occurred) Time at which this job try transitioned into the "running" state (see Job Lifecycle)
stoppedRunning
timestamp (present once the transition has occurred) Time at which this job try transitioned out of the "running" state (see Job Lifecycle)
egressReport
mapping or undefined A mapping detailing the total bytes of egress for a particular job try
- regionLocalEgress int Amount in bytes of data transferred between IPs in the same cloud region
- internetEgress int Amount in bytes of data transferred to IPs outside of the cloud provider
- interRegionEgress int Amount in bytes of data transferred to IPs in other regions of the cloud provider
billTo
string ID of the account to which any costs associated with this job will be billed
project
string The project context associated with this job
folder
string The output folder in which the outputs of this job's master job will be placed
rootExecution
string ID of the job or analysis at the root of the execution tree (the job or analysis created by a user's API call rather than called by a job or run as a stage in an analysis)
parentJob
string or null ID of the job which created this job, or null if this job is an origin job
parentJobTry
non-negative integer or null. null is returned if the job try being described had no parent, or if the parent itself had a null try attribute. Otherwise, the described try of this job was launched from the parentJobTry try of the parentJob.
originJob
string The closest ancestor job whose parentJob is null, either because it was run by a user directly or was run as a stage in an analysis
detachedFrom
string or null The ID of the job this job was detached from via the detach option, otherwise null
detachedFromTry
non-negative integer or null. If this job was detached from a job, detachedFrom and detachedFromTry describe the specific try of the job this job was detached from. null is returned if this job was not detached from another job or if the detachedFrom job had a null try attribute.
parentAnalysis
string or null If this is an origin job that was run as a stage in an analysis, then this is the ID of that analysis; otherwise, it is null
analysis
string or null Null if this job was not run as part of a stage in an analysis; otherwise, the ID of the analysis this job is part of
stage
string or null Null if this job was not run as part of a stage in an analysis; otherwise, the ID of the stage this job is part of
state
string The job state: one of "idle", "waiting_on_input", "runnable", "running", "waiting_on_output", "done", "debug_hold", "restartable", "failed", "terminating", and "terminated"; see Job Lifecycle for more details on job states
stateTransitions
array of mappings Each element in the list indicates a time at which the state of the job try changed; the initial state of a job try is always "idle" when it is created and is not included in the list. Each hash has the key/values:
- newState string The new state, e.g. "runnable"
- setAt timestamp Time at which the new state was set for the job try
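A common use of stateTransitions is computing how long a job try spent in a given state, e.g. "running". The sketch below assumes timestamps are epoch milliseconds, as elsewhere in the API, and is illustrative rather than an official utility.

```python
def time_in_state(state_transitions, state, now_ms):
    """Total milliseconds a job try spent in the given state, computed
    from its stateTransitions list (illustrative; epoch-ms timestamps)."""
    total = 0
    entered = None
    for transition in state_transitions:
        if transition["newState"] == state:
            entered = transition["setAt"]
        elif entered is not None:
            # Left the state at this transition.
            total += transition["setAt"] - entered
            entered = None
    if entered is not None:
        # Still in the state: count time up to "now".
        total += now_ms - entered
    return total
```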