Index of dx commands

This page contains the help messages for each of the commands under dx, grouped by their primary category.

help

usage: dx help [-h] [command_or_category] [subcommand]

Displays the help message for the given command (and subcommand if given), or
displays the list of all commands in the given category.

CATEGORIES

  all		All commands
  session	Manage your login session
  fs		Navigate and organize your projects and files
  data		View, download, and upload data
  metadata	View and modify metadata for projects, data, and executions
  workflow	View and modify workflows
  exec		Manage and run apps, applets, and workflows
  org		Administer and operate on orgs
  other		Miscellaneous advanced utilities

EXAMPLE

  To find all commands related to running and monitoring a job and then display
  the help message for the command "run", run

  $ dx help exec
    <list of all execution-related dx commands>
  $ dx help run
    <help message for dx run>

positional arguments:
  command_or_category  Display the help message for the given command, or the
                       list of all available commands for the given category
  subcommand           Display the help message for the given subcommand of
                       the command

optional arguments:
  -h, --help           show this help message and exit

Overriding environment variables

usage: dx command ... [--apiserver-host APISERVER_HOST]
                      [--apiserver-port APISERVER_PORT]
                      [--apiserver-protocol APISERVER_PROTOCOL]
                      [--project-context-id PROJECT_CONTEXT_ID]
                      [--workspace-id WORKSPACE_ID]
                      [--security-context SECURITY_CONTEXT]
                      [--auth-token AUTH_TOKEN]

optional arguments:
  --apiserver-host APISERVER_HOST
                        API server host
  --apiserver-port APISERVER_PORT
                        API server port
  --apiserver-protocol APISERVER_PROTOCOL
                        API server protocol (http or https)
  --project-context-id PROJECT_CONTEXT_ID
                        Default project or project context ID
  --workspace-id WORKSPACE_ID
                        Workspace ID (for jobs only)
  --security-context SECURITY_CONTEXT
                        JSON string of security context
  --auth-token AUTH_TOKEN
                        Authentication token

Category: session

Manage your login session.

login

usage: dx login [-h] [--env-help] [--token TOKEN] [--noprojects] [--save]
                [--timeout TIMEOUT]

Log in interactively and acquire credentials. Use "--token" to log in with an
existing API token.

optional arguments:
  -h, --help         show this help message and exit
  --env-help         Display help message for overriding environment variables
  --token TOKEN      Authentication token to use
  --noprojects       Do not print available projects
  --save             Save token and other environment variables for future
                     sessions
  --timeout TIMEOUT  Timeout for this login token (in seconds, or use suffix
                     s, m, h, d, w, M, y)
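
A minimal non-interactive login that stores the token for future sessions
(the token value below is a placeholder, not a real token):

  $ dx login --token YOUR_AUTH_TOKEN --save --timeout 12h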

logout

usage: dx logout [-h] [--env-help] [--host HOST] [--port PORT]
                 [--protocol PROTOCOL]

Log out and remove credentials

optional arguments:
  -h, --help           show this help message and exit
  --env-help           Display help message for overriding environment
                       variables
  --host HOST          Log out of the given auth server host (port must also
                       be given)
  --port PORT          Log out of the given auth server port (host must also
                       be given)
  --protocol PROTOCOL  Used in conjunction with host and port arguments, gives
                       the protocol to use when contacting auth server

exit

usage: dx exit [-h]

Exit out of the interactive shell

optional arguments:
  -h, --help  show this help message and exit

whoami

usage: dx whoami [-h] [--env-help] [--id]

Print the username of the current user, in the form "user-USERNAME"

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables
  --id        Print user ID instead of username

env

usage: dx env [-h] [--env-help] [--bash] [--dx-flags]

Prints all environment variables in use as they have been resolved from
environment variables and configuration files.  For more details, see

https://documentation.dnanexus.com/user/helpstrings-of-sdk-command-line-utilities#overriding-environment-variables

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment
              variables
  --bash      Prints a list of bash commands to export the environment
              variables
  --dx-flags  Prints the dx options to override the environment variables
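
For example, to print the current configuration as bash export commands that
can be sourced into a shell:

  $ dx env --bash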

clearenv

usage: dx clearenv [-h] [--reset]

Clears all environment variables set by dx. More specifically, it removes
local state stored in ~/.dnanexus_config/environment. Does not affect the
environment variables currently set in your shell.

optional arguments:
  -h, --help  show this help message and exit
  --reset     Reset dx environment variables to empty values. Use this to
              avoid interference between multiple dx sessions when using shell
              environment variables.

Category: fs

Navigate and organize your projects and files

ls

usage: dx ls [-h] [--color {off,on,auto}] [--delimiter [DELIMITER]]
             [--env-help] [--brief | --verbose] [-a] [-l] [--obj] [--folders]
             [--full]
             [path]

List folders and/or objects in a folder

positional arguments:
  path                  Folder (possibly in another project) to list the
                        contents of, default is the current directory in the
                        current project. Syntax: projectID:/folder/path

optional arguments:
  -h, --help            show this help message and exit
  --color {off,on,auto}
                        Set when color is used (color=auto is used when stdout
                        is a TTY)
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields
                        to be printed; if no delimiter is provided with this
                        flag, TAB will be used
  --env-help            Display help message for overriding environment
                        variables
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  -a, --all             show hidden files
  -l, --long            Alias for "verbose"
  --obj                 show only objects
  --folders             show only folders
  --full                show full paths of folders
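
A couple of illustrative invocations (the project and folder names below are
placeholders):

  # list only the folders at the root of a named project
  $ dx ls --folders MyProject:/
  # long listing of all objects, including hidden ones, in the current folder
  $ dx ls -a -l --obj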

tree

usage: dx tree [-h] [--color {off,on,auto}] [--env-help] [-a] [-l] [path]

List folders and objects in a tree

positional arguments:
  path                  Folder (possibly in another project) to list the
                        contents of, default is the current directory in the
                        current project. Syntax: projectID:/folder/path

optional arguments:
  -h, --help            show this help message and exit
  --color {off,on,auto}
                        Set when color is used (color=auto is used when stdout
                        is a TTY)
  --env-help            Display help message for overriding environment
                        variables
  -a, --all             show hidden files
  -l, --long            use a long listing format

pwd

usage: dx pwd [-h] [--env-help]

Print current working directory

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

select

usage: dx select [-h] [--env-help] [--name NAME]
                 [--level {VIEW,UPLOAD,CONTRIBUTE,ADMINISTER}] [--public]
                 [project]

Interactively list and select a project to switch to. By default, only lists
projects for which you have at least CONTRIBUTE permissions. Use --public to
see the list of public projects.

positional arguments:
  project               Name or ID of a project to switch to; if not provided
                        a list will be provided for you

optional arguments:
  -h, --help            show this help message and exit
  --env-help            Display help message for overriding environment
                        variables
  --name NAME           Name of the project (wildcard patterns supported)
  --level {VIEW,UPLOAD,CONTRIBUTE,ADMINISTER}
                        Minimum level of permissions expected
  --public              Include ONLY public projects (will automatically set
                        --level to VIEW)
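
Illustrative invocations (the project name below is a placeholder):

  # pick interactively from projects where you have at least VIEW access
  $ dx select --level VIEW
  # switch directly to a named project
  $ dx select MyProject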

cd

usage: dx cd [-h] [--env-help] [path]

Change the current working directory

positional arguments:
  path        Folder (possibly in another project) to which to change the
              current working directory, default is "/" in the current project

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

cp

usage: dx cp [-h] [--env-help] [-a] source [source ...] destination

Copy objects and/or folders between different projects.  Folders will
automatically be copied recursively.  To specify which project to use as a
source or destination, prepend the path or ID of the object/folder with the
project ID or name and a colon.

EXAMPLES

  The first example copies a file in a project called "FirstProj" to the
  current directory of the current project.  The second example copies the
  object named "reads.fq.gz" in the current directory to the folder
  /folder/path in the project with ID "project-B0VK6F6gpqG6z7JGkbqQ000Q",
  renaming it to "newname.fq.gz" in the process.

  $ dx cp FirstProj:file-B0XBQFygpqGK8ZPjbk0Q000q .
  $ dx cp reads.fq.gz project-B0VK6F6gpqG6z7JGkbqQ000Q:/folder/path/newname.fq.gz

positional arguments:
  source       Objects and/or folder names to copy
  destination  Folder into which to copy the sources or new pathname (if only
               one source is provided).  Must be in a different
               project/container than all source paths.

optional arguments:
  -h, --help   show this help message and exit
  --env-help   Display help message for overriding environment
               variables
  -a, --all    Apply to all results with the same name without
               prompting

mv

usage: dx mv [-h] [--env-help] [-a] source [source ...] destination

Move or rename data objects and/or folders inside a single project.  To copy
data between different projects, use 'dx cp' instead.

positional arguments:
  source       Objects and/or folder names to move
  destination  Folder into which to move the sources or new pathname (if only
               one source is provided).  Must be in the same project/container
               as all source paths.

optional arguments:
  -h, --help   show this help message and exit
  --env-help   Display help message for overriding environment
               variables
  -a, --all    Apply to all results with the same name without
               prompting

mkdir

usage: dx mkdir [-h] [--env-help] [-p] path [path ...]

Create a new folder

positional arguments:
  path           Paths to folders to create

optional arguments:
  -h, --help     show this help message and exit
  --env-help     Display help message for overriding environment variables
  -p, --parents  no error if existing, create parent directories as needed
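
An illustrative invocation (the project and folder names are placeholders):

  # create nested folders in another project, creating parents as needed
  $ dx mkdir -p MyProject:/data/batch1/raw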

rmdir

usage: dx rmdir [-h] [--env-help] path [path ...]

Remove a folder

positional arguments:
  path        Paths to folders to remove

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

rm

usage: dx rm [-h] [--env-help] [-a] [-r] [-f] path [path ...]

Remove data objects and folders.

positional arguments:
  path             Paths to remove

optional arguments:
  -h, --help       show this help message and exit
  --env-help       Display help message for overriding environment variables
  -a, --all        Apply to all results with the same name without prompting
  -r, --recursive  Recurse into a directory
  -f, --force      Force removal of files
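
Illustrative invocations (the paths below are placeholders):

  # remove a folder and everything under it
  $ dx rm -r /old_results
  # remove every object matching the name without prompting
  $ dx rm -a reads.fq.gz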

rmproject

usage: dx rmproject [-h] [--env-help] [-y] [-q] project [project ...]

Delete projects and all their associated data

positional arguments:
  project      Projects to remove

optional arguments:
  -h, --help   show this help message and exit
  --env-help   Display help message for overriding environment variables
  -y, --yes    Do not ask for confirmation
  -q, --quiet  Do not print purely informational messages

archive

usage: dx archive [-h] [-a] [-q] [--all-copies] [-y] [--no-recurse]
                  path [path ...]

Requests that the specified set of files, or the files in a single specified folder in ONE project, be archived on the platform.
For each file, if this is the last copy of the file to have archival requested, the request triggers the full archival of the underlying object.
Otherwise, the file will be marked with an archival state denoting that archival has been requested.

The input paths should be either 1 folder path or up to 1000 files, and all paths must be in the same project.
To specify which project to use, prepend the path or ID of the file/folder with the project ID or name and a colon.

EXAMPLES:

    # archive 3 files in project "FirstProj" with project ID project-B0VK6F6gpqG6z7JGkbqQ000Q
    $ dx archive FirstProj:file-B0XBQFygpqGK8ZPjbk0Q000Q FirstProj:/path/to/file1 project-B0VK6F6gpqG6z7JGkbqQ000Q:/file2
    
    # archive 2 files in current project. Specifying file ids saves time by avoiding file name resolution.
    $ dx select FirstProj
    $ dx archive file-A00000ygpqGK8ZPjbk0Q000Q file-B00000ygpqGK8ZPjbk0Q000Q

    # archive all files recursively in project-B0VK6F6gpqG6z7JGkbqQ000Q
    $ dx archive project-B0VK6F6gpqG6z7JGkbqQ000Q:/
  

positional arguments:
  path          May refer to a single folder or specify up to 1000
                files inside a project.

optional arguments:
  -h, --help    show this help message and exit
  -a, --all     Apply to all results with the same name without
                prompting
  -q, --quiet   Do not print extra info messages
  --all-copies  If true, archive all the copies of files in projects
                with the same billTo org. For details, see
                https://documentation.dnanexus.com/developer/api/data-containers/projects#api-method-project-xxxx-archive
  -y, --yes     Do not ask for confirmation.
  --no-recurse  When `path` refers to a single folder, this flag
                causes only files in the specified folder and not its
                subfolders to be archived. This flag has no impact
                when `path` input refers to a collection of files.

Output:
  If -q option is not specified, prints "Tagged <count> file(s) for archival"

unarchive

usage: dx unarchive [-h] [-a] [--rate {Standard,Bulk}] [-q] [-y] [--no-recurse] path [path ...]

Requests that a specified set of files, or the files in a single specified folder in ONE project, be unarchived on the platform.
The requested copy will eventually be transitioned to the live state, while all other copies will move to the archival state.

The input paths should be either 1 folder path or up to 1000 files, and all paths must be in the same project.
To specify which project to use, prepend the path or ID of the file/folder with the project ID or name and a colon.

EXAMPLES:

    # unarchive 3 files in project "FirstProj" with project ID project-B0VK6F6gpqG6z7JGkbqQ000Q 
    $ dx unarchive FirstProj:file-B0XBQFygpqGK8ZPjbk0Q000Q FirstProj:/path/to/file1 project-B0VK6F6gpqG6z7JGkbqQ000Q:/file2
 
    # unarchive 2 files in current project. Specifying file ids saves time by avoiding file name resolution.
    $ dx select FirstProj
    $ dx unarchive file-A00000ygpqGK8ZPjbk0Q000Q file-B00000ygpqGK8ZPjbk0Q000Q

    # unarchive all files recursively in project-B0VK6F6gpqG6z7JGkbqQ000Q
    $ dx unarchive project-B0VK6F6gpqG6z7JGkbqQ000Q:/
  

positional arguments:
  path                  May refer to a single folder or specify up to 1000 files inside a project.

optional arguments:
  -h, --help            show this help message and exit
  -a, --all             Apply to all results with the same name without prompting
  --rate {Standard,Bulk}
                        The speed at which all files in this request are unarchived.
                          - Azure regions: {Standard}
                          - AWS regions: {Standard, Bulk}
  -q, --quiet           Do not print extra info messages
  -y, --yes             Do not ask for confirmation.
  --no-recurse          When `path` refers to a single folder, this flag causes only files in the
                        specified folder and not its subfolders to be unarchived. This flag has no
                        impact when `path` input refers to a collection of files.

Output:
  If -q option is not specified, prints "Tagged <> file(s) for unarchival, totalling <> GB, costing <> "

list database files

usage: dx list database files [-h] [--env-help] [--folder FOLDER] [--recurse]
                              [--csv] [--timeout TIMEOUT]
                              database

List files associated with a specific database

positional arguments:
  database           Data object ID or path of the database.

optional arguments:
  -h, --help         show this help message and exit
  --env-help         Display help message for overriding environment variables
  --folder FOLDER    Name of folder (directory) in which to start searching
                     for database files. This will typically match the name of
                     the table whose files are of interest. The default value
                     is "/" which will start the search at the root folder of
                     the database.
  --recurse          Look for files recursively down the directory structure.
                     Otherwise, by default, only look on one level.
  --csv              Write output as comma delimited fields, suitable as CSV
                     format.
  --timeout TIMEOUT  Number of seconds to wait before aborting the request. If
                     omitted, default timeout is 120 seconds.
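
An illustrative invocation (the database ID and folder name are placeholders):

  $ dx list database files database-xxxx --folder mytable --recurse --csv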

Category: data

View, download, and upload data

describe

usage: dx describe [-h] [--json] [--color {off,on,auto}]
                   [--delimiter [DELIMITER]] [--env-help] [--details]
                   [--verbose] [--name] [--multi] [--try T]
                   path

Describe a DNAnexus entity.  Use this command to describe data objects by name
or ID, jobs, apps, users, organizations, etc.  If using the "--json" flag, it
will throw an error if more than one match is found (but if you would like a
JSON array of the describe hashes of all matches, then provide the "--multi"
flag).  Otherwise, it will always display all results it finds.

NOTES:

- The project found in the path is used as a HINT when you are using an object
ID; you may still get a result if you have access to a copy of the object in
some other project, but if it exists in the specified project, its description
will be returned.

- When describing apps or applets, options marked as advanced inputs will be
hidden unless --verbose is provided

positional arguments:
  path                  Object ID or path to an object (possibly in another
                        project) to describe.

optional arguments:
  -h, --help            show this help message and exit
  --json                Display return value in JSON
  --color {off,on,auto}
                        Set when color is used (color=auto is used when stdout
                        is a TTY)
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields
                        to be printed; if no delimiter is provided with this
                        flag, TAB will be used
  --env-help            Display help message for overriding environment
                        variables
  --details             Include details of data objects
  --verbose             Include additional metadata
  --name                Only print the matching names, one per line
  --multi               If the flag --json is also provided, then returns a
                        JSON array of describe hashes of all matching results
  --try T               When describing a job that was restarted, describe job
                        try T. T=0 refers to the first try. Default is the
                        last job try.
  --defaultSymlink      When describing a project with a default Symlink
                        Drive defined, return a mapping describing the drive,
                        in the form {"drive": "drive-xxxx", "container":
                        "containerOrBucketName", "prefix": "/"}
  --symlinkPath         When describing a symlinked file, return the path to the
                        file, including the name of the cloud storage container
                        where the file is stored, the folder name, if present, and the
                        filename proper.
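
Illustrative invocations (the object ID and path below are placeholders):

  $ dx describe file-xxxx --json
  $ dx describe MyProject:/reads.fq.gz --verbose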

upload

usage: dx upload [-h] [--visibility {hidden,visible}] [--property KEY=VALUE]
                 [--type TYPE] [--tag TAG] [--details DETAILS] [-p]
                 [--brief | --verbose] [--env-help] [--path [PATH]] [-r]
                 [--wait] [--no-progress] [--buffer-size WRITE_BUFFER_SIZE]
                 [--singlethread]
                 filename [filename ...]

Upload local file(s) or directory. If "-" is provided, stdin will be used
instead. By default, the filename will be used as its new name. If
--path/--destination is provided with a path ending in a slash, the filename
will be used, and the folder path will be used as a destination. If it does
not end in a slash, then it will be used as the final name.

positional arguments:
  filename              Local file or directory to upload ("-" indicates stdin
                        input); provide multiple times to upload multiple
                        files or directories

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment
                        variables
  --path [PATH], --destination [PATH]
                        DNAnexus path to upload file(s) to (default uses
                        current project and folder if not provided)
  -r, --recursive       Upload directories recursively
  --wait                Wait until the file has finished closing
  --no-progress         Do not show a progress bar
  --buffer-size WRITE_BUFFER_SIZE
                        Set the write buffer size (in bytes)
  --singlethread        Enable singlethreaded uploading

metadata arguments:
  --visibility {hidden,visible}
                        Whether the object is hidden or not
  --property KEY=VALUE  Key-value pair to add as a property; repeat as
                        necessary, e.g. "--property key1=val1 --property
                        key2=val2"
  --type TYPE           Type of the data object; repeat as necessary, e.g. "--
                        type type1 --type type2"
  --tag TAG             Tag of the data object; repeat as necessary, e.g. "--
                        tag tag1 --tag tag2"
  --details DETAILS     JSON to store as details
  -p, --parents         Create any parent folders necessary
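
Illustrative invocations (the file, folder, and project names below are
placeholders):

  # upload a file into a project folder, tag it, and wait for it to close
  $ dx upload reads.fq.gz --path MyProject:/raw/ --tag batch1 --wait
  # upload a local directory recursively, creating parent folders as needed
  $ dx upload -r -p --path /analysis/run1/ ./results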

download

usage: dx download [-h] [--env-help] [-o OUTPUT] [-f] [-r] [-a]
                   [--no-progress] [--lightweight] [--unicode]
                   path [path ...]

Download the contents of a file object or multiple objects. Use "-o -" to
direct the output to stdout.

positional arguments:
  path                  Data object ID or name, or folder to download

optional arguments:
  -h, --help            show this help message and exit
  --env-help            Display help message for overriding environment
                        variables
  -o OUTPUT, --output OUTPUT
                        Local filename or directory to be used ("-" indicates
                        stdout output); if not supplied or a directory is
                        given, the object's name on the platform will be used,
                        along with any applicable extensions
  -f, --overwrite       Resume an interrupted download if the local and remote
                        file signatures match. If the signatures do not match,
                        the local file will be overwritten.
  -r, --recursive       Download folders recursively
  -a, --all             If multiple objects match the input, download all of
                        them
  --no-progress         Do not show a progress bar
  --lightweight         Skip some validation steps to make fewer API calls
  --unicode             Display the characters as text/unicode when writing to
                        stdout
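
Illustrative invocations (the paths and IDs below are placeholders):

  # download a single file into the current local directory
  $ dx download MyProject:/raw/reads.fq.gz
  # download a platform folder recursively into a local directory
  $ dx download -r /analysis/run1 -o ./run1
  # stream a file's contents to stdout
  $ dx download -o - file-xxxx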

make_download_url

usage: dx make_download_url [-h] [--duration DURATION] [--filename FILENAME]
                            path

Creates a pre-authenticated link that can be used to download a file without
logging in.

positional arguments:
  path                 Data object ID or name to access

optional arguments:
  -h, --help           show this help message and exit
  --duration DURATION  Time for which the URL will remain valid (in seconds,
                       or use suffix s, m, h, d, w, M, y). Default: 1 day
  --filename FILENAME  Name that the server will instruct the client to save
                       the file as (default is the filename)
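
An illustrative invocation (the file ID is a placeholder):

  $ dx make_download_url file-xxxx --duration 12h --filename reads.fq.gz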

cat

usage: dx cat [-h] [--env-help] [--unicode] path [path ...]

positional arguments:
  path        File ID or name(s) to print to stdout

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables
  --unicode   Display the characters as text/unicode when writing to stdout

head

usage: dx head [-h] [--color {off,on,auto}] [--env-help] [-n N] path

Print the first part of a file. By default, prints the first 10 lines.

positional arguments:
  path                  File ID or name to access

optional arguments:
  -h, --help            show this help message and exit
  --color {off,on,auto}
                        Set when color is used (color=auto is used when stdout
                        is a TTY)
  --env-help            Display help message for overriding environment
                        variables
  -n N, --lines N       Print the first N lines (default 10)

new

usage: dx new [-h] class ...

Use this command with one of the available subcommands (classes) to create a
new project or data object from scratch. Not all data types are supported. See
'dx upload' for files and 'dx build' for applets.

positional arguments:
  class
    user      Create a new user account
    org       Create new non-billable org
    project   Create a new project
    record    Create a new record
    workflow  Create a new workflow

optional arguments:
  -h, --help  show this help message and exit

new project

usage: dx new project [-h] [--brief | --verbose] [--env-help]
                      [--region REGION] [-s] [--bill-to BILL_TO] [--phi]
                      [--database-ui-view-only]
                      [name]

Create a new project

positional arguments:
  name                  Name of the new project

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment
                        variables
  --region REGION       Region affinity of the new project
  -s, --select          Select the new project as current after creating
  --bill-to BILL_TO     ID of the user or org to which the project will be
                        billed. The default value is the billTo of the
                        requesting user.
  --phi                 Add PHI protection to project
  --database-ui-view-only
                        If set to true, viewers of the project will not be
                        able to access database data directly
  --default-symlink DEFAULT_SYMLINK_MAPPING
                        Specifies default remote location for new files in the 
                        project and all its execution containers. Mapping in the 
                        form '{"drive": "drive-xxxx", 
                        "container": "containerOrBucketName", "prefix": "/"}'
                        "drive" is the ID of the linked Symlink Drive.
                        "container" is the name of the cloud service container 
                        where symlinked files are stored. "prefix", which is
                        optional, is the name of the folder in the cloud
                        service container where symlinked files are stored.
  --monthly-compute-limit MONTHLY_COMPUTE_LIMIT
                        Monthly project spending limit for compute (in currency units)
  --monthly-egress-bytes-limit MONTHLY_EGRESS_BYTES_LIMIT
                        Monthly project spending limit for egress (in bytes)
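
An illustrative invocation (the project name, org ID, and region are shown as
examples only):

  $ dx new project "My Project" --select --bill-to org-myorg --region aws:us-east-1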

new record

usage: dx new record [-h] [--visibility {hidden,visible}]
                     [--property KEY=VALUE] [--type TYPE] [--tag TAG]
                     [--details DETAILS] [-p] [--brief | --verbose]
                     [--env-help] [--init INIT] [--close]
                     [path]

Create a new record

positional arguments:
  path                  DNAnexus path for the new data object (default uses
                        current project and folder if not provided)

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment
                        variables
  --init INIT           Path to record from which to initialize all metadata
  --close               Close the record immediately after creating it

metadata arguments:
  --visibility {hidden,visible}
                        Whether the object is hidden or not
  --property KEY=VALUE  Key-value pair to add as a property; repeat as
                        necessary,
                         e.g. "--property key1=val1 --property key2=val2"
  --type TYPE           Type of the data object; repeat as necessary,
                         e.g. "--type type1 --type type2"
  --tag TAG             Tag of the data object; repeat as necessary,
                         e.g. "--tag tag1 --tag tag2"
  --details DETAILS     JSON to store as details
  -p, --parents         Create any parent folders necessary

new workflow

usage: dx new workflow [-h] [--visibility {hidden,visible}]
                       [--property KEY=VALUE] [--type TYPE] [--tag TAG]
                       [--details DETAILS] [-p] [--brief | --verbose]
                       [--env-help] [--title TITLE] [--summary SUMMARY]
                       [--description DESCRIPTION]
                       [--output-folder OUTPUT_FOLDER] [--init INIT]
                       [path]

Create a new workflow

positional arguments:
  path                  DNAnexus path for the new data object (default uses
                        current project and folder if not provided)

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment
                        variables
  --title TITLE         Workflow title
  --summary SUMMARY     Workflow summary
  --description DESCRIPTION
                        Workflow description
  --output-folder OUTPUT_FOLDER
                        Default output folder for the workflow
  --init INIT           Path to workflow or an analysis ID from which to
                        initialize all metadata

metadata arguments:
  --visibility {hidden,visible}
                        Whether the object is hidden or not
  --property KEY=VALUE  Key-value pair to add as a property; repeat as
                        necessary,
                         e.g. "--property key1=val1 --property key2=val2"
  --type TYPE           Type of the data object; repeat as necessary,
                         e.g. "--type type1 --type type2"
  --tag TAG             Tag of the data object; repeat as necessary,
                         e.g. "--tag tag1 --tag tag2"
  --details DETAILS     JSON to store as details
  -p, --parents         Create any parent folders necessary

close

usage: dx close [-h] [--env-help] [-a] [--wait] path [path ...]

Close a remote data object or set of objects.

positional arguments:
  path        Path to a data object to close

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables
  -a, --all   Apply to all results with the same name without prompting
  --wait      Wait for the object(s) to close

wait

usage: dx wait [-h] [--env-help] [--from-file] path [path ...]

Polls the state of specified data object(s) or job(s) until they are all in
the desired state. Waits until the "closed" state for a data object, and for
any terminal state for a job ("terminated", "failed", or "done"). Exits with a
non-zero code if a job reaches a terminal state that is not "done". You can
also provide a local file containing a list of data object(s) or job(s), one
per line; the file will be read if the "--from-file" argument is added.

positional arguments:
  path         Path to a data object, job ID, or file with IDs to wait for

optional arguments:
  -h, --help   show this help message and exit
  --env-help   Display help message for overriding environment variables
  --from-file  Read the list of objects to wait for from the file provided in
               path
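
Illustrative invocations (the job ID and filename are placeholders):

  # wait for a job to reach a terminal state
  $ dx wait job-xxxx
  # wait for every ID listed in a local file, one per line
  $ dx wait --from-file ids.txt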

get

usage: dx get [-h] [--env-help] [-o OUTPUT] [--filename FILENAME]
              [--allow-all-files] [--recurse] [--no-ext] [--omit-resources]
              [-f]
              path

Download the contents of some types of data (records, apps, applets,
workflows, files, and databases). Downloading an app, applet or a workflow
will attempt to reconstruct a source directory that can be used to rebuild it
with "dx build". Use "-o -" to direct the output to stdout.

positional arguments:
  path                  Data object ID or name to access

optional arguments:
  -h, --help            show this help message and exit
  --env-help            Display help message for overriding environment
                        variables
  -o OUTPUT, --output OUTPUT
                        local file path where the data is to be saved ("-"
                        indicates stdout output for objects of class file and
                        record). If not supplied, the object's name on the
                        platform will be used, along with any applicable
                        extensions. For app(let) and workflow objects, if
                        OUTPUT does not exist, the object's source directory
                        will be created there; if OUTPUT is an existing
                        directory, a new directory with the object's name will
                        be created inside it.
  --filename FILENAME   When downloading from a database, name of the file or
                        folder to be downloaded. If omitted, all files in the
                        database will be downloaded, so use caution and
                        include the --allow-all-files argument.
  --allow-all-files     When downloading from a database, this allows all
                        files in a database to be downloaded when --filename
                        argument is omitted.
  --recurse             When downloading from a database, look for files
                        recursively down the directory structure. Otherwise,
                        by default, only look on one level.
  --no-ext              If -o is not provided, do not add an extension to the
                        filename
  --omit-resources      When downloading an app(let), omit fetching the
                        resources associated with the app(let).
  -f, --overwrite       Overwrite the local file if necessary
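
Illustrative invocations (the object IDs are placeholders):

  # reconstruct an applet's source directory for use with "dx build"
  $ dx get applet-xxxx -o my_applet_src
  # print a record's contents to stdout
  $ dx get -o - record-xxxx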

find data

usage: dx find data [-h] [--brief | --verbose] [--json]
                    [--color {off,on,auto}] [--delimiter [DELIMITER]]
                    [--env-help] [--property KEY[=VALUE]] [--tag TAG]
                    [--class {record,file,applet,workflow,database}]
                    [--state {open,closing,closed,any}]
                    [--visibility {hidden,visible,either}] [--name NAME]
                    [--type TYPE] [--link LINK] [--all-projects]
                    [--path PROJECT:FOLDER] [--norecurse]
                    [--created-after CREATED_AFTER]
                    [--created-before CREATED_BEFORE] [--mod-after MOD_AFTER]
                    [--mod-before MOD_BEFORE] [--region REGION]

Finds data objects subject to the given search parameters. By default,
restricts the search to the current project if set. To search over all
projects (excluding public projects), use --all-projects (overrides --path and
--norecurse).

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --json                Display return value in JSON
  --color {off,on,auto}
                        Set when color is used (color=auto is used when stdout
                        is a TTY)
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields
                        to be printed; if no delimiter is provided with this
                        flag, TAB will be used
  --env-help            Display help message for overriding environment
                        variables
  --property KEY[=VALUE]
                        Key-value pair of a property or simply a property key;
                        if only a key is provided, matches a result that has
                        the key with any value; repeat as necessary, e.g. "--
                        property key1=val1 --property key2"
  --tag TAG             Tag to match; repeat as necessary, e.g. "--tag tag1
                        --tag tag2" will require both tags
  --class {record,file,applet,workflow,database}
                        Data object class
  --state {open,closing,closed,any}
                        State of the object
  --visibility {hidden,visible,either}
                        Whether the object is hidden or not
  --name NAME           Name of the object
  --type TYPE           Type of the data object
  --link LINK           Object ID that the data object links to
  --all-projects, --allprojects
                        Extend search to all projects (excluding public
                        projects)
  --path PROJECT:FOLDER
                        Project and/or folder in which to restrict the results
  --norecurse           Do not recurse into subfolders
  --created-after CREATED_AFTER
                        Date (e.g. 2012-01-01) or integer timestamp after
                        which the object was created (negative number means ms
                        in the past, or use suffix s, m, h, d, w, M, y)
                        Negative input example "--created-after=-2d"
  --created-before CREATED_BEFORE
                        Date (e.g. 2012-01-01) or integer timestamp before
                        which the object was created (negative number means ms
                        in the past, or use suffix s, m, h, d, w, M, y)
                        Negative input example "--created-before=-2d"
  --mod-after MOD_AFTER
                        Date (e.g. 2012-01-01) or integer timestamp after
                        which the object was last modified (negative number
                        means ms in the past, or use suffix s, m, h, d, w, M,
                        y) Negative input example "--mod-after=-2d"
  --mod-before MOD_BEFORE
                        Date (e.g. 2012-01-01) or integer timestamp before
                        which the object was last modified (negative number
                        means ms in the past, or use suffix s, m, h, d, w, M,
                        y) Negative input example "--mod-before=-2d"
  --region REGION       Restrict the search to the provided region
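
Illustrative invocations (the names, tags, and project below are placeholders):

  # file objects created in the last two days, printed as IDs only
  $ dx find data --class file --name "*.bam" --created-after=-2d --brief
  # objects with a given tag directly under a specific folder
  $ dx find data --tag batch1 --path MyProject:/raw --norecurse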

find projects

usage: dx find projects [-h] [--brief | --verbose] [--json]
                        [--delimiter [DELIMITER]] [--env-help]
                        [--property KEY[=VALUE]] [--tag TAG]
                        [--phi {true,false}] [--name NAME]
                        [--level {VIEW,UPLOAD,CONTRIBUTE,ADMINISTER}]
                        [--public] [--created-after CREATED_AFTER]
                        [--created-before CREATED_BEFORE] [--region REGION]
                        [--externalUploadRestricted {true,false}]

Finds projects subject to the given search parameters. Use the --public flag
to list all public projects.

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --json                Display return value in JSON
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields
                        to be printed; if no delimiter is provided with this
                        flag, TAB will be used
  --env-help            Display help message for overriding environment
                        variables
  --property KEY[=VALUE]
                        Key-value pair of a property or simply a property key;
                        if only a key is provided, matches a result that has
                        the key with any value; repeat as necessary, e.g. "--
                        property key1=val1 --property key2"
  --tag TAG             Tag to match; repeat as necessary, e.g. "--tag tag1
                        --tag tag2" will require both tags
  --phi {true,false}    If set to true, only projects that contain PHI data
                        will be retrieved. If set to false, only projects that
                        do not contain PHI data will be retrieved.
  --name NAME           Name of the project
  --level {VIEW,UPLOAD,CONTRIBUTE,ADMINISTER}
                        Minimum level of permissions expected
  --public              Include ONLY public projects (will automatically set
                        --level to VIEW)
  --created-after CREATED_AFTER
                        Date (e.g. 2012-01-01) or integer timestamp after
                        which the project was created (negative number means
                        ms in the past, or use suffix s, m, h, d, w, M, y)
                        Negative input example "--created-after=-2d"
  --created-before CREATED_BEFORE
                        Date (e.g. 2012-01-01) or integer timestamp before
                        which the project was created (negative number means
                        ms in the past, or use suffix s, m, h, d, w, M, y)
                        Negative input example "--created-before=-2d"
  --region REGION       Restrict the search to the provided region
  --externalUploadRestricted {true,false}
                        If set to true, only externalUploadRestricted projects
                        will be retrieved. If set to false, only projects that
                        are not externalUploadRestricted will be retrieved.
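
Illustrative invocations (the name pattern below is a placeholder):

  # projects you administer that were created in the last 30 days
  $ dx find projects --level ADMINISTER --created-after=-30d
  # public projects whose names start with "Reference"
  $ dx find projects --public --name "Reference*"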

update project

usage: dx update project [-h] [--brief | --verbose] [--env-help] [--name NAME]
                         [--summary SUMMARY] [--description DESCRIPTION]
                         [--protected {true,false}]
                         [--restricted {true,false}]
                         [--download-restricted {true,false}]
                         [--containsPHI {true}]
                         [--database-ui-view-only {true,false}]
                         [--bill-to BILL_TO]
                         [--external-upload-restricted {true,false}]
                         [--allowed-executables ALLOWED_EXECUTABLES [ALLOWED_EXECUTABLES ...]]
                         [--unset-allowed-executables]
                         project_id

positional arguments:
  project_id            Project ID or project name

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment
                        variables
  --name NAME           New project name
  --summary SUMMARY     Project summary
  --description DESCRIPTION
                        Project description
  --protected {true,false}
                        Whether the project should be PROTECTED
  --restricted {true,false}
                        Whether the project should be RESTRICTED
  --download-restricted {true,false}
                        Whether the project should be DOWNLOAD RESTRICTED
  --containsPHI {true}  Flag to tell if project contains PHI
  --database-ui-view-only {true,false}
                        Whether the viewers on the project can access the
                        database data directly
  --bill-to BILL_TO     Update the user or org ID of the billing account
  --allowed-executables ALLOWED_EXECUTABLES [ALLOWED_EXECUTABLES ...]
                        Executable ID(s) this project is allowed to run. This
                        operation overrides any existing list of executables.
  --unset-allowed-executables
                        Removes any restriction to run executables as set by
                        --allowed-executables                   
  --external-upload-restricted {true,false}
                        Whether uploads of file and table data to the project 
                        should be restricted
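
An illustrative invocation (the project ID and new name are placeholders):

  $ dx update project project-xxxx --name "Renamed project" --protected true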

extract_dataset

usage: dx extract_dataset [-h] [-ddd] [--fields FIELDS]
                          [--fields-file FIELDS_FILE] [--sql]
                          [--delim [DELIM]] [-o OUTPUT] [--list-fields]
                          [--list-entities] [--entities ENTITIES]
                          path

Retrieves the data or generates SQL to retrieve the data from a dataset or
cohort for a set of entity.fields. Additionally, the dataset's dictionary can
be extracted independently or in conjunction with data. Note: a separate
version of pandas may need to be installed when using this functionality.

positional arguments:
  path                  v3.0 Dataset or Cohort object ID (project-id:record-id
                        where "record-id" indicates the record ID in the
                        currently selected project) or name

optional arguments:
  -h, --help            show this help message and exit
  -ddd, --dump-dataset-dictionary
                        If provided, the three dictionary files,
                        <record_name>.data_dictionary.csv,
                        <record_name>.entity_dictionary.csv, and
                        <record_name>.codings.csv will be generated. Files
                        will be comma delimited and written to the local
                        working directory, unless otherwise specified using
                        --delimiter and --output arguments. If stdout is
                        specified with the output argument, the data
                        dictionary, entity dictionary, and coding are output
                        in succession, without separators. If any of the
                        three dictionary files does not contain data (i.e. the
                        dictionary is empty), then that particular file will
                        not be created (or be output if the output is stdout).
  --fields FIELDS
                        A comma-separated string where each value is the
                        phenotypic entity name and field name, separated by a
                        dot.  For example: "<entity_name>.<field_name>,<entity
                        _name>.<field_name>". Internal spaces are permitted.
                        If multiple entities are provided, field values will
                        be automatically inner joined. If only the --fields
                        argument is provided, data will be retrieved and
                        returned. If both --fields and --sql arguments are
                        provided, a SQL statement to retrieve the specified
                        field data will be automatically generated and
                        returned. Alternatively, use --fields-file option when
                        the number of fields to be retrieved is large.
  --fields-file FIELDS_FILE
                        A file with no header and one entry per line where
                        every entry is the phenotypic entity name and field
                        name, separated by a dot. For example:
                        <entity_name>.<field_name>. If multiple entities are
                        provided, field values will be automatically inner
                        joined. If only the --fields-file argument is
                        provided, data will be retrieved and returned. If both
                        --fields-file and --sql arguments are provided, a SQL
                        statement to retrieve the specified field data will be
                        automatically generated and returned. May not be used
                        in conjunction with the argument --fields.
  --sql                 
                        If provided, a SQL statement (string) will be returned
                        to query the set of entity.fields, instead of
                        returning stored values from the set of entity.fields
  --delim [DELIM], --delimiter [DELIM]
                        Always use exactly one of DELIMITER to separate fields
                        to be printed; if no delimiter is provided with this
                        flag, COMMA will be used
  -o OUTPUT, --output OUTPUT
                        Local filename or directory to be used ("-" indicates
                        stdout output). If not supplied, output will create a
                        file with a default name in the current folder
  --list-fields         List the names and titles of all fields available in
                        the dataset specified. When not specified together
                        with "--entities", it will return all the fields from
                        the main entity. Output will be a two column table,
                        field names and field titles, separated by a tab,
                        where field names will be of the format
                        "<entity name>.<field name>" and field titles will be
                        of the format "<field title>".
  --list-entities       List the names and titles of all the entities
                        available in the dataset specified. Output will be a
                        two column table, entity names and entity titles,
                        separated by a tab.
  --entities ENTITIES   Similar output to "--list-fields", however using
                        "--entities" will allow for specific entities to be
                        specified. When multiple entities are specified, use
                        comma as the delimiter. For example: "--list-fields
                        --entities entityA,entityB,entityC"
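
Illustrative invocations (the record ID and the entity/field names below are
placeholders):

  # list the entities available in a dataset
  $ dx extract_dataset record-xxxx --list-entities
  # generate SQL for two fields instead of retrieving the data
  $ dx extract_dataset record-xxxx --fields patient.age,patient.sex --sql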

extract_assay expression

usage: dx extract_assay expression [-h] [--list-assays]
                                   [--retrieve-expression]
                                   [--additional-fields-help]
                                   [--assay-name ASSAY_NAME]
                                   [--filter-json FILTER_JSON]
                                   [--filter-json-file FILTER_JSON_FILE]
                                   [--json-help] [--sql]
                                   [--additional-fields ADDITIONAL_FIELDS [ADDITIONAL_FIELDS ...]]
                                   [--expression-matrix] [--delim DELIM]
                                   [--output OUTPUT]
                                   [path]

Retrieve the selected data or generate SQL to retrieve the data from a
molecular expression assay in a dataset or cohort based on provided rules.

positional arguments:
  path                  v3.0 Dataset or Cohort object ID, project-id:record-
                        id, where ":record-id" indicates the record-id in the
                        currently selected project, or name

options:
  -h, --help            show this help message and exit
  --list-assays         List molecular expression assays available for query
                        in the specified Dataset or Cohort object
  --retrieve-expression
                        A flag to support specifying criteria of molecular
                        expression to retrieve. Retrieves rows from the
                        expression table, optionally extended with sample and
                        annotation information where the extension is inline
                        without affecting row count. By default returns the
                        following set of fields; "sample_id", "feature_id",
                        and "value". Additional fields may be returned using "
                        --additional-fields". Must be used with either "--
                        filter-json" or "--filter-json-file". Specify "--json-
                        help" following this option to get detailed
                        information on the json format and filters. When
                        filtering, one, and only one of "location",
                        "annotation.feature_id", or "annotation.feature_name"
                        may be supplied. If a Cohort object is supplied,
                        returned samples will be initially filtered to match
                        the cohort-defined set of samples, and any additional
                        filters will only further refine the cohort-defined
                        set.
  --additional-fields-help
                        List all fields available for output.
  --assay-name ASSAY_NAME
                        Specify a specific molecular expression assay to
                        query. If the argument is not specified, the default
                        assay used is the first assay listed when using the
                        argument, "--list-assays"
  --filter-json FILTER_JSON, -j FILTER_JSON
                        The full input JSON object as a string and
                        corresponding to "--retrieve-expression". Must be used
                        with "--retrieve-expression" flag. Either "--filter-
                        json" or "--filter-json-file" may be supplied, not
                        both.
  --filter-json-file FILTER_JSON_FILE, -f FILTER_JSON_FILE
                        The full input JSON object as a file and corresponding
                        to "--retrieve-expression". Must be used with "--
                        retrieve-expression" flag. Either "--filter-json" or "
                        --filter-json-file" may be supplied, not both.
  --json-help           When set, return a json template of "--retrieve-
                        expression" and a list of filters with definitions.
  --sql                 If the flag is provided, a SQL statement (as a string)
                        will be returned for the user to further query the
                        specified data, instead of returning actual data
                        values. Use of "--sql" is not supported when also
                        using the flag "--expression-matrix"/"-em".
  --additional-fields ADDITIONAL_FIELDS [ADDITIONAL_FIELDS ...]
                        A set of fields to return, in addition to the default
                        set: "sample_id", "feature_id", and "value". Fields
                        must be represented as field names and supplied as a
                        single string, where each field name is separated by a
                        single comma. For example, fieldA,fieldB,fieldC. Use
                        "--additional-fields-help" to get the full list of
                        output fields available.
  --expression-matrix, -em
                        If the flag is provided with "--retrieve-expression",
                        the returned data will be a matrix of sample IDs
                        (rows) by feature IDs (columns), where each cell is
                        the respective pairwise value. The flag is not
                        compatible with "--additional-fields". Additionally,
                        the flag is not compatible with an "expression"
                        filter. If the underlying expression value is missing,
                        the value will be empty in the returned data. Use of
                        "--expression-matrix"/"-em" is not supported when also
                        using the flag "--sql".
  --delim DELIM, --delimiter DELIM
                        Use exactly one DELIMITER character to separate the
                        fields to be printed; if no delimiter is provided with
                        this flag, COMMA will be used. If an output file is
                        specified and no --delim argument is passed, or the
                        delimiter is COMMA, the file suffix will be ".csv". If
                        a file is specified and the --delim argument is TAB,
                        the file suffix will be ".tsv". Otherwise, if a file
                        is specified and "--delim" is neither COMMA nor TAB,
                        the file suffix will be ".txt".
  --output OUTPUT, -o OUTPUT
                        A local filename to be used, where "-" indicates
                        printing to STDOUT. If -o/--output is not supplied,
                        default behavior is to create a file with a
                        constructed name in the current folder.
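
For example, one might first list the available assays, print the filter
template, and then retrieve expression data using a prepared filter file. The
record name "my_dataset" and the file names are placeholders, and the contents
of "filters.json" must follow the template printed by "--json-help":

  $ dx extract_assay expression my_dataset --list-assays
  $ dx extract_assay expression my_dataset --retrieve-expression --json-help
  $ dx extract_assay expression my_dataset --retrieve-expression \
      --filter-json-file filters.json --output expression.csv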

extract_assay germline

usage: dx extract_assay germline [-h] [--assay-name ASSAY_NAME]
                                 (--list-assays |
                                 --retrieve-allele [RETRIEVE_ALLELE] | 
                                 --retrieve-annotation [RETRIEVE_ANNOTATION] | 
                                 --retrieve-genotype [RETRIEVE_GENOTYPE])
                                 [--infer-nocall | --infer-ref]
                                 [--sql] [-o OUTPUT]
                                 path

Query a Dataset or Cohort for an instance of a germline variant assay and retrieve data, 
or generate SQL to retrieve data, as defined by user-provided filters.

positional arguments:
  path
                        v3.0 Dataset or Cohort object ID (project-id:record-id, 
                        where ":record-id" indicates the record-id in the currently 
                        selected project) or name.

optional arguments:
  -h, --help
                        Show this help message and exit.
  --assay-name ASSAY_NAME
                        Specify the genetic variant assay to query.
                        If the argument is not specified, the default assay
                        used is the first assay listed when using the argument
                        "--list-assays".
  --list-assays
                        List genetic variant assays available for query 
                        in the specified Dataset or Cohort object.
  --retrieve-allele [RETRIEVE_ALLELE]
                        A JSON object, either in a file (.json extension)
                        or as a string ('<JSON object>'), specifying criteria of
                        alleles to retrieve. Returns a list of allele IDs with
                        additional information. Use --json-help with this option to
                        get detailed information on the JSON format and filters.
  --retrieve-annotation [RETRIEVE_ANNOTATION]
                        A JSON object, either in a file (.json extension) or as a
                        string ('<JSON object>'), specifying criteria to retrieve
                        corresponding alleles and their annotation. Use --json-help
                        with this option to get detailed information on the JSON
                        format and filters.
  --retrieve-genotype [RETRIEVE_GENOTYPE]
                        A JSON object, either in a file (.json extension) or as a
                        string ('<JSON object>'), specifying criteria of samples to
                        retrieve. Returns a list of genotypes and associated sample
                        IDs and allele IDs. Genotype types "ref" and "no-call" have
                        no allele ID, and "half" types where the genotype is half
                        reference and half no-call also have no allele ID. All other
                        genotype types have an allele ID, including "half" types where
                        the genotype is half alternate allele and half no-call. Use
                        --json-help with this option to get detailed information on
                        the JSON format and filters.
  --infer-nocall        
                        When using the "--retrieve-genotype" option, infer genotypes 
                        with type "no-call" if they were excluded when the dataset 
                        was created. This option is only valid if the exclusion 
                        parameters at ingestion were set to "exclude_nocall=true", 
                        "exclude_halfref=false", and "exclude_refdata=false".			
  --infer-ref           
                        When using the "--retrieve-genotype" option, infer genotypes 
                        with type "ref" if they were excluded when the dataset was 
                        created. This option is only valid if the exclusion parameters 
                        at ingestion were set to "exclude_nocall=false", 
                        "exclude_halfref=false", and "exclude_refdata=true".
  --sql
                        If the flag is provided, a SQL statement, returned as a string, 
                        will be provided to query the specified data instead of 
                        returning data.
  -o OUTPUT, --output OUTPUT
                        A local filename or directory to be used, where "-" indicates 
                        printing to STDOUT. If -o/--output is not supplied, default 
                        behavior is to create a file with a constructed name in the 
                        current folder.
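
For example, a germline assay might be queried along these lines. The record
name "my_dataset" and the filter files are placeholders; the filter JSON must
follow the format described by --json-help:

  $ dx extract_assay germline my_dataset --list-assays
  $ dx extract_assay germline my_dataset --retrieve-allele allele_filter.json -o alleles.tsv
  $ dx extract_assay germline my_dataset --retrieve-genotype genotype_filter.json --sql -o -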

extract_assay somatic

usage: dx extract_assay somatic [-h] (--list-assays | --retrieve-meta-info |
                                --retrieve-variant [RETRIEVE_VARIANT])
                                [--additional-fields ADDITIONAL_FIELDS]
                                [--assay-name ASSAY_NAME]
                                [--include-normal-sample]
                                [--sql] [-o OUTPUT]
                                path

Query a Dataset or Cohort for an instance of a somatic variant assay and retrieve data, 
or generate SQL to retrieve data, as defined by user-provided filters.

positional arguments:
  path
                        v3.0 Dataset or Cohort object ID (project-id:record-id, where 
                        ":record-id" indicates the record-id in the currently 
                        selected project) or name.

optional arguments:
  -h, --help
                        Show this help message and exit.
  --list-assays
                        List somatic variant assays available for query in the 
                        specified Dataset or Cohort object.
  --assay-name ASSAY_NAME
                        Specify the somatic variant assay to query.
                        If the argument is not specified, the default assay
                        used is the first assay listed when using the argument
                        "--list-assays".
  --retrieve-variant [RETRIEVE_VARIANT]
                        A JSON object, either in a file (.json extension) or as a
                        string ('<JSON object>'), specifying criteria of somatic
                        variants to retrieve. Retrieves rows from the variant table,
                        optionally extended with sample and annotation information
                        (the extension is inline without affecting row count).
                        By default returns the following set of fields:
                        "assay_sample_id", "allele_id", "CHROM", "POS", "REF", and
                        "allele". Additional fields may be returned using
                        --additional-fields. Use --json-help with this option to get
                        detailed information on the JSON format and filters. When
                        filtering, the user must supply one, and only one, of
                        "location", "annotation.symbol", "annotation.gene",
                        "annotation.feature", or "allele.allele_id".
  --additional-fields ADDITIONAL_FIELDS
                        A set of fields to return, in addition to the default set:
                        "assay_sample_id", "allele_id", "CHROM", "POS", "REF", and
                        "allele". Fields must be represented as field names and
                        supplied as a single string, where each field name is
                        separated by a single comma. For example,
                        "fieldA,fieldB,fieldC". Internal spaces are permitted.
                        Use --additional-fields-help with this option to get
                        detailed information and the full list of output fields
                        available.
  --include-normal-sample
                        Include variants associated with normal samples in the assay.
                        If the flag is not supplied, variants from normal samples
                        will not be returned.
  --retrieve-meta-info
                        List meta information, as it exists in the original VCF 
                        headers for both INFO and FORMAT fields.
  --sql
                        If the flag is provided, a SQL statement, returned as a 
                        string, will be provided to query the specified data instead 
                        of returning data.
  -o OUTPUT, --output OUTPUT
                        A local filename or directory to be used, where "-" indicates 
                        printing to STDOUT. If -o/--output is not supplied, default 
                        behavior is to create a file with a constructed name in the 
                        current folder.
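
For example, the VCF meta information can be listed and somatic variants
retrieved with a filter file. The record name "my_dataset", the filter file,
and the field names are placeholders; use --json-help and
--additional-fields-help for the actual formats:

  $ dx extract_assay somatic my_dataset --retrieve-meta-info
  $ dx extract_assay somatic my_dataset --retrieve-variant variant_filter.json \
      --additional-fields "fieldA,fieldB" -o variants.tsv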

create_cohort

usage: dx create_cohort [--brief | --verbose] --from FROM
                        (--cohort-ids COHORT_IDS | --cohort-ids-file COHORT_IDS_FILE)
                        [-h]
                        [PATH]

Generates a new Cohort object on the platform from an existing Dataset or
Cohort object, using a list of IDs.

positional arguments:
  PATH                  DNAnexus path for the new data object. If not
                        provided, default behavior uses current project and
                        folder, and will name the object identical to the
                        assigned record-id.

optional arguments:
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --from FROM           v3.0 Dataset or Cohort object ID, project-id:record-
                        id, where ":record-id" indicates the record-id in the
                        currently selected project, or name
  --cohort-ids COHORT_IDS
                        A set of IDs used to subset the Dataset or Cohort
                        object as a comma-separated string. IDs must match
                        identically in the supplied Dataset. If a Cohort is
                        supplied instead of a Dataset, the intersection of
                        supplied and existing cohort IDs will be used to
                        create the new cohort.
  --cohort-ids-file COHORT_IDS_FILE
                        A set of IDs used to subset the Dataset or Cohort
                        object in a file with one ID per line and no header.
                        IDs must match identically in the supplied Dataset. If
                        a Cohort is supplied instead of a Dataset, the
                        intersection of supplied and existing cohort IDs will
                        be used to create the new cohort.
  -h, --help            Return the docstring and exit
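
For example, a cohort can be created from a comma-separated list of IDs or
from a file with one ID per line. The output path, record name, IDs, and file
name below are placeholders:

  $ dx create_cohort /cohorts/my_cohort --from my_dataset --cohort-ids "sample_1,sample_2,sample_3"
  $ dx create_cohort /cohorts/my_cohort --from project-xxxx:record-yyyy --cohort-ids-file ids.txt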

Category: metadata

View and modify metadata for projects and data objects.

See also dx describe and dx close.

set_details

usage: dx set_details [-h] [--env-help] [-a] [-f DETAILS_FILE] path [details]

Set the JSON details of a data object.

positional arguments:
  path                  ID or path to data object to modify
  details               JSON to store as details

optional arguments:
  -h, --help            show this help message and exit
  --env-help            Display help message for overriding environment
                        variables
  -a, --all             Apply to all results with the same name without
                        prompting
  -f DETAILS_FILE, --details-file DETAILS_FILE
                        Path to local file containing JSON to store as details
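
For example, details can be supplied inline as a JSON string or from a local
file (the object name, JSON content, and file name are placeholders):

  $ dx set_details my_file.bam '{"sample": "sample_1", "library": "lib_A"}'
  $ dx set_details my_file.bam -f details.json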

get_details

usage: dx get_details [-h] [--env-help] path

Get the JSON details of a data object.

positional arguments:
  path        ID or path to data object to get details for

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

set_visibility

usage: dx set_visibility [-h] [--env-help] [-a] path {hidden,visible}

Set visibility on a data object.

positional arguments:
  path              ID or path to data object to modify
  {hidden,visible}  Visibility that the object should have

optional arguments:
  -h, --help        show this help message and exit
  --env-help        Display help message for overriding environment variables
  -a, --all         Apply to all results with the same name without prompting

add_types

usage: dx add_types [-h] [--env-help] [-a] path type [type ...]

Add types to a data object. See
https://documentation.dnanexus.com/developer/api/data-object-lifecycle/types
for a list of DNAnexus types.

positional arguments:
  path        ID or path to data object to modify
  type        Types to add

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables
  -a, --all   Apply to all results with the same name without prompting

remove_types

usage: dx remove_types [-h] [--env-help] [-a] path type [type ...]

Remove types from a data object. See
https://documentation.dnanexus.com/developer/api/data-object-lifecycle/types
for a list of DNAnexus types.

positional arguments:
  path        ID or path to data object to modify
  type        Types to remove

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables
  -a, --all   Apply to all results with the same name without prompting

tag

usage: dx tag [-h] [--env-help] [-a] [--try T] path tag [tag ...]

Tag a project, data object, or execution. Note that a project context must be
either set or specified for data object IDs or paths.

positional arguments:
  path        ID or path to project, data object, or execution to modify
  tag         Tags to add

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables
  -a, --all   Apply to all results with the same name without prompting
  --try T     When modifying a job that was restarted, apply the change to try T of the restarted job. T=0 refers to
              the first try. Default is the last job try.
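
For example (the object name and tags are placeholders):

  $ dx tag my_file.bam qc-passed batch-2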

untag

usage: dx untag [-h] [--env-help] [-a] [--try T] path tag [tag ...]

Untag a project, data object, or execution. Note that a project context must
be either set or specified for data object IDs or paths.

positional arguments:
  path        ID or path to project, data object, or execution to modify
  tag         Tags to remove

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables
  -a, --all   Apply to all results with the same name without prompting
  --try T     When modifying a job that was restarted, apply the change to try T of the restarted job. T=0 refers to
              the first try. Default is the last job try.

rename

usage: dx rename [-h] [--env-help] [-a] path name

Rename a project or data object. To rename folders, use 'dx mv' instead. Note
that a project context must be either set or specified to rename a data
object. To specify a project or a project context, append a colon character
":" after the project ID or name.

positional arguments:
  path        Path to project or data object to rename
  name        New name

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables
  -a, --all   Apply to all results with the same name without prompting

set_properties

usage: dx set_properties [-h] [--env-help] [-a] [--try T]
                         path propertyname=value [propertyname=value ...]

Set properties of a project, data object, or execution. Note that a project
context must be either set or specified for data object IDs or paths.

positional arguments:
  path                ID or path to project, data object, or execution to
                      modify
  propertyname=value  Key-value pairs of property names and their new values

optional arguments:
  -h, --help          show this help message and exit
  --env-help          Display help message for overriding environment
                      variables
  -a, --all           Apply to all results with the same name without
                      prompting
  --try T             When modifying a job that was restarted, apply the change to try T of the restarted job. T=0
                      refers to the first try. Default is the last job try.
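
For example, properties can be set on a data object or an execution (the
object name, job ID, and key-value pairs are placeholders):

  $ dx set_properties my_file.bam sample=sample_1 sequencer=novaseq
  $ dx set_properties job-xxxx run=42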

unset_properties

usage: dx unset_properties [-h] [--env-help] [-a] [--try T]
                           path propertyname [propertyname ...]

Unset properties of a project, data object, or execution. Note that a project
context must be either set or specified for data object IDs or paths.

positional arguments:
  path          ID or path to project, data object, or execution to modify
  propertyname  Property names to unset

optional arguments:
  -h, --help    show this help message and exit
  --env-help    Display help message for overriding environment variables
  -a, --all     Apply to all results with the same name without prompting
  --try T       When modifying a job that was restarted, apply the change to try T of the restarted job. T=0 refers
                to the first try. Default is the last job try.

Category: exec

Manage and run your apps, applets, and workflows.

build

usage: dx build [-h] [--env-help] [--brief | --verbose] [--ensure-upload]
                [--force-symlinks] [--app] [--workflow] [--globalworkflow]
                [-d DESTINATION] [--dry-run] [--publish] [--from _FROM]
                [--remote] [--no-watch] [-f] [-a] [-v VERSION]
                [-b USER_OR_ORG] [--no-check-syntax]
                [--no-version-autonumbering] [--no-update]
                [--no-dx-toolkit-autodep] [--no-parallel-build]
                [--no-temp-build-project] [-y] [--extra-args EXTRA_ARGS]
                [--run ...] [--region REGION] [--keep-open]
                [src_dir]

Build an applet, app, or workflow object from a local source directory or an
app from an existing applet in the platform. You can use dx-app-wizard to
generate a skeleton directory of an app/applet with the necessary files.

positional arguments:
  src_dir               Source directory that contains dxapp.json or
                        dxworkflow.json. (default: current directory)

optional arguments:
  -h, --help            show this help message and exit
  --env-help            Display help message for overriding environment
                        variables
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --ensure-upload       If specified, will bypass computing checksum of
                        resources directory and upload it unconditionally; by
                        default, will compute checksum and upload only if it
                        differs from a previously uploaded resources bundle.
  --force-symlinks      If specified, will not attempt to dereference symbolic
                        links pointing outside of the resource directory. By
                        default, any symlinks within the resource directory
                        are kept as links while links to files outside the
                        resource directory are dereferenced (note that links
                        to directories outside of the resource directory will
                        cause an error).
  --app, --create-app   Create an app.
  --workflow, --create-workflow
                        Create a workflow.
  --globalworkflow, --create-globalworkflow
                        Create a global workflow.
  --dry-run, -n         Do not create an app(let): only perform local checks
                        and compilation steps, and show the spec of the
                        app(let) that would have been created.
  --remote              Build the app remotely by uploading the source
                        directory to the DNAnexus Platform and building it
                        there. This option is useful if you would otherwise
                        need to cross-compile the app(let) to target the
                        Execution Environment.
  --no-watch            Don't watch the real-time logs of the remote builder.
                        (This option is only applicable if --remote was
                        specified.)
  -v VERSION, --version VERSION
                        Override the version number supplied in the manifest.
                        This option needs to be specified when using the
                        --from option.
  --no-check-syntax     Warn but do not fail when syntax problems are found
                        (default is to fail on such errors)
  --no-dx-toolkit-autodep
                        Do not auto-insert the dx-toolkit dependency (default
                        is to add it if it would otherwise be absent from the
                        runSpec)
  --no-parallel-build   Build with make instead of make -jN.
  --extra-args EXTRA_ARGS
                        Arguments (in JSON format) to pass to the /applet/new
                        API method, overriding all other settings
  --run ...             Run the app or applet after building it (options
                        following this are passed to dx run; run at high
                        priority by default)
  --keep-open           Do not close workflow after building it. Cannot be
                        used when building apps, applets or global workflows.

options for creating apps or globalworkflows:
  (Only valid when --app/--create-app/--globalworkflow/--create-
  globalworkflow is specified)

  --publish             Publish the resulting app/globalworkflow and make it
                        the default.
  --from _FROM          ID or path of the source applet/workflow to create an
                        app/globalworkflow from. Source directory src_dir
                        cannot be given when using this option
  -b USER_OR_ORG, --bill-to USER_OR_ORG
                        Entity (of the form user-NAME or org-ORGNAME) to bill
                        for the app/globalworkflow.
  --no-version-autonumbering
                        Only attempt to create the version number supplied in
                        the manifest (that is, do not try to create an
                        autonumbered version such as 1.2.3+git.ab1b1c1d if
                        1.2.3 already exists and is published).
  --no-update           Never update an existing unpublished
                        app/globalworkflow in place.
  --no-temp-build-project
                        When building an app in a single region, build its
                        applet in the current project instead of a temporary
                        project.
  -y, --yes             Do not ask for confirmation for potentially dangerous
                        operations
  --region REGION       Enable the app/globalworkflow in this region. This
                        flag can be specified multiple times to enable the
                        app/globalworkflow in multiple regions. If --region is
                        not specified, then the enabled region(s) will be
                        determined by 'regionalOptions' in dxapp.json, or the
                        project context.

options for creating applets or workflows:
  (Only valid when --app/--create-app/--globalworkflow/--create-
  globalworkflow is NOT specified)

  -d DESTINATION, --destination DESTINATION
                        Specifies the destination project, destination folder,
                        and/or name for the applet, in the form
                        [PROJECT_NAME_OR_ID:][/[FOLDER/][NAME]]. Overrides the
                        project, folder, and name fields of the dxapp.json or
                        dxworkflow.json, if they were supplied.
  -f, --overwrite       Remove existing applet(s) of the same name in the
                        destination folder. This option is not yet supported
                        for workflows.
  -a, --archive         Archive existing applet(s) of the same name in the
                        destination folder. This option is not yet supported
                        for workflows.
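
For example, from a source directory created with dx-app-wizard one might
build an applet into a specific project and folder, or build and publish an
app. The directory, project, and folder names below are placeholders:

  $ dx build my_applet_dir -d myproject:/applets/
  $ dx build --app --publish my_applet_dir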

add users

usage: dx add users [-h] [--env-help] app authorizedUser [authorizedUser ...]

Add users or orgs to the list of authorized users of an app. Published
versions of the app will only be accessible to users represented by this list
and to developers of the app. Unpublished versions are restricted to the
developers.

positional arguments:
  app             Name or ID of an app
  authorizedUser  One or more users or orgs to add

optional arguments:
  -h, --help      show this help message and exit
  --env-help      Display help message for overriding environment variables

add developers

usage: dx add developers [-h] [--env-help] app developer [developer ...]

Add users or orgs to the list of developers for an app. Developers are able to
build and publish new versions of the app, and add or remove others from the
list of developers and authorized users.

positional arguments:
  app         Name or ID of an app
  developer   One or more users or orgs to add

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

list users

usage: dx list users [-h] [--env-help] app

List the authorized users of an app. Published versions of the app will only
be accessible to users represented by this list and to developers of the app.
Unpublished versions are restricted to the developers

positional arguments:
  app         Name or ID of an app

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

list developers

usage: dx list developers [-h] [--env-help] app

List the developers for an app. Developers are able to build and publish new
versions of the app, and add or remove others from the list of developers and
authorized users.

positional arguments:
  app         Name or ID of an app

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

remove users

usage: dx remove users [-h] [--env-help]
                       app authorizedUser [authorizedUser ...]

Remove users or orgs from the list of authorized users of an app. Published
versions of the app will only be accessible to users represented by this list
and to developers of the app. Unpublished versions are restricted to the
developers

positional arguments:
  app             Name or ID of an app
  authorizedUser  One or more users or orgs to remove

optional arguments:
  -h, --help      show this help message and exit
  --env-help      Display help message for overriding environment variables

remove developers

usage: dx remove developers [-h] [--env-help] app developer [developer ...]

Remove users or orgs from the list of developers for an app. Developers are
able to build and publish new versions of the app, and add or remove others
from the list of developers and authorized users.

positional arguments:
  app         Name or ID of an app
  developer   One or more users to remove

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

publish

usage: dx publish [-h] [--no-default] executable

Release a version of the executable (app or global workflow) to authorized
users.

positional arguments:
  executable    ID or name and version of an app/global workflow, e.g.
                myqc/1.0.0

optional arguments:
  -h, --help    show this help message and exit
  --no-default  Do not set a "default" alias on the published version

install

usage: dx install [-h] [--env-help] app

Install an app by name. To see a list of apps you can install, hit <TAB> twice
after "dx install" or run "dx find apps".

positional arguments:
  app         ID or name of app to install

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

uninstall

usage: dx uninstall [-h] [--env-help] app

Uninstall an app by name.

positional arguments:
  app         ID or name of app to uninstall

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

run

usage: dx run [-i INPUT] [-j INPUT_JSON] [-f FILENAME] [--brief | --verbose]
              [--env-help] [--extra-args EXTRA_ARGS]
              [--instance-type INSTANCE_TYPE_OR_MAPPING]
              [--instance-type-by-executable DOUBLE_MAPPING]
              [--instance-type-help] [--property KEY=VALUE] [--tag TAG]
              [-d DEPENDS_ON] [-h] [--clone CLONE] [--alias ALIAS]
              [--destination PATH] [--batch-folders] [--project PROJECT]
              [--stage-output-folder STAGE_ID FOLDER]
              [--stage-relative-output-folder STAGE_ID FOLDER] [--name NAME]
              [--delay-workspace-destruction] [--priority {low,normal,high}]
              [--head-job-on-demand] [-y] [--wait] [--watch]
              [--allow-ssh [ADDRESS]] [--ssh] [--ssh-proxy <address>:<port>]
              [--debug-on {AppError,AppInternalError,ExecutionError,All}]
              [--ignore-reuse | --ignore-reuse-stage STAGE_ID]
              [--rerun-stage STAGE_ID] [--batch-tsv FILE]
              [--instance-count INSTANCE_COUNT_OR_MAPPING] [--input-help]
              [--detach] [--cost-limit cost_limit] [-r RANK]
              [--max-tree-spot-wait-time MAX_TREE_SPOT_WAIT_TIME]
              [--max-job-spot-wait-time MAX_JOB_SPOT_WAIT_TIME]
              [--detailed-job-metrics]
              [--preserve-job-outputs | --preserve-job-outputs-folder JOB_OUTPUTS_FOLDER]
              [executable]

Run an applet, app, or workflow.  To see a list of executables you can run,
hit <TAB> twice after "dx run" or run "dx find apps" or "dx find
globalworkflows" to see a list of available apps and global workflows.

If any inputs are required but not specified, an interactive mode for
selecting inputs will be launched.  Inputs can be set in multiple ways.  Run
"dx run --input-help" for more details.

Run "dx run --instance-type-help" to see a list of specifications for
computers available to run executables.

positional arguments:
  executable            Name or ID of an applet, app, or workflow to run; must
                        be provided if --clone is not set

optional arguments:
  -i INPUT, --input INPUT
                        An input to be added using "<input
                        name>[:<class>]=<input value>" (provide "class" if
                        there is no input spec; it can be any job IO class,
                        e.g. "string", "array:string", or "array"; if "class"
                        is "array" or not specified, the value will be
                        attempted to be parsed as JSON and is otherwise
                        treated as a string)
  -j INPUT_JSON, --input-json INPUT_JSON
                        The full input JSON (keys=input field names,
                        values=input field values)
  -f FILENAME, --input-json-file FILENAME
                        Load input JSON from FILENAME ("-" to use stdin)
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment
                        variables
  --extra-args EXTRA_ARGS
                        Arguments (in JSON format) to pass to the underlying
                        API method, overriding the default settings
  --instance-type INSTANCE_TYPE_OR_MAPPING
                        When running an app or applet, the mapping lists the
                        executable's entry points or "*" as keys, and instance
                        types to use for these entry points as values. When
                        running a workflow, the specified instance types can
                        be prefixed by a stage name or stage index followed by
                        "=" to apply to a specific stage, or apply to all
                        workflow stages without such a prefix. The instance
                        type corresponding to the "*" key is applied to all
                        entry points not explicitly mentioned in the
                        --instance-type mapping. Specifying a single instance
                        type is equivalent to using it for all entry points,
                        so "--instance-type mem1_ssd1_v2_x2" is the same as
                        "--instance-type '{"*":"mem1_ssd1_v2_x2"}'". Note that
                        "dx run" calls within the execution subtree may
                        override the values specified at the root of the
                        execution tree. See dx run --instance-type-help for
                        details.
  --instance-type-by-executable DOUBLE_MAPPING
                        Specifies instance types by app or applet ID, then by
                        entry point within the executable. The order of
                        priority for this specification is:
                          * --instance-type, systemRequirements and
                            stageSystemRequirements specified at runtime
                          * stage's systemRequirements, systemRequirements
                            supplied to /app/new and /applet/new at
                            workflow/app/applet build time
                          * systemRequirementsByExecutable specified in
                            downstream executions (if any)
                        See dx run --instance-type-help for details.
  --instance-type-help  Print help for specifying instance types
  --property KEY=VALUE  Key-value pair to add as a property; repeat as
                        necessary, e.g. "--property key1=val1 --property
                        key2=val2"
  --tag TAG             Tag for the resulting execution; repeat as necessary,
                        e.g. "--tag tag1 --tag tag2"
  -d DEPENDS_ON, --depends-on DEPENDS_ON
                        ID of job, analysis, or data object that must be in
                        the "done" or "closed" state, as appropriate, before
                        this executable can be run; repeat as necessary (e.g.
                        "--depends-on id1 ... --depends-on idN"). Cannot be
                        supplied when running workflows
  -h, --help            show this help message and exit
  --clone CLONE         Job or analysis ID or name to use as a source of
                        default options (will use the exact same executable
                        ID, destination project and folder, job input,
                        instance type requests, and a similar name unless
                        explicitly overridden by command-line arguments. When
                        using an analysis with --clone, a workflow executable
                        cannot be overridden and should not be provided.)
  --alias ALIAS, --version ALIAS
                        Alias (tag) or version of the app to run (default:
                        "default" if an app)
  --destination PATH, --folder PATH
                        The full project:folder path in which to output the
                        results. By default, the current working directory
                        will be used.
  --batch-folders       Output results to separate folders, one per batch,
                        using batch ID as the name of the output folder. The
                        batch output folder location will be relative to the
                        path set in --destination
  --project PROJECT     Project name or ID in which to run the executable.
                        This can also be specified together with the output
                        folder in --destination.
  --stage-output-folder STAGE_ID FOLDER
                        A stage identifier (ID, name, or index), and a folder
                        path to use as its output folder
  --stage-relative-output-folder STAGE_ID FOLDER
                        A stage identifier (ID, name, or index), and a
                        relative folder path to the workflow output folder to
                        use as the output folder
  --name NAME           Name for the job (default is the app or applet name)
  --delay-workspace-destruction
                        Whether to keep the job's temporary workspace around
                        for debugging purposes for 3 days after it succeeds or
                        fails
  --priority {low,normal,high}
                        Request a scheduling priority for all resulting jobs.
                        Defaults to high when --watch, --ssh, or --allow-ssh
                        flags are used.
  --head-job-on-demand  Requests that the head job of an app or applet be run
                        in an on-demand instance. Note that the
                        --head-job-on-demand option will override the
                        --priority setting for the head job
  -y, --yes             Do not ask for confirmation
  --wait                Wait until the job is done before returning
  --watch               Watch the job after launching it. Defaults --priority to high.
  --allow-ssh [ADDRESS]
                        Configure the job to allow SSH access. Defaults
                        --priority to high. If an argument is supplied, it is
                        interpreted as an IP range, e.g. "--allow-ssh
                        1.2.3.4". If no argument is supplied then the client
                        IP visible to the DNAnexus API server will be used by
                        default
  --ssh                 Configure the job to allow SSH access and connect to
                        it after launching. Defaults --priority to high.
  --ssh-proxy <address>:<port>
                        SSH connect via proxy, argument supplied is used as
                        the proxy address and port
  --debug-on {AppError,AppInternalError,ExecutionError,All}
                        Configure the job to hold for debugging when any of
                        the listed errors occur
  --ignore-reuse        Disable job reuse for execution
  --ignore-reuse-stage STAGE_ID
                        A stage (using its ID, name, or index) for which job
                        reuse should be disabled, if a stage points to another
                        (nested) workflow the ignore reuse option will be
                        applied to the whole subworkflow. This option
                        overwrites any ignoreReuse fields set on app(let)s or
                        the workflow during build time; repeat as necessary
  --rerun-stage STAGE_ID
                        A stage (using its ID, name, or index) to rerun, or
                        "*" to indicate all stages should be rerun; repeat as
                        necessary
  --batch-tsv FILE      A file in tab separated value (tsv) format, with a
                        subset of the executable input arguments. A job will
                        be launched for each table row.
  --instance-count INSTANCE_COUNT_OR_MAPPING
                        Specify spark cluster instance count(s). It can be an
                        int or a mapping of the format '{"entrypoint": <number
                        of instances>}'
  --input-help          Print help and examples for how to specify inputs
  --detach              When invoked from a job, detaches the new job from the
                        creator job so the new job will appear as a typical
                        root execution. Setting DX_RUN_DETACH environment
                        variable to 1 causes this option to be set by default.
  --cost-limit cost_limit
                        Maximum cost of the job before termination. In the
                        case of workflows, it is the cost of the entire
                        analysis job. For batch runs, this limit is applied
                        per job.
  -r RANK, --rank RANK  Set the rank of the root execution, integer between
                        -1024 and 1023. Requires executionRankEnabled license
                        feature for the billTo. Default is 0.
  --max-tree-spot-wait-time MAX_TREE_SPOT_WAIT_TIME
                        The amount of time allocated to each path in the root
                        execution's tree to wait for Spot (in seconds, or use
                        suffix s, m, h, d, w, M, y)
  --max-job-spot-wait-time MAX_JOB_SPOT_WAIT_TIME
                        The amount of time allocated to each job in the root
                        execution's tree to wait for Spot (in seconds, or use
                        suffix s, m, h, d, w, M, y)
  --detailed-job-metrics
                        Collect CPU, memory, network and disk metrics every 60
                        seconds
  --preserve-job-outputs
                        Copy cloneable outputs of every non-reused job
                        entering "done" state in this root execution R into
                        the "intermediateJobOutputs" subfolder under R's
                        output folder.  As R's root job or root analysis'
                        stages complete, R's regular outputs will be moved to
                        R's regular output folder.
  --preserve-job-outputs-folder JOB_OUTPUTS_FOLDER
                        Similar to --preserve-job-outputs, copy cloneable
                        outputs of every non-reused job entering "done" state
                        in this root execution to the specified folder in the
                        project.  JOB_OUTPUTS_FOLDER starting with '/' refers
                        to an absolute path within the project, otherwise, it
                        refers to a subfolder under root execution's output
                        folder.
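
For example, a common invocation supplies named inputs, an output destination,
and runs without confirmation while watching the logs. The applet name,
project, folder, and input names below are placeholders and assume the applet
defines inputs "num" and "str":

  $ dx run my_applet -inum=34 -istr=ABC \
      --destination myproject:/results/ -y --watch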

run --input-help

Help: Specifying input for dx run

There are several ways to specify inputs.  In decreasing order of precedence,
they are:

  1) inputs given in the interactive mode
  2) inputs listed individually with the -i/--input command line argument
  3) JSON given in --input-json
  4) JSON given in --input-json-file
  5) if cloning a job with --clone, the input that the job was run with
     (this will get overridden completely if -j/--input-json or
      -f/--input-json-file are provided)
  6) default values set in a workflow or an executable's input spec

SPECIFYING INPUTS BY NAME

  Use the -i/--input flag to specify each input field by name and value.

    Syntax :  -i<input name>=<input value>
    Example:  dx run myApp -inum=34 -istr=ABC -ifiles=reads1.fq.gz -ifiles=reads2.fq.gz

  The example above runs an app called "myApp" with 3 inputs called num (class
  int), str (class string), and files (class array:file).  (For this method to
  work, the app must have an input spec so inputs can be interpreted
  correctly.)  The same input field can be used multiple times if the input
  class is an array.

  Job-based object references can also be provided using the <job id>:<output
  name> syntax:

    Syntax :  -i<input name>=<job id>:<output name>
    Example:  dx run mapper -ireads=job-B0fbxvGY00j9jqGQvj8Q0001:reads

  You can extract an element of an array output using the <job id>:<output
  name>.<element> syntax:

    Syntax :  -i<input name>=<job id>:<output name>.<element>
    Example:  dx run mapper -ireadsfile=job-B0fbxvGY00j9jqGQvj8Q0001:reads.1
              # Extracts second element of array output

  When executing workflows, stage inputs can be specified using the <stage
  key>.<input name>=<value> syntax:

    Syntax :  -i<stage key>.<input name>=<input value>
    Example:  dx run my_workflow -i0.reads="My reads file"

  <stage key> may be either the ID of the stage, the name of the stage, or the
  number of the stage in the workflow (0 indicates the first stage).

  If the workflow has explicit, workflow-level inputs, input values must be
  passed to these workflow-level input fields using the <workflow input
  name>=<value> syntax:

    Syntax :  -i<workflow input name>=<input value>
    Example:  dx run my_workflow -ireads="My reads file"

SPECIFYING JSON INPUT

  JSON input can be used directly using the -j/--input-json or
  -f/--input-json-file flags.  When running an app or applet, the keys should
  be the input field names for the app or applet.  When running a workflow,
  the keys should be the input field names for each stage, prefixed by the
  stage key and a period, e.g. "my_stage.reads" for the "reads" input of stage
  "my_stage".

run --instance-type-help

Help: Specifying instance types for dx run

Instance types can be requested with --instance-type-by-executable and
--instance-type arguments, with --instance-type-by-executable specification
taking priority over --instance-type, workflow's stageSystemRequirements, and
specifications provided during app and applet creation.

--instance-type specifications do not propagate to subjobs and sub-analyses
launched from a job with a /executable-xxxx/run call, but
--instance-type-by-executable specifications do (where executable refers to an
app, applet or workflow).

When running an app or an applet, --instance-type lets you specify the
instance type to be used by each entry point.
A single instance type can be requested to be used by all entry points by
providing the instance type name.  Different instance types can also be
requested for different entry points of an app or applet by providing a JSON
string mapping from function names to instance types, e.g.

    {"main": "mem2_hdd2_v2_x2", "other_function": "mem1_ssd1_v2_x2"}

When running a workflow, --instance-type lets you specify instance types for
each entry point of each workflow stage by prepending the request with "<stage
identifier>=" (where a stage identifier is an ID, a numeric index, or a unique
stage name) and repeating the argument for as many stages as desired.  If no
stage identifier is provided, the value is applied as a default for all
stages.

Examples

1. Run the main entry point of applet-xxxx on mem1_ssd1_v2_x2, and all other
entry points on mem1_ssd1_v2_x4
    dx run applet-xxxx --instance-type '{"main": "mem1_ssd1_v2_x2",
                                         "*":    "mem1_ssd1_v2_x4"}'

2. Run all entry points of the first stage with mem2_hdd2_v2_x2, the main
entry point of the second stage with mem1_ssd1_v2_x4, the stage named "BWA"
with mem1_ssd1_v2_x2, and all other stages with mem2_hdd2_v2_x4

    dx run workflow-xxxx \
     --instance-type 0=mem2_hdd2_v2_x2 \
     --instance-type 1='{"main": "mem1_ssd1_v2_x4"}' \
     --instance-type BWA=mem1_ssd1_v2_x2 \
     --instance-type mem2_hdd2_v2_x4

The --instance-type-by-executable argument is a JSON string with a double
mapping that specifies instance types by app or applet id, then by entry point
within the executable. This specification applies across the entire nested
execution tree and is propagated across /executable-xxxx/run calls issued
within the execution tree.

More examples
3. Force every job in the execution tree to use mem2_ssd1_v2_x2

    dx run workflow-xxxx --instance-type-by-executable '{"*": {"*": "mem2_ssd1_v2_x2"}}'

4. Force every job in the execution tree executing applet-xyz1 to use
mem2_ssd1_v2_x2

    dx run workflow-xxxx --instance-type-by-executable '{"applet-xyz1":{"*": "mem2_ssd1_v2_x2"}}'

5. Force every job executing applet-xyz1 to use mem2_ssd1_v2_x4 for the main
entry point and mem2_ssd1_v2_x2 for all other entry points. Also force the
collect entry point of all executables other than applet-xyz1 to use
mem2_ssd1_v2_x8. Other entry points of executables other than applet-xyz1 may
be overridden by lower-priority mechanisms

    dx run workflow-xxxx --instance-type-by-executable \
           '{"applet-xyz1":  {"main":    "mem2_ssd1_v2_x4", "*": "mem2_ssd1_v2_x2"},
             "*":            {"collect": "mem2_ssd1_v2_x8"}}'

6. Force every job executing applet-xxxx to use mem2_ssd1_v2_x2 for all entry
points in the entire execution tree. Also force stage 0 executable to run on
mem2_ssd1_v2_x4, unless stage 0 invokes applet-xxxx, in which case
applet-xxxx's jobs will use mem2_ssd1_v2_x2 as specified by
--instance-type-by-executable.

    dx run workflow-xxxx \
     --instance-type-by-executable  '{"applet-xxxx": {"*": "mem2_ssd1_v2_x2"}}' \
     --instance-type 0=mem2_ssd1_v2_x4

See "Requesting Instance Types" in DNAnexus documentation for more details.

ssh

usage: dx ssh [-h] [--env-help] [--ssh-proxy <address>:<port>]
              [--no-firewall-update | --allow-ssh [ADDRESS]]
              job_id ...

Use an SSH client to connect to a job being executed on the DNAnexus platform.
The job must be launched using "dx run --allow-ssh" or equivalent API options.
Use "dx ssh_config" or the Profile page on the DNAnexus website to configure
SSH for your DNAnexus account.

positional arguments:
  job_id                Name of job to connect to
  ssh_args              Command-line arguments to pass to the SSH client

optional arguments:
  -h, --help            show this help message and exit
  --env-help            Display help message for overriding environment
                        variables
  --ssh-proxy <address>:<port>
                        SSH connect via proxy, argument supplied is used as
                        the proxy address and port
  --no-firewall-update  Do not update the allowSSH allowed IP ranges before
                        connecting with ssh
  --allow-ssh [ADDRESS]
                        Configure the job to allow SSH access from an IP
                        range, e.g. "--allow-ssh 1.2.3.4". If no argument is
                        supplied then the client IP visible to the DNAnexus
                        API server will be used by default
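
For example, to open a shell on a running job, optionally through a proxy
(the job ID and proxy address are placeholders):

  $ dx ssh job-xxxx
  $ dx ssh job-xxxx --ssh-proxy 10.0.0.1:3128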

ssh_config

usage: dx ssh_config [-h] [--env-help] [--revoke] ...

Configure SSH access credentials for your DNAnexus account

positional arguments:
  ssh_keygen_args  Command-line arguments to pass to ssh-keygen

optional arguments:
  -h, --help       show this help message and exit
  --env-help       Display help message for overriding environment variables
  --revoke         Revoke SSH public key associated with your DNAnexus
                   account; you will no longer be able to SSH into any jobs.

watch

usage: dx watch [-h] [--env-help] [--color {off,on,auto}]
                [-n NUM_RECENT_MESSAGES] [--tree | --try T]
                [-l {EMERG,ALERT,CRITICAL,ERROR,WARNING,NOTICE,INFO,DEBUG,STDERR,STDOUT,METRICS}]
                [--get-stdout] [--get-stderr] [--get-streams]
                [--no-timestamps] [--job-ids] [--no-job-info] [-q] [-f FORMAT]
                [--no-wait] [--metrics {interspersed,none,top,csv}]
                [--metrics-help]
                jobid

Monitors logging output from a running job

positional arguments:
  jobid                 ID of the job to watch

optional arguments:
  -h, --help            show this help message and exit
  --env-help            Display help message for overriding environment
                        variables
  --color {off,on,auto}
                        Set when color is used (color=auto is used when stdout
                        is a TTY)
  -n NUM_RECENT_MESSAGES, --num-recent-messages NUM_RECENT_MESSAGES
                        Number of recent messages to get
  --tree                Include the entire job tree
  --try T               Allows watching older tries of a restarted job. T=0 refers to the first try. Default is the
                        last job try.
  -l {EMERG,ALERT,CRITICAL,ERROR,WARNING,NOTICE,INFO,DEBUG,STDERR,STDOUT,METRICS}, --levels {EMERG,ALERT,CRITICAL,ERROR,WARNING,NOTICE,INFO,DEBUG,STDERR,STDOUT,METRICS}
  --get-stdout          Extract stdout only from this job
  --get-stderr          Extract stderr only from this job
  --get-streams         Extract only stdout and stderr from this job
  --no-timestamps       Omit timestamps from messages
  --job-ids             Print job ID in each message
  --no-job-info         Omit job info and status updates
  -q, --quiet           Do not print extra info messages
  -f FORMAT, --format FORMAT
                        Message format. Available fields: job, level, msg,
                        date
  --no-wait, --no-follow
                        Exit after the first new message is received, instead
                        of waiting for all logs
  --metrics {interspersed,none,top,csv}
                        Select display mode for detailed job metrics if they
                        were collected and are available based on retention
                        policy; see --metrics-help for details
  --metrics-help        Print help for displaying detailed job metrics
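
For example, to follow a job's log, or to dump only its stdout (the job ID is
a placeholder):

  $ dx watch job-xxxx
  $ dx watch job-xxxx --get-stdout --quiet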

watch --metrics-help

Help: Displaying detailed job metrics
Detailed job metrics describe a job's consumption of CPU, memory, disk, network, etc.
at 60-second intervals.
If collection of job metrics was enabled for a job (e.g. with
dx run --detailed-job-metrics), the metrics can be displayed by "dx watch" for
15 days from the time the job started running.

Note that all reported data-related values are in base 2 units - i.e. 1 MB = 1024 * 1024 bytes.

The "interspersed" default mode shows METRICS job log messages interspersed with other 
jog log messages.

The "none" mode omits all METRICS messages from "dx watch" output.

The "top" mode interactively shows the latest METRICS message at the top of the screen 
and updates it for running jobs instead of showing every METRICS message interspersed 
with the currently-displayed job log messages. 
For completed jobs, this mode does not show any metrics. 
Built-in help describing key bindings is available by pressing "?".

The "csv" mode outputs the following columns with headers in csv format to stdout:
- timestamp: An integer representing the number of milliseconds since the Unix epoch.
- cpuCount: The number of CPUs available on the instance that ran the job.
- cpuUsageUser: The percentage of cpu time spent in user mode on the instance during the metric collection period.
- cpuUsageSystem: The percentage of cpu time spent in system mode on the instance during the metric collection period.
- cpuUsageIowait: The percentage of cpu time spent waiting for I/O operations to complete on the instance during the metric collection period.
- cpuUsageIdle: The percentage of cpu time spent idle on the instance during the metric collection period.
- memoryUsedBytes: Bytes of memory used (calculated as total - free - buffers - cache - slab_reclaimable + shared_memory).
- memoryTotalBytes: Total memory available on the instance that ran the job.
- diskUsedBytes: Bytes of storage allocated to the AEE that are used by the filesystem.
- diskTotalBytes: Total bytes of disk space available to the job within the AEE.
- networkOutBytes: Total network bytes transferred out from AEE since the job started. Includes "dx upload" bytes.
- networkInBytes: Total network bytes transferred into AEE since the job started. Includes "dx download" bytes.
- diskReadBytes: Total bytes read from the AEE-accessible disks since the job started.
- diskWriteBytes: Total bytes written to the AEE-accessible disks since the job started.
- diskReadOpsCount: Total disk read operation count against AEE-accessible disk since the job started.
- diskWriteOpsCount: Total disk write operation count against AEE-accessible disk since the job started.

Note 1: cpuUsageUser, cpuUsageSystem, cpuUsageIowait, cpuUsageIdle and memoryUsedBytes metrics reflect usage by processes inside and outside of the AEE which include DNAnexus services responsible for proxying DNAnexus data.
Note 2: cpuUsageUser + cpuUsageSystem + cpuUsageIowait + cpuUsageIdle + cpuUsageSteal = 100. cpuUsageSteal is unreported, but can be derived from the other 4 quantities given that they add up to 100.
Note 3: cpuUsage numbers are rounded to 2 decimal places.
Note 4: networkOutBytes may be larger than the job's egressReport, which does not include "dx upload" bytes.

The format of METRICS job log lines is defined as follows using the example below:

2023-03-15 12:23:44 some-job-name METRICS ** CPU usr/sys/idl/wai: 24/11/1/64% (4 cores) * Memory: 1566/31649MB * Storage: 19/142GB * Net: 10↓/0↑MBps * Disk: r/w 20/174 MBps iops r/w 8/1300

"2023-03-15 12:23:44" is the metrics collection time.
"METRICS" is a type of job log line containing detailed job metrics.
"CPU usr/sys/idl/wai: 24/11/1/64%" maps to cpuUsageUser, cpuUsageSystem, cpuUsageIdle, cpuUsageIowait values.
"(4 cores)" maps to cpuCount.
"Memory: 1566/31649MB" maps to memoryUsedBytes and memoryTotalBytes.
"Storage: 19/142GB" maps to diskUsedBytes and diskTotalBytes.
"Net: 10↓/0↑MBps" is derived from networkOutBytes and networkInBytes cumulative totals by subtracting previous measurement from the measurement at the metric collection time, and dividing the difference by the time span between the two measurements.
"Disk: r/w 20/174 MBps iops r/w 8/1300" is derived similar to "Net:" from diskReadBytes, diskWriteBytes, diskReadOpsCount, and diskWriteOpsCount.

terminate

usage: dx terminate [-h] [--env-help] jobid [jobid ...]

Terminate one or more jobs or analyses

positional arguments:
  jobid       ID of a job or analysis to terminate

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables
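
For example, to terminate two (hypothetical) executions at once:

  $ dx terminate job-xxxx analysis-yyyy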

find

usage: dx find [-h] category ...

Search functionality over various DNAnexus entities.

positional arguments:
  category
    apps           List available apps
    globalworkflows
                   List available global workflows
    jobs           List jobs in the current project
    analyses       List analyses in the current project
    executions     List executions (jobs and analyses) in the current project
    data           List data objects in the current project
    projects       List projects
    org            List entities within a specific org.
                   
                   	"dx find org members" lists members in the specified org
                   
                   	"dx find org projects" lists projects billed to the specified org
                   
                   	"dx find org apps" lists apps billed to the specified org
                   
                   Please execute "dx find org -h" for more information.
    orgs           List orgs

optional arguments:
  -h, --help       show this help message and exit

find apps

usage: dx find apps [-h] [--brief | --verbose] [--json]
                    [--delimiter [DELIMITER]] [--env-help] [--name NAME]
                    [--category CATEGORY] [--category-help] [-a]
                    [--unpublished] [--installed] [--billed-to BILLED_TO]
                    [--creator CREATOR] [--developer DEVELOPER]
                    [--created-after CREATED_AFTER]
                    [--created-before CREATED_BEFORE] [--mod-after MOD_AFTER]
                    [--mod-before MOD_BEFORE]

Finds apps subject to the given search parameters. Use --category to restrict
by a category; common categories are available as tab completions and can be
listed with --category-help.

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --json                Display return value in JSON
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields
                        to be printed; if no delimiter is provided with this
                        flag, TAB will be used
  --env-help            Display help message for overriding environment
                        variables
  --name NAME           Name of the app
  --category CATEGORY   Category of the app
  --category-help       Print a list of common app categories
  -a, --all             Return all versions of each app
  --unpublished         Return only unpublished apps (if omitted, returns only
                        published apps)
  --installed           Return only installed apps
  --billed-to BILLED_TO
                        User or organization responsible for the app
  --creator CREATOR     Creator of the app version
  --developer DEVELOPER
                        Developer of the app
  --created-after CREATED_AFTER
                        Date (e.g. 2012-01-01) or integer timestamp after
                        which the app version was created (negative number
                        means ms in the past, or use suffix s, m, h, d, w, M,
                        y) Negative input example "--created-after=-2d"
  --created-before CREATED_BEFORE
                        Date (e.g. 2012-01-01) or integer timestamp before
                        which the app version was created (negative number
                        means ms in the past, or use suffix s, m, h, d, w, M,
                        y) Negative input example "--created-before=-2d"
  --mod-after MOD_AFTER
                        Date (e.g. 2012-01-01) or integer timestamp after
                        which the app was last modified (negative number means
                        ms in the past, or use suffix s, m, h, d, w, M, y)
                        Negative input example "--mod-after=-2d"
  --mod-before MOD_BEFORE
                        Date (e.g. 2012-01-01) or integer timestamp before
                        which the app was last modified (negative number means
                        ms in the past, or use suffix s, m, h, d, w, M, y)
                        Negative input example "--mod-before=-2d"
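
For example, to list only unpublished apps created in the last two days:

  $ dx find apps --unpublished --created-after=-2d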
                        

find globalworkflows

usage: dx find globalworkflows [-h] [--brief | --verbose] [--json] [--delimiter [DELIMITER]]
                               [--env-help] [--name NAME] [--category CATEGORY]
                               [--category-help] [-a] [--unpublished]
                               [--billed-to BILLED_TO] [--creator CREATOR]
                               [--developer DEVELOPER] [--created-after CREATED_AFTER]
                               [--created-before CREATED_BEFORE] [--mod-after MOD_AFTER]
                               [--mod-before MOD_BEFORE]

Finds global workflows subject to the given search parameters. Use --category to restrict by
a category; common categories are available as tab completions and can be listed with
--category-help.

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most commands,
                        prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --json                Display return value in JSON
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields to be
                        printed; if no delimiter is provided with this flag, TAB will be
                        used
  --env-help            Display help message for overriding environment variables
  --name NAME           Name of the workflow
  --category CATEGORY   Category of the workflow
  --category-help       Print a list of common global workflow categories
  -a, --all             Return all versions of each workflow
  --unpublished         Return only unpublished workflows (if omitted, returns only
                        published workflows)
  --billed-to BILLED_TO
                        User or organization responsible for the workflow
  --creator CREATOR     Creator of the workflow version
  --developer DEVELOPER
                        Developer of the workflow
  --created-after CREATED_AFTER
                        Date (e.g. --created-after="2021-12-01" or --created-
                        after="2021-12-01 19:01:33") or integer Unix epoch timestamp in
                        milliseconds (e.g. --created-after=1642196636000) after which the
                        workflow was created. You can also specify negative numbers to
                        indicate a time period in the past suffixed by s, m, h, d, w, M or y
                        to indicate seconds, minutes, hours, days, weeks, months or years
                        (e.g. --created-after=-2d for workflows created in the last 2 days).
  --created-before CREATED_BEFORE
                        Date (e.g. --created-before="2021-12-01" or --created-
                        before="2021-12-01 19:01:33") or integer Unix epoch timestamp in
                        milliseconds (e.g. --created-before=1642196636000) before which the
                        workflow was created. You can also specify negative numbers to
                        indicate a time period in the past suffixed by s, m, h, d, w, M or y
                        to indicate seconds, minutes, hours, days, weeks, months or years
                        (e.g. --created-before=-2d for workflows created earlier than 2 days
                        ago)
  --mod-after MOD_AFTER
                        Date (e.g. --mod-after="2021-12-01" or --mod-after="2021-12-01
                        19:01:33") or integer Unix epoch timestamp in milliseconds (e.g.
                        --mod-after=1642196636000) after which the workflow was modified. You
                        can also specify negative numbers to indicate a time period in the
                        past suffixed by s, m, h, d, w, M or y to indicate seconds, minutes,
                        hours, days, weeks, months or years (e.g. --mod-after=-2d for
                        workflows modified in the last 2 days)
  --mod-before MOD_BEFORE
                        Date (e.g. --mod-before="2021-12-01" or --mod-before="2021-12-01
                        19:01:33") or integer Unix epoch timestamp in milliseconds (e.g.
                        --mod-before=1642196636000) before which the workflow was modified.
                        You can also specify negative numbers to indicate a time period in
                        the past suffixed by s, m, h, d, w, M or y to indicate seconds,
                        minutes, hours, days, weeks, months or years (e.g. --mod-before=-2d
                        for workflows modified earlier than 2 days ago)
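
For example, to list the IDs of global workflows modified in the last week:

  $ dx find globalworkflows --mod-after=-7d --brief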

find jobs

usage: dx find jobs [-h] [--id ID] [--name NAME] [--user USER]
                    [--project PROJECT] [--all-projects] [--app EXECUTABLE]
                    [--state STATE] [--origin ORIGIN] [--parent PARENT]
                    [--created-after CREATED_AFTER]
                    [--created-before CREATED_BEFORE] [--no-subjobs]
                    [--root-execution ROOT_EXECUTION] [-n N] [-o] [--include-restarted]
                    [--brief | --verbose] [--json] [--color {off,on,auto}]
                    [--delimiter [DELIMITER]] [--env-help]
                    [--property KEY[=VALUE]] [--tag TAG]
                    [--trees | --origin-jobs | --all-jobs]

Finds jobs subject to the given search parameters. By default, output is
formatted to show the last several job trees that you've run in the current
project.

optional arguments:
  -h, --help            show this help message and exit
  --id ID               Show only the job tree or job containing this job ID
  --name NAME           Restrict the search by job name (accepts wildcards "*"
                        and "?")
  --user USER           Username who launched the job (use "self" to ask for
                        your own jobs)
  --project PROJECT     Project context (output project), default is current
                        project if set
  --all-projects, --allprojects
                        Extend search to all projects
  --app EXECUTABLE, --applet EXECUTABLE, --executable EXECUTABLE
                        Applet or App ID that job is running
  --state STATE         State of the job, e.g. "done", "failed"
  --origin ORIGIN       Job ID of the top-level job
  --parent PARENT       Job ID of the parent job; implies --all-jobs
  --created-after CREATED_AFTER
                        Date (e.g. 2012-01-01) or integer timestamp after
                        which the job was last created (negative number means
                        ms in the past, or use suffix s, m, h, d, w, M, y)
  --created-before CREATED_BEFORE
                        Date (e.g. 2012-01-01) or integer timestamp before
                        which the job was last created (negative number means
                        ms in the past, or use suffix s, m, h, d, w, M, y)
  --no-subjobs          Do not show any subjobs
  --root-execution ROOT_EXECUTION, --root ROOT_EXECUTION
                        Execution ID of the top-level (user-initiated) job or
                        analysis
  -n N, --num-results N
                        Max number of results (trees or jobs, as according to
                        the search mode) to return (default 10)
  -o, --show-outputs    Show job outputs in results
  --include-restarted   If specified, results will include restarted jobs and job trees rooted in
                        restarted jobs
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --json                Display return value in JSON
  --color {off,on,auto}
                        Set when color is used (color=auto is used when stdout
                        is a TTY)
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields
                        to be printed; if no delimiter is provided with this
                        flag, TAB will be used
  --env-help            Display help message for overriding environment
                        variables
  --property KEY[=VALUE]
                        Key-value pair of a property or simply a property key;
                        if only a key is provided, matches a result that has
                        the key with any value; repeat as necessary, e.g.
                        "--property key1=val1 --property key2"
  --tag TAG             Tag to match; repeat as necessary, e.g. "--tag tag1
                        --tag tag2" will require both tags

Search mode:
  --trees               Show entire job trees for all matching results
                        (default)
  --origin-jobs         Search and display only top-level origin jobs
  --all-jobs            Search for jobs at all depths matching the query (no
                        tree structure shown)
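
For example, to search for failed jobs created in the last two days, returning at most 25 results:

  $ dx find jobs --state failed --created-after=-2d -n 25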

find analyses

usage: dx find analyses [-h] [--id ID] [--name NAME] [--user USER] [--project PROJECT]
                        [--all-projects] [--app EXECUTABLE] [--state STATE]
                        [--origin ORIGIN] [--parent PARENT] [--created-after CREATED_AFTER]
                        [--created-before CREATED_BEFORE] [--no-subjobs]
                        [--root-execution ROOT_EXECUTION] [-n N] [-o] [--include-restarted] [--brief | --verbose]
                        [--json] [--color {off,on,auto}] [--delimiter [DELIMITER]]
                        [--env-help] [--property KEY[=VALUE]] [--tag TAG]
                        [--trees | --origin-jobs | --all-jobs]

Finds analyses subject to the given search parameters. By default, output is formatted to
show the last several job trees that you've run in the current project.

optional arguments:
  -h, --help            show this help message and exit
  --id ID               Show only the job tree or job containing this job ID
  --name NAME           Restrict the search by job name (accepts wildcards "*" and "?")
  --user USER           Username who launched the job (use "self" to ask for your own jobs)
  --project PROJECT     Project context (output project), default is current project if set
  --all-projects, --allprojects
                        Extend search to all projects
  --app EXECUTABLE, --applet EXECUTABLE, --executable EXECUTABLE
                        Applet or App ID that job is running
  --state STATE         State of the job, e.g. "done", "failed"
  --origin ORIGIN       Job ID of the top-level job
  --parent PARENT       Job ID of the parent job; implies --all-jobs
  --created-after CREATED_AFTER
                        Date (e.g. --created-after="2021-12-01" or
                        --created-after="2021-12-01 19:01:33") or integer Unix epoch
                        timestamp in milliseconds (e.g. --created-after=1642196636000) after
                        which the job was last created. You can also specify negative
                        numbers to indicate a time period in the past suffixed by s, m, h,
                        d, w, M or y to indicate seconds, minutes, hours, days, weeks,
                        months or years (e.g. --created-after=-2d for executions created in
                        the last 2 days)
  --created-before CREATED_BEFORE
                        Date (e.g. --created-before="2021-12-01" or
                        --created-before="2021-12-01 19:01:33") or integer Unix epoch
                        timestamp in milliseconds (e.g. --created-before=1642196636000)
                        before which the job was last created. You can also specify negative
                        numbers to indicate a time period in the past suffixed by s, m, h,
                        d, w, M or y to indicate seconds, minutes, hours, days, weeks,
                        months or years (e.g. --created-before=-2d for executions created
                        earlier than 2 days ago)
  --no-subjobs          Do not show any subjobs
  --root-execution ROOT_EXECUTION, --root ROOT_EXECUTION
                        Execution ID of the top-level (user-initiated) job or analysis
  -n N, --num-results N
                        Max number of results (trees or jobs, as according to the search
                        mode) to return (default 10)
  -o, --show-outputs    Show job outputs in results
  --include-restarted   If specified, results will include restarted jobs and job trees rooted in
                        restarted jobs
  --brief               Display a brief version of the return value; for most commands,
                        prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --json                Display return value in JSON
  --color {off,on,auto}
                        Set when color is used (color=auto is used when stdout is a TTY)
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields to be
                        printed; if no delimiter is provided with this flag, TAB will be
                        used
  --env-help            Display help message for overriding environment variables
  --property KEY[=VALUE]
                        Key-value pair of a property or simply a property key; if only a key
                        is provided, matches a result that has the key with any value;
                        repeat as necessary, e.g. "--property key1=val1 --property key2"
  --tag TAG             Tag to match; repeat as necessary, e.g. "--tag tag1 --tag tag2" will
                        require both tags

Search mode:
  --trees               Show entire job trees for all matching results (default)
  --origin-jobs         Search and display only top-level origin jobs
  --all-jobs            Search for jobs at all depths matching the query (no tree structure
                        shown)
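
For example, to show your own analyses together with their outputs:

  $ dx find analyses --user self -o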

find executions

usage: dx find executions [-h] [--id ID] [--name NAME] [--user USER] [--project PROJECT]
                          [--all-projects] [--app EXECUTABLE] [--state STATE]
                          [--origin ORIGIN] [--parent PARENT]
                          [--created-after CREATED_AFTER] [--created-before CREATED_BEFORE]
                          [--no-subjobs] [--root-execution ROOT_EXECUTION] [-n N] [-o] [--include-restarted]
                          [--brief | --verbose] [--json] [--color {off,on,auto}]
                          [--delimiter [DELIMITER]] [--env-help] [--property KEY[=VALUE]]
                          [--tag TAG] [--trees | --origin-jobs | --all-jobs]

Finds executions (jobs and analyses) subject to the given search parameters. By default,
output is formatted to show the last several job trees that you've run in the current
project.

optional arguments:
  -h, --help            show this help message and exit
  --id ID               Show only the job tree or job containing this job ID
  --name NAME           Restrict the search by job name (accepts wildcards "*" and "?")
  --user USER           Username who launched the job (use "self" to ask for your own jobs)
  --project PROJECT     Project context (output project), default is current project if set
  --all-projects, --allprojects
                        Extend search to all projects
  --app EXECUTABLE, --applet EXECUTABLE, --executable EXECUTABLE
                        Applet or App ID that job is running
  --state STATE         State of the job, e.g. "done", "failed"
  --origin ORIGIN       Job ID of the top-level job
  --parent PARENT       Job ID of the parent job; implies --all-jobs
  --created-after CREATED_AFTER
                        Date (e.g. --created-after="2021-12-01" or
                        --created-after="2021-12-01 19:01:33") or integer Unix epoch
                        timestamp in milliseconds (e.g. --created-after=1642196636000) after
                        which the job was last created. You can also specify negative
                        numbers to indicate a time period in the past suffixed by s, m, h,
                        d, w, M or y to indicate seconds, minutes, hours, days, weeks,
                        months or years (e.g. --created-after=-2d for executions created in
                        the last 2 days)
  --created-before CREATED_BEFORE
                        Date (e.g. --created-before="2021-12-01" or
                        --created-before="2021-12-01 19:01:33") or integer Unix epoch
                        timestamp in milliseconds (e.g. --created-before=1642196636000)
                        before which the job was last created. You can also specify negative
                        numbers to indicate a time period in the past suffixed by s, m, h,
                        d, w, M or y to indicate seconds, minutes, hours, days, weeks,
                        months or years (e.g. --created-before=-2d for executions created
                        earlier than 2 days ago)
  --no-subjobs          Do not show any subjobs
  --root-execution ROOT_EXECUTION, --root ROOT_EXECUTION
                        Execution ID of the top-level (user-initiated) job or analysis
  -n N, --num-results N
                        Max number of results (trees or jobs, as according to the search
                        mode) to return (default 10)
  -o, --show-outputs    Show job outputs in results
  --include-restarted   If specified, results will include restarted jobs and job trees rooted in
                        restarted jobs
  --brief               Display a brief version of the return value; for most commands,
                        prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --json                Display return value in JSON
  --color {off,on,auto}
                        Set when color is used (color=auto is used when stdout is a TTY)
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields to be
                        printed; if no delimiter is provided with this flag, TAB will be
                        used
  --env-help            Display help message for overriding environment variables
  --property KEY[=VALUE]
                        Key-value pair of a property or simply a property key; if only a key
                        is provided, matches a result that has the key with any value;
                        repeat as necessary, e.g. "--property key1=val1 --property key2"
  --tag TAG             Tag to match; repeat as necessary, e.g. "--tag tag1 --tag tag2" will
                        require both tags

Search mode:
  --trees               Show entire job trees for all matching results (default)
  --origin-jobs         Search and display only top-level origin jobs
  --all-jobs            Search for jobs at all depths matching the query (no tree structure
                        shown)
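
For example, to search all projects for failed executions, including restarted ones:

  $ dx find executions --all-projects --state failed --include-restarted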

generate_batch_inputs

usage: dx generate_batch_inputs [-h] [-i INPUT] [--path PROJECT:FOLDER]
                                [-o OUTPUT_PREFIX]

Generate a table of input files matching desired regular expressions for each
input.

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        An input to be batch-processed "-i<input name>=<input
                        pattern>" where <input_pattern> is a regular
                        expression with a group corresponding to the desired
                        region to match (e.g. "-iinputa=SRR(.*)_1.gz"
                        "-iinputb=SRR(.*)_2.gz")
  --path PROJECT:FOLDER
                        Project and/or folder to which the search for input
                        files will be restricted
  -o OUTPUT_PREFIX, --output_prefix OUTPUT_PREFIX
                        Prefix for output file
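
For example, to generate a batch table from two file name patterns (the patterns are taken from the help text above; the project and folder are hypothetical):

  $ dx generate_batch_inputs -iinputa="SRR(.*)_1.gz" -iinputb="SRR(.*)_2.gz" --path project-xxxx:/reads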

Category: orgs

Tools to help org admins manage their orgs.

add member

usage: dx add member [-h] [--brief | --verbose] [--env-help] --level
                     {ADMIN,MEMBER} [--allow-billable-activities]
                     [--no-app-access]
                     [--project-access {ADMINISTER,CONTRIBUTE,UPLOAD,VIEW,NONE}]
                     [--no-email]
                     org_id username_or_user_id

Grant a user membership to an org

positional arguments:
  org_id                ID of the org
  username_or_user_id   Username or ID of user

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment
                        variables
  --level {ADMIN,MEMBER}
                        Org membership level that will be granted to the
                        specified user
  --allow-billable-activities
                        Grant the specified user "allowBillableActivities" in
                        the org
  --no-app-access       Disable "appAccess" for the specified user in the org
  --project-access {ADMINISTER,CONTRIBUTE,UPLOAD,VIEW,NONE}
                        The default implicit maximum permission the specified
                        user will receive to projects explicitly shared with
                        the org; default CONTRIBUTE
  --no-email            Disable org invitation email notification to the
                        specified user
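
For example, to add a (hypothetical) user to an org as a MEMBER who may perform billable activities:

  $ dx add member org-xxxx user-alice --level MEMBER --allow-billable-activities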

remove member

usage: dx remove member [-h] [--brief | --verbose] [--env-help]
                        [--keep-explicit-project-permissions]
                        [--keep-explicit-app-permissions] [-y]
                        org_id username_or_user_id

Revoke the org membership of a user

positional arguments:
  org_id                ID of the org
  username_or_user_id   Username or ID of user

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment
                        variables
  --keep-explicit-project-permissions
                        Disable revocation of explicit project permissions of
                        the specified user to projects billed to the org;
                        implicit project permissions (i.e. those granted to
                        the specified user via his membership in this org)
                        will always be revoked
  --keep-explicit-app-permissions
                        Disable revocation of explicit app developer and user
                        permissions of the specified user to apps billed to
                        the org; implicit app permissions (i.e. those granted
                        to the specified user via his membership in this org)
                        will always be revoked
  -y, --yes             Do not ask for confirmation
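
For example, to remove a (hypothetical) user without prompting, while keeping their explicit project permissions:

  $ dx remove member org-xxxx user-alice --keep-explicit-project-permissions -y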

update org

usage: dx update org [-h] [--brief | --verbose] [--env-help] [--name NAME]
                     [--member-list-visibility {ADMIN,MEMBER,PUBLIC}]
                     [--project-transfer-ability {ADMIN,MEMBER}]
                     [--saml-idp SAML_IDP]
                     [--detailed-job-metrics-collect-default {true,false}]
                     [--enable-job-reuse | --disable-job-reuse]
                     [--job-logs-forwarding-json JLF]
                     org_id

Update information about an org

positional arguments:
  org_id                ID of the org

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment
                        variables
  --name NAME           New name of the org
  --member-list-visibility {ADMIN,MEMBER,PUBLIC}
                        New org membership level that is required to be able
                        to view the membership level and/or permissions of any
                        other member in the specified org (corresponds to the
                        memberListVisibility org policy)
  --project-transfer-ability {ADMIN,MEMBER}
                        New org membership level that is required to be able
                        to change the billing account of a project that is
                        billed to the specified org, to some other entity
                        (corresponds to the restrictProjectTransfer org
                        policy)
  --saml-idp SAML_IDP   New SAML identity provider
  --detailed-job-metrics-collect-default {true,false}
                        If set to true, jobs launched in the projects billed
                        to this org will collect detailed job metrics by
                        default
  --enable-job-reuse    Enable job reuse for projects where the org is the
                        billTo
  --disable-job-reuse   Disable job reuse for projects where the org is the
                        billTo
  --job-logs-forwarding-json JLF
                        JLF is a JSON string with url and token enabling
                        forwarding of job logs to Splunk, e.g.
                        '{"url":"https://http-inputs-
                        acme.splunkcloud.com/event/collector","token":"splunk-
                        hec-token"}'
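
For example, to rename a (hypothetical) org and enable detailed job metrics collection by default:

  $ dx update org org-xxxx --name "New Org Name" --detailed-job-metrics-collect-default true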

update member

usage: dx update member [-h] [--brief | --verbose] [--env-help] [--level {ADMIN,MEMBER}]
                        [--allow-billable-activities {true,false}] [--app-access {true,false}]
                        [--project-access {ADMINISTER,CONTRIBUTE,UPLOAD,VIEW,NONE}]
                        org_id username_or_user_id

Update the membership of a user in an org

positional arguments:
  org_id                ID of the org
  username_or_user_id   Username or ID of user

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment variables
  --level {ADMIN,MEMBER}
                        The new org membership level of the specified user
  --allow-billable-activities {true,false}
                        The new "allowBillableActivities" membership permission of the specified user in the org; default
                        false if demoting the specified user from ADMIN to MEMBER
  --app-access {true,false}
                        The new "appAccess" membership permission of the specified user in the org; default true if
                        demoting the specified user from ADMIN to MEMBER
  --project-access {ADMINISTER,CONTRIBUTE,UPLOAD,VIEW,NONE}
                        The new default implicit maximum permission the specified user will receive to projects explicitly
                        shared with the org; default CONTRIBUTE if demoting the specified user from ADMIN to MEMBER
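
For example, to demote a (hypothetical) user to MEMBER and limit their default project access to VIEW:

  $ dx update member org-xxxx user-alice --level MEMBER --project-access VIEW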

find org members

usage: dx find org members [-h] [--brief | --verbose] [--json] [--delimiter [DELIMITER]] [--env-help]
                           [--level {ADMIN,MEMBER}]
                           org_id

Finds members in the specified org subject to the given search parameters

positional arguments:
  org_id                Org ID

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --json                Display return value in JSON
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields to be printed; if no delimiter is provided
                        with this flag, TAB will be used
  --env-help            Display help message for overriding environment variables
  --level {ADMIN,MEMBER}
                        Restrict the result set to contain only members at the specified membership level.
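
For example, to list the IDs of a (hypothetical) org's ADMIN members:

  $ dx find org members org-xxxx --level ADMIN --brief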

find org entities

usage: dx find org [-h] entities ...

List entities within a specific org.

positional arguments:
  entities
    members   List members in the specified org
    projects  List projects billed to the specified org
    apps      List apps billed to the specified org

optional arguments:
  -h, --help  show this help message and exit

find org projects

usage: dx find org projects [-h] [--brief | --verbose] [--json] [--delimiter [DELIMITER]] [--env-help]
                            [--property KEY[=VALUE]] [--tag TAG] [--phi {true,false}] [--name NAME] [--ids IDS [IDS ...]]
                            [--public-only | --private-only] [--created-after CREATED_AFTER]
                            [--created-before CREATED_BEFORE] [--region REGION]
                            org_id

Finds projects billed to the specified org subject to the given search parameters. You must be an ADMIN of the specified
org to use this command. It allows you to identify projects billed to the org that have not been shared with you
explicitly.

positional arguments:
  org_id                Org ID

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --json                Display return value in JSON
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields to be printed; if no delimiter is provided
                        with this flag, TAB will be used
  --env-help            Display help message for overriding environment variables
  --property KEY[=VALUE]
                        Key-value pair of a property or simply a property key; if only a key is provided, matches a result
                        that has the key with any value; repeat as necessary, e.g. "--property key1=val1 --property key2"
  --tag TAG             Tag to match; repeat as necessary, e.g. "--tag tag1 --tag tag2" will require both tags
  --phi {true,false}    If set to true, only projects that contain PHI data will be retrieved. If set to false, only
                        projects that do not contain PHI data will be retrieved.
  --name NAME           Name of the projects
  --ids IDS [IDS ...]   Possible project IDs. May be specified like "--ids project-1 project-2"
  --public-only         Include only public projects
  --private-only        Include only private projects
  --created-after CREATED_AFTER
                        Date (e.g. --created-after="2021-12-01" or --created-after="2021-12-01 19:01:33") or integer Unix
                        epoch timestamp in milliseconds (e.g. --created-after=1642196636000) after which the project was
                        created. You can also specify negative numbers to indicate a time period in the past suffixed by s,
                        m, h, d, w, M or y to indicate seconds, minutes, hours, days, weeks, months or years (e.g.
                        --created-after=-2d for projects created in the last 2 days).
  --created-before CREATED_BEFORE
                        Date (e.g. --created-before="2021-12-01" or --created-before="2021-12-01 19:01:33") or integer Unix
                        epoch timestamp in milliseconds (e.g. --created-before=1642196636000) before which the project was
                        created. You can also specify negative numbers to indicate a time period in the past suffixed by s,
                        m, h, d, w, M or y to indicate seconds, minutes, hours, days, weeks, months or years (e.g.
                        --created-before=-2d for projects created earlier than 2 days ago)
  --region REGION       Restrict the search to the provided region
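
For example, to list non-PHI projects billed to a (hypothetical) org that were created in the last two days:

  $ dx find org projects org-xxxx --phi false --created-after=-2d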

find org apps

usage: dx find org apps [-h] [--brief | --verbose] [--json] [--delimiter [DELIMITER]] [--env-help] [--name NAME]
                        [--category CATEGORY] [--category-help] [-a] [--unpublished] [--installed] [--creator CREATOR]
                        [--developer DEVELOPER] [--created-after CREATED_AFTER] [--created-before CREATED_BEFORE]
                        [--mod-after MOD_AFTER] [--mod-before MOD_BEFORE]
                        org_id

Finds apps billed to the specified org subject to the given search parameters. You must be an ADMIN of the specified org to
use this command. It allows you to identify apps billed to the org that have not been shared with you explicitly.

positional arguments:
  org_id                Org ID

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --json                Display return value in JSON
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields to be printed; if no delimiter is provided
                        with this flag, TAB will be used
  --env-help            Display help message for overriding environment variables
  --name NAME           Name of the apps
  --category CATEGORY   Category of the app
  --category-help       Print a list of common app categories
  -a, --all             Return all versions of each app
  --unpublished         Return only unpublished apps (if omitted, returns all apps)
  --installed           Return only installed apps
  --creator CREATOR     Creator of the app version
  --developer DEVELOPER
                        Developer of the app
  --created-after CREATED_AFTER
                        Date (e.g. --created-after="2021-12-01" or --created-after="2021-12-01 19:01:33") or integer Unix
                        epoch timestamp in milliseconds (e.g. --created-after=1642196636000) after which the app was
                        created. You can also specify negative numbers to indicate a time period in the past suffixed by s,
                        m, h, d, w, M or y to indicate seconds, minutes, hours, days, weeks, months or years (e.g.
                        --created-after=-2d for apps created in the last 2 days).
  --created-before CREATED_BEFORE
                        Date (e.g. --created-before="2021-12-01" or --created-before="2021-12-01 19:01:33") or integer Unix
                        epoch timestamp in milliseconds (e.g. --created-before=1642196636000) before which the app was
                        created. You can also specify negative numbers to indicate a time period in the past suffixed by s,
                        m, h, d, w, M or y to indicate seconds, minutes, hours, days, weeks, months or years (e.g.
                        --created-before=-2d for apps created earlier than 2 days ago)
  --mod-after MOD_AFTER
                        Date (e.g. 2012-01-01) or integer timestamp after which the app was last modified (negative number
                        means seconds in the past, or use suffix s, m, h, d, w, M, y) Negative input example "--mod-
                        after=-2d"
  --mod-before MOD_BEFORE
                        Date (e.g. 2012-01-01) or integer timestamp before which the app was last modified (negative number
                        means seconds in the past, or use suffix s, m, h, d, w, M, y) Negative input example "--mod-
                        before=-2d"

find orgs

usage: dx find orgs [-h] [--brief | --verbose] [--env-help]
                    [--delimiter [DELIMITER]] [--json] --level {ADMIN,MEMBER}
                    [--with-billable-activities | --without-billable-activities]

Finds orgs subject to the given search parameters.

optional arguments:
  -h, --help            show this help message and exit
  --brief               Display a brief version of the return value; for most
                        commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment
                        variables
  --delimiter [DELIMITER], --delim [DELIMITER]
                        Always use exactly one of DELIMITER to separate fields
                        to be printed; if no delimiter is provided with this
                        flag, TAB will be used
  --json                Display return value in JSON
  --level {ADMIN,MEMBER}
                        Restrict the result set to contain only orgs in which
                        the requesting user has at least the specified
                        membership level
  --with-billable-activities
                        Restrict the result set to contain only orgs in which
                        the requesting user can perform billable activities;
                        mutually exclusive with --without-billable-activities
  --without-billable-activities
                        Restrict the result set to contain only orgs in which
                        the requesting user **cannot** perform billable
                        activities; mutually exclusive with --with-billable-
                        activities
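
For example, to list orgs in which you are at least a MEMBER and can perform billable activities:

  $ dx find orgs --level MEMBER --with-billable-activities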

Category: other

Miscellaneous advanced utilities.

invite

usage: dx invite [-h] [--env-help] [--no-email]
                 invitee [project] [{VIEW,UPLOAD,CONTRIBUTE,ADMINISTER}]

Invite a DNAnexus entity to a project. If the invitee is not recognized as a
DNAnexus ID, it will be treated as a username, i.e. "dx invite alice : VIEW"
is equivalent to inviting the user with user ID "user-alice" to view your
current default project.

positional arguments:
  invitee               Entity to invite
  project               Project to invite the invitee to
  {VIEW,UPLOAD,CONTRIBUTE,ADMINISTER}
                        Permissions level the new member should have

optional arguments:
  -h, --help            show this help message and exit
  --env-help            Display help message for overriding environment
                        variables
  --no-email            Disable email notifications to invitee

uninvite

usage: dx uninvite [-h] [--env-help] entity [project]

Revoke others' permissions on a project you administer. If the entity is not
recognized as a DNAnexus ID, it will be treated as a username, i.e. "dx
uninvite alice :" is equivalent to revoking the permissions of the user with
user ID "user-alice" to your current default project.

positional arguments:
  entity      Entity to uninvite
  project     Project to revoke permissions from

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment variables

api

usage: dx api [-h] [--env-help] [--input INPUT] resource method [input_json]

Call an API method directly.  The JSON response from the API server will be
returned if successful.  No name resolution is performed; DNAnexus IDs must
always be provided.  The API specification can be found at

https://documentation.dnanexus.com/developer/api

EXAMPLE

  In the following example, a project's description is changed.

  $ dx api project-B0VK6F6gpqG6z7JGkbqQ000Q update '{"description": "desc"}'
  {
      "id": "project-B0VK6F6gpqG6z7JGkbqQ000Q"
  }

positional arguments:
  resource       One of "system", a class name (e.g. "record"), or an entity
                 ID such as "record-xxxx".  Use "app-name/1.0.0" to refer to
                 version "1.0.0" of the app named "name".
  method         Method name for the resource as documented by the API
                 specification
  input_json     JSON input for the method (if not given, "{}" is used)

optional arguments:
  -h, --help     show this help message and exit
  --env-help     Display help message for overriding environment
                 variables
  --input INPUT  Load JSON input from FILENAME ("-" to use stdin)

upgrade

dx upgrade was removed in v0.379.0. See the documentation for guidance on installing and upgrading dxpy using pip3.

add stage

usage: dx add stage [-h] [-i INPUT] [-j INPUT_JSON] [-f FILENAME] [--brief | --verbose] [--env-help]
                    [--instance-type INSTANCE_TYPE_OR_MAPPING] [--instance-type-help] [--alias ALIAS] [--name NAME]
                    [--id STAGE_ID] [--output-folder OUTPUT_FOLDER | --relative-output-folder RELATIVE_OUTPUT_FOLDER]
                    workflow executable

Add a stage to a workflow. Default inputs for the stage can also be set at the same time.

positional arguments:
  workflow              Name or ID of a workflow
  executable            Name or ID of an executable to add as a stage in the workflow

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        An input to be added using "<input name>[:<class>]=<input value>" (provide "class" if there
                        is no input spec; it can be any job IO class, e.g. "string", "array:string", or "array"; if
                        "class" is "array" or not specified, the value will be attempted to be parsed as JSON and is
                        otherwise treated as a string)
  -j INPUT_JSON, --input-json INPUT_JSON
                        The full input JSON (keys=input field names, values=input field values)
  -f FILENAME, --input-json-file FILENAME
                        Load input JSON from FILENAME ("-" to use stdin)
  --brief               Display a brief version of the return value; for most commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment variables
  --instance-type INSTANCE_TYPE_OR_MAPPING
                        Specify instance type(s) for jobs this executable will run; see --instance-type-help for more
                        details
  --instance-type-help  Print help for specifying instance types
  --alias ALIAS, --version ALIAS, --tag ALIAS
                        Tag or version of the app to add if the executable is an app (default: "default" if an app)
  --name NAME           Stage name
  --id STAGE_ID         Stage ID
  --output-folder OUTPUT_FOLDER
                        Path to the output folder for the stage (interpreted as an absolute path)
  --relative-output-folder RELATIVE_OUTPUT_FOLDER
                        A relative folder path for the stage (interpreted as relative to the workflow's output
                        folder)
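
For example, to add a (hypothetical) applet as a named stage that writes to a relative output folder:

  $ dx add stage my-workflow applet-xxxx --name "Step 1" --relative-output-folder step1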

remove stage

usage: dx remove stage [-h] [--brief | --verbose] [--env-help] workflow stage

Remove a stage from a workflow. The stage should be indicated either by an integer (0-indexed, i.e. "0" for the first
stage), or a stage ID.

positional arguments:
  workflow    Name or ID of a workflow
  stage       Stage (index or ID) of the workflow to remove

optional arguments:
  -h, --help  show this help message and exit
  --brief     Display a brief version of the return value; for most commands, prints a DNAnexus ID per line
  --verbose   If available, displays extra verbose output
  --env-help  Display help message for overriding environment variables
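
For example, to remove the first stage (index 0) of a (hypothetical) workflow:

  $ dx remove stage my-workflow 0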

update stage

usage: dx update stage [-h] [-i INPUT] [-j INPUT_JSON] [-f FILENAME] [--brief | --verbose] [--env-help]
                       [--instance-type INSTANCE_TYPE_OR_MAPPING] [--instance-type-help] [--executable EXECUTABLE]
                       [--alias ALIAS] [--force] [--name NAME | --no-name]
                       [--output-folder OUTPUT_FOLDER | --relative-output-folder RELATIVE_OUTPUT_FOLDER]
                       workflow stage

Update the metadata for a stage in a workflow

positional arguments:
  workflow              Name or ID of a workflow
  stage                 Stage (index or ID) of the workflow to update

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        An input to be added using "<input name>[:<class>]=<input value>" (provide "class" if there
                        is no input spec; it can be any job IO class, e.g. "string", "array:string", or "array"; if
                        "class" is "array" or not specified, the value will be attempted to be parsed as JSON and is
                        otherwise treated as a string)
  -j INPUT_JSON, --input-json INPUT_JSON
                        The full input JSON (keys=input field names, values=input field values)
  -f FILENAME, --input-json-file FILENAME
                        Load input JSON from FILENAME ("-" to use stdin)
  --brief               Display a brief version of the return value; for most commands, prints a DNAnexus ID per line
  --verbose             If available, displays extra verbose output
  --env-help            Display help message for overriding environment variables
  --instance-type INSTANCE_TYPE_OR_MAPPING
                        Specify instance type(s) for jobs this executable will run; see --instance-type-help for more
                        details
  --instance-type-help  Print help for specifying instance types
  --executable EXECUTABLE
                        Name or ID of an executable to replace in the stage
  --alias ALIAS, --version ALIAS, --tag ALIAS
                        Tag or version of the app to use if replacing the stage executable with an app (default:
                        "default" if an app)
  --force               Whether to replace the executable even if the new one cannot be verified as compatible
                        with the previous version
  --name NAME           Stage name
  --no-name             Unset the stage name
  --output-folder OUTPUT_FOLDER
                        Path to the output folder for the stage (interpreted as an absolute path)
  --relative-output-folder RELATIVE_OUTPUT_FOLDER
                        A relative folder path for the stage (interpreted as relative to the workflow's output
                        folder)
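
For example, to rename the first stage of a (hypothetical) workflow and set an absolute output folder:

  $ dx update stage my-workflow 0 --name "Renamed stage" --output-folder /results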

Helpstrings of SDK Command-Line Utilities

Below is a list of some of the various command-line utilities available in the SDK and some brief documentation for their usage.

General purpose dx utilities

dx

usage: dx [-h] [--version] command ...

DNAnexus Command-Line Client, API v1.0.0, client v0.322.1

dx is a command-line client for interacting with the DNAnexus platform.  You
can log in, navigate, upload, organize and share your data, launch analyses,
and more.  For a quick tour of what the tool can do, see

  https://documentation.dnanexus.com/getting-started/tutorials/cli-quickstart#quickstart-for-cli

For a breakdown of dx commands by category, run "dx help".

dx exits with exit code 3 if invalid input is provided or an invalid operation
is requested, and exit code 1 if an internal error is encountered.  The latter
usually indicates bugs in dx; please report them at

  https://github.com/dnanexus/dx-toolkit/issues

optional arguments:
  -h, --help  show this help message and exit
  --env-help  Display help message for overriding environment
              variables
  --version   show program's version number and exit

dx-app-wizard

usage: dx-app-wizard [-h] [--json-file JSON_FILE] [--language LANGUAGE]
                     [--template {basic,parallelized,scatter-process-gather}]
                     [name]

Create a source code directory for a DNAnexus app. You will be prompted for
various metadata for the app as well as for its input and output
specifications.

positional arguments:
  name                  Name of your app

optional arguments:
  -h, --help            show this help message and exit
  --json-file JSON_FILE
                        Use the metadata and IO spec found in the given file
  --language LANGUAGE   Programming language of your app
  --template {basic,parallelized,scatter-process-gather}
                        Execution pattern of your app

dx-fetch-bundled-depends

usage: dx-fetch-bundled-depends [-h]

Downloads the contents of runSpec.bundledDepends of a job running in the
execution environment.

optional arguments:
  -h, --help  show this help message and exit

dx-generate-dxapp

usage: dx-generate-dxapp [-h] [-m TARGET_MODULE] [-f TARGET_FUNCTION]
                         [-x TARGET_EXECUTABLE] [-s SUBCOMMAND]
                         [-o OUTPUT_FILE]
                         [-p OUTPUT_PARAMS [OUTPUT_PARAMS ...]]
                         [-r OUTPUT_PARAM_REGEXP] [-n {bash,python3}]
                         [-i INSTANCE_TYPE] [-t TIMEOUT]
                         [--distribution DISTRIBUTION] [--release RELEASE]
                         [--runspec-version RUNSPEC_VERSION]

optional arguments:
  -h, --help            show this help message and exit
  -m TARGET_MODULE, --target-module TARGET_MODULE
                        The fully-qualified module that contains the target
                        method.
  -f TARGET_FUNCTION, --target-function TARGET_FUNCTION
                        The main function that is called by the target
                        executable. This should be where the ArgumentParser is
                        configured.
  -x TARGET_EXECUTABLE, --target-executable TARGET_EXECUTABLE
                        The name of the executable. This is used in the
                        dxapp.json runSpec.
  -s SUBCOMMAND, --subcommand SUBCOMMAND
                        Subcommand to pass to the target method, if required.
  -o OUTPUT_FILE, --output-file OUTPUT_FILE
                        The output dxapp.json file. If not specified, output
                        will go to stdout.
  -p OUTPUT_PARAMS [OUTPUT_PARAMS ...], --output-params OUTPUT_PARAMS [OUTPUT_PARAMS ...]
                        Names of output parameters (in case they can't be
                        autodetected).
  -r OUTPUT_PARAM_REGEXP, --output-param-regexp OUTPUT_PARAM_REGEXP
                        Regular expression that identifies output parameter
                        names.
  -n {bash,python3}, --interpreter {bash,python3}
                        Type of script that will wrap the executable.
  -i INSTANCE_TYPE, --instance-type INSTANCE_TYPE
                        AWS instance type to use.
  -t TIMEOUT, --timeout TIMEOUT
                        Max runtime of this app (in hours).
  --distribution DISTRIBUTION
                        Distribution to use for the machine image.
  --release RELEASE     Distribution release to use for the machine image.
  --runspec-version RUNSPEC_VERSION
                        Version of the application execution environment
                        inside the runSpec block.
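
As an illustrative sketch (the module, function, and executable names below
are hypothetical), a dxapp.json could be generated from an argparse-based tool
with:

  $ dx-generate-dxapp -m mytool.cli -f main -x mytool -o dxapp.json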

dx-jobutil-add-output

usage: dx-jobutil-add-output [-h] [--class [CLASSNAME]] [--array] name value

Reads and modifies job_output.json in your home directory to be a JSON hash
with key *name* and value *value*.

If --class is not provided or is set to "auto", auto-detection of the output
format will occur.  In particular, it will treat it as a number, hash, or
boolean if it can be successfully parsed as such.  If it is a string which
matches the pattern for a data object ID, it will encapsulate it in a DNAnexus
link hash; otherwise it is treated as a simple string.

To append to an array of values, use --array or prepend "array:" to the
--class argument.

To use the output of another job as part of your output, use --class=jobref
(which will throw an error if it is not formatted correctly), or use the
automatic parsing which will recognize anything starting with a job ID as a
job-based object reference.  You should format the value as follows:

  Format: <job ID>:<output field name>
  Example: dx-jobutil-add-output myoutput --class=jobref \
             job-B2JKYqK4Zg2K915yQxPQ0024:other_output

Analysis references can be specified similarly with --class=analysisref and
formatted as:

  Format: <analysis ID>:<stage ID>.<output field name>
          <analysis ID>:<exported output field name>
  Example: dx-jobutil-add-output myoutput --class=analysisref \
             analysis-B2JKYqK4Zg2K915yQxPQ0024:some_output

positional arguments:
  name                 Name of the output field
  value                Value of the output field

optional arguments:
  -h, --help           show this help message and exit
  --class [CLASSNAME]  Class of output for formatting purposes, e.g. "int";
                       default "auto"
  --array              Output field is an array
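
For example, inside a bash app you might record a scalar output and append a
file to an array output (the field names and the "file-xxxx" ID are
placeholders):

  dx-jobutil-add-output sample_count 42
  dx-jobutil-add-output variants --array file-xxxx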

dx-jobutil-dxlink

usage: dx-jobutil-dxlink [-h] object

Creates a DNAnexus link from an object ID or "<project ID>:<object ID>"
string. The result is of the form {"$dnanexus_link": "<object ID>"} or
{"$dnanexus_link": {"project": <project ID>, "id": <object ID>}}, as
appropriate.

positional arguments:
  object      Data object ID or "<Project ID>:<Data object ID>" to package
              into a DNAnexus link

optional arguments:
  -h, --help  show this help message and exit
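
For example (placeholder IDs; output shown schematically), the two link forms
look like:

  $ dx-jobutil-dxlink file-xxxx
  {"$dnanexus_link": "file-xxxx"}

  $ dx-jobutil-dxlink project-xxxx:file-xxxx
  {"$dnanexus_link": {"project": "project-xxxx", "id": "file-xxxx"}}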

dx-jobutil-new-job

usage: dx-jobutil-new-job [-h] [-i INPUT] [-j INPUT_JSON] [-f FILENAME]
                          [--instance-type INSTANCE_TYPE_OR_MAPPING]
                          [--instance-type-by-executable DOUBLE_MAPPING]
                          [--instance-type-help] [--extra-args EXTRA_ARGS]
                          [--property KEY=VALUE] [--tag TAG] [--name NAME]
                          [--depends-on [JOB_OR_OBJECT_ID [JOB_OR_OBJECT_ID ...]]]
                          [--head-job-on-demand]
                          function

Creates a new job to run the named function with the specified input. If
successful, prints the ID of the new job.

positional arguments:
  function              Name of the function to run

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        An input to be added using "<input
                        name>[:<class>]=<input value>" (provide "class" if
                        there is no input spec; it can be any job IO class,
                        e.g. "string", "array:string", or "array"; if "class"
                        is "array" or not specified, the value will be
                        attempted to be parsed as JSON and is otherwise
                        treated as a string)
  -j INPUT_JSON, --input-json INPUT_JSON
                        The full input JSON (keys=input field names,
                        values=input field values)
  -f FILENAME, --input-json-file FILENAME
                        Load input JSON from FILENAME ("-" to use stdin)
  --instance-type INSTANCE_TYPE_OR_MAPPING
                        When running an app or applet, the mapping lists
                        executable's entry points or "*" as keys, and instance
                        types to use for these entry points as values. When
                        running a workflow, the specified instance types can
                        be prefixed by a stage name or stage index followed by
                        "=" to apply to a specific stage, or apply to all
                        workflow stages without such prefix. The instance type
                        corresponding to the "*" key is applied to all entry
                        points not explicitly mentioned in the --instance-type
                        mapping. Specifying a single instance type is
                        equivalent to using it for all entry points, so
                        "--instance-type mem1_ssd1_v2_x2" is the same as
                        "--instance-type '{"*":"mem1_ssd1_v2_x2"}'". Note that
                        "dx run" calls within the execution subtree may
                        override the values specified at the root of the
                        execution tree. See dx run --instance-type-help for
                        details.
  --instance-type-by-executable DOUBLE_MAPPING
                        Specifies instance types by app or applet id, then by
                        entry point within the executable. The order of
                        priority for this specification is:
                          * --instance-type, systemRequirements and
                            stageSystemRequirements specified at runtime
                          * stage's systemRequirements, systemRequirements
                            supplied to /app/new and /applet/new at
                            workflow/app/applet build time
                          * systemRequirementsByExecutable specified in
                            downstream executions (if any)
                        See dx run --instance-type-help for details.
  --instance-type-help  Print help for specifying instance types
  --extra-args EXTRA_ARGS
                        Arguments (in JSON format) to pass to the underlying
                        API method, overriding the default settings
  --property KEY=VALUE  Key-value pair to add as a property; repeat as
                        necessary, e.g. "--property key1=val1 --property
                        key2=val2"
  --tag TAG             Tag for the resulting execution; repeat as necessary,
                        e.g. "--tag tag1 --tag tag2"
  --name NAME           Name for the new job (default is the current job name,
                        plus ":<function>")
  --depends-on [JOB_OR_OBJECT_ID [JOB_OR_OBJECT_ID ...]]
                        Job and/or data object IDs that must finish or close
                        before the new job should be run. WARNING: For proper
                        parsing, do not use this flag directly before the
                        *function* parameter.
  --head-job-on-demand  Whether the head job should be run on an on-demand
                        instance
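
As a hedged example (the entry point name, input fields, and IDs are
placeholders), a subjob could be launched from within a running job with:

  $ dx-jobutil-new-job process \
      -i chunk:file=file-xxxx -i index:int=1 \
      --name "process chunk 1" --tag scatter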

dx-jobutil-parse-link

usage: dx-jobutil-parse-link [-h] [--no-project] dxlink

Parse a dxlink JSON hash into an object ID or project:object-id tuple

positional arguments:
  dxlink        Link to parse

optional arguments:
  -h, --help    show this help message and exit
  --no-project  Ignore project ID in an extended dxlink - just print the
                object ID
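
For example (placeholder IDs; output shown schematically), extracting just the
object ID from an extended link:

  $ dx-jobutil-parse-link --no-project \
      '{"$dnanexus_link": {"project": "project-xxxx", "id": "file-xxxx"}}'
  file-xxxx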

dx-jobutil-report-error

usage: dx-jobutil-report-error [-h] message [{AppInternalError,AppError}]

Creates job_error.json in your home directory, a JSON file to include the
error type and message for the running job. There are two types of errors you
may report: 1) AppError (the default) for recognized actionable errors, and 2)
AppInternalError for unexpected application errors.

positional arguments:
  message               Error message for the job
  {AppInternalError,AppError}
                        Error type

optional arguments:
  -h, --help            show this help message and exit
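
For example, to report a user-actionable error from a running job (the message
text is illustrative):

  dx-jobutil-report-error "Input BAM is not coordinate-sorted; please sort it and retry." AppError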

dx-jobutil-get-identity-token

usage: dx-jobutil-get-identity-token [-h] --aud AUD [--subject_claims <subject_claims>]

Calls job-xxxx/getIdentityToken and retrieves a JWT based on the given aud and
subject claims.

optional arguments:
  -h, --help            show this help message and exit
  --aud AUD             Audience URI the JWT is intended for
  --subject_claims <subject_claims>
                        Defines the subject claims to be validated by the
                        cloud provider
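
For example (the audience URI below is a placeholder for whatever your cloud
provider expects):

  $ dx-jobutil-get-identity-token --aud "https://example-cloud-provider.com"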

dx-log-stream

usage: dx-log-stream [-h] [-l {critical,error,warning,info,debug}]
                     [-s {DX_APP,DX_APP_STREAM}]

Redirects stdin to a DNAnexus log socket in the execution environment.

Valid logging levels:

┌─────────────────────────┬────────────────┬────────────┐
│ --source                │ --level        │ Appears as │
├─────────────────────────┼────────────────┼────────────┤
│ DX_APP_STREAM (default) │ info (default) │ STDOUT     │
│ DX_APP_STREAM (default) │ error          │ STDERR     │
├─────────────────────────┼────────────────┼────────────┤
│ DX_APP                  │ debug          │ DEBUG      │
│ DX_APP                  │ info (default) │ INFO       │
│ DX_APP                  │ warning        │ WARNING    │
│ DX_APP                  │ error          │ ERROR      │
│ DX_APP                  │ critical       │ CRITICAL   │
└─────────────────────────┴────────────────┴────────────┘

optional arguments:
  -h, --help            show this help message and exit
  -l {critical,error,warning,info,debug}, --level {critical,error,warning,info,debug}
                        Logging level to use
  -s {DX_APP,DX_APP_STREAM}, --source {DX_APP,DX_APP_STREAM}
                        Source ID to use
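
For example, to emit a warning-level line to the DX_APP log from a shell
pipeline (the message is illustrative):

  echo "Reference index not found; rebuilding" | dx-log-stream --source DX_APP --level warning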

dx-mount-all-inputs

usage: dx-mount-all-inputs [-h] [--except EXCLUDE] [--verbose]

Note: this is a utility for use by bash apps running in the DNAnexus Platform.

Mounts all files that were supplied as inputs to the app.  By convention, if
an input parameter "FOO" has value

    {"$dnanexus_link": "file-xxxx"}

and filename INPUT.TXT, then the linked file will be mounted into the path:

    $HOME/in/FOO/INPUT.TXT

If an input is an array of files, then all files will be placed into numbered
subdirectories under a parent directory named for the input. For example, if
the input key is FOO, and the inputs are {A, B, C}.vcf, then the directory
structure will be:

    $HOME/in/FOO/0/A.vcf
                 1/B.vcf
                 2/C.vcf

Zero padding is used to ensure argument order. For example, if there are 12
input files {A, B, C, D, E, F, G, H, I, J, K, L}.txt, the directory structure
will be:

    $HOME/in/FOO/00/A.txt
                 ...
                 11/L.txt

This allows using shell globbing (FOO/*/*.vcf) to get all the files in the
input order.

optional arguments:
  -h, --help        show this help message and exit
  --except EXCLUDE  Do not mount the input with this name. (May be used
                    multiple times.)
  --verbose         Start dxfuse with '-verbose 2' logging
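
For example, a bash app could mount everything except a large reference input
and then iterate over a file-array input using the glob shown above ("FOO" and
"reference" are placeholder input names):

  dx-mount-all-inputs --except reference
  for vcf in "$HOME"/in/FOO/*/*.vcf; do
      echo "processing $vcf"
  done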

dx-notebook-reconnect

usage: dx-notebook-reconnect [-h] [--port PORT] job_id

Reconnect to a notebook job

positional arguments:
  job_id            Job ID of the notebook job to reconnect to.

optional arguments:
  -h, --help   show this help message and exit
  --port PORT  Local port to use for connecting.
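
For example (placeholder job ID and an arbitrary local port):

  $ dx-notebook-reconnect --port 2001 job-xxxx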

dx-print-bash-vars

usage: dx-print-bash-vars [-h]

Parses $HOME/job_input.json and prints the bash variables that would be
available in the execution environment.

optional arguments:
  -h, --help  show this help message and exit

dx-verify-file

Usage: dx-verify-file [options] -r <remote_file1_id> -l <local_file1> [-r <remote_file2_id> -l <local_file2> ...]

Available options:
  -h [ --help ]            Produce a help message
  --version                Print the version
  -e [ --env ]             Print environment information
  -a [ --auth-token ] arg  Specify the authentication token
  -r [ --remote-file ] arg ID of the remote file
  -l [ --local-file ] arg  Local file path
  --read-threads arg (=1)  Number of parallel disk read threads
  --md5-threads arg (=1)   Number of parallel MD5 compute threads
  -v [ --verbose ]         Verbose logging
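
For example, to verify two local files against their remote counterparts with
four MD5 threads (the file IDs and paths are placeholders):

  dx-verify-file --md5-threads 4 \
      -r file-xxxx -l out/result1.bam \
      -r file-yyyy -l out/result2.bam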

Utilities useful in writing bash apps and applets

dx-download-all-inputs

usage: dx-download-all-inputs [-h] [--except EXCLUDE] [--parallel]
                              [--sequential]

Note: this is a utility for use by bash apps running in the DNAnexus Platform.

Downloads all files that were supplied as inputs to the app.  By convention,
if an input parameter "FOO" has value

    {"$dnanexus_link": "file-xxxx"}

and filename INPUT.TXT, then the linked file will be downloaded into the path:

    $HOME/in/FOO/INPUT.TXT

If an input is an array of files, then all files will be placed into numbered
subdirectories under a parent directory named for the input. For example, if
the input key is FOO, and the inputs are {A, B, C}.vcf, then the directory
structure will be:

    $HOME/in/FOO/0/A.vcf
                 1/B.vcf
                 2/C.vcf

Zero padding is used to ensure argument order. For example, if there are 12
input files {A, B, C, D, E, F, G, H, I, J, K, L}.txt, the directory structure
will be:

    $HOME/in/FOO/00/A.txt
                 ...
                 11/L.txt

This allows using shell globbing (FOO/*/*.vcf) to get all the files in the
input order.

optional arguments:
  -h, --help        show this help message and exit
  --except EXCLUDE  Do not download the input with this name. (May be used
                    multiple times.)
  --parallel        Download the files in parallel
  --sequential      Download the files sequentially
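
For example, a bash app could download everything in parallel and then loop
over a file-array input using the numbered layout described above ("FOO" is a
placeholder input name):

  dx-download-all-inputs --parallel
  for f in "$HOME"/in/FOO/*/*; do
      echo "downloaded: $f"
  done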

dx-upload-all-outputs

usage: dx-upload-all-outputs [-h] [--except EXCLUDE] [--parallel]
                             [--sequential] [--clearJSON CLEARJSON]
                             [--wait-on-close] [--xattr-properties]

Note: this is a utility for use by bash apps running in the DNAnexus Platform.

Uploads all files and subdirectories in the directory $HOME/out, as described
below. It also adds relevant entries into the job_output.json file.

By convention, only directories with names equal to output parameter names are
expected to be found in the output directory, and any file(s) found in those
subdirectories will be uploaded as the corresponding outputs.  For example, a
file with the path

    $HOME/out/FOO/OUTPUT.TXT

will be uploaded, and the key "FOO" will be added to the job_output.json file
with value

    {"$dnanexus_link": "file-xxxx"}

where "file-xxxx" is the ID of the newly uploaded file. If multiple files are
found, they will be added as an array output (in unspecified order). If
subdirectories are found under $HOME/out/FOO, then they are uploaded in their
entirety to the workspace, and values are added to FOO in the job_output.json
file. For example, the path:

    $HOME/out/FOO/BAR/XXX.TXT

will be uploaded to /BAR/XXX.TXT.

optional arguments:
  -h, --help            show this help message and exit
  --except EXCLUDE      Do not upload the output with this name. (May be used
                        multiple times.)
  --parallel            Upload the files in parallel
  --sequential          Upload the files sequentially
  --clearJSON CLEARJSON
                        Clears the output JSON file prior to starting upload.
  --wait-on-close       Wait for files to close, default is not to wait
  --xattr-properties    Get filesystem attributes and set them as properties on each file uploaded
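
For example, a bash app producing an output parameter named "report" (a
placeholder name) could stage and upload it like this:

  mkdir -p "$HOME/out/report"
  cp report.html "$HOME/out/report/"
  dx-upload-all-outputs --wait-on-close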
