DNAnexus Documentation

Copyright 2025 DNAnexus

Running Apps and Applets

Last updated 1 year ago


You can run apps and applets from the command line using the command dx run. The inputs to these app(let)s can come from any project for which you have at least VIEW access.

Running in Interactive Mode

If dx run is run without specifying any inputs, interactive mode is launched. The platform prompts you for each required input, followed by a prompt to set any optional parameters. As shown below using the BWA-MEM FASTQ Read Mapper app, after you are done entering inputs, you must confirm that you want the applet/app to be run with the inputs you have selected.

$ dx run app-bwa_mem_fastq_read_mapper
Entering interactive mode for input selection.

Input:   Reads (reads_fastqgz)
Class:   file

Enter file ID or path (<TAB> twice for compatible files in current directory, '?' for more options)
reads_fastqgz: reads.fastq.gz

Input:   BWA reference genome index (genomeindex_targz)
Class:   file
Suggestions:
    project-BQpp3Y804Y0xbyG4GJPQ01xv://file-* (DNAnexus Reference Genomes)

Enter file ID or path (<TAB> twice for compatible files in current directory, '?' for more options)
genomeindex_targz: "Reference Genome Files:/H. Sapiens - hg19 (UCSC)/ucsc_hg19.bwa-index.tar.gz"

Select an optional parameter to set by its # (^D or <ENTER> to finish):

 [0] Reads (right mates) (reads2_fastqgz)
 [1] Add read group information to the mappings (required by downstream GATK)? (add_read_group) [default=true]
 [2] Read group id (read_group_id) [default={"$dnanexus_link": {"input": "reads_fastqgz", "metadata": "name"}}]
 [3] Read group platform (read_group_platform) [default="ILLUMINA"]
 [4] Read group platform unit (read_group_platform_unit) [default="None"]
 [5] Read group library (read_group_library) [default="1"]
 [6] Read group sample (read_group_sample) [default="1"]
 [7] Output all alignments for single/unpaired reads? (all_alignments)
 [8] Mark shorter split hits as secondary? (mark_as_secondary) [default=true]
 [9] Advanced command line options (advanced_options)

Optional param #: <ENTER>

Using input JSON:
{
    "reads_fastqgz": {
        "$dnanexus_link": {
            "project": "project-xxxx",
            "id": "file-xxxx"
        }
    },
    "genomeindex_targz": {
        "$dnanexus_link": {
            "project": "project-xxxx",
            "id": "file-xxxx"
        }
    }
}

Confirm running the applet/app with this input [Y/n]: <ENTER>
Calling app-xxxx with output destination project-xxxx:/

Job ID: job-xxxx
Watch launched job now? [Y/n] n

Running in Non-interactive Mode

Naming Each Input

You can also specify each input parameter by name using the ‑i or ‑‑input flag, with the syntax ‑i<input name>=<input value>. Names of data objects in your project are resolved to the appropriate IDs and packaged correctly for the API method, as shown below.

When specifying input parameters using the ‑i/‑‑input flag, you must use the input field names (not to be confused with their human-readable labels). To look up the input field names for an app, applet, or workflow, run dx run app(let)-xxxx -h, as shown below for the Swiss Army Knife app.

$ dx run app-swiss-army-knife -h
usage: dx run app-swiss-army-knife [-iINPUT_NAME=VALUE ...]

App: Swiss Army Knife

Version: 4.9.1 (published)

A multi-purpose tool for all your basic analysis needs

See the app page for more information:
  https://platform.dnanexus.com/app/swiss-army-knife

Inputs:
  Input files: [-iin=(file) [-iin=... [...]]]
        (Optional) Files to download to instance temporary folder before command is executed.

  Command line: -icmd=(string)
        Command to execute on instance. View the app readme for details.

  Whether to use "dx-mount-all-inputs"?: [-imount_inputs=(boolean, default=false)]
        (Optional) Whether to mount all files that were supplied as inputs to the app instead of
        downloading them to the local storage of the execution worker.

  Public Docker image identifier: [-iimage=(string)]
        (Optional) Instead of using the default Ubuntu 20.04 environment, the input command <CMD>
        will be run using the specified publicly accessible Docker image <IMAGE> as it would be when
        running 'docker run <IMAGE> <CMD>'. Example image identifiers are 'ubuntu:22.04',
        'quay.io/ucsc_cgl/samtools'. Cannot be specified together with 'image_file'. This input
        relies on access to internet and is unusable in an internet-restricted project.

  Platform file containing Docker image accepted by `docker load`: [-iimage_file=(file)]
        (Optional) Instead of using the default Ubuntu 20.04 environment, the input command <CMD>
        will be run using the Docker image <IMAGE> loaded from the specified image file <IMAGE_FILE>
        as it would be when running 'docker load -i <IMAGE_FILE> && docker run <IMAGE> <CMD>'.
        Cannot be specified together with 'image'.

Outputs:
  Output files: [out (array:file)]
        (Optional) New files that were created in temporary folder.

The help message describes the inputs and outputs of the app, their types, and how to identify them when running the app from the command line. For example, from the above help message, we learn that the Swiss Army Knife app has two primary inputs: one or more files and a command string to execute, specified as -iin=file-xxxx and -icmd=<string>, respectively.

The example below shows you how to run the same Swiss Army Knife app to sort a small BAM file using these inputs.

$ dx run app-swiss-army-knife \
-iin=project-BQbJpBj0bvygyQxgQ1800Jkk:file-BQbXVY0093Jk1KVY1J082y7v \
-icmd="samtools sort -T /tmp/aln.sorted -o SRR100022_chrom20_mapped_to_b37.sorted.bam \
SRR100022_chrom20_mapped_to_b37.bam" -y

Using input JSON:
{
    "cmd": "samtools sort -T /tmp/aln.sorted -o SRR100022_chrom20_mapped_to_b37.sorted.bam SRR100022_chrom20_mapped_to_b37.bam",
    "in": [
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BQbXVY0093Jk1KVY1J082y7v"
            }
        }
    ]
}

Calling app-xxxx with output destination project-xxxx:/

Job ID: job-xxxx
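
The ‑iNAME=VALUE pairs above map mechanically onto the input JSON shown in the output. Below is a simplified sketch of that packaging; the helpers are hypothetical, and the real dx client does more (e.g. resolving object names and paths to IDs), but the $dnanexus_link structure is the same:

```python
import json

def parse_i_flag(arg):
    """Parse one NAME=VALUE pair as given to -i.

    Split on the first '=' only: the value itself (e.g. a samtools
    command line such as 'samtools sort -T /tmp/...') may contain '='.
    """
    name, _, value = arg.partition("=")
    return name, value

def to_link(value):
    """Wrap a project-qualified file ID in $dnanexus_link form."""
    project, file_id = value.split(":", 1)
    return {"$dnanexus_link": {"project": project, "id": file_id}}

in_name, in_value = parse_i_flag(
    "in=project-BQbJpBj0bvygyQxgQ1800Jkk:file-BQbXVY0093Jk1KVY1J082y7v")
cmd_name, cmd_value = parse_i_flag(
    "cmd=samtools sort -T /tmp/aln.sorted -o out.sorted.bam in.bam")

# "in" is declared as array:file in the app spec, so it becomes a list.
input_json = {cmd_name: cmd_value, in_name: [to_link(in_value)]}
print(json.dumps(input_json, indent=4))
```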

Specifying Array Input

To specify array inputs, reuse the ‑i/‑‑input flag for each value in the array; each file is appended to the array in the same order as it was entered on the command line. Below is an example using the Swiss Army Knife app to index multiple BAM files.

$ dx run app-swiss-army-knife \
-iin=project-BQbJpBj0bvygyQxgQ1800Jkk:file-BQbXVY0093Jk1KVY1J082y7v \
-iin=project-BQbJpBj0bvygyQxgQ1800Jkk:file-BZ9YGpj0x05xKxZ42QPqZkJY \
-iin=project-BQbJpBj0bvygyQxgQ1800Jkk:file-BZ9YGzj0x05b66kqQv51011q \
-icmd="ls *.bam | xargs -n1 -P5 samtools index" -y

Using input JSON:
{
    "cmd": "ls *.bam | xargs -n1 -P5 samtools index",
    "in": [
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BQbXVY0093Jk1KVY1J082y7v"
            }
        },
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BZ9YGpj0x05xKxZ42QPqZkJY"
            }
        },
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BZ9YGzj0x05b66kqQv51011q"
            }
        }
    ]
}

Calling app-xxxx with output destination project-xxxx:/

Job ID: job-xxxx

Job-Based Object References (JBORs)

Job-based object references (JBORs) can also be provided using the -i flag, with the syntax ‑i<input name>=<job id>:<output name>. Combined with the --brief flag (which makes dx run output just the job ID) and the -y flag (to skip confirmation), you can string together two jobs using one command.

Below is an example that runs the BWA-MEM FASTQ Read Mapper app, producing the output named "sorted_bam" (as described in the app's helpstring, viewable by executing dx run app-bwa_mem_fastq_read_mapper -h). The "sorted_bam" output is then used as input for the Swiss Army Knife app.

$ dx run app-swiss-army-knife \
-iin=$(dx run app-bwa_mem_fastq_read_mapper -ireads_fastqgz=project-BQbJpBj0bvygyQxgQ1800Jkk:file-BQbXKk80fPFj4Jbfpxb6Ffv2 -igenomeindex_targz=project-BQpp3Y804Y0xbyG4GJPQ01xv:file-B6qq53v2J35Qyg04XxG0000V -y --brief):sorted_bam \
-icmd="samtools index *.bam" -y

Using input JSON:
{
    "in": [
        {
            "$dnanexus_link": {
                "field": "sorted_bam",
                "job": "job-xxxx"
            }
        }
    ],
    "cmd": "samtools index *.bam"
}

Calling app-xxxx with output destination project-xxxx:/

Job ID: job-xxxx
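
A JBOR is simply another shape of $dnanexus_link: instead of a project and file ID, it carries a job ID and an output field name. Here is a sketch of turning the ‑i<input name>=<job id>:<output name> syntax into that form (the helper is hypothetical, not the actual dx implementation):

```python
import json

def jbor_link(value):
    """Turn 'job-xxxx:sorted_bam' into a job-based object reference."""
    job_id, field = value.split(":", 1)
    if not job_id.startswith("job-"):
        raise ValueError(f"not a job ID: {job_id}")
    return {"$dnanexus_link": {"field": field, "job": job_id}}

# Reproduces the input JSON shown above.
input_json = {
    "in": [jbor_link("job-xxxx:sorted_bam")],
    "cmd": "samtools index *.bam",
}
print(json.dumps(input_json, indent=4))
```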

Advanced Options

Some examples of additional functionalities provided by dx run are listed below.

Quiet Output

Regardless of whether you run a job interactively or non-interactively, dx run always prints the exact input JSON with which it calls the applet or app. If you don't want this verbose output, use the --brief flag, which tells dx to print only the job ID. The job ID can then be saved.

$ dx run app-bwa_mem_fastq_read_mapper \
-ireads_fastqgz="project-BQbJpBj0bvygyQxgQ1800Jkk:/SRR100022/SRR100022_1.filt.fastq.gz" \
-ireads_fastqgz="project-BQbJpBj0bvygyQxgQ1800Jkk:/SRR100022/SRR100022_2.filt.fastq.gz" \
-igenomeindex_targz="project-BQpp3Y804Y0xbyG4GJPQ01xv:file-B6ZY4942J35xX095VZyQBk0v" \
--destination "mappings" -y --brief

TIP: When running jobs, you can use the -y/--yes option to bypass the prompts asking you to confirm running the job and whether or not you want to watch the job. This is useful for scripting jobs. If you want to confirm running the job and immediately start watching the job, you can use -y --watch.

Rerunning a Job With the Same Settings

If you are debugging applet-xxxx and wish to rerun a previous job with the same settings (destination project and folder, inputs, instance type requests) but with a new executable applet-yyyy, you can use the --clone flag.

$ dx run app-swiss-army-knife --clone job-xxxx -y

Using input JSON:
{
    "cmd": "ls *.bam | xargs -n1 -P5 samtools index",
    "in": [
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BQbXVY0093Jk1KVY1J082y7v"
            }
        },
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BZ9YGpj0x05xKxZ42QPqZkJY"
            }
        },
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BZ9YGzj0x05b66kqQv51011q"
            }
        }
    ]
}

Calling app-xxxx with output destination project-xxxx:/

Job ID: job-xxxx

In the above command, the executable specified on the command line overrides the one recorded in job-xxxx: the job's settings are reused via --clone, but the Swiss Army Knife app is run rather than the executable used by the original job.

If you want to modify some but not all settings from the previous job, you can simply run dx run <executable> --clone job-xxxx [options]. The command-line arguments you provide in [options] will override the settings reused from --clone. For example, this is useful if you want to rerun a job with the same executable and inputs but a different instance type, or if you want to run an executable with the same settings but slightly different inputs.

The example shown below redirects the outputs of the job to the folder "output/" of project-xxxx.

$ dx run app-swiss-army-knife \
--clone job-xxx --destination project-xxxx:/output -y

While the --clone job-xxxx flag copies the applet, instance type, and inputs, it does not copy usage of the --allow-ssh or --debug-on flags. These must be re-specified for each job run. For more information, see the Connecting to Jobs page.

Specifying the Job Output Folder

The --destination flag allows you to specify the full project-ID:/folder/ path in which to output the results of the app(let). If this flag is unspecified, the output of the job defaults to the current working directory, which can be determined by running dx pwd.

$ dx run app-bwa_mem_fastq_read_mapper \
-ireads_fastqgz="project-BQbJpBj0bvygyQxgQ1800Jkk:/SRR100022/SRR100022_1.filt.fastq.gz" \
-ireads_fastqgz="project-BQbJpBj0bvygyQxgQ1800Jkk:/SRR100022/SRR100022_2.filt.fastq.gz" \
-igenomeindex_targz="project-BQpp3Y804Y0xbyG4GJPQ01xv:file-B6ZY4942J35xX095VZyQBk0v" \
--destination "mappings" -y --brief

In the above command, the flag --destination "mappings" instructs the job to output all results into the "mappings" folder of the current project.

Specifying a Different Instance Type

The --instance-type flag of dx run allows you to specify the instance type(s) to be used for the job. More information can be found by running the command dx run --instance-type-help.

Some apps and applets have multiple entry points, meaning that different instance types can be specified for the different functions executed by the app(let). In the example below, we run the Parliament app while specifying the instance types for the entry points "honey," "ssake," "ssake_insert," and "main." Specifying the instance types for each entry point requires a JSON-like string, meaning that the string should be wrapped in single quotes, as explained earlier and demonstrated below.

$ dx run parliament -iillumina_bam=illumina.bam -iref_fasta=ref.fa.gz \
--instance-type '{"honey":"mem1_ssd1_x32", "ssake":"mem1_ssd1_x8", "ssake_insert":"mem1_ssd1_x32", "main":"mem1_ssd1_x16"}' -y --brief
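
Because the multi-entry-point form of --instance-type is JSON, it can be generated with json.dumps rather than typed by hand; json.dumps emits double quotes, so the result is safe to wrap in single quotes on the shell command line. A small sketch using the entry-point names from the example above:

```python
import json

# Entry-point-to-instance-type mapping from the Parliament example above.
instance_types = {
    "honey": "mem1_ssd1_x32",
    "ssake": "mem1_ssd1_x8",
    "ssake_insert": "mem1_ssd1_x32",
    "main": "mem1_ssd1_x16",
}

# json.dumps produces double-quoted keys and values, which is exactly
# what dx run expects inside the single-quoted shell argument.
arg = json.dumps(instance_types)
print(f"--instance-type '{arg}'")
```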

Adding Metadata to a Job

If you are running many jobs that have varying purposes, you can organize the jobs using metadata. There are two types of metadata on the DNAnexus platform: properties and tags.

Properties are key-value pairs that can be attached to any object on the platform, whereas tags are strings associated with objects on the platform. The --property flag allows you to attach a property to a job, and the --tag flag allows you to tag a job.

Adding metadata to executions does not affect the metadata of the executions' output files. Metadata on jobs makes it easier to find a particular job in your job history (e.g., you might tag all jobs run with a particular sample).

$ dx run app-swiss-army-knife \
-iin=project-BQbJpBj0bvygyQxgQ1800Jkk:file-BQbXVY0093Jk1KVY1J082y7v \
-icmd="samtools sort -T /tmp/aln.sorted -o \
SRR100022_chrom20_mapped_to_b37.sorted.bam SRR100022_chrom20_mapped_to_b37.bam" \
--property foo=bar --tag dna -y

Specifying an App Version

If your current workflow is not using the most up-to-date version of an app, you can specify an older version when running your job by appending the app name with the version required, e.g. app-xxx/0.0.1 if the current version is app-xxx/1.0.0.

$ dx run app-swiss-army-knife/2.0.1 \
-iin=project-BQbJpBj0bvygyQxgQ1800Jkk:file-BQbXVY0093Jk1KVY1J082y7v \
-icmd="samtools sort -T /tmp/aln.sorted -o SRR100022_chrom20_mapped_to_b37.sorted.bam SRR100022_chrom20_mapped_to_b37.bam" \
-y --brief

Watching a Job

If you would like to keep an eye on your job as it runs, you can use the --watch flag to stream the job's logs to your terminal window as it progresses.

$ dx run app-swiss-army-knife \
-iin=project-BQbJpBj0bvygyQxgQ1800Jkk:file-BQbXVY0093Jk1KVY1J082y7v \
-icmd="samtools sort -T /tmp/aln.sorted -o SRR100022_chrom20_mapped_to_b37.sorted.bam SRR100022_chrom20_mapped_to_b37.bam" \
--watch -y --brief
job-xxxx

Job Log
-------
Watching job job-xxxx. Press Ctrl+C to stop.

Providing Input JSON

You can also specify the input JSON in its entirety. To specify a data object, you must wrap it in DNAnexus link form (a key-value pair with a key of "$dnanexus_link" and a value identifying the data object). Because you are already providing the JSON in its entirety, as long as the applet/app ID can be resolved and the JSON can be parsed, you will not be prompted to confirm before the job is started. There are three methods for entering the full input JSON, discussed in the sections below.

From the CLI

If using the CLI to enter the full input JSON, you must use the flag ‑j/‑‑input‑json followed by the JSON in single quotes. Wrap the JSON in single quotes only, so that the shell does not interfere with the double quotes used by the JSON itself.

$ dx run app-swiss-army-knife -j '{ "cmd": "ls *.bam | xargs -n1 -P5 samtools index", "in": [ { "$dnanexus_link": { "project": "project-BQbJpBj0bvygyQxgQ1800Jkk", "id": "file-BQbXVY0093Jk1KVY1J082y7v" } }, { "$dnanexus_link": { "project": "project-BQbJpBj0bvygyQxgQ1800Jkk", "id": "file-BZ9YGpj0x05xKxZ42QPqZkJY" } }, { "$dnanexus_link": { "project": "project-BQbJpBj0bvygyQxgQ1800Jkk", "id": "file-BZ9YGzj0x05b66kqQv51011q" } } ] }' -y

Using input JSON:
{
    "cmd": "ls *.bam | xargs -n1 -P5 samtools index",
    "in": [
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BQbXVY0093Jk1KVY1J082y7v"
            }
        },
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BZ9YGpj0x05xKxZ42QPqZkJY"
            }
        },
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BZ9YGzj0x05b66kqQv51011q"
            }
        }
    ]
}

Calling app-xxxx with output destination project-xxxx:/

Job ID: job-xxxx

From a File

If using a file to enter the input JSON, you must use the flag ‑f/‑‑input‑json‑file followed by the name of the JSON file.

$ dx run app-swiss-army-knife -f input.json

Using input JSON:
{
    "cmd": "ls *.bam | xargs -n1 -P5 samtools index",
    "in": [
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BQbXVY0093Jk1KVY1J082y7v"
            }
        },
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BZ9YGpj0x05xKxZ42QPqZkJY"
            }
        },
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BZ9YGzj0x05b66kqQv51011q"
            }
        }
    ]
}

Calling app-xxxx with output destination project-xxxx:/

Job ID: job-xxxx

From stdin

Entering the input JSON via stdin works much the same way as using the -f flag, with the small substitution of "-" as the filename. Below is an example that echoes the input JSON to stdin and pipes it to dx run. As before, wrap the JSON in single quotes to avoid interfering with the double quotes used by the JSON itself.

$ echo '{ "cmd": "ls *.bam | xargs -n1 -P5 samtools index", "in": [ { "$dnanexus_link": { "project": "project-BQbJpBj0bvygyQxgQ1800Jkk", "id": "file-BQbXVY0093Jk1KVY1J082y7v" } }, { "$dnanexus_link": { "project": "project-BQbJpBj0bvygyQxgQ1800Jkk", "id": "file-BZ9YGpj0x05xKxZ42QPqZkJY" } }, { "$dnanexus_link": { "project": "project-BQbJpBj0bvygyQxgQ1800Jkk", "id": "file-BZ9YGzj0x05b66kqQv51011q" } } ] }' | dx run app-swiss-army-knife -f - -y

Using input JSON:
{
    "cmd": "ls *.bam | xargs -n1 -P5 samtools index",
    "in": [
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BQbXVY0093Jk1KVY1J082y7v"
            }
        },
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BZ9YGpj0x05xKxZ42QPqZkJY"
            }
        },
        {
            "$dnanexus_link": {
                "project": "project-BQbJpBj0bvygyQxgQ1800Jkk",
                "id": "file-BZ9YGzj0x05b66kqQv51011q"
            }
        }
    ]
}

Calling app-xxxx with output destination project-xxxx:/

Job ID: job-xxxx
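
Since dx run -f - reads the JSON from stdin, the input can be produced by any program rather than typed into an echo command. Below is a sketch that writes the same input JSON as the example above to stdout; the filename is hypothetical, and it would be used as python make_input.py | dx run app-swiss-army-knife -f - -y:

```python
import json
import sys

project = "project-BQbJpBj0bvygyQxgQ1800Jkk"
bam_files = [
    "file-BQbXVY0093Jk1KVY1J082y7v",
    "file-BZ9YGpj0x05xKxZ42QPqZkJY",
    "file-BZ9YGzj0x05b66kqQv51011q",
]

input_json = {
    "cmd": "ls *.bam | xargs -n1 -P5 samtools index",
    # One $dnanexus_link per BAM file, in command-line order.
    "in": [{"$dnanexus_link": {"project": project, "id": f}} for f in bam_files],
}
json.dump(input_json, sys.stdout)
```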

Getting Additional Information on dx run

Executing the dx run --help command shows all of the flags available to use in conjunction with dx run. The message printed by this command is identical to the one displayed in the brief description of dx run.

Cost Run Limits

The --cost-limit cost_limit flag sets the maximum cost of the job before termination. For workflows, the limit applies to the cost of the entire analysis; for batch runs, the limit is applied per job. See the dx run --help command for more information.

Job Runtime Limits

On the DNAnexus Platform, jobs are limited to a runtime of 30 days. Jobs running longer than 30 days will be automatically terminated.
