DNAnexus Documentation

Copyright 2025 DNAnexus


Running Batch Jobs


To launch a DNAnexus application or workflow on many files automatically, you can write a short script that loops over the desired files in a project and launches jobs or analyses. Alternatively, the DNAnexus SDK provides a few handy utilities for batch processing. To use the GUI to run in batch mode, see these instructions.

Overview

In this tutorial, we'll batch process a series of sample FASTQs (forward and reverse reads). We'll use the dx generate_batch_inputs command to generate a batch file -- a tab-delimited (TSV) file in which each row corresponds to a single run in our batch. Then we'll process our batch using the dx run command with the --batch-tsv option.

Generate Batch File

In the project My Research Project we have the following files in our root directory:

$ dx select "My Research Project"
Selected project My Research Project
$ dx ls /
RP10B_S1_R1_001.fastq.gz
RP10B_S1_R2_001.fastq.gz
RP10T_S5_R1_001.fastq.gz
RP10T_S5_R2_001.fastq.gz
RP15B_S4_R1_002.fastq.gz
RP15B_S4_R2_002.fastq.gz
RP15T_S8_R1_002.fastq.gz
RP15T_S8_R2_002.fastq.gz

We want to batch process these read pairs using the BWA-MEM app (link requires platform login). For a single execution of the BWA-MEM app, we need to specify the following inputs:

  • reads_fastqgzs - FASTQ containing the left mates

  • reads2_fastqgzs - FASTQ containing the right mates

  • genomeindex_targz - BWA reference genome index

$ dx generate_batch_inputs \
    -i reads_fastqgzs='RP(.*)_R1_(.*).fastq.gz' \
    -i reads2_fastqgzs='RP(.*)_R2_(.*).fastq.gz'
Found 4 valid batch IDs matching desired pattern.
Created batch file dx_batch.0000.tsv

CREATED 1 batch files each with at most 500 batch IDs.

You can optionally provide a --path argument specifying a folder to search recursively within your project. The value for --path must be a directory, specified as:

/path/to/directory or project-xxxx:/path/to/directory

Any file present within this directory or recursively within any subdirectory of this directory will be considered a candidate for a batch run.

The (.*) patterns are regular expression groups, and you can provide arbitrary regular expressions as input. The match captured by the first group is the pattern used to group pairs in the batch; these matches are called batch identifiers (batch IDs). To explain this behavior in more detail, we'll use the output of the dx generate_batch_inputs command above.

The dx generate_batch_inputs command creates the file dx_batch.0000.tsv, which looks like this:

$ cat dx_batch.0000.tsv
batch ID  reads_fastqgzs              reads2_fastqgzs              pair1 ID    pair2 ID
10B_S1    RP10B_S1_R1_001.fastq.gz    RP10B_S1_R2_001.fastq.gz     file-aaa    file-bbb
10T_S5    RP10T_S5_R1_001.fastq.gz    RP10T_S5_R2_001.fastq.gz     file-ccc    file-ddd
15B_S4    RP15B_S4_R1_002.fastq.gz    RP15B_S4_R2_002.fastq.gz     file-eee    file-fff
15T_S8    RP15T_S8_R1_002.fastq.gz    RP15T_S8_R2_002.fastq.gz     file-ggg    file-hhh

Recall that the regular expression was RP(.*)_R1_(.*).fastq.gz. Although there are two grouped matches in this example, only the first one is used as the pattern for the batch ID. For example, the pattern identified for RP10B_S1_R1_001.fastq.gz is 10B_S1, which corresponds to the first grouped match; the second is ignored.
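To make the grouping concrete, here is a minimal Python sketch of this batch-ID logic. It is an illustration, not the actual dx implementation: the first capture group becomes the batch ID, and files matching different input patterns are collected under the same ID.

```python
import re
from collections import defaultdict

# Illustrative re-implementation of the batch-ID grouping.
# The dots in the suffix are escaped here for strictness; the dx example
# pattern leaves them unescaped, which matches the same file names.
patterns = {
    "reads_fastqgzs": r"RP(.*)_R1_(.*)\.fastq\.gz",
    "reads2_fastqgzs": r"RP(.*)_R2_(.*)\.fastq\.gz",
}
files = [
    "RP10B_S1_R1_001.fastq.gz", "RP10B_S1_R2_001.fastq.gz",
    "RP10T_S5_R1_001.fastq.gz", "RP10T_S5_R2_001.fastq.gz",
]

batches = defaultdict(dict)
for name in files:
    for input_field, pattern in patterns.items():
        m = re.fullmatch(pattern, name)
        if m:
            # Only the FIRST capture group is used as the batch ID;
            # any further groups are ignored.
            batches[m.group(1)][input_field] = name

for batch_id, inputs in sorted(batches.items()):
    print(batch_id, inputs["reads_fastqgzs"], inputs["reads2_fastqgzs"])
```

Running this prints one line per batch ID (10B_S1 and 10T_S5), each pairing its R1 and R2 files, matching the TSV shown below.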

Examining the TSV file above, the files are grouped as expected, with the first match labeling the identifier of the group within the batch. The next two columns show the file names. The last two columns contain the IDs of the files on the DNAnexus platform. You can either edit this file directly or import it into a spreadsheet to make any subsequent changes.

Note that if an input for the app is an array, the file IDs within the batch TSV file need to be wrapped in square brackets. The following bash command adds brackets to the file IDs in columns 4 and 5; you may need to change the field references ("$4" and "$5") to match the correct columns in your file. The command's output file, "new.tsv", is ready for the dx run --batch-tsv command.

head -n 1 dx_batch.0000.tsv > temp.tsv && \
tail -n +2 dx_batch.0000.tsv | \
awk 'BEGIN { FS = OFS = "\t" } { $4 = "[" $4 "]"; $5 = "[" $5 "]"; print }' >> temp.tsv && \
tr -d '\r' < temp.tsv > new.tsv && \
rm temp.tsv
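The same transformation can be done in Python; the sketch below assumes the column layout of the example TSV above (file IDs in the 4th and 5th columns) and is illustrative only.

```python
def bracket_array_columns(tsv_text, cols=(3, 4)):
    """Wrap the given 0-based columns in square brackets, leaving the
    header row unchanged, so the IDs are read as array inputs."""
    lines = tsv_text.rstrip("\n").split("\n")
    out = [lines[0]]  # keep the header row as-is
    for line in lines[1:]:
        fields = line.rstrip("\r").split("\t")  # drop stray \r, as tr did
        for c in cols:
            fields[c] = "[" + fields[c] + "]"
        out.append("\t".join(fields))
    return "\n".join(out) + "\n"

sample = (
    "batch ID\treads_fastqgzs\treads2_fastqgzs\tpair1 ID\tpair2 ID\n"
    "10B_S1\tRP10B_S1_R1_001.fastq.gz\tRP10B_S1_R2_001.fastq.gz\tfile-aaa\tfile-bbb\n"
)
print(bracket_array_columns(sample))
```

To process a real batch file, read dx_batch.0000.tsv, pass its contents through this function, and write the result to new.tsv.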

Note that the example above covers the case where all files pair up properly. dx generate_batch_inputs writes a TSV row only for batch IDs whose files can all be successfully matched. There are two classes of errors for batch IDs that are not successfully matched:

  • A particular input is missing (e.g. a file matches reads_fastqgzs for a given batch ID, but no corresponding match can be found for reads2_fastqgzs)

  • More than one file ID matches the exact same name

For both of these cases, dx generate_batch_inputs returns a description of these errors to STDERR.
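The two error classes can be illustrated with a small Python check. This is a hypothetical sketch for intuition only; dx generate_batch_inputs performs this validation for you and reports the results to STDERR.

```python
def find_batch_errors(matches):
    """matches maps each input field to {batch_id: [file IDs with that name]}.
    Returns the batch IDs hitting each of the two error classes:
    a missing input, and more than one file ID for the same name."""
    all_ids = set().union(*(m.keys() for m in matches.values()))
    missing = {b for b in all_ids
               if any(b not in m for m in matches.values())}
    ambiguous = {b for b in all_ids
                 for m in matches.values()
                 if len(m.get(b, [])) > 1}
    return missing, ambiguous

matches = {
    "reads_fastqgzs":  {"10B_S1": ["file-aaa"], "10T_S5": ["file-ccc"]},
    "reads2_fastqgzs": {"10B_S1": ["file-bbb", "file-xyz"]},  # duplicate name
}
missing, ambiguous = find_batch_errors(matches)
print("missing:", missing, "ambiguous:", ambiguous)
```

Here 10T_S5 has no reads2_fastqgzs match (first error class), and 10B_S1 has two file IDs for the same reads2 name (second error class).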

If you match more than 500 files, multiple batch files will be generated in groups of 500 to limit the number of jobs in a single batch run.
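The 500-row cap amounts to a simple chunking step, sketched below. The incrementing dx_batch.NNNN.tsv naming is an assumption extrapolated from the dx_batch.0000.tsv file name above.

```python
def chunk_batches(rows, max_rows=500):
    """Split batch rows into files of at most max_rows rows each,
    using a hypothetical dx_batch.NNNN.tsv naming scheme."""
    return {f"dx_batch.{i:04d}.tsv": rows[start:start + max_rows]
            for i, start in enumerate(range(0, len(rows), max_rows))}

files = chunk_batches([f"row-{n}" for n in range(1200)])
print({name: len(rows) for name, rows in files.items()})
```

With 1200 matched batch IDs, this yields three files of 500, 500, and 200 rows.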

Run a Batch Job

Now that we have our batch file, we can execute our BWA-MEM batch process:

dx run bwa_mem_fastq_read_mapper \
  -igenomeindex_targz="Reference Genome Files":\
"/H. Sapiens - GRCh38/GRCh38.no_alt_analysis_set.bwa-index.tar.gz" \
  --batch-tsv dx_batch.0000.tsv

Here, genomeindex_targz is a parameter set at execution time that is common to all groups in the batch and --batch-tsv corresponds to the input file generated above.

To monitor a batch job, use the Monitor tab as you would for any other jobs you launch.

Setting Output Folders for Batch Jobs

To direct the output of each run into a separate folder, use the --batch-folders flag, for example:

dx run bwa_mem_fastq_read_mapper \
  -igenomeindex_targz="project-BQpp3Y804Y0xbyG4GJPQ01xv:\
file-BFBy4G805pXZKqV1ZVGQ0FG8" \
  --batch-tsv dx_batch.0000.tsv \
  --batch-folders

This will output the results for each sample in folders named after batch IDs, in our case the folders: "/10B_S1/", "/10T_S5/", "/15B_S4/", and "/15T_S8/". If the folders do not exist, they will be created.

The output folders are created under the path given by --destination, which defaults to the root folder ("/") of the current project. For example, this command will place the result files in "/run_01/10B_S1/", "/run_01/10T_S5/", and so on:

dx run bwa_mem_fastq_read_mapper \
  -igenomeindex_targz="project-BQpp3Y804Y0xbyG4GJPQ01xv:\
file-BFBy4G805pXZKqV1ZVGQ0FG8" \
  --batch-tsv dx_batch.0000.tsv \
  --batch-folders \
  --destination=My_project:/run_01

Batching Multiple Inputs

dx generate_batch_inputs is limited to starting runs that differ only in input fields of type file. Use a more flexible for-loop construct if you want batch runs that differ in string, file-array, or other non-file inputs.

Additionally, a for loop allows you to specify other dx run arguments such as name for every run:

for i in 1 2; do
    dx run swiss-army-knife \
      -icmd="wc * > ${i}.out" \
      -iin="file_input_batch${i}a" \
      -iin="file_input_batch${i}b" \
      --name "sak_batch${i}"
done

You can also use dx run to launch a workflow by name and address its inputs by stage ID. For example, if you create a workflow called "Trio Exome Workflow - Jan 1st 2020 9:00am" in your project, you can run it from the command line:

dx login
dx run "Trio Exome Workflow - Jan 1st 2020 9\:00am"

Note the \ that is needed to escape the : in the workflow name.

Inputs to the workflow can be specified using dx run <workflow> --input stage_id.name=value, where stage_id is a numeric ID starting at 0. More help can be found by running dx run --help and dx run <workflow> --help.

To batch multiple inputs, do the following:

dx cd /path/to/inputs
for i in $(dx ls); do
    dx run "Trio Exome Workflow - Jan 1st 2020 9\:00am" --input 0.reads="$i"
done
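If you prefer to generate the loop's commands programmatically (for example, to review them before running), a small Python sketch can build each dx run invocation. The workflow name and 0.reads input come from the example above; note that the backslash before ':' is for dx itself, separate from shell quoting.

```python
import shlex

workflow = "Trio Exome Workflow - Jan 1st 2020 9:00am"
# dx treats ':' as a project/path separator, so escape it for dx;
# shell-level quoting is handled separately by shlex.join below.
dx_name = workflow.replace(":", r"\:")

commands = []
for fastq in ["RP10B_S1_R1_001.fastq.gz", "RP10T_S5_R1_001.fastq.gz"]:
    commands.append(shlex.join(
        ["dx", "run", dx_name, "--input", f"0.reads={fastq}"]))

for cmd in commands:
    print(cmd)
```

Each printed line is a shell-safe command that can be reviewed and then executed, e.g. by piping the output to sh.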

Additional Resources

In this tutorial, we used the BWA reference genome index from the public Reference Genome Files project (requires platform login) for every run, while the forward and reverse read pairs varied from run to run.

For additional information and examples of how to run batch jobs, Chapter 6 of this reference guide may be useful. Note that this material is not a part of the official DNAnexus documentation and is for reference only.
