Apps and Workflows

Every analysis in DNAnexus is run using apps. Apps can be linked together to create workflows. Learn the basics of using both.


You must set up billing for your account before you can perform an analysis, or upload or egress data. Follow these instructions to set up billing.

Finding the Right App or Workflow

The Tools Library provides a list of available apps and workflows. To see this list, select Tools Library from the Tools entry in the main Platform menu.

On the DNAnexus Platform, apps and workflows are generically referred to as "tools."

To find the tool you're looking for in the Tools Library, you can use search filters. Filtering enables you to find tools with a specific name, in a specific category, or of a specific type.

To see what inputs a tool requires, and what outputs it generates, select that tool's row in the list. The row will be highlighted in blue, and the tool's inputs and outputs will be displayed in a pane to the right of the list.

To make sure you can find a tool later, "pin" it to the top of the list. Click the "..." icon at the far right end of the row showing the tool's name and key details. Then click Add Pin.

To learn more about a tool, click its name in the list. The tool's detail page will open, showing a wide range of information, including guidance on how to use it, version history, pricing, and more.
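
If you work from the command line, the dx toolkit offers a similar way to browse available tools. A minimal sketch; the category and app name below are illustrative examples, not a fixed list:

    # List apps available to you, optionally filtered by category
    dx find apps
    dx find apps --category "Read Mapping"

    # Show an app's inputs, outputs, version, and other details
    dx describe app-bwa_mem_fastq_read_mapper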

Running Apps and Workflows

Launching a Tool

Launching from the Tools Library

You can quickly launch the latest version of any given tool from the Tools Library page, or navigate to the tool's Details page and click the Run button.

Launching from a Project

From within a project, navigate to the Manage pane, then click the Start Analysis button.

A dialog window will open, showing a list of tools. This list includes the same tools shown in the Tools Library, as well as workflows and applets available only in the current project. Select the tool you want to run, then click Run Selected.

Workflows and applets can also be launched directly from where they reside within a project. Select the workflow or applet in its folder location, then click Run.
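
Tools can also be launched with the dx command-line client. A minimal sketch, using an illustrative app name:

    # Print the app's required and optional inputs before running it
    dx run app-bwa_mem_fastq_read_mapper -h

    # Launch the latest version; dx prompts interactively for any
    # required inputs not supplied on the command line
    dx run app-bwa_mem_fastq_read_mapper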

Launch Configuration

Confirm the details of the tool you are about to run. Note that you must select a project location before running any tool, and you need at least Contributor access to that project.

Configure Inputs and Outputs

The tool may require specific inputs to be filled in before the run can start. You can quickly identify required inputs by looking for the highlighted areas marked Inputs Required on the page.

You can access help information about each input or output by inspecting the label of each item. If a detailed README is provided for the executable, you can click the View Documentation icon to open the app or workflow info pane.

To configure instance type settings for a given tool or stage, click the Instance Type icon in the top-right corner of the stage.

To configure the output location and view information about output items, go to the Outputs tab under each stage. For workflows, the output location can be specified separately for each stage.

The I/O graph provides an overview of the input/output structure of the tool. The graph is available for any tool and can be accessed via the Actions/Workflow Actions menu.

Once all required inputs have been configured, the page will indicate that the run is ready to start. Click Start Analysis to proceed to the final step.
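
The same input, output-location, and instance-type configuration can be expressed as dx run flags. A sketch; the input field names, file IDs, project ID, and instance type are placeholders that vary by tool and region:

    # Input field names and IDs below are placeholders
    dx run app-bwa_mem_fastq_read_mapper \
        -i reads_fastqgzs=file-xxxx \
        -i genomeindex_targz=file-yyyy \
        --destination project-zzzz:/results \
        --instance-type mem1_ssd1_v2_x4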

Configure Runtime Settings

As the last step before launching the tool, you can review and confirm various runtime settings, including the execution name, output location, priority, job rank, and spending limit. You can also review and modify instance type settings before starting the run.

Once you have confirmed the final details, click Launch Analysis to start the run.
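
On the command line, these runtime settings correspond to dx run flags. A hedged sketch; the app name and values are examples, and the --rank flag is an assumption tied to the Job Ranking feature, which requires a license:

    # --name sets the execution name; --cost-limit caps spending for
    # this execution; -y skips the confirmation prompt; --brief prints
    # only the new job ID
    dx run app-bwa_mem_fastq_read_mapper \
        --name "bwa-mem run 3" \
        --priority normal \
        --cost-limit 25 \
        --rank 10 \
        -y --brief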

Batch Run

Batch run allows users to run the same app or workflow multiple times, with specific inputs varying between runs.

Specify Batch Inputs

To enable batch run, start from any input that you wish to vary across runs, and open its I/O Options menu on the right-hand side. From the list of options, select Enable Batch Run.

Input fields with batch run enabled are highlighted with a Batch label. Click any batch-enabled input field to enter the batch run configuration page.

Not all input classes are supported for batch run configuration; see the table below.

Input Class                             Batch Run Support
Files and other data objects            Yes
Files and other data objects (array)    Partially supported; can accept entry of a single-value array
String                                  Yes
Integer                                 Yes
Float                                   Yes
Boolean                                 Yes
String (array)                          No
Integer (array)                         No
Float (array)                           No
Boolean (array)                         No
Hash                                    No

Configure Batch Inputs

The batch run configuration page lets you specify inputs across multiple runs. Click each table cell to fill in the desired value for that run and field.

As with non-batch runs, you must fill in all required input fields before proceeding. Optional inputs, and required inputs with a predefined default value, can be left empty.

Once all required fields (for both batch inputs and non-batch inputs) have been configured, you can proceed to start the run via the Start Analysis button.
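
The command-line counterpart to batch run is dx generate_batch_inputs together with dx run --batch-tsv. A sketch, assuming an illustrative app whose file input is named reads; the file-name pattern is an example, and its parenthesized capture group becomes the per-run batch ID:

    # Scan the project for files matching the pattern and write one or
    # more dx_batch.NNNN.tsv files, one row per batch run
    dx generate_batch_inputs -i reads='sample_(.*)_R1.fastq.gz'

    # Launch one execution per row of the generated TSV
    dx run app-bwa_mem_fastq_read_mapper --batch-tsv dx_batch.0000.tsv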

Starting and Monitoring Your Analysis

Specialized tools, such as JupyterLab and Spark Apps, require special licenses to run.

A license is required to use the Job Ranking feature. Contact DNAnexus Support for more information.

Once you've finished setting up your tool, start your analysis by clicking the Start Analysis button. Follow these instructions to monitor the job as it runs.
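
From the command line, a launched execution can be followed and inspected with the dx toolkit; the job ID below is a placeholder:

    # Stream a job's log output as it runs
    dx watch job-xxxx

    # List recent executions in the current project
    dx find executions

    # Show the full metadata of one execution, including its state
    dx describe job-xxxx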

Learn More

  • Learn in depth about running apps and workflows, leveraging advanced techniques like Smart Reuse.
  • Learn how to build a simple app.
  • Learn more about building apps using Bash or Python.
  • Learn in depth about building and deploying apps, including Spark apps.
  • Learn in depth about importing, building, and running workflows.