Mkfifo and dx cat


This applet performs a SAMtools count on an input file while minimizing disk usage. For additional details on using FIFO special files (named pipes), run the command man fifo in your shell.

Warning: Named pipes require BOTH a stdin and a stdout, or they will block a process. In these examples, we place incomplete named pipes in background processes so the foreground script process does not block.
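
To see the blocking behavior in isolation, here is a minimal standalone sketch (not part of the applet) that can be run in any shell:

  mkfifo demo.fifo             # create a named pipe
  echo "hello" > demo.fifo &   # the write blocks until a reader attaches, so background it
  cat demo.fifo                # attaching a reader unblocks the writer and prints "hello"
  rm demo.fifo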

To approach this use case, let’s focus on what we want our applet to do (a condensed sketch of the full data flow follows the list):

  1. Stream the BAM file from the platform to a worker.

  2. As the BAM is streamed, count the number of reads present.

  3. Output the result into a file.

  4. Stream the result file to the platform.
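
Conceptually, these four steps form a single pipeline. As a sketch for orientation only (the output file name here is illustrative), the same data flow could be written as:

  dx cat "${mappings_bam}" | samtools view -c | dx upload - --path counts.txt

The tutorial achieves this flow with named pipes instead, so that each stage runs as a separate background process and dx-upload-all-outputs can work with a conventional output path.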

Stream BAM file from the platform to a worker

First, we establish a named pipe on the worker. Then, we stream to the stdin of the named pipe by downloading the file as a stream from the platform using dx cat.

  mkdir workspace
  mappings_fifo_path="workspace/${mappings_bam_name}"
  mkfifo "${mappings_fifo_path}" # FIFO file is created
  dx cat "${mappings_bam}" > "${mappings_fifo_path}" & # background download stream writes into the FIFO
  input_pid="$!" # PID of the background dx cat process

| FIFO     | stdin | stdout |
| -------- | ----- | ------ |
| BAM file | YES   | NO     |

Output BAM file read count

Now that we have created our FIFO special file representing the streamed BAM, we can call the samtools command as we normally would. The samtools command reading the BAM provides our BAM FIFO file with a stdout. However, keep in mind that we want to stream the output back to the platform, so we must create a named pipe representing our output file too.

  mkdir -p ./out/counts_txt/

  counts_fifo_path="./out/counts_txt/${mappings_bam_prefix}_counts.txt"

  mkfifo "${counts_fifo_path}" # FIFO file is created, readcount.txt
  samtools view -c "${mappings_fifo_path}" > "${counts_fifo_path}" & # read count flows from the BAM FIFO into the output FIFO
  process_pid="$!" # PID of the background samtools process

| FIFO        | stdin | stdout |
| ----------- | ----- | ------ |
| BAM file    | YES   | YES    |
| output file | YES   | NO     |

The directory structure created here (~/out/counts_txt) is required to use the dx-upload-all-outputs command in the next step. All files found in the path ~/out/<output name> will be uploaded to the corresponding <output name> specified in the dxapp.json.
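
For reference, the <output name> portion of that path must match an entry in the applet's outputSpec. A minimal, hypothetical dxapp.json excerpt matching the counts_txt directory above:

  "outputSpec": [
    {
      "name": "counts_txt",
      "class": "file"
    }
  ]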

Stream the result file to the platform

So far, we’ve established a stream from the platform, piped the stream into a samtools command, and directed the results to another named pipe. However, our background process is still blocked, since our output file’s named pipe lacks a stdout. Creating an upload stream to the platform resolves this.

We can upload as a stream to the platform using the commands dx-upload-all-outputs or dx upload -. Make sure to specify --buffer-size if needed.

  dx-upload-all-outputs & # stream-upload everything under ~/out/; reading the output FIFO provides its missing stdout
  upload_pid="$!" # PID of the background upload process

| FIFO        | stdin | stdout |
| ----------- | ----- | ------ |
| BAM file    | YES   | YES    |
| output file | YES   | YES    |

Note: Alternatively, dx upload - can upload directly from stdin. In that case, we would no longer need the directory structure required for dx-upload-all-outputs.

Warning: When uploading a file that exists on disk, dx upload knows the file size and automatically handles any cloud service provider upload chunk requirements. When uploading as a stream, the file size is not known in advance, and dx upload falls back to default parameters. While these defaults are fine for most use cases, you may need to specify the upload part size with the --buffer-size option.
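
As a sketch of that stdin alternative (the destination file name is illustrative; no output FIFO is needed because the pipe connects samtools directly to the upload stream):

  counts_file_id=$(samtools view -c "${mappings_fifo_path}" | dx upload - --brief --path "${mappings_bam_prefix}_counts.txt")
  dx-jobutil-add-output counts_txt "${counts_file_id}" --class=file

Here dx upload --brief prints just the new file ID, which dx-jobutil-add-output records as the job's counts_txt output.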

Wait for background processes

Now that our background processes are no longer blocking the rest of the applet’s execution, we simply wait in the foreground for them to finish. Each wait -n call returns when any one background job completes, so calling it three times covers all three processes.

  wait -n  # "$input_pid"
  wait -n  # "$process_pid"
  wait -n  # "$upload_pid"

Note: If we didn’t wait, the app script running in the foreground would finish and terminate the job, killing our background processes before they complete. We wouldn’t want that.
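
If the job should also fail fast when any stage breaks, one variation (a sketch using the standard dx-jobutil-report-error job utility) is to wait on each saved PID and check its exit status:

  for pid in "$input_pid" "$process_pid" "$upload_pid"; do
      if ! wait "$pid"; then
          dx-jobutil-report-error "A streaming stage failed" AppInternalError
      fi
  done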

How is the SAMtools dependency provided?

The compiled SAMtools binary is placed directly in the <applet dir>/resources directory. Any files found in the resources/ directory will be uploaded with the applet and unpacked into the worker’s root directory. In our case:

├── Applet dir
│   ├── src
│   ├── dxapp.json
│   ├── resources
│       ├── usr
│           ├── bin
│               ├── < samtools binary >

When this applet is run on a worker, the resources/ folder will be placed in the worker’s root directory /:

/
├── usr
│   ├── bin
│       ├── < samtools binary >
├── home
│   ├── dnanexus

/usr/bin is part of the $PATH variable, so we can reference the samtools command directly in our script as samtools view -c ...
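
As an optional, purely illustrative sanity check at runtime:

  command -v samtools            # expected to print /usr/bin/samtools
  samtools --version | head -n 1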


View full source code on GitHub