DNAnexus Documentation
Copyright 2025 DNAnexus

Asset Build Process

This page describes the structure of an asset source directory and shows you how to begin building an asset bundle.

Asset Directory Structure

A DNAnexus asset can be built using the dx build_asset utility in the DNAnexus SDK, which builds an asset bundle in an isolated environment. The utility expects a directory with the following structure:

asset_name
├── dxasset.json                          # This is the only required file
├── resources
│   └── ...
└── Makefile

To build this asset, run the command dx build_asset asset_name.

Each component of the directory is described in detail below.

Asset Metadata

The dxasset.json file contains essential metadata that defines the asset. This file is required to build an asset and must therefore be present in the asset source directory. The following is a sample dxasset.json file:

{
  "name": "asset_name",  # Asset bundle name
  "title": "Example Asset", # Human-readable name
  "description": "Libraries required for a tool you've built", # A detailed description of the asset bundle and its contents
  "version": "0.0.1", # Version number
  "distribution": "Ubuntu", # The flavor of Linux that the asset targets (default Ubuntu)
  "instanceType": "mem2_ssd1_x4", # The instance type on which the asset bundle will be built
  "release": "24.04", # The version of the Linux distribution
  "excludeResource": [ # Files and directories that should not be included
    "/src/my.cpp", # in the asset bundle (optional)
    "/scripts"
  ],
  "execDepends": [ # The list of packages on which the asset depends
    {"name": "samtools", "package_manager": "apt"},
    {"name": "pandas", "package_manager": "pip"}
  ]
}
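Note that the # annotations in the sample above are explanatory only; strict JSON does not allow comments, so the file you pass to dx build_asset must be comment-free. The following is a minimal sketch, not official tooling, that writes a comment-free dxasset.json and sanity-checks it (it assumes python3 is available for the JSON parse):

```shell
set -euo pipefail

# Create a scratch asset source directory with the one required file
dir=$(mktemp -d)/asset_name
mkdir -p "$dir"
cat > "$dir/dxasset.json" <<'EOF'
{
  "name": "asset_name",
  "title": "Example Asset",
  "version": "0.0.1",
  "distribution": "Ubuntu",
  "release": "24.04"
}
EOF

# Parse the file and extract the asset name; dx build_asset reads the same field
name=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["name"])' "$dir/dxasset.json")
echo "$name"
```

A check like this catches stray comments or trailing commas before a build job is launched.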

Additional Resources

If anything is present in the resources/ subdirectory of the asset directory, it will be archived by dx build_asset and unpacked into the root of a clean Ubuntu installation during the asset build process. Files here can be propagated directly into the asset, or they can be used to build other artifacts that are themselves propagated into the asset.

For example, if you have C or C++ source files that should be compiled so that their binary output can be included in the asset bundle, you can do the following:

  • Put your source files somewhere in the resources/ directory, such as resources/example_dir/.

  • Invoke your build process from the Makefile included in the asset source directory (see below). In the asset build environment, your source files are available in the directory /example_dir.

By default, the PATH in the execution environment (inherited from Ubuntu's defaults) is the following:

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
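A practical consequence: an executable placed at resources/usr/local/bin/mytool lands on this default PATH once the resources are unpacked into the root of the build environment. The following sketch simulates that layout in a scratch directory (mytool is a hypothetical name used for illustration):

```shell
set -euo pipefail

# Scratch directory standing in for the asset's resources/ subdirectory
res=$(mktemp -d)
mkdir -p "$res/usr/local/bin"

# A trivial executable; in a real asset this might be a compiled binary
cat > "$res/usr/local/bin/mytool" <<'EOF'
#!/bin/sh
echo "hello from mytool"
EOF
chmod +x "$res/usr/local/bin/mytool"

# During the build, resources/ contents are copied to /, so this file would
# become /usr/local/bin/mytool -- already on the default PATH shown above
out=$("$res/usr/local/bin/mytool")
echo "$out"
```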

Makefile

The optional Makefile contains instructions to build and copy additional resources required by the asset. If present in the asset source directory, the Makefile is copied to the working directory /home/dnanexus of the asset build environment and executed. Here is an example Makefile that downloads and installs various resources:

SHELL=/bin/bash -e -x -o pipefail

VERSION=2.30.0
all:
	apt-get update -y
	apt-get install build-essential git zlib1g zlib1g-dev bzip2 libbz2-dev liblzma-dev wget python3 python3-dev -y
	wget https://github.com/arq5x/bedtools2/releases/download/v${VERSION}/bedtools-${VERSION}.tar.gz
	tar zxvf bedtools-${VERSION}.tar.gz
	cd bedtools2 && make && make install && cd ../
	rm -fr bedtools-${VERSION}.tar.gz bedtools2

Building an Asset

Assuming the directory asset_name contains a dxasset.json file, you can use the following command to build your asset:

$ dx build_asset asset_name

The command starts a new DNAnexus job on a virtual worker to provide an isolated build environment, in which the following steps are performed:

  1. A snapshot is taken of the worker's filesystem. During this process, all directories, files, and symbolic links are recorded, except those listed in the excludeResource field of the dxasset.json file. Files in directories matching the following paths are also excluded: /proc*, /tmp*, /run*, /boot*, /home/dnanexus*, /sys*. On Ubuntu 20.04 and 24.04, /bin and /sbin are excluded as well, because they are symlinks to /usr/bin and /usr/sbin.

  2. Any packages mentioned in the execDepends field of the dxasset.json file are installed in the worker's execution environment.

  3. If a resources/ directory is present in the asset source directory, its contents are copied to the root directory / of the worker's execution environment.

  4. If a Makefile is present in the asset source directory, it is copied to /home/dnanexus on the worker and executed as sudo make -C /home/dnanexus.

  5. A second snapshot of the worker's execution environment is taken.

  6. All new and modified files (those whose timestamps differ from the first snapshot) are packaged into the resulting asset bundle.
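The snapshot-and-diff idea behind steps 1, 5, and 6 can be sketched as follows. This is an illustration of the concept, not the platform's actual implementation; it assumes GNU find for the -printf option:

```shell
set -euo pipefail

# Scratch filesystem root and two snapshot files
root=$(mktemp -d)
snap1=$(mktemp)
snap2=$(mktemp)
echo "old" > "$root/existing.txt"

# Snapshot 1: record every file with its modification time
find "$root" -type f -printf '%T@ %p\n' | sort > "$snap1"

sleep 1
echo "new"     > "$root/added.txt"      # new file
echo "changed" > "$root/existing.txt"   # modified file (newer timestamp)

# Snapshot 2: same recording after the "build" ran
find "$root" -type f -printf '%T@ %p\n' | sort > "$snap2"

# Entries present only in snapshot 2 are new or modified files -- these are
# the files that would be packaged into the asset bundle
changed=$(comm -13 "$snap1" "$snap2" | awk '{print $2}')
printf '%s\n' "$changed"
```

Because timestamps are part of each recorded line, a file rewritten in place shows up in the diff just like a newly created one.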

To learn more about Makefiles in general, see the Wikipedia page on Make.