Containers for Execution


There are three types of containers related to job execution:

  1. Temporary workspace: created whenever an app or applet is run

  2. Project cache container: a container in which data can be cached for future executions by the same version of an app; it is always associated with a particular project

  3. Resources container: created during app creation, containing any resources the app requires for execution

The table below summarizes the permissions that a job running the associated applet or app receives in each container, as well as the maximum permissions a member of the project context can receive (a member must hold equal or greater permissions in the project itself).

Container              Job          Project Member
Temporary workspace    CONTRIBUTE   VIEW
Project cache          CONTRIBUTE   CONTRIBUTE
Resources container    VIEW         VIEW if app developer; NONE otherwise
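
For instance, inside a running job the temporary workspace is the job's default data container, and the job's CONTRIBUTE access lets it create objects there. A minimal sketch using the dxpy client library (assuming it executes within a job; the file name is illustrative):

```python
import dxpy

# Inside a job, the temporary workspace is the default data container;
# its ID is exposed to the job as dxpy.WORKSPACE_ID ("container-xxxx").
print(dxpy.WORKSPACE_ID)

# CONTRIBUTE access to the workspace lets the job create new objects in it.
handler = dxpy.upload_string("intermediate result", name="scratch.txt")
print(handler.get_id())  # e.g. "file-xxxx"
```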

Jobs running other applets or apps may in some cases also be able to access the resources container of an app with VIEW permissions, but only if they have been given access to act on behalf of the user as a developer.

Containers for Analyses

Analyses also have temporary workspace containers that are created on their behalf. These containers are used primarily by the system for storing intermediate results for the analysis, and are not meant to be accessed directly by users or jobs. These containers are cleaned up after the associated analysis has transitioned to a terminal state.

Container API Method Specifications

API method: /container-xxxx/describe

Specification

Describes the specified container.

Inputs

  • fields mapping (optional) Restrict the output of this method to have only the provided keys in this field

    • key Desired output field; see the "Outputs" section below for valid values here

    • value boolean The value true
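
For example, a describe call restricted to a couple of output fields might look like the following dxpy sketch (the container ID is a placeholder):

```python
import dxpy

# Placeholder ID; substitute a real container ID.
container_id = "container-xxxx"

# Restrict the output to the name and type fields (plus the always-returned id).
desc = dxpy.api.container_describe(container_id,
                                   {"fields": {"name": True, "type": True}})
print(desc)
```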

Outputs

  • id string ID of the container (i.e. the string "container-xxxx")

The following fields are included by default (but can be disabled using fields):

  • class string The value "container"

  • name string The name of the container

  • billTo string ID of the account to which any costs associated with this container will be billed

  • type string The type of container: one of "temporary" (for applet/app execution), "cache" (for apps), or "resources" (for apps)

  • created timestamp Time at which this container was created

  • modified timestamp Time at which this container was last updated

  • dataUsage number Data usage in GB (not including sponsored data). A short amount of time may elapse between changes to the container and when this number is updated.

  • sponsoredDataUsage number Sum of DNAnexus-sponsored and third-party sponsored data usage in GB. A short amount of time may elapse between changes to the container and when this number is updated.

  • region string The region this container resides in. For more information about regions, see Regions.

  • level string The highest permissions level that the requesting user has. This field is only returned when this method is called directly, i.e. it does not show up in the describe values returned by /system/findProjects.

The following fields (included by default) are available if type is "temporary" or "cache":

  • project string The associated project context

The following fields (included by default) are available if type is "cache" or "resources":

  • app string ID of the associated app

  • appName string Name of the associated app

The following fields (included by default) are available if type is "temporary", the associated job was run with delayWorkspaceDestruction set to true, and the job has finished (successfully or not):

  • destroyAt timestamp Time after which this container will be automatically destroyed
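
delayWorkspaceDestruction is set when the execution is launched. A sketch using dxpy, with placeholder applet and project IDs:

```python
import dxpy

# Placeholders; substitute a real applet and project.
job = dxpy.api.applet_run("applet-xxxx", {
    "project": "project-xxxx",
    "input": {},
    # Keep the temporary workspace around after the job finishes, so that
    # its describe output will include a destroyAt timestamp.
    "delayWorkspaceDestruction": True,
})
print(job["id"])  # e.g. "job-xxxx"
```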

The following fields are only returned if the corresponding field in the fields input is set to true:

  • folders array of strings List of all folders in the container

  • egressBillTo string The value of the egressBillTo property, inherited from the project that the container was associated with at the time of the container's creation

  • fileUploadParameters mapping Information about what part sizes and numbers should be used to upload files (via /file-xxxx/upload) in this container. See the Limits on Parts section of the Files API for more information about interpreting these values. Mapping with the following key/values:

    • emptyLastPartAllowed boolean If true, then the minimum number of parts is 1 and the part with the highest index may contain 0 bytes. If false, then the minimum number of parts is 0 and the part with the highest index must contain at least 1 byte. Note that all parts other than the part with the highest index must still have the minimum size given by minimumPartSize. (If true, the client can upload a 0-byte file by invoking /file-xxxx/upload once with a part of size 0. If false, the client can upload a 0-byte file by not invoking /file-xxxx/upload at all.)

    • minimumPartSize int Minimum part size, in bytes, that applies to all parts except the part with the highest index. (Clients may assume that if emptyLastPartAllowed is false, then minimumPartSize will be at least 1; that is, the constraint on the last part is no stronger than the constraint on previous parts.)

    • maximumPartSize int Maximum part size, in bytes

    • maximumNumParts int The maximum number of parts that may be uploaded (also equal to the largest permissible part index)

    • maximumFileSize int The maximum size of the file, in bytes
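
As an illustration of how a client might use these values, here is a minimal sketch (with hypothetical parameter values and a hypothetical plan_parts helper) that validates a chosen part size and computes the resulting part count:

```python
import math

# Hypothetical values shaped like the fileUploadParameters mapping above.
params = {
    "emptyLastPartAllowed": True,
    "minimumPartSize": 5242880,        # 5 MiB
    "maximumPartSize": 5368709120,     # 5 GiB
    "maximumNumParts": 10000,
    "maximumFileSize": 5497558138880,  # 5 TiB
}

def plan_parts(file_size, part_size, p=params):
    """Validate a chosen part size and return the resulting number of parts."""
    if file_size > p["maximumFileSize"]:
        raise ValueError("file exceeds maximumFileSize")
    if not p["minimumPartSize"] <= part_size <= p["maximumPartSize"]:
        raise ValueError("part size outside [minimumPartSize, maximumPartSize]")
    # Every part except the last is exactly part_size bytes, so it meets
    # minimumPartSize by construction; the last part may be smaller (and
    # may be empty only if emptyLastPartAllowed is true).
    num_parts = max(1, math.ceil(file_size / part_size))
    if num_parts > p["maximumNumParts"]:
        raise ValueError("part size too small: exceeds maximumNumParts")
    return num_parts

print(plan_parts(10**9, 64 * 1024 * 1024))  # 15 parts of at most 64 MiB
```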

Errors

  • ResourceNotFound (the specified container does not exist)

  • InvalidInput (the input is not a hash, or folders (if provided) is not a boolean)

  • PermissionDenied (VIEW access required)
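
In client code these errors surface as exceptions; for example, with dxpy (placeholder container ID):

```python
import dxpy
from dxpy.exceptions import ResourceNotFound, PermissionDenied

try:
    desc = dxpy.api.container_describe("container-xxxx")
except ResourceNotFound:
    print("The specified container does not exist.")
except PermissionDenied:
    print("VIEW access to the container is required.")
```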

