Table Exporter

Learn to use Table Exporter to extract data from an Apollo Dataset, cohort, or dashboard into a delimited file for analysis or download.

An Apollo license is required to use Table Exporter on the DNAnexus Platform. Org approval may also be required. Contact DNAnexus Sales for more information.

Using the Table Exporter App

Launching the App

To launch the app, run the following command from the command line:

dx run table-exporter

Inputs

The Table Exporter app requires the following inputs:

  • dataset_or_cohort_or_dashboard : The Apollo Dataset, cohort, or dashboard to export from. This input must use the v3.0 Apollo Dataset format.

  • output : The name of the file to create. The file extension is added automatically based on the selected output format.

Additional optional inputs are listed below; an example command follows the list:

  • output_format : [choices: "CSV", "TSV"] [default: "CSV"] The file format to use for the generated file:

    • CSV - Comma-separated values

    • TSV - Tab-separated values

  • coding_option : [choices: "REPLACE", "RAW", "EXCLUDE"] [default: "REPLACE"] How to handle coded fields:

    • REPLACE - If a coding value exists, replace the raw value with the code.

    • RAW - Export the raw values of the field.

    • EXCLUDE - If a coding value exists, do not export the value. Most commonly used with sparse fields.

  • header_style : [choices: "FIELD-NAME", "FIELD-TITLE", "UKB-FORMAT", "NONE"] [default: "FIELD-NAME"] The format to use for the headers in the exported file:

    • FIELD-NAME - Use the name of the field in the database.

    • FIELD-TITLE - Use the title of the field as shown in the Cohort Browser.

    • NONE - Do not include headers.

    • UKB-FORMAT - This assumes the field name follows the pattern p<field id>_i<instance>_a<array>. Names are then converted into UK Biobank style - for example, field p123_i0_a0 becomes 123-0.0.

  • entity : The name of the entity you would like to export. If no entity is specified, the primary entity will be exported.

  • field_names_file_txt : A file containing the names of the fields to export, one field name per line. Only one of the three field-selection inputs should be provided: Field Names, Field Titles, or File Containing Field Names. If more than one of these is specified, the app will fail with an error message. If all three are left blank and only an entity is specified, all fields in the entity will be exported.

  • field_names : The names of the fields to export. To specify multiple field names in the UI, enter a comma-separated string (e.g. field1,field2,field3...). To specify multiple field names in the CLI, use multiple assignments to the input argument -ifield_names, like so: -ifield_names="field1" -ifield_names="field2" -ifield_names="field3". If an entity is specified and field names are not, all fields from the specified entity will be exported.

  • field_titles : The titles of the fields to export. Note that the listed titles should be the field titles as they are shown in the Cohort Browser when selecting a given field. When running the app through the UI, field titles containing commas must have a backslash (\) escape character before the comma, and multiple title entries should be delimited by a comma (e.g. field1,field2,fielda\,b...). The escape character is not needed when entering comma-containing field titles in the CLI. To specify multiple field titles in the CLI, use multiple assignments to the input argument -ifield_titles, like so: -ifield_titles="field1" -ifield_titles="field2" -ifield_titles="fielda,b". If an entity is specified and field titles are not, all fields from the specified entity will be exported.

  • cohort_table_entity_names : The cohort tables, based on entity names, that you would like to export. If no names are specified, all of the cohort tables will be exported.

  • cohort_table_entity_titles : The cohort tables, based on entity titles, that you would like to export. If no titles are specified, all of the cohort tables will be exported.
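
As a sketch, the following command exports two fields from a dataset record via the CLI. The record ID, entity name, and field names are placeholders, not real values:

dx run table-exporter \
  -idataset_or_cohort_or_dashboard=record-XXXXXXXXXXXXXXXXXXXXXXXX \
  -ientity=participant \
  -ifield_names="field1" \
  -ifield_names="field2" \
  -ioutput=my_export \
  -ioutput_format=TSV \
  -iheader_style=FIELD-TITLE \
  -icoding_option=RAW

This requests TSV output with field titles as headers and raw coding values exported.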

Processes

  • If only the Entity field is specified and neither Field Titles nor Field Names is specified, all fields in the specified entity will be exported.

  • If the Entity and either Field Titles or Field Names are specified, only the specified fields will be exported.

  • If neither Entity nor Field Titles/Field Names is specified (see the example after this list), then:

    • If the input is a dashboard or a cohort, the columns specified in the cohort table are used to generate the exported file. One output file for every entity in the dashboard will be created.

    • If the input is a Dataset and if the Dataset has a default dashboard, the columns specified in the cohort table of the default dashboard are used to generate the exported file. One output file for every entity in the dashboard will be created.

    • If the input is a Dataset and if the Dataset doesn’t have a default dashboard, the primary entity and related fields are used to generate the exported file.
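
For example, exporting a cohort or dashboard record without specifying an entity or any fields uses the cohort table columns and produces one output file per entity. The record ID below is a placeholder:

dx run table-exporter \
  -idataset_or_cohort_or_dashboard=record-XXXXXXXXXXXXXXXXXXXXXXXX \
  -ioutput=cohort_export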

Outputs

  • csv or tsv: The generated CSV or TSV file

    • For cohort_table_entity_names and cohort_table_entity_titles inputs, with cohorts/dashboards containing a ted_container, the file naming convention is <output>_<entity_name>.<file_format>

    • For non-ted_container cohorts/dashboards, the naming convention is <output>.<file_format>

    • For dataset inputs, the naming convention is <output>.<file_format>
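
As an illustration, assuming an output name of my_export, CSV format, and a ted_container cohort with participant and sample entities (hypothetical entity names), the app would produce my_export_participant.csv and my_export_sample.csv; a dataset input with the same output name would produce a single my_export.csv.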

Best Practices

  1. For extremely large entities (thousands of columns and hundreds of thousands of rows), using the "REPLACE" coding option will significantly increase runtime and cost. In those instances, it is recommended to export without coding replacement.

  2. If you are exporting from a dataset whose databases are in a controlled project where the DB UI View Only permission is set, the app must be run in the project containing the restricted database in order to execute successfully.

  3. If you encounter a failed job as the result of insufficient disk space, rerun the job with additional compute resources, i.e., an instance type with more memory and/or storage than the default (see the example command below).
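
As a sketch, one way to do this from the CLI is to clone the failed job and override its instance type. The job ID is a placeholder, and the instance type shown is only an example; choose one available in your region and sized for your data:

dx run --clone job-XXXXXXXXXXXXXXXXXXXXXXXX --instance-type mem3_ssd1_v2_x16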
