Dataset Extender

Learn to use Dataset Extender, which allows you to expand a core Apollo dataset, then access the newly added data.


An Apollo license is required to use Dataset Extender on the DNAnexus Platform. Org approval may also be required. Contact DNAnexus Sales for more information.

Overview

The Dataset Extender is an application meant to help expand your core dataset so that you and the entire team can access newly generated data. It is a lightweight app focused on quickly expanding core datasets with newly generated or acquired data that is to be shared with collaborators.

Expansion of a core, controlled dataset is meant to be performed using the Data Model Loader, which allows for greater control over how the system interprets the data.

  • The application is primarily intended for ingesting analysis results or newly derived phenotypes.

  • The application allows the user to ingest raw data and have the system automatically type-cast it, build categorical codings, and link it with the core data, even across multiple datasets.

  • The results of running the application are:

    • The new data is ingested into the Apollo database.

    • A new dataset is created with access to previously ingested data and the newly extended datasets.

  • The application is not meant for permanent expansion of core data, since it offers limited configuration over how data is ingested; rather, it is intended to help grow a team's dataset with minimal effort.

    • Note that each run of the Dataset Extender app creates a new raw table, so heavy use on the same growing dataset can lead to degraded performance.

Using the Dataset Extender App

Launching the App

To launch the Dataset Extender app, enter this command via the command line:

dx run dataset-extender
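
To review the full list of input fields and their exact names before launching, you can also print the app's help text (standard dx run behavior):

dx run dataset-extender -h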

Inputs

The Dataset Extender app requires the following inputs (an example invocation is shown after the input lists below):

  • Source Data - a delimited (CSV, TSV) or gzipped delimited file containing the data with which to extend the dataset. The file can be no larger than 400 columns by 10,000,000 rows and must include a header row.

  • Target Dataset Record - the dataset to be extended.

  • Instance Type - the default should be sufficient for most small to medium datasets; if the input files are large, increase the instance size so the process completes efficiently.

Additional Optional Inputs are:

  • Output Dataset Name - the name of the dataset that will be created, containing the newly ingested Source Data.

  • Database Name - the name of the database to use (or create) if the data is to be written to a database different from the main database used in the Target Dataset Record. Note that if this is left blank and the main database used in the Target Dataset Record is not writable, a new database will be automatically created and named db_<epoch>_<uuid>.

  • Table Name - the table name to which the Source Data will be written. If left blank, the basename of the Source Data file is used as the table name. If provided, ensure that the table name is unique within the database.

  • Target Entity Name - the entity to which the Source Data will be linked. If left blank, the data will be linked to the main entity of the dataset. The entity being joined to must contain a local (or global) unique key.

  • Join Relationship - how the Source Data joins to the Target Entity. By default this is automatic, but a relationship of one-to-one or many-to-one can be forced.

  • Build New Entity - when set to False (default), the Target Entity is extended. If the Source Data does not have a one-to-one relationship with the Target Entity, this will lead to an error. When set to True, a new entity will be added to the dataset.

  • New Entity Name - the logical entity name of the Source Data. If left empty, the entity name will be <target_entity_name>_extended_<epoch>. Note that the entity name must be unique for the dataset.

  • Source Data Delimiter - how the Source Data is delimited. By default this is set to comma (",") but this can be adjusted to tab ("\t").

  • Folder Path - the folder path shown in the Cohort Browser field explorer. If left empty, the folder path will be the New Entity Name. Note that you can nest the new fields being created in an existing folder using the same notation used for data loading (e.g. "Main Folder>Nested Folder").

  • Infer Categorical Code - a setting that converts a string field to a string categorical. This is the maximum number of distinct codes to treat as categorical. If this value is set to 0, no coding will be inferred.

  • See the app documentation for further granular configuration options.
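
For example, a non-interactive run could look like the sketch below. The exact input field names are not listed on this page, so the names used here (source_data, dataset, output_dataset_name) and the file and record IDs are placeholders; confirm the real field names with dx run dataset-extender -h before running.

dx run dataset-extender \
  -isource_data=extension_data.csv \
  -idataset=record-XXXX \
  -ioutput_dataset_name=my_extended_dataset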

Process

  1. The Dataset Extender app loads the source data into a Spark table.

  2. The app configuration is used to automatically generate the data dictionary and coding information (if Infer Categorical Code is greater than 0).

  3. The app joins the input Target Dataset Record with the newly ingested data and generates a new dataset containing the combined information.

Outputs

  • Database - the ID of the database to which the Source Data was written.

  • Dataset - the dataset record created.

  • Logs - available in the project under .csv-loader/<job-id>-clusterlogs.tar.gz (see the retrieval example after this list).

    • Spark cluster logs - for advanced troubleshooting.
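
If you need the cluster logs for troubleshooting, they can be fetched with dx download and unpacked with tar; this is a sketch assuming the archive path shown above, so substitute your own job ID for the placeholder:

dx download ".csv-loader/<job-id>-clusterlogs.tar.gz"
tar -xzf "<job-id>-clusterlogs.tar.gz"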

Automatically Detected Data Types

  • Integer

  • Float

  • Date

    • yyyy-[m]m-[d]d

  • DateTime

    • yyyy-[m]m-[d]d

    • yyyy-[m]m-[d]d [h]h:[m]m:[s]s.[ms][ms][ms][us][us][us]

    • yyyy-[m]m-[d]dT[h]h:[m]m:[s]s.[ms][ms][ms][us][us][us]

    • yyyy-[m]m-[d]dT[h]h:[m]m:[s]s.[ms][ms][ms][us][us][us]Z

    • yyyy-[m]m-[d]dT[h]h:[m]m:[s]s.[ms][ms][ms][us][us][us]-[h]h:[m]m

    • yyyy-[m]m-[d]dT[h]h:[m]m:[s]s.[ms][ms][ms][us][us][us]+[h]h:[m]m

  • String Categorical

    • This is detected based on the Infer Categorical Code input.
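
As an illustration (column names are hypothetical), the columns of a small source file like the one below would typically be detected as string, integer, float, and date, with a low-cardinality string column such as smoking_status treated as string categorical when Infer Categorical Code is greater than 0 and at or above the number of distinct values:

sample_id,visit_count,bmi,visit_date,smoking_status
S001,3,24.7,2023-01-05,never
S002,1,31.2,2023-02-17,former
S003,5,22.9,2023-03-02,current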

Best Practices

  1. Ensure that your column headers are database friendly and do not include special characters or spaces (see the example after this list).

  2. When extending an entity, ensure that your column names are unique.

  3. Adjust the Infer Categorical Code based on the number of rows you are adding in your extension. If you are adding 500,000 rows, you likely want to set this value higher.

  4. For best performance, aggregate your planned extensions so that you add multiple columns at a time rather than running the app repeatedly and adding one column at a time.

  5. Ensure that the delimiter setting matches your file; the default is comma.

  6. When specifying a target entity, ensure you are using the entity title and not the display name shown in the Cohort Browser.
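
As a sketch of best practice 1 (file names here are hypothetical), the following rewrites the header row so that spaces become underscores before upload; extend the substitution to strip any other special characters your headers contain:

sed '1s/ /_/g' raw_extension.csv > extension_data.csv
dx upload extension_data.csv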
