Ingesting Data

Overview of common use cases for data ingestion.

An Apollo license is required to use the features described on this page. Contact DNAnexus Sales for more information.

Data Ingestion

In Apollo, data ingestion is the process by which data is transformed and stored, an Apollo Dataset is created, and the data is made available to the end user for scalable, repeatable, reproducible data consumption. Data ingestion loads data into the Apollo database, which is backed by Parquet. When paired with a Spark-based analysis framework, this combination supports analysis scalability and performance at population scale, often involving data that represents hundreds of thousands, or even millions, of participants. Once the data has been ingested and a Dataset created, the data can quickly and repeatedly be used with various Platform tools, such as the Cohort Browser, dxdata, and other Dataset-enabled apps, applets, and workflows, for rapid, delightful, and exceptionally scalable analysis.
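
As a minimal sketch of what that consumption looks like, an ingested Dataset can be queried with dxdata from a DXJupyterLab notebook. The record ID, entity name, and field names below are placeholders for illustration, not part of any real Dataset:

    import dxdata

    # Load the Dataset descriptor by record ID (placeholder ID shown).
    dataset = dxdata.load_dataset(id="record-XXXX")

    # Entities and fields mirror the data dictionary used at ingestion.
    participant = dataset["participant"]

    # Retrieve selected fields as a Spark DataFrame for scalable analysis.
    df = participant.retrieve_fields(
        names=["participant_id", "age", "sex"],
        engine=dxdata.connect(),
    )
    df.show(5)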

Phenotypic / Clinical Data Ingestion

Phenotypic data generally refers to any data related to an individual's observable traits. The “individual” may be a participant, a sample, a project, or any desired primary focal point of a Dataset. Phenotypic data may cover a wide range, from determinants, status, and measures of health to documentation of care delivery, such as clinical data, general practitioner (GP) notes, or even telemetrics. It may also contain molecular biomarker data converted to a phenotypic style for easier analysis and categorization. As Apollo has a bring-your-own-schema structure, phenotypic data ingestion can support most data structures with single paths from the main entity to other entities (no circular references).

Ingesting a Novel Small Dataset

Small datasets are datasets with a high degree of quality and predictability, with only a few logical entities that have fewer than a hundred features (columns) and usually no more than a few hundred thousand examples (rows) each. These datasets can represent the results of some analysis, a sample of a larger dataset, or simply data of limited availability.

This type of dataset is a great way to get familiar with the data ingestion tools before moving on to a larger dataset, as managing, prepping, and ingesting it can be done all at once.

For a small dataset, the Data Model Loader application can be used to ingest the data files along with a data dictionary and optional codings. This ensures that the data is properly ingested into the database and that a Dataset is created, so the data can then be used with the Cohort Browser and various apps, and is available in a structured manner through dxdata for use in Jupyter or other command line environments.
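
For instance, such a run could be launched programmatically with dxpy, as sketched below. The app name and input field names here are illustrative placeholders, not the app's actual input spec; see the Data Model Loader documentation for the real interface:

    import dxpy

    # Look up the app by name (placeholder name for illustration).
    loader = dxpy.DXApp(name="data_model_loader")

    # Input field names are hypothetical; consult the app's documented spec.
    job = loader.run({
        "data_files": [dxpy.dxlink("file-AAAA")],
        "data_dictionary": dxpy.dxlink("file-BBBB"),
        "codings": dxpy.dxlink("file-CCCC"),
        "dataset_name": "my_small_dataset",
    })
    print("Launched ingestion job:", job.get_id())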

Ingesting a Novel Large Dataset

Large datasets are datasets of varying quality that span many logical entities, can have hundreds or thousands of features (columns), and can have millions of examples (rows) in each entity. These datasets can include extracts of the following:

  • EHR data

  • biobank data

  • large clinical datasets

  • core company data

  • other large, mature datasets

Datasets of this size may conform to ontologies such as OMOP, SNOMED, or MedDRA, or be predictably structured, such as UK Biobank data. These datasets often require greater data engineering consideration to outline the data structures and logical entities, and can require harmonization or cleansing before the ingestion process begins.

Once the data is cleansed and structured, the Data Model Loader application can be used to ingest the data files along with a data dictionary and optional codings. An incremental ingestion strategy is recommended to ensure iterative success and easier troubleshooting should issues arise. For ingestions of this magnitude, customers often rely on help from the DNAnexus Professional Services team to ensure an optimal experience.
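
One way to structure such an incremental run, sketched with the same hypothetical dxpy pattern as above (app and input names remain placeholders), is to ingest one logical entity at a time and validate each job before launching the next:

    import dxpy

    # Placeholder file IDs, one delimited file per logical entity.
    entity_files = {
        "participant": "file-AAAA",
        "visit": "file-BBBB",
        "measurement": "file-CCCC",
    }

    loader = dxpy.DXApp(name="data_model_loader")  # placeholder name
    for entity, file_id in entity_files.items():
        job = loader.run({
            "data_file": dxpy.dxlink(file_id),  # hypothetical input name
        })
        # Wait and verify before ingesting the next entity, so any
        # failure is isolated to a single, small step.
        job.wait_on_done()
        print(entity, "ingested:", job.get_id())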

Large or Technical Clinical Data Additions

When the data generated becomes too complex (e.g., multi-Entity data, data types requiring custom coding, or extremely wide new Entities), or if large amounts of new data become available, the Dataset Extender app may no longer provide enough control for extending your Apollo Dataset. The new data being added may also contain multiple Entities' worth of data, relating either to the main Entity or to an existing secondary Entity. To add this data to an existing Dataset, ingest the new data as if it were a novel Dataset using the Data Model Loader, and then use the Clinical Dataset Merger to link the new clinical data to the existing Dataset. The newly generated Dataset will contain all of the original data and the new Entities, all in the same Dataset for use with the Cohort Browser and various apps, and all of the data is available in a structured manner through dxdata for use in Jupyter.
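
Sketched with the same placeholder conventions as the examples above, the linking step might look like the following; the app name, input names, and record IDs are all hypothetical:

    import dxpy

    # Link a newly ingested Dataset to an existing one (placeholder
    # app name, input names, and record IDs throughout).
    merger = dxpy.DXApp(name="clinical_dataset_merger")
    job = merger.run({
        "existing_dataset": dxpy.dxlink("record-EXISTING"),
        "new_dataset": dxpy.dxlink("record-NEWLY-INGESTED"),
    })
    job.wait_on_done()
    print("Merged dataset job:", job.get_id())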

Minor Extensions of Existing Datasets

Through the process of translational research, new data can become available or be generated. To facilitate smoother usage of the data, the user may wish to append it to an existing dataset for further use. This type of data is usually representative of a single entity (or may be an extension of an existing ingested entity) and consists of no more than a few hundred features (columns) and no more than a few million examples (rows). To extend an existing dataset, the Dataset Extender app can be used to rapidly ingest delimited files and append them to the existing dataset with minimal configuration, for use with the Cohort Browser and various apps; the data is also available in a structured manner through dxdata for use in Jupyter or other command line environments.
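
A correspondingly minimal sketch, again with placeholder app and input names rather than the app's documented spec:

    import dxpy

    # Append a delimited file to an existing Dataset (placeholder names).
    job = dxpy.DXApp(name="dataset_extender").run({
        "dataset": dxpy.dxlink("record-EXISTING"),
        "input_file": dxpy.dxlink("file-NEWDATA"),
    })
    print("Extension job:", job.get_id())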

Molecular (Assay) Data Ingestion

Molecular or assay data refers to the qualitative and/or quantitative representation of molecular features. For example, single nucleotide polymorphisms (SNPs) derived from whole exome sequencing (WES) of germline DNA, or bulk mRNA transcript expression counts derived from RNA-seq of tissue samples, are two possible types of data. Assay data tends to be well defined by the community and often has standardized data structures and formats. Given this defined nature, we provide explicit support for the ingestion of commonly used assay types, for stand-alone use in a novel Dataset and/or integration with existing Datasets, to optimize data organization and query performance for downstream analysis. Datasets may contain zero, one, or many assay instances, and assays may be of the same type or of different types. Representations of various assay types are provided through the following assay models.

Genetic Variation Assay Model

The “Genetic Variant” assay model provides support for genetic variant (SNP) resolution at the sample level. Population-level summaries are provided through the Cohort Browser for filter building and cohort validation. During ingestion, homozygous reference variants are intentionally filtered out to focus on non-reference variants, which are annotated with structural and functional information. Population-scale SNP arrays, whole exome sequencing (WES), and even whole genome sequencing (WGS) data are most commonly ingested into this format. Assistance from the DNAnexus Professional Services team is currently required for setting up this type of assay Dataset.

Molecular Expression Assay Model

The “Molecular Expression” assay model provides support for the quantitative assessment of multiple features per sample. An example of this could be expression counts for all mRNA transcripts for each individual's liver tissue sample in a patient population. Typically, input for this model is a matrix of counts, where column headers are the individual sample IDs and row names are the respective feature IDs. For a detailed explanation of the model, as well as accepted inputs and examples of how to ingest data using the model, please refer to the Molecular Expression Assay Loader application documentation.
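
As an illustration of that expected shape, the following builds a tiny counts matrix with pandas; the sample IDs, transcript IDs, and counts are all made up:

    import pandas as pd

    # Column headers are sample IDs; row names are feature IDs.
    # All values below are invented for illustration.
    counts = pd.DataFrame(
        {
            "sample_001": [523, 0, 42],
            "sample_002": [611, 3, 57],
        },
        index=["ENST00000332831", "ENST00000437963", "ENST00000618323"],
    )
    counts.index.name = "feature_id"
    counts.to_csv("expression_counts.csv")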

