


Spark Cluster-Enabled DXJupyterLab

Learn to use the DXJupyterLab Spark Cluster app.


DXJupyterLab is accessible to all users of the UK Biobank Research Analysis Platform and the Our Future Health Trusted Research Environment.

A license is required to access DXJupyterLab on the DNAnexus Platform. Contact DNAnexus Sales for more information.

Overview

The DXJupyterLab Spark Cluster App is a Spark application that runs a fully managed, standalone Spark/Hadoop cluster. This cluster enables distributed data processing and analysis directly within the JupyterLab application. In a JupyterLab session, you can interactively create and query DNAnexus databases or run any analysis on the Spark cluster.

In addition to the core JupyterLab features, the Spark cluster-enabled JupyterLab app allows you to:

  • Explore the available databases and get an overview of the available datasets

  • Perform analyses and visualizations directly on data available in the database

  • Create databases

  • Submit data analysis jobs to the Spark cluster

Check the general Overview page for an introduction to DNAnexus JupyterLab products.

Running and Using DXJupyterLab Spark Cluster

The Quickstart page contains information on how to start a JupyterLab session and create notebooks on the DNAnexus Platform. The References page has additional useful tips for using the environment.

Instantiating the Spark Context

After creating your notebook in the project, you can populate your first cells as shown below. It is good practice to instantiate your Spark context at the very beginning of your analysis.

# Instantiate the Spark context and create a Spark session
import pyspark
sc = pyspark.SparkContext()
spark = pyspark.sql.SparkSession(sc)

Basic Operations on DNAnexus Databases

Exploring Existing Databases

To view the databases to which you have access in your current region and project context, run a cell with the following code:

spark.sql("show databases").show(truncate=False)

A sample output should be:

+------------------------------------------------------------+
|namespace                                                   |
+------------------------------------------------------------+
|database_xxxx__brca_pheno                                   |
|database_yyyy__gwas_vitamind_chr1                           |
|database_zzzz__meta_data                                    |
|database_tttt__genomics_180820                              |
+------------------------------------------------------------+

You can inspect one of the returned databases by running:

db = "database_xxxx__brca_pheno"
spark.sql(f"SHOW TABLES FROM {db}").show(truncate=False)

which should return an output similar to:

+------------------------------------+-----------+-----------+
|namespace                           |tableName  |isTemporary|
+------------------------------------+-----------+-----------+
|database_xxxx__brca_pheno           |cna        |false      |
|database_xxxx__brca_pheno           |methylation|false      |
|database_xxxx__brca_pheno           |mrna       |false      |
|database_xxxx__brca_pheno           |mutations  |false      |
|database_xxxx__brca_pheno           |patient    |false      |
|database_xxxx__brca_pheno           |sample     |false      |
+------------------------------------+-----------+-----------+
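Once you know the table names, you can inspect a table's schema or load it into a DataFrame for further analysis. The snippet below is an illustrative example using the patient table listed above:

# Inspect the schema of one of the tables, then load it as a DataFrame
spark.sql(f"DESCRIBE TABLE {db}.patient").show(truncate=False)
patients = spark.table(f"{db}.patient")
patients.show(5)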

To find a database in your current region that may be in a different project from your current one, run the following code:

spark.sql('show databases like "<project_id_pattern>:<database_name_pattern>"').show(truncate=False)
spark.sql('show databases like "project-*:<database_name>"').show(truncate=False)

A sample output should be:

+------------------------------------------------------------+
|namespace                                                   |
+------------------------------------------------------------+
|database_xxxx__brca_pheno                                   |
|database_yyyy__gwas_vitamind_chr1                           |
|database_zzzz__meta_data                                    |
|database_tttt__genomics_180820                              |
+------------------------------------------------------------+

You can inspect one of the returned databases by running the following (note that using the database name, rather than the unique database name, will only return databases within the project scope):

db = "database_xxxx__brca_pheno"
spark.sql(f"SHOW TABLES FROM {db}").show(truncate=False)

Creating Databases

See below for an example of how to create and populate your own database.

# Create a database
my_database = "my_database"
spark.sql("create database " + my_database + " location 'dnax://'")

# Create a table in the database and insert a row
spark.sql("create table " + my_database + ".foo (k string, v string) using parquet")
spark.sql("insert into table " + my_database + ".foo values ('1', '2')")

# Query the table
spark.sql("select * from " + my_database + ".foo").show()

You may place each line of code in a separate cell to view the outputs iteratively.
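If you prefer the DataFrame API in later cells, you can also read the new table back through the Spark session. A minimal example using the my_database.foo table created above:

# Load the table created above as a Spark DataFrame
df = spark.table(my_database + ".foo")
df.show()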

Using Hail

Hail is an open-source, scalable framework for exploring and analyzing genomic data. It is designed to run primarily on a Spark cluster and is available with DXJupyterLab Spark Cluster: it is included in the app and can be used when the app is run with the feature input set to HAIL (the default). We recommend continuing your exploration of Hail with the GWAS using Hail tutorial.

Initialize the Hail context when beginning to use Hail. It is important to pass the previously started Spark context sc as an argument:

import hail as hl
hl.init(sc=sc)
# Download example data from 1k genomes project and inspect the matrix table
hl.utils.get_1kg('data/')
hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True)
mt = hl.read_matrix_table('data/1kg.mt')
mt.rows().select().show(5)
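For example, the sketch below outlines a simple association test on the 1000 Genomes example data, following the pattern of the GWAS using Hail tutorial. The annotation file comes from the hl.utils.get_1kg download above; the phenotype and QC thresholds are illustrative assumptions:

# Illustrative GWAS sketch on the 1kg example data (assumes `mt` from the cells above)
annotations = hl.import_table('data/1kg_annotations.txt', impute=True).key_by('Sample')
mt = mt.annotate_cols(pheno=annotations[mt.s])

# Basic sample and variant QC (thresholds are illustrative)
mt = hl.sample_qc(mt)
mt = mt.filter_cols(mt.sample_qc.call_rate >= 0.97)
mt = hl.variant_qc(mt)
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)

# Linear regression of a phenotype against genotype dosage
gwas = hl.linear_regression_rows(
    y=mt.pheno.CaffeineConsumption,
    x=mt.GT.n_alt_alleles(),
    covariates=[1.0],
)
gwas.order_by(gwas.p_value).show(5)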

Using VEP with Hail

To use VEP (Ensembl Variant Effect Predictor) with Hail, select "Feature," then "HAIL-VEP" when launching Spark Cluster-Enabled DXJupyterLab via the CLI.

VEP can predict the functional effects of genomic variants on genes, transcripts, protein sequences, and regulatory regions. The LoF plugin is included as well, and is used when the VEP configuration includes it, as shown in the configuration file below.

# Annotate hail matrix table with VEP and LoF using configuration specified in the
# vep-GRCh38.json file in the project you're working in.
#
# Annotation process relies on "dnanexus/dxjupyterlab-vep" docker container 
# as well as VEP and LoF resources that are pre-installed on every Spark node when
# HAIL-VEP feature is selected. 
annotated_mt = hl.vep(mt, "file:///mnt/project/vep-GRCh38.json")
% cat /mnt/project/vep-GRCh38.json
{"command": [
     "docker", "run", "-i", "-v", "/cluster/vep:/root/.vep", "dnanexus/dxjupyterlab-vep",
     "./vep", "--format", "vcf", "__OUTPUT_FORMAT_FLAG__", "--everything", "--allele_number",
     "--no_stats", "--cache", "--offline", "--minimal", "--assembly", "GRCh38", "-o", "STDOUT",
     "--check_existing", "--dir_cache", "/root/.vep/",
     "--fasta", "/root/.vep/homo_sapiens/109_GRCh38/Homo_sapiens.GRCh38.dna.toplevel.fa.gz",
    "--plugin", "LoF,loftee_path:/root/.vep/Plugins/loftee,human_ancestor_fa:/root/.vep/human_ancestor.fa,conservation_file:/root/.vep/loftee.sql,gerp_bigwig:/root/.vep/gerp_conservation_scores.homo_sapiens.GRCh38.bw"],
  "env": {
      "PERL5LIB": "/root/.vep/Plugins"
  },
  "vep_json_schema": "Struct{assembly_name:String,allele_string:String,ancestral:String,colocated_variants:Array[Struct{aa_allele:String,aa_maf:Float64,afr_allele:String,afr_maf:Float64,allele_string:String,amr_allele: String,amr_maf:Float64,clin_sig:Array[String],end:Int32,eas_allele:String,eas_maf:Float64,ea_allele:String,ea_maf:Float64,eur_allele:String,eur_maf:Float64,exac_adj_allele:String,exac_adj_maf:Float64,exac_allele:      String,exac_afr_allele:String,exac_afr_maf:Float64,exac_amr_allele:String,exac_amr_maf:Float64,exac_eas_allele:String,exac_eas_maf:Float64,exac_fin_allele:String,exac_fin_maf:Float64,exac_maf:Float64,exac_nfe_allele:  String,exac_nfe_maf:Float64,exac_oth_allele:String,exac_oth_maf:Float64,exac_sas_allele:String,exac_sas_maf:Float64,id:String,minor_allele:String,minor_allele_freq:Float64,phenotype_or_disease:Int32,pubmed:            Array[Int32],sas_allele:String,sas_maf:Float64,somatic:Int32,start:Int32,strand:Int32}],context:String,end:Int32,id:String,input:String,intergenic_consequences:Array[Struct{allele_num:Int32,consequence_terms:          Array[String],impact:String,minimised:Int32,variant_allele:String}],most_severe_consequence:String,motif_feature_consequences:Array[Struct{allele_num:Int32,consequence_terms:Array[String],high_inf_pos:String,impact:   String,minimised:Int32,motif_feature_id:String,motif_name:String,motif_pos:Int32,motif_score_change:Float64,strand:Int32,variant_allele:String}],regulatory_feature_consequences:Array[Struct{allele_num:Int32,biotype:   String,consequence_terms:Array[String],impact:String,minimised:Int32,regulatory_feature_id:String,variant_allele:String}],seq_region_name:String,start:Int32,strand:Int32,transcript_consequences:                        Array[Struct{allele_num:Int32,amino_acids:String,appris:String,biotype:String,canonical:Int32,ccds:String,cdna_start:Int32,cdna_end:Int32,cds_end:Int32,cds_start:Int32,codons:String,consequence_terms:Array[String],    distance:Int32,domains:Array[Struct{db:String,name:String}],exon:String,gene_id:String,gene_pheno:Int32,gene_symbol:String,gene_symbol_source:String,hgnc_id:String,hgvsc:String,hgvsp:String,hgvs_offset:Int32,impact:   String,intron:String,lof:String,lof_flags:String,lof_filter:String,lof_info:String,minimised:Int32,polyphen_prediction:String,polyphen_score:Float64,protein_end:Int32,protein_start:Int32,protein_id:String,             sift_prediction:String,sift_score:Float64,strand:Int32,swissprot:String,transcript_id:String,trembl:String,tsl:Int32,uniparc:String,variant_allele:String}],variant_class:String}"
 }
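After annotation, the VEP results are attached to the matrix table as a row field (named vep by default). For example, an illustrative way to peek at the most severe consequence per variant, using the schema shown above:

# Inspect a few of the VEP annotations added by hl.vep
annotated_mt.vep.most_severe_consequence.show(5)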

Behind the Scenes

The Spark cluster app is Docker-based and runs the JupyterLab server in a Docker container.

The JupyterLab instance runs on port 443. Because it is an HTTPS app, you can bring up the JupyterLab environment in a web browser using the URL https://job-xxxx.dnanexus.cloud, where job-xxxx is the ID of the job that runs the app.

The script run when the container is instantiated, /opt/start_jupyterlab.sh, configures the environment and starts the server needed to connect to the Spark cluster. The required environment variables are set by sourcing two scripts that are bind-mounted into the container:

source /home/dnanexus/environment
source /cluster/dx-cluster.environment

The default user in the container is root.

The --network host option is used when starting Docker in order to remove the network isolation between the host and the Docker container, which allows the container to bind to the host's network and access Spark's master port directly.

Accessing AWS S3 Buckets

S3 buckets can have private or public access. Either the s3 or the s3a scheme can be used to access S3 buckets. Note that the s3 scheme is automatically aliased to s3a in all Apollo Spark Clusters.

Public Bucket Access

To access public S3 buckets, you do not need S3 credentials. The example below shows how to access the public 1000 Genomes bucket in a JupyterLab notebook:

# Read a tab-delimited file from the public bucket
df = spark.read.options(delimiter='\t', header='True', inferSchema='True').csv("s3://1000genomes/20131219.populations.tsv")
df.select(df.columns[:4]).show(10, False)

When the above is run in a notebook, the first four columns of the populations table are displayed.

Private Bucket Access

To access private buckets, see the example code below. The example assumes that a Spark session has been created as shown above.

# Access private data in S3 by first unsetting the default credentials provider
sc._jsc.hadoopConfiguration().set('fs.s3a.aws.credentials.provider', '')

# Replace "redacted" with your keys
sc._jsc.hadoopConfiguration().set('fs.s3a.access.key', 'redacted')
sc._jsc.hadoopConfiguration().set('fs.s3a.secret.key', 'redacted')

# Read a CSV file from the private bucket
df = spark.read.csv("s3a://your_private_bucket/your_path_to_csv")
df.select(df.columns[:5]).show(10, False)
