DXJupyterLab is accessible to all users of the UK Biobank Research Analysis Platform and the Our Future Health Trusted Research Environment.
A license is required to access DXJupyterLab on the DNAnexus Platform. Contact DNAnexus Sales for more information.
Overview
The DXJupyterLab Spark Cluster app is a Spark application that runs a fully managed standalone Spark/Hadoop cluster. This cluster enables distributed data processing and analysis directly from within the JupyterLab application. In a JupyterLab session, you can interactively create and query DNAnexus databases or run any analysis on the Spark cluster.
In addition to the core JupyterLab features, the Spark cluster-enabled JupyterLab app allows you to:
Explore the available databases and get an overview of the available datasets
Perform analyses and visualizations directly on data available in the database
Create databases
Submit data analysis jobs to the Spark cluster
Check the general Overview for an introduction to DNAnexus JupyterLab products.
Running and Using DXJupyterLab Spark Cluster
The Quickstart page contains information on how to start a JupyterLab session and create notebooks on the DNAnexus platform. The References page has additional useful tips for using the environment.
Instantiating the Spark Context
Having created your notebook in the project, you can populate your first cells as shown below. It is good practice to instantiate your Spark context at the very beginning of your analyses.
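The following is a minimal sketch of such a first cell, assuming the standard PySpark API available in the app; the final query lists the databases visible to the session.
import pyspark

# Start the Spark context and create a Spark session from it
sc = pyspark.SparkContext()
spark = pyspark.sql.SparkSession(sc)

# List the databases visible to this session
spark.sql("SHOW DATABASES").show(truncate=False)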
You can inspect one of the returned databases by running the following (note that using the database name instead of the unique database name here will only return databases within the project scope):
db = "database_xxxx__brca_pheno"
spark.sql(f"SHOW TABLES FROM {db}").show(truncate=False)Creating databases
See below for an example of how to create and populate your own database.
# Create a database and a table, then insert and query a row
my_database = "my_database"
spark.sql("create database " + my_database + " location 'dnax://'")
spark.sql("create table " + my_database + ".foo (k string, v string) using parquet")
spark.sql("insert into table " + my_database + ".foo values ('1', '2')")
spark.sql("select * from " + my_database + ".foo").show()
You may separate each line of code into different cells to view the outputs iteratively.
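Once the table exists, you can also load it into a DataFrame for further analysis. This is a brief, illustrative sketch using the standard Spark API and the my_database.foo table created above:
# Load the table into a DataFrame and display its contents
df = spark.table(my_database + ".foo")
df.show()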
Using Hail
Hail is an open-source, scalable framework for exploring and analyzing genomic data. It is designed to run primarily on a Spark cluster and is included in DXJupyterLab Spark Cluster; it can be used when the app is run with the feature input set to HAIL (the default).
Initialize the Hail context when you begin using Hail. It is important to pass the previously started Spark context sc as an argument:
import hail as hl
hl.init(sc=sc)
We recommend continuing your exploration of Hail with the GWAS using Hail tutorial. For example:
# Download example data from 1k genomes project and inspect the matrix table
hl.utils.get_1kg('data/')
hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True)
mt = hl.read_matrix_table('data/1kg.mt')
mt.rows().select().show(5)
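As a brief continuation in the spirit of that tutorial, you could run a first quality-control step on the imported matrix table (the 0.97 call-rate threshold is taken from the tutorial and is only illustrative):
# Annotate samples with QC metrics and keep those with a high call rate
mt = hl.sample_qc(mt)
mt = mt.filter_cols(mt.sample_qc.call_rate >= 0.97)
mt.count()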
Using VEP with Hail
To use VEP (Ensembl Variant Effect Predictor) with Hail, select "Feature," then "HAIL-VEP," when launching Spark Cluster-Enabled DXJupyterLab via the CLI.
VEP can predict the functional effects of genomic variants on genes, transcripts, protein sequences, and regulatory regions. The LoF (loss-of-function) plugin is included as well, and it is used when the VEP configuration includes the LoF plugin, as shown in the configuration file below.
# Annotate hail matrix table with VEP and LoF using configuration specified in the
# vep-GRCh38.json file in the project you're working in.
#
# Annotation process relies on "dnanexus/dxjupyterlab-vep" docker container
# as well as VEP and LoF resources that are pre-installed on every Spark node when
# HAIL-VEP feature is selected.
annotated_mt = hl.vep(mt, "file:///mnt/project/vep-GRCh38.json")

% cat /mnt/project/vep-GRCh38.json
{
  "command": [
    "docker", "run", "-i",
    "-v", "/cluster/vep:/root/.vep",
    "dnanexus/dxjupyterlab-vep",
    "./vep", "--format", "vcf", "__OUTPUT_FORMAT_FLAG__",
    "--everything", "--allele_number", "--no_stats",
    "--cache", "--offline", "--minimal",
    "--assembly", "GRCh38",
    "-o", "STDOUT",
    "--check_existing",
    "--dir_cache", "/root/.vep/",
    "--fasta", "/root/.vep/homo_sapiens/109_GRCh38/Homo_sapiens.GRCh38.dna.toplevel.fa.gz",
    "--plugin", "LoF,loftee_path:/root/.vep/Plugins/loftee,human_ancestor_fa:/root/.vep/human_ancestor.fa,conservation_file:/root/.vep/loftee.sql,gerp_bigwig:/root/.vep/gerp_conservation_scores.homo_sapiens.GRCh38.bw"
  ],
"env":{"PERL5LIB":"/root/.vep/Plugins"}, "vep_json_schema": "Struct{assembly_name:String,allele_string:String,ancestral:String,colocated_variants:Array[Struct{aa_allele:String,aa_maf:Float64,afr_allele:String,afr_maf:Float64,allele_string:String,amr_allele: String,amr_maf:Float64,clin_sig:Array[String],end:Int32,eas_allele:String,eas_maf:Float64,ea_allele:String,ea_maf:Float64,eur_allele:String,eur_maf:Float64,exac_adj_allele:String,exac_adj_maf:Float64,exac_allele: String,exac_afr_allele:String,exac_afr_maf:Float64,exac_amr_allele:String,exac_amr_maf:Float64,exac_eas_allele:String,exac_eas_maf:Float64,exac_fin_allele:String,exac_fin_maf:Float64,exac_maf:Float64,exac_nfe_allele: String,exac_nfe_maf:Float64,exac_oth_allele:String,exac_oth_maf:Float64,exac_sas_allele:String,exac_sas_maf:Float64,id:String,minor_allele:String,minor_allele_freq:Float64,phenotype_or_disease:Int32,pubmed: Array[Int32],sas_allele:String,sas_maf:Float64,somatic:Int32,start:Int32,strand:Int32}],context:String,end:Int32,id:String,input:String,intergenic_consequences:Array[Struct{allele_num:Int32,consequence_terms: Array[String],impact:String,minimised:Int32,variant_allele:String}],most_severe_consequence:String,motif_feature_consequences:Array[Struct{allele_num:Int32,consequence_terms:Array[String],high_inf_pos:String,impact: String,minimised:Int32,motif_feature_id:String,motif_name:String,motif_pos:Int32,motif_score_change:Float64,strand:Int32,variant_allele:String}],regulatory_feature_consequences:Array[Struct{allele_num:Int32,biotype: String,consequence_terms:Array[String],impact:String,minimised:Int32,regulatory_feature_id:String,variant_allele:String}],seq_region_name:String,start:Int32,strand:Int32,transcript_consequences: Array[Struct{allele_num:Int32,amino_acids:String,appris:String,biotype:String,canonical:Int32,ccds:String,cdna_start:Int32,cdna_end:Int32,cds_end:Int32,cds_start:Int32,codons:String,consequence_terms:Array[String], distance:Int32,domains:Array[Struct{db:String,name:String}],exon:String,gene_id:String,gene_pheno:Int32,gene_symbol:String,gene_symbol_source:String,hgnc_id:String,hgvsc:String,hgvsp:String,hgvs_offset:Int32,impact: String,intron:String,lof:String,lof_flags:String,lof_filter:String,lof_info:String,minimised:Int32,polyphen_prediction:String,polyphen_score:Float64,protein_end:Int32,protein_start:Int32,protein_id:String, sift_prediction:String,sift_score:Float64,strand:Int32,swissprot:String,transcript_id:String,trembl:String,tsl:Int32,uniparc:String,variant_allele:String}],variant_class:String}"
}
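After annotation, the VEP results are available as a row field on the matrix table. As a small illustration (field names follow the vep_json_schema above):
# Peek at the most severe consequence assigned to the first few variants
annotated_mt.vep.most_severe_consequence.show(5)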
Behind the Scenes
The Spark cluster app is a Docker-based app which runs the JupyterLab server in a Docker container.
The JupyterLab instance runs on port 443. Because it is an HTTPS app, you can bring up the JupyterLab environment in a web browser using the URL https://job-xxxx.dnanexus.cloud, where job-xxxx is the ID of the job that runs the app.
The script run when the container is instantiated, /opt/start_jupyterlab.sh, configures the environment and starts the server needed to connect to the Spark cluster. The environment variables it needs are set by sourcing two scripts that are bind-mounted into the container.
The option --network host is used when starting Docker in order to remove the network isolation between the host and the Docker container, which allows the container to bind to the host's network and access Spark's master port directly.
Accessing AWS S3 Buckets
S3 buckets can have private or public access. Either the s3 or the s3a scheme can be used to access S3 buckets. Note that the s3 scheme is automatically aliased to s3a in all Apollo Spark Clusters.
Public Bucket Access
To access public S3 buckets, you do not need S3 credentials. The example below shows how to access the public 1000 Genomes bucket in a JupyterLab notebook:
# read csv from public bucket
df = spark.read.options(delimiter='\t', header='True', inferSchema='True').csv("s3://1000genomes/20131219.populations.tsv")
df.select(df.columns[:4]).show(10, False)
When the above is run in a notebook, the first four columns of the first ten rows of the file are displayed.
Private Bucket Access
To access private buckets, see the example code below. The example assumes that a Spark session has been created as shown above.
# access private data in S3 by first unsetting the default credentials provider
sc._jsc.hadoopConfiguration().set('fs.s3a.aws.credentials.provider', '')
# replace "redacted" with your keys
sc._jsc.hadoopConfiguration().set('fs.s3a.access.key', 'redacted')
sc._jsc.hadoopConfiguration().set('fs.s3a.secret.key', 'redacted')
df = spark.read.csv("s3a://your_private_bucket/your_path_to_csv")
df.select(df.columns[:5]).show(10, False)