Spark Cluster Enabled DXJupyterLab


The DXJupyterLab Spark Cluster app runs a fully managed, standalone Spark/Hadoop cluster, enabling distributed data processing and analysis directly within the JupyterLab application. In a JupyterLab session, you can interactively create and query DNAnexus databases or run any analysis on the Spark cluster. Access to this app is provided with the DNAnexus Apollo framework.

In addition to the core JupyterLab features, the Spark cluster-enabled JupyterLab app allows you to:

  • explore the available databases and get an overview of the available datasets

  • perform analyses and visualizations directly on data available in the database

  • create databases

  • submit data analysis jobs to the Spark cluster

Check the general Overview for an introduction to DNAnexus JupyterLab products.

NOTE: A license is required to access this app. Please contact DNAnexus for more information.

Running and using DXJupyterLab Spark Cluster

The Quickstart page contains information on how to start a JupyterLab session and create notebooks on the DNAnexus platform. The References page has additional useful tips for using the environment.

Instantiating the Spark context

After creating your notebook in the project, populate the first cells as shown below. It is good practice to instantiate the Spark context at the very beginning of your analysis.

import findspark
findspark.init()
import pyspark

spark = pyspark.sql.SparkSession.builder \
    .enableHiveSupport() \
    .getOrCreate()
sc = spark.sparkContext

Basic Operations on DNAnexus Databases

Exploring existing databases

To view any databases to which you have access, run a cell with the following code:

spark.sql("show databases").show(truncate=False)

The output should look similar to:

+---------------------------------+
|databaseName                     |
+---------------------------------+
|database_xxxx__brca_pheno        |
|database_yyyy__gwas_vitamind_chr1|
|database_zzzz__meta_data         |
|database_tttt__genomics_180820   |
+---------------------------------+
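As the listing shows, each database name encodes a unique ID and a human-readable label separated by a double underscore. If you want to work with these names programmatically, a small helper can split them apart (a hypothetical convenience function, not part of the platform API):

```python
def split_db_name(full_name: str) -> tuple:
    """Split a DNAnexus database name like 'database_xxxx__brca_pheno'
    into its (unique ID, human-readable label) parts."""
    db_id, _, label = full_name.partition("__")
    return db_id, label

# Example with one of the names listed above
print(split_db_name("database_xxxx__brca_pheno"))
# → ('database_xxxx', 'brca_pheno')

# In a live session you could first collect all names:
# names = [r.databaseName for r in spark.sql("SHOW DATABASES").collect()]
```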

You can inspect one of the returned databases by running:

db = "database_xxxx__brca_pheno"
spark.sql(f"SHOW TABLES FROM {db}").show(truncate=False)

which should return an output similar to:

+-------------------------+-----------+-----------+
|database                 |tableName  |isTemporary|
+-------------------------+-----------+-----------+
|database_xxxx__brca_pheno|cna        |false      |
|database_xxxx__brca_pheno|methylation|false      |
|database_xxxx__brca_pheno|mrna       |false      |
|database_xxxx__brca_pheno|mutations  |false      |
|database_xxxx__brca_pheno|patient    |false      |
|database_xxxx__brca_pheno|sample     |false      |
+-------------------------+-----------+-----------+
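Once you know which tables a database contains, you can query them with spark.sql. Since queries are just strings, a small builder keeps notebooks tidy. The helper below is a hypothetical sketch (the patient table name comes from the listing above); the commented lines show how it would be used in a live session:

```python
def preview_query(db: str, table: str, limit: int = 10) -> str:
    """Build a SQL statement that previews the first rows of a table."""
    return f"SELECT * FROM {db}.{table} LIMIT {limit}"

q = preview_query("database_xxxx__brca_pheno", "patient")
print(q)  # → SELECT * FROM database_xxxx__brca_pheno.patient LIMIT 10

# In the JupyterLab session, with the Spark session from above:
# spark.sql(q).show(truncate=False)
# or pull the result into pandas for plotting:
# df = spark.sql(q).toPandas()
```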

Creating databases

See below for an example of how to create and populate your own database.

# Create a database
my_database = "my_database"
spark.sql("create database " + my_database + " location 'dnax://'")
spark.sql("create table " + my_database + ".foo (k string, v string) using parquet")
spark.sql("insert into table " + my_database + ".foo values ('1', '2')")
spark.sql("select * from " + my_database + ".foo").show()

You may separate each line of code into different cells to view the outputs iteratively.
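If you create several tables this way, building statements by string concatenation gets error-prone. One option is to wrap the statement template in a small function; this is a hypothetical sketch following the same create/insert pattern as above, not a platform API:

```python
def create_table_sql(db: str, table: str, columns: dict) -> str:
    """Build a Parquet-backed CREATE TABLE statement from a column spec."""
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns.items())
    return f"create table {db}.{table} ({cols}) using parquet"

stmt = create_table_sql("my_database", "foo", {"k": "string", "v": "string"})
print(stmt)
# → create table my_database.foo (k string, v string) using parquet

# In the session: spark.sql(stmt)
```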

Using Hail

Hail is an open-source, scalable framework for exploring and analyzing genomic data. It is designed to run primarily on a Spark cluster and is available with DXJupyterLab Spark Cluster.

Initialize the Hail context when you begin using Hail. It is important to pass the previously started Spark context sc as an argument:

import hail as hl
hl.init(sc=sc)

We recommend continuing your exploration of Hail with the GWAS using Hail tutorial. For example:

# Download example data from the 1000 Genomes project, then import and inspect the matrix table
hl.utils.get_1kg('data/')
hl.import_vcf('data/1kg.vcf.bgz').write('data/', overwrite=True)
mt = hl.read_matrix_table('data/')

Behind the Scenes

The Spark cluster app is Docker-based: the JupyterLab server runs inside a Docker container.

The JupyterLab instance runs on port 443. Because it is an HTTPS app, you can open the JupyterLab environment in a web browser at the URL associated with the job, where job-xxxx is the ID of the job running the app.

The script run at container startup, /opt/, configures the environment and starts the server needed to connect to the Spark cluster. The required environment variables are set by sourcing two scripts that are bind-mounted into the container:

source /home/dnanexus/environment
source /cluster/dx-cluster.environment

The default user in the container is root.

The option --network host is used when starting Docker to remove the network isolation between the host and the Docker container, which allows the container to bind to the host's network and access Spark's master port directly.