Running DXJupyterLab

Learn to launch a JupyterLab session on the DNAnexus Platform using the DXJupyterLab app.

DXJupyterLab is accessible to all users of the UK Biobank Research Analysis Platform and the Our Future Health Trusted Research Environment.

For DNAnexus Platform users, a license is required to access DXJupyterLab. Contact DNAnexus Sales for more information.

Running from the UI

1. Select Tools > JupyterLab from the Main Menu

If you have used DXJupyterLab before, the page will display a list of your previous sessions run across different projects.

2. Click on the New JupyterLab Button in the Top Right Corner

This will open a window from which you can start a new JupyterLab environment. In this window, you can configure your session, e.g. specify its name, select an instance type, and choose the project in which JupyterLab should be started.

If a snapshot file is provided, a previously saved DXJupyterLab environment will be loaded from that file. A snapshot tarball can be created while running a JupyterLab session.

Snapshots created using older versions of DXJupyterLab are incompatible with the current version. See these guidelines if you need to use an older DXJupyterLab snapshot.

You can adjust the duration of the session, after which the environment will automatically shut down. Based on this duration and the instance type, an estimate of the price will be shown in the bottom-left corner (if you have access to the billing information for the selected project).

If you select Enable Spark Cluster, a JupyterLab environment with a standalone Spark cluster will be started. With this option, you can also set the number of nodes in the cluster. This number includes the master (one node) and the worker nodes.

The feature options available are PYTHON_R, ML, IMAGE_PROCESSING, and STATA:
  • PYTHON_R (the default) loads the environment with Python3 and an R kernel and interpreter.
  • ML loads the environment with Python3 and machine learning packages such as TensorFlow, PyTorch, and CNTK, as well as the image processing package Nipype; it does not include R.
  • IMAGE_PROCESSING loads the environment with Python3 and image processing packages such as Nipype, FreeSurfer, and FSL; it does not include R. The FreeSurfer package requires a license to run. Details about license creation and usage can be found here.
  • STATA requires a license to run.

For a detailed list of libraries included in each of these feature options, see the in-product documentation.

3. Initiate the Session by Clicking Start Environment

At first, the JupyterLab session will be in an "Initializing" state while it waits for the worker to spin up and for the JupyterLab server to start. Clicking the row corresponding to your session, then the i icon in the top right corner, displays more information about the JupyterLab job.

4. Open a JupyterLab Environment in Your Browser When the State is Set to "Ready"

Once the JupyterLab server is running, the session state will change to Ready and the name of the session will turn into a link. Clicking this link opens the JupyterLab environment in your browser. You can also access your job directly via the URL https://job-xxxx.dnanexus.cloud, where job-xxxx is the ID of the DXJupyterLab job.

Running DXJupyterLab from the CLI

You can start the JupyterLab environment directly from the command line by running the app:

$ dx run app-dxjupyterlab

Once the app starts, you can check whether the JupyterLab server is ready to serve connections, which is indicated by the job's property httpsAppState being set to running. Once it is running, open your browser and go to:

https://job-xxxx.dnanexus.cloud

where job-xxxx is the ID of the job running the app.
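
As a minimal sketch, assuming jq is installed and job-xxxx is the ID printed by dx run, you can check this property from the command line:

$ # Print the httpsAppState property of the JupyterLab job
$ dx describe job-xxxx --json | jq -r '.properties.httpsAppState'

When this prints running, the session is reachable at the URL above.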

To run the Spark cluster-enabled version of the app, use the command:

$ dx run app-dxjupyterlab_spark_cluster
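
As a hedged sketch, the cluster size can be requested with the generic dx run option for Spark cluster apps (confirm the supported range in the app's help):

$ # Request a 3-node cluster: 1 master plus 2 workers
$ dx run app-dxjupyterlab_spark_cluster --instance-count 3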

You can check the optional input parameters for the apps on the DNAnexus Platform (platform login required to access the links):
  • DXJupyterLab App
  • DXJupyterLab Spark Cluster Enabled App

From the CLI, you can learn more about each app's options with the following command:

$ dx run -h APP_NAME

where APP_NAME is either app-dxjupyterlab or app-dxjupyterlab_spark_cluster.
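
As an illustration only, optional inputs can be passed with -i flags. The input names below (feature, duration, snapshot) are assumptions for this sketch; confirm the actual names and units in the app help shown above:

$ # Hypothetical example: request the ML feature, a session duration, and a saved snapshot
$ dx run app-dxjupyterlab -ifeature=ML -iduration=240 -isnapshot=file-xxxx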

Next Steps

See the Quickstart and References pages for more details on how to use DXJupyterLab.
