Connect to Thrift

Learn about the DNAnexus Thrift server, a service that allows JDBC and ODBC clients to run Spark SQL queries.

A license is required to access Spark functionality on the DNAnexus Platform. Contact DNAnexus Sales for more information.

About the DNAnexus Thrift Server

The DNAnexus Thrift server connects to a high availability Apache Spark cluster integrated with the platform. It leverages the same security, permissions, and sharing features built into DNAnexus.

Connecting to Thrift Server

To connect to the Thrift server, you need the following:

  1. The JDBC URL:

     AWS US (East): jdbc:hive2://query.us-east-1.apollo.dnanexus.com:10000/;ssl=true
     AWS London (General): jdbc:hive2://query.eu-west-2-g.apollo.dnanexus.com:10000/;ssl=true
     AWS London (UKB): jdbc:hive2://query.eu-west-2.apollo.dnanexus.com:10000/;ssl=true
     Azure US (West): jdbc:hive2://query.westus.apollo.dnanexus.com:10001/;ssl=true;transportMode=http;httpPath=cliservice
     AWS Frankfurt (General): jdbc:hive2://query.eu-central-1.apollo.dnanexus.com:10000/;ssl=true

    Note: Azure UK South (OFH) region does not support access to Thrift.

  2. A username in the following format:

    • TOKEN__PROJECTID: TOKEN is a user-generated DNAnexus authentication token, and PROJECTID is the ID of the DNAnexus project to use as the project context (for example, when you create databases). Note the double underscore between the token and the project ID; see the example after this list.

    • The Thrift server you connect to and the project must be in the same region.
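
For instance, using the placeholder token and project ID that appear in the examples later on this page, the username passed to a JDBC client would be:

yourToken__project-xxxx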

Generate a DNAnexus Platform Authentication Token

See the Authentication tokens page.
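
If you use the dx command-line client, a quick way to confirm that a newly generated token works is to log in with it directly. A minimal sketch; the token value here is a placeholder:

$ dx login --token yourToken
$ dx whoami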

Getting the Project ID

  1. Navigate to https://platform.dnanexus.com and log in using your username and password.

  2. Go to Projects -> your project -> Settings -> Project ID and click Copy to Clipboard.
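
Alternatively, if you have the dx command-line client installed, you can look up a project ID from the terminal. A sketch, assuming a project named "My Project":

$ dx find projects --name "My Project" --brief
project-xxxx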

Using Beeline

Beeline is a JDBC client bundled with Apache Spark that can be used to run interactive queries on the command line.

Installing Apache Spark

You can download Apache Spark 3.5.2 for Hadoop 3.x from the Apache Spark downloads page. Then unpack the archive:

$ tar -zxvf spark-3.5.2-bin-hadoop3.tgz

You need Java installed and available on your system PATH, or the JAVA_HOME environment variable pointing to a Java installation.
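
For convenience in the commands that follow, you can set SPARK_HOME to the unpacked directory. A minimal sketch, assuming the archive was unpacked in your home directory; the JAVA_HOME path varies by system:

$ export SPARK_HOME=$HOME/spark-3.5.2-bin-hadoop3
$ export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64   # adjust for your system
$ java -version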

Single Command Connection

If you already have beeline installed and have your credentials, you can connect directly with a single command:

<beeline> -u <thrift path> -n <token>__<project-id>

In the following AWS example, note that each semicolon (;) in the URL is escaped with a backslash (\):

$SPARK_HOME/bin/beeline -u jdbc:hive2://query.us-east-1.apollo.dnanexus.com:10000/\;ssl=true -n yourToken__project-xxxx

Note that the command for connecting to Thrift is different for Azure, as seen below:

$SPARK_HOME/bin/beeline -u jdbc:hive2://query.westus.apollo.dnanexus.com:10001/\;ssl=true\;transportMode=http\;httpPath=cliservice -n yourToken__project-xxxx
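
Passing a token directly on the command line leaves it in your shell history. One way to avoid this is to read it into an environment variable first; a sketch, assuming a bash-like shell:

$ read -s DX_TOKEN   # paste your token; -s suppresses echo
$ $SPARK_HOME/bin/beeline -u jdbc:hive2://query.us-east-1.apollo.dnanexus.com:10000/\;ssl=true -n ${DX_TOKEN}__project-xxxx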

Running Beeline Guided Connection

The beeline client is located under $SPARK_HOME/bin/.

$ cd spark-3.5.2-bin-hadoop3/bin
$ ./beeline

At the beeline prompt, connect using the JDBC URL:

beeline> !connect jdbc:hive2://query.us-east-1.apollo.dnanexus.com:10000/;ssl=true

Enter username: <TOKEN__PROJECTID>
Enter password: <empty - press RETURN>

Once successfully connected, you should see the message:

Connected to: Spark SQL (version 3.5.2)
Driver: Hive JDBC (version 2.3.9)
Transaction isolation: TRANSACTION_REPEATABLE_READ

Querying in Beeline

You are now connected to the Thrift server with your credentials and can see all databases to which you have access within your current region.

0: jdbc:hive2://query.us-east-1.apollo.dnanex> show databases;
+---------------------------------------------------------+--+
|                      databaseName                       |
+---------------------------------------------------------+--+
| database_fj7q18009xxzzzx0gjfk6vfz__genomics_180718_01   |
| database_fj8gygj0v10vj50j0gyfqk1x__af_result_180719_01  |
| database_fj96qx00v10vj50j0gyfv00z__af_result2           |
| database_fjf3y28066y5jxj2b0gz4g85__metabric_data        |
| database_fjj1jkj0v10p8pvx78vkkpz3__pchr1_test           |
| database_fjpz6fj0v10fjy3fjy282ybz__af_result1           |
+---------------------------------------------------------+--+

You can query using the unique database name, which includes the lowercased database ID, for example database_fjf3y28066y5jxj2b0gz4g85__metabric_data. If the database is in the same project you used to connect to the Thrift server, you can query using only the database name, for example metabric_data. If the database is located outside the project, you need to use the unique database name.

0: jdbc:hive2://query.us-east-1.apollo.dnanex> use metabric_data;
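
You can also skip the USE statement and qualify the table with the unique database name directly. A sketch, assuming the cna table queried in the example below:

0: jdbc:hive2://query.us-east-1.apollo.dnanex> select * from database_fjf3y28066y5jxj2b0gz4g85__metabric_data.cna limit 10;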

You can also list databases stored in other projects by specifying the project context in the LIKE clause of SHOW DATABASES, using the format '<project-id>:<database pattern>':

0: jdbc:hive2://query.us-east-1.apollo.dnanex> SHOW DATABASES LIKE 'project-xxx:af*';
+---------------------------------------------------------+--+
|                      databaseName                       |
+---------------------------------------------------------+--+
| database_fj8gygj0v10vj50j0gyfqk1x__af_result_180719_01  |
| database_fj96qx00v10vj50j0gyfv00z__af_result2           |
| database_fjpz6fj0v10fjy3fjy282ybz__af_result1           |
+---------------------------------------------------------+--+

Now you can run SQL queries.

0: jdbc:hive2://query.us-east-1.apollo.dnanex> select * from cna limit 10;
+--------------+-----------------+------------+--------+--+
| hugo_symbol  | entrez_gene_id  | sample_id  | value  |
+--------------+-----------------+------------+--------+--+
| MIR3675      | NULL            | MB-6179    | -1     |
| MIR3675      | NULL            | MB-6181    | 0      |
| MIR3675      | NULL            | MB-6182    | 0      |
| MIR3675      | NULL            | MB-6183    | 0      |
| MIR3675      | NULL            | MB-6184    | 0      |
| MIR3675      | NULL            | MB-6185    | -1     |
| MIR3675      | NULL            | MB-6187    | 0      |
| MIR3675      | NULL            | MB-6188    | 0      |
| MIR3675      | NULL            | MB-6189    | 0      |
| MIR3675      | NULL            | MB-6190    | 0      |
+--------------+-----------------+------------+--------+--+
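
Beeline can also execute queries non-interactively, which is useful for scripting. A minimal sketch using beeline's -e option with the same credentials as above:

$ $SPARK_HOME/bin/beeline -u jdbc:hive2://query.us-east-1.apollo.dnanexus.com:10000/\;ssl=true -n yourToken__project-xxxx -e 'SELECT COUNT(*) FROM metabric_data.cna;' --outputformat=csv2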
