SQL Runner

A license is required to access Spark functionality on the DNAnexus Platform. Contact DNAnexus Sales for more information.

Overview

The Spark SQL Runner application brings up a Spark cluster and executes an ordered list of SQL queries that you provide. This is especially useful if you need to run the same sequence of queries repeatedly or if you need to run a complex set of queries. You can vary the size of the cluster to speed up your tasks.

How to Run Spark SQL Runner

Input:

  • sqlfile : [Required] A SQL file which contains an ordered list of SQL queries.

  • substitutions : A JSON file which contains the variable substitutions.

  • user_config : User configuration JSON file, in case you want to set or override certain Spark configurations.

Other Options:

  • export : (boolean) default false. If true, output files with the results of the queries in sqlfile are exported.

  • export_options : A JSON file which contains the export configurations.

  • collect_logs : (boolean) default false. Collects cluster logs from all nodes.

  • executor_memory : (string) Amount of memory to use per executor process, in MiB unless otherwise specified (e.g. 2g, 8g). This is passed as --executor-memory to Spark submit.

  • executor_cores : (integer) Number of cores to use per executor process. This is passed as --executor-cores to Spark submit.

  • driver_memory : (string) Amount of memory to use for the driver process (e.g. 2g, 8g). This is passed as --driver-memory to Spark submit.

  • log_level : (string) default INFO. Logging level for both the driver and executors. Allowed values: ALL, TRACE, DEBUG, INFO.

Output:

  • output_files : Output files, including the SQL report file and the query export files.

Basic Run

dx run spark-sql-runner \
   -i sqlfile=file-FQ4by2Q0Yy3pGp21F7vp8XGK \
   -i substitutions=file-FK7Qpj00GQ8Q7ybZ0pqYJj6G \
   -i export=true
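
The run below is a fuller sketch that exercises the optional inputs described above. The file IDs are placeholders and the resource values are illustrative, not recommended settings.

# File IDs below are placeholders; replace them with your own files.
dx run spark-sql-runner \
   -i sqlfile=file-xxxx \
   -i substitutions=file-xxxx \
   -i user_config=file-xxxx \
   -i export=true \
   -i export_options=file-xxxx \
   -i collect_logs=true \
   -i executor_memory=8g \
   -i executor_cores=4 \
   -i driver_memory=4g \
   -i log_level=DEBUG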

Examples

sqlfile

SELECT * FROM ${srcdb}.${patient_table};
DROP DATABASE IF EXISTS ${dstdb} CASCADE;
CREATE DATABASE IF NOT EXISTS ${dstdb} LOCATION 'dnax://';
CREATE VIEW ${dstdb}.patient_view AS SELECT * FROM ${srcdb}.patient;
SELECT * FROM ${dstdb}.patient_view;

How sqlfile is Processed

  1. The SQL runner extracts the commands in sqlfile and runs them in sequential order.

  2. Every SQL command must be terminated with a semicolon (;). A command may span multiple lines, as shown below:

SHOW DATABASES;
SELECT * FROM dbname.tablename1;
SELECT * FROM 
dbname.tablename2;
DESCRIBE DATABASE EXTENDED dbname;
  3. Any command starting with -- is ignored (it is treated as a comment). A comment within a command must be enclosed in /*...*/. The following are examples of valid comments:

-- SHOW DATABASES;
-- SELECT * FROM dbname.tablename1;
SHOW /* this is valid comment */ TABLES;

Variable Substitution

Variable substitution can be done by specifying the variables to replace in substitutions.

{
    "srcdb": "sskrdemo1",
    "dstdb": "sskrtest201",
    "patient": "patient_new",
    "f2c":"patient_f2c",
    "derived":"patient_derived",
    "composed":"patient_composed",
    "complex":"patient_complex",
    "patient_view": "patient_newview",
    "brca": "brca_new",
    "patient_table":"patient",
    "cna": "cna_new"
}

In the above example, every reference to srcdb within ${...} in sqlfile is substituted with sskrdemo1, for example in select * from ${srcdb}.${patient_table};. The script adds the corresponding set commands before executing any of the SQL commands in sqlfile, so select * from ${srcdb}.${patient_table}; translates to:

set srcdb=sskrdemo1;
set patient_table=patient;
select * from ${srcdb}.${patient_table};
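
As a sketch of the end-to-end workflow, you could create the substitutions file locally, upload it, and pass it alongside the SQL file. The file names below are hypothetical, and queries.sql is assumed to already exist in the project.

# Sketch: create a substitutions file and pass it together with the SQL file.
# File names are placeholders; queries.sql is assumed to be in the project.
cat > substitutions.json <<'EOF'
{
    "srcdb": "sskrdemo1",
    "patient_table": "patient"
}
EOF
subs_id=$(dx upload substitutions.json --brief)
dx run spark-sql-runner \
   -i sqlfile=queries.sql \
   -i substitutions="$subs_id"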

Export

If enabled, the results of the SQL commands are exported to CSV files. export_options defines the export configuration; a sketch of passing it to a run follows the option list below.

{
   "num_files" : 2,
   "fileprefix":"demo",
   "header": true
}
  1. num_files : Default is 1. The maximum number of output files to generate per query. This generally depends on how many executors are running in the cluster as well as how many partitions of the data exist. Each output file corresponds to one part file of the result.

  2. fileprefix : The filename prefix for every SQL output file. By default, output files are named by query_id, which is the position of the query in sqlfile (starting with 0), for example 0-out.csv. If a prefix is specified, output files are generated like <prefix>-0-out.csv.

  3. header : Default is true. If true, a header row is added to each exported file.
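
As a sketch, the configuration above could be saved locally as export_options.json (a hypothetical name), uploaded, and supplied together with export=true:

# Sketch: upload the export configuration and enable export for the run.
# The sqlfile ID is a placeholder.
export_id=$(dx upload export_options.json --brief)
dx run spark-sql-runner \
   -i sqlfile=file-xxxx \
   -i export=true \
   -i export_options="$export_id"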

User Configuration

Values listed under spark-defaults.conf in the user_config file override or add to the default Spark configuration.

{
  "spark-defaults.conf": [
    {
      "name": "spark.app.name",
      "value": "SparkAppName"
    },
    {
      "name": "spark.test.conf",
      "value": true
    }
  ]
}

Output Files

$ dx tree export
export
├── job-FFp7K2j0xppVXZ791fFxp2Bg-export.tar
└── job-FFp7K2j0xppVXZ791fFxp2Bg-debug.sql

There are two files generated in the export folder:

  • <JobId>-export.tar : Contains all the query results.

  • <JobId>-debug.sql : The SQL debug report file.
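
To inspect the results locally, one approach is to download the export folder and unpack the archive. The job ID below is a placeholder.

# Sketch: download the job's export folder and unpack the results archive.
dx download -r export
tar -xvf export/job-xxxx-export.tar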

Export Files

After extracting the export tar file, the contents look like this:

├── demo-0
│   ├── demo-0-out.csv
│   │   ├── _SUCCESS
│   │   ├── part-00000-1e2c301e-6b28-47de-b261-c74249cc6724-c000.csv
│   │   └── part-00001-1e2c301e-6b28-47de-b261-c74249cc6724-c000.csv
│   └── demo-0.sql
├── demo-1
│   ├── demo-1-out.csv
│   │   ├── _SUCCESS
│   │   └── part-00000-b21522da-0e5f-42ba-8197-e475841ba9c3-c000.csv
│   └── demo-1.sql
├── demo-2
│   ├── demo-2-out.csv
│   │   ├── _SUCCESS
│   │   ├── part-00000-e61c6eff-5448-4c39-8c72-546279d8ce6f-c000.csv
│   │   └── part-00001-e61c6eff-5448-4c39-8c72-546279d8ce6f-c000.csv
│   └── demo-2.sql
├── demo-3
│   ├── demo-3-out.csv
│   │   ├── _SUCCESS
│   │   └── part-00000-5a48ba0f-d761-4aa5-bdfa-b184ca7948b5-c000.csv
│   └── demo-3.sql

In the above example, demo is the fileprefix used. There is one folder per query; each folder contains a .sql file with the query that was executed and a .csv directory containing the result CSV part files.
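
If a single CSV per query is more convenient, the part files can be concatenated. This sketch assumes header was left at its default of true, so each part file carries its own header line, and it relies on GNU coreutils.

# Sketch: merge the part files of the first query's result into one CSV,
# keeping a single header line (assumes header=true).
head -n 1 demo-0/demo-0-out.csv/part-00000-*.csv > demo-0-merged.csv
tail -q -n +2 demo-0/demo-0-out.csv/part-*.csv >> demo-0-merged.csv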

SQL Report File

Every SQL Runner execution generates a debug report file, which is itself a .sql file.

-- [SQL Runner Report] --;
-- [SUCCESS][TimeTaken: 1.90734863281e-06 secs ] set f2c=patient_f2c;
-- [SUCCESS][TimeTaken: 1.90734863281e-06 secs ] set srcdb=sskrdemosrcdb1_13;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient=patient_new;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set derived=patient_derived;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set composed=patient_composed;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient_table=patient;
-- [SUCCESS][TimeTaken: 1.19209289551e-06 secs ] set complex=patient_complex;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient_view=patient_newview;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set cna=cna_new;
-- [SUCCESS][TimeTaken: 0.0 secs ] set brca=brca_new;
-- [SUCCESS][TimeTaken: 2.14576721191e-06 secs ] set dstdb=sskrdemodstdb1_13;
-- [SUCCESS][OutputFile: demo-0-out.csv, TimeTaken: 8.83630990982 secs] SHOW DATABASES;
-- [SUCCESS][OutputFile: demo-1-out.csv, TimeTaken: 3.85295510292 secs] create database sskrdemo2 location 'dnax://';
-- [SUCCESS][OutputFile: demo-2-out.csv, TimeTaken: 4.8106200695 secs] use sskrdemo2;
-- [SUCCESS][OutputFile: demo-3-out.csv , TimeTaken: 1.00737595558 secs] create table patient (first_name string, last_name string, age int, glucose int, temperature int, dob string, temp_metric string) stored as parquet;

It lists every query executed and the status of its execution (success or failure), along with the name of the output file for that command and the time taken. If there are any failures, it reports the failing query and stops executing subsequent commands.
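
Because the report is plain text, you can scan it for failures directly. Below is a small sketch using grep; the file name is a placeholder.

# Sketch: list any failed statements in the report, along with the
# statement that follows each failure line.
grep -A 1 '\[FAIL\]' job-xxxx-debug.sql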

SQL Errors

While executing the series of SQL commands, one of them may fail (for example, due to a syntax error). In that case the app quits and uploads a SQL debug file to the project:

-- [SQL Runner Report] --;
-- [SUCCESS][TimeTaken: 1.90734863281e-06 secs ] set f2c=patient_f2c;
-- [SUCCESS][TimeTaken: 1.90734863281e-06 secs ] set srcdb=sskrdemosrcdb1_13;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient=patient_new;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set derived=patient_derived;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set composed=patient_composed;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient_table=patient;
-- [SUCCESS][TimeTaken: 1.19209289551e-06 secs ] set complex=patient_complex;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient_view=patient_newview;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set cna=cna_new;
-- [SUCCESS][TimeTaken: 0.0 secs ] set brca=brca_new;
-- [SUCCESS][TimeTaken: 2.14576721191e-06 secs ] set dstdb=sskrdemodstdb1_13;
-- [SUCCESS][OutputFile: demo-0-out.csv, TimeTaken: 8.83630990982 secs] select * from ${srcdb}.${patient_table};
-- [FAIL] SQL ERROR while below command [ Reason: u"\nextraneous input '`' expecting <EOF>(line 1, pos 45)\n\n== SQL ==\ndrop database if exists sskrtest2011 cascade `\n---------------------------------------------^^^\n"];
drop database if exists ${dstdb} cascade `;
create database if not exists ${dstdb} location 'dnax://';
create view ${dstdb}.patient_view as select * from ${srcdb}.patient;
select * from ${dstdb}.patient_view;

As you can see, the report identifies the failing command and records the error message returned:

drop database if exists ${dstdb} cascade `;

You can now fix the query in the .sql file and even use this report file as the input for a subsequent run, picking up where the previous run left off.
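
As a sketch of that loop, you could download the debug file, correct the failing statement locally, and resubmit the corrected file as the new sqlfile. File names and the job ID are placeholders; pass the substitutions file again if the remaining queries use variables.

# Sketch: fetch the debug report, fix the failing query, and rerun from it.
dx download export/job-xxxx-debug.sql -o fixed.sql
# ... edit fixed.sql to correct the failing statement ...
fixed_id=$(dx upload fixed.sql --brief)
dx run spark-sql-runner -i sqlfile="$fixed_id" -i export=true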
