DX Spark Submit Utility

dx-spark-submit is a utility script used in DNAnexus Spark applications to simplify submitting and monitoring Spark jobs.

DNAnexus is working to phase out outdated terminology and to update scripts that use those terms. In Spark documentation, the terms "master" and "slave" are replaced with "driver" and "clusterWorker". DNAnexus will eventually replace the older terms in the codebase as well; for now, variable names and scripts in the actual code still use them.

A license is required to access Spark functionality on the DNAnexus Platform. Contact DNAnexus Sales for more information.

Usage

usage: dx-spark-submit [-h | --help] [--log-level {INFO,WARN,TRACE,DEBUG}]
                       [--collect-logs] [--log-collect-dir LOG_COLLECT_DIR]
                       [--app-config APP_CONFIG] [--user-config USER_CONFIG]
                       spark-driver-args

positional arguments:
  spark-driver-args     Options to be passed directly to spark-submit, including 
                        Spark application, properties, and driver options

optional arguments:
  -h, --help            show this help message and exit
  --log-level {INFO,WARN,TRACE,DEBUG}
                        Log level for driver and executor
  --collect-logs        Collect logs to a project in the platform
  --log-collect-dir LOG_COLLECT_DIR
                        Directory in project to upload logs
  --app-config APP_CONFIG
                        Application configuration json string or file
  --user-config USER_CONFIG
                        User configuration json string or file

How does it work?

The dx-spark-submit utility simplifies several common Spark application tasks:

  • Allows easy overrides of Spark properties at the app developer and user level.

  • Sets the driver and executor log level.

  • Submits the Spark job and sets up the UI for monitoring it.

  • Initiates log collection once the job is done (success or failure).
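For example, a typical invocation from inside a Spark app's entry script looks like the following sketch (the script name my_analysis.py and its --input argument are placeholders for your own application):

    # Submit a PySpark script, set the log level, and collect logs when the job finishes.
    dx-spark-submit \
        --log-level WARN \
        --collect-logs \
        my_analysis.py --input "$input_csv_path"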

Spark Property Overrides

Spark apps depend on configuration files such as spark-defaults.conf and hive-site.xml, which set up the environment for your application. In some scenarios, the application developer or a user of the application may want to override a default setting.

dx-spark-submit lets you specify two configuration inputs:

  • Application configuration

  • User configuration

Application Configuration JSON

The application config JSON (--app-config) contains the configuration properties that the app developer wants to override or restrict.

{
  "spark-defaults.conf": [
    {
      "name": "spark.ui.port",
      "value": 8081,
      "override_allowed": true
    },
    {
      "name": "spark.sql.parquet.filterPushdown",
      "value": false
    }
  ]
}
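Because --app-config accepts either a file path or a raw JSON string, an app's entry script can also pass a small override inline. A minimal sketch (the property value and application script name are illustrative):

    dx-spark-submit \
        --app-config '{"spark-defaults.conf": [{"name": "spark.sql.parquet.filterPushdown", "value": false}]}' \
        my_app.py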

User Configuration JSON

The user config JSON (--user-config) contains the configuration properties the app user may want to add or override. To offer this override ability to users of your app, reference this file in the app's input spec so that it is available to dx-spark-submit (a sketch follows the example below).

{
  "spark-defaults.conf": [         
    {
      "name": "spark.ui.port",
      "value": 8080
    },
    {
      "name": "spark.sql.shuffle.partitions",
      "value": 1
    }
  ]
}
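If the user config is exposed as an optional file input in the app's input spec (here assumed to be named user_config), the entry script could download it and pass it to dx-spark-submit. A sketch under those assumptions:

    # "user_config" is an assumed optional file input declared in the app's input spec.
    if [ -n "$user_config" ]; then
        dx download "$user_config" -o user.json
        dx-spark-submit --user-config user.json my_app.py
    else
        dx-spark-submit my_app.py
    fi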

Note: These Spark configurations cannot be overridden as they affect the basic functioning of the cluster application:

spark.driver.host
spark.driver.bindAddress
spark.driver.port
spark.driver.blockManager.port
spark.blockManager.port
spark.port.maxRetries
spark.master
spark.driver.extraClassPath
spark.jars
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version

Log Collection

When the --collect-logs option is set, the script triggers log collection. It collects the logs from the clusterWorker and driver nodes and uploads them to the project by default. If --log-collect-dir is specified, the logs are copied to that folder in the project.

Note: Subjobs cannot use the log collection feature.

Log Level

--log-level can be used to set the driver and executor log level (INFO, WARN, TRACE, DEBUG).

Spark Arguments

spark-driver-args should contain the Spark application and any arguments you want to pass to spark-submit.
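Because everything in spark-driver-args is passed straight through to spark-submit, standard spark-submit options such as --conf or --py-files can be included there as well. A sketch (property values and file names are illustrative):

    dx-spark-submit \
        --log-level WARN \
        --conf spark.executor.memory=4g \
        --py-files helpers.zip \
        my_app.py --input "$input_csv_path"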

Example

    dx-spark-submit \
        --log-level INFO \
        --collect-logs \
        --log-collect-dir pitestlogs \
        --app-config /app.json \
        --user-config /user.json \
        --class org.apache.spark.examples.SparkPi /cluster/spark/examples/jars/spark-examples*.jar 10

Note: dx-spark-submit is located in /cluster/dnax/bin on the Spark cluster worker container.