DX Spark Submit Utility
dx-spark-submit is a utility script that DNAnexus Spark applications can use to submit and monitor Spark jobs more easily.
Usage
usage: dx-spark-submit [-h | --help] [--log-level {INFO,WARN,TRACE,DEBUG}]
                       [--collect-logs] [--log-collect-dir LOG_COLLECT_DIR]
                       [--app-config APP_CONFIG] [--user-config USER_CONFIG]
                       spark-driver-args

positional arguments:
  spark-driver-args     Options to be passed directly to spark-submit,
                        including Spark application, properties, and
                        driver options

optional arguments:
  -h, --help            show this help message and exit
  --log-level {INFO,WARN,TRACE,DEBUG}
                        Log level for driver and executor
  --collect-logs        Collect logs to a project in the platform
  --log-collect-dir LOG_COLLECT_DIR
                        Directory in project to upload logs
  --app-config APP_CONFIG
                        Application configuration json string or file
  --user-config USER_CONFIG
                        User configuration json string or file
How does it work?
The dx-spark-submit utility simplifies common Spark application tasks (a sketch of a typical call follows this list). It:
- Allows easy overrides of Spark properties at both the app developer and user level.
- Sets the driver and executor log level.
- Submits the job and sets up the UI for monitoring Spark jobs.
- Initiates log collection once the job is done (success or failure).
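As an illustration, here is a minimal sketch of where dx-spark-submit fits in a Spark cluster app's entry-point script. The flags are the ones documented on this page; the application name myapp.py and the surrounding script are hypothetical.

#!/bin/bash
# Hypothetical entry point of a DNAnexus bash app (bash apps define a
# main function). dx-spark-submit stands in where spark-submit would
# otherwise be invoked.
main() {
    dx-spark-submit \
        --log-level WARN \
        --collect-logs \
        myapp.py
}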
Spark Property Overrides
Spark apps depend on specific configuration files, such as spark-defaults.conf and hive-site.xml, which set up the environment for your application. In certain scenarios, an application developer or the user of the application may want to override a default setting.
dx-spark-submit allows you to specify two configuration inputs, each passed as a JSON string or a file (see the sketch after this list):
- Application configuration
- User configuration
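Since both inputs accept either a raw JSON string or a path to a JSON file, an invocation might look like the following sketch (the file path, inline property, and application name are hypothetical):

# Hypothetical example: --app-config is given a file here, while
# --user-config receives an inline JSON string.
dx-spark-submit \
    --app-config /home/dnanexus/app.json \
    --user-config '{"spark-defaults.conf": [{"name": "spark.sql.shuffle.partitions", "value": 200}]}' \
    myapp.py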
Application Configuration JSON
Application config JSON (--app-config) contains the list of configurations the app developer may want to restrict or override. The override_allowed flag on an entry indicates whether app users may in turn override that property: in the example below, users may change spark.ui.port, while spark.sql.parquet.filterPushdown is fixed by the app.
{
  "spark-defaults.conf": [
    {
      "name": "spark.ui.port",
      "value": 8081,
      "override_allowed": true
    },
    {
      "name": "spark.sql.parquet.filterPushdown",
      "value": false
    }
  ]
}
User Configuration JSON
User config JSON (--user-config) contains the list of configurations the app user may want to add or override. If you want to offer this override ability to users of your app, you will need to reference this file in the app's input spec so that it is available to dx-spark-submit.
{
  "spark-defaults.conf": [
    {
      "name": "spark.ui.port",
      "value": 8080
    },
    {
      "name": "spark.sql.shuffle.partitions",
      "value": 1
    }
  ]
}
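Taken together with the application configuration above, the two example files would plausibly resolve as follows, assuming a user value only replaces an app value when override_allowed is true (an illustration, not actual tool output):

# Effective spark-defaults.conf values after merging the two examples:
#
#   spark.ui.port                      8080   # override_allowed is true, so the user value wins
#   spark.sql.parquet.filterPushdown   false  # not flagged as overridable; the app value stands
#   spark.sql.shuffle.partitions       1      # set only by the user, so it is added as-is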
These Spark configurations cannot be overridden as they affect the basic functioning of the cluster application:
spark.driver.host
spark.driver.bindAddress
spark.driver.port
spark.driver.blockManager.port
spark.blockManager.port
spark.port.maxRetries
spark.master
spark.driver.extraClassPath
spark.jars
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version
Log Collection
When the --collect-logs option is set, the script triggers log collection once the job finishes. It collects the logs from clusterWorker and driver and uploads them to the project by default. If --log-collect-dir is specified, the logs are copied to the specified folder in the project.
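For instance (the folder name and application are placeholders):

# Default: driver and clusterWorker logs are uploaded to the project.
dx-spark-submit --collect-logs myapp.py

# With a destination folder: logs are copied into "applogs" in the project.
dx-spark-submit --collect-logs --log-collect-dir applogs myapp.py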
Log Level
--log-level can be used to set the driver and executor log level (INFO, WARN, TRACE, or DEBUG).
Spark Arguments
spark-driver-args should contain the Spark application and any arguments you want to pass to spark-submit.
Example
dx-spark-submit \
--log-level INFO \
--collect-logs \
--log-collect-dir pitestlogs \
--app-config /app.json \
--user-config /user.json \
--class org.apache.spark.examples.SparkPi /cluster/spark/examples/jars/spark-examples*.jar 10