# SQL Runner

{% hint style="info" %}
A license is required to access Spark functionality on the DNAnexus Platform. [Contact DNAnexus Sales](mailto:sales@dnanexus.com) for more information.
{% endhint %}

## Overview

The Spark SQL Runner application brings up a Spark cluster and executes a list of SQL queries you provide. This is especially useful if you need to run the same sequence of queries repeatedly or execute a complex set of queries. You can vary the size of your cluster to speed up your tasks.

## How to Run Spark SQL Runner

Input:

* `sqlfile`: \[Required] A SQL file which contains an ordered list of SQL queries.
* `substitutions`: A JSON file which contains the variable substitutions.
* `user_config`: User configuration JSON file, in case you want to set or override certain Spark configurations.

Other Options:

* `export`: (boolean) default `false`. Exports output files with results for the queries in the `sqlfile`.
* `export_options`: A JSON file which contains the export configurations.
* `collect_logs`: (boolean) default `false`. Collects cluster logs from all nodes.
* `executor_memory`: (string) Amount of memory to use per executor process, in MiB unless otherwise specified (for example, `2g` or `8g`). Passed as `--executor-memory` to `spark-submit`.
* `executor_cores`: (integer) Number of cores to use per executor process. Passed as `--executor-cores` to `spark-submit`.
* `driver_memory`: (string) Amount of memory to use for the driver process (for example, `2g` or `8g`). Passed as `--driver-memory` to `spark-submit`.
* `log_level`: (string) default `INFO`. Logging level for both the driver and executors. One of `ALL`, `TRACE`, `DEBUG`, `INFO`.

Output:

* `output_files`: Output files, including the SQL report file and any query export files.

## Basic Run

```shell
dx run spark-sql-runner \
   -i sqlfile=file-FQ4by2Q0Yy3pGp21F7vp8XGK \
   -i substitutions=file-FK7Qpj00GQ8Q7ybZ0pqYJj6G \
   -i export=true
```

## Examples

### `sqlfile`

```sql
SELECT * FROM ${srcdb}.${patient_table};
DROP DATABASE IF EXISTS ${dstdb} CASCADE;
CREATE DATABASE IF NOT EXISTS ${dstdb} LOCATION 'dnax://';
CREATE VIEW ${dstdb}.patient_view AS SELECT * FROM ${srcdb}.patient;
SELECT * FROM ${dstdb}.patient_view;
```

### How `sqlfile` is Processed

1. SQL Runner extracts each command from `sqlfile` and runs the commands in order.
2. Commands are separated by semicolons (`;`); a single command may span multiple lines:

```sql
SHOW DATABASES;
SELECT * FROM dbname.tablename1;
SELECT * FROM
dbname.tablename2;
DESCRIBE DATABASE EXTENDED dbname;
```

3. Any command starting with `--` is ignored as a comment. A comment within a command must use `/*...*/`. The following are examples of valid comments:

```sql
-- SHOW DATABASES;
-- SELECT * FROM dbname.tablename1;
SHOW /* this is valid comment */ TABLES;
```
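The splitting rules above can be sketched in Python. This is a hypothetical helper for illustration; the app's actual parser is not published.

```python
def split_sql_commands(text):
    """Split a sqlfile into individual commands: drop full-line '--'
    comments, then split on semicolons. Inline /*...*/ comments are left
    in place for Spark SQL to handle. (Sketch only; not the app's code.)"""
    lines = [ln for ln in text.splitlines()
             if not ln.lstrip().startswith("--")]
    return [cmd.strip() for cmd in "\n".join(lines).split(";") if cmd.strip()]
```

Applied to the comment example above, only `SHOW /* this is valid comment */ TABLES` survives as a command.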

## Variable Substitution

Variables are substituted by listing them, with their values, in the `substitutions` JSON file.

```json
{
    "srcdb": "sskrdemo1",
    "dstdb": "sskrtest201",
    "patient": "patient_new",
    "f2c":"patient_f2c",
    "derived":"patient_derived",
    "composed":"patient_composed",
    "complex":"patient_complex",
    "patient_view": "patient_newview",
    "brca": "brca_new",
    "patient_table":"patient",
    "cna": "cna_new"
}
```

In the above example, each `${srcdb}` reference in `sqlfile` is replaced with `sskrdemo1`. The script prepends a `set` command for each variable before executing the SQL commands in `sqlfile`, so `select * from ${srcdb}.${patient_table};` translates to:

```sql
set srcdb=sskrdemo1;
set patient_table=patient;
select * from ${srcdb}.${patient_table};
```
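The effect of the generated `set` commands can be emulated in Python. This is an illustration only; in the app itself, Spark SQL's `set` mechanism performs the expansion.

```python
import re

def apply_substitutions(sql, subs):
    """Expand each ${name} in a query using the substitutions mapping.
    (Hypothetical helper; Spark SQL does this in the actual app.)"""
    return re.sub(r"\$\{(\w+)\}", lambda m: subs[m.group(1)], sql)

subs = {"srcdb": "sskrdemo1", "patient_table": "patient"}
apply_substitutions("select * from ${srcdb}.${patient_table};", subs)
```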

## Export

If enabled, the results of each SQL command are exported to CSV files. `export_options` defines the export configuration.

```json
{
   "num_files" : 2,
   "fileprefix":"demo",
   "header": true
}
```

1. `num_files`: default `1`. The maximum number of output files to generate per query result. The actual number depends on how many executors are running in the cluster and how many partitions the data has; each output file corresponds to one Spark partition (part file).
2. `fileprefix`: The filename prefix for every SQL output file. By default, output files are prefixed with `query_id`, the 0-based position of the query in `sqlfile`, for example, `0-out.csv`. If a prefix is specified, output files are named `<prefix>-0-out.csv`, `<prefix>-1-out.csv`, and so on.
3. `header`: default `true`. If `true`, a header row is added to each exported file.
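The naming convention can be sketched as follows, assuming the 0-based query ids seen in the examples below (hypothetical helper):

```python
def export_basename(query_id, fileprefix=None):
    """Build the base name of a query's export file, following the naming
    convention described above: '<query_id>-out.csv', optionally prefixed."""
    name = f"{query_id}-out.csv"
    return f"{fileprefix}-{name}" if fileprefix else name
```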

## User Configuration

Entries under the `spark-defaults.conf` key in the `user_config` file override or add to the default Spark configuration.

```json
{
  "spark-defaults.conf": [
    {
      "name": "spark.app.name",
      "value": "SparkAppName"
    },
    {
      "name": "spark.test.conf",
      "value": true
    }
  ]
}
```
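The entries above map onto `spark-defaults.conf` properties ("name value" pairs). A sketch of that mapping; the app's exact merge logic is not published, so the helper name is hypothetical:

```python
import json

def to_conf_lines(user_config_text):
    """Render user_config entries as spark-defaults.conf lines.
    Booleans are lowercased to match Spark's conf-file conventions."""
    entries = json.loads(user_config_text)["spark-defaults.conf"]
    def fmt(v):
        return str(v).lower() if isinstance(v, bool) else str(v)
    return [f"{e['name']} {fmt(e['value'])}" for e in entries]
```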

### Output Files

```shell
$ dx tree export
export
├── job-FFp7K2j0xppVXZ791fFxp2Bg-export.tar
└── job-FFp7K2j0xppVXZ791fFxp2Bg-debug.sql
```

The export folder contains two generated files:

* `<JobId>-export.tar`: Contains all the query results.
* `<JobId>-debug.sql`: The SQL debug report file.

### Export Files

After extracting the export tar file, the structure appears as follows:

```shell
├── demo-0
│   ├── demo-0-out.csv
│   │   ├── _SUCCESS
│   │   ├── part-00000-1e2c301e-6b28-47de-b261-c74249cc6724-c000.csv
│   │   └── part-00001-1e2c301e-6b28-47de-b261-c74249cc6724-c000.csv
│   └── demo-0.sql
├── demo-1
│   ├── demo-1-out.csv
│   │   ├── _SUCCESS
│   │   └── part-00000-b21522da-0e5f-42ba-8197-e475841ba9c3-c000.csv
│   └── demo-1.sql
├── demo-2
│   ├── demo-2-out.csv
│   │   ├── _SUCCESS
│   │   ├── part-00000-e61c6eff-5448-4c39-8c72-546279d8ce6f-c000.csv
│   │   └── part-00001-e61c6eff-5448-4c39-8c72-546279d8ce6f-c000.csv
│   └── demo-2.sql
└── demo-3
│   ├── demo-3-out.csv
│   │   ├── _SUCCESS
│   │   └── part-00000-5a48ba0f-d761-4aa5-bdfa-b184ca7948b5-c000.csv
│   └── demo-3.sql
```

In the above example, `demo` is the `fileprefix` used. The export produces one folder per query; each folder contains a `.sql` file with the executed query and a `.csv` folder containing the result part files.

### SQL Report File

Every run generates a SQL runner debug report. The report is itself a SQL file in which every status line is a comment:

```sql
-- [SQL Runner Report] --;
-- [SUCCESS][TimeTaken: 1.90734863281e-06 secs ] set f2c=patient_f2c;
-- [SUCCESS][TimeTaken: 1.90734863281e-06 secs ] set srcdb=sskrdemosrcdb1_13;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient=patient_new;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set derived=patient_derived;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set composed=patient_composed;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient_table=patient;
-- [SUCCESS][TimeTaken: 1.19209289551e-06 secs ] set complex=patient_complex;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient_view=patient_newview;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set cna=cna_new;
-- [SUCCESS][TimeTaken: 0.0 secs ] set brca=brca_new;
-- [SUCCESS][TimeTaken: 2.14576721191e-06 secs ] set dstdb=sskrdemodstdb1_13;
-- [SUCCESS][OutputFile: demo-0-out.csv, TimeTaken: 8.83630990982 secs] SHOW DATABASES;
-- [SUCCESS][OutputFile: demo-1-out.csv, TimeTaken: 3.85295510292 secs] create database sskrdemo2 location 'dnax://';
-- [SUCCESS][OutputFile: demo-2-out.csv, TimeTaken: 4.8106200695 secs] use sskrdemo2;
-- [SUCCESS][OutputFile: demo-3-out.csv , TimeTaken: 1.00737595558 secs] create table patient (first_name string, last_name string, age int, glucose int, temperature int, dob string, temp_metric string) stored as parquet;
```

It lists every query executed, its status (`SUCCESS` or `FAIL`), the name of its output file, and the time taken. If a query fails, the report records it and execution of subsequent commands stops.

## SQL Errors

During execution of the series of SQL commands, a command may fail (for example, due to a syntax error). In that case, the app quits and uploads a SQL debug file to the project:

```sql
-- [SQL Runner Report] --;
-- [SUCCESS][TimeTaken: 1.90734863281e-06 secs ] set f2c=patient_f2c;
-- [SUCCESS][TimeTaken: 1.90734863281e-06 secs ] set srcdb=sskrdemosrcdb1_13;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient=patient_new;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set derived=patient_derived;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set composed=patient_composed;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient_table=patient;
-- [SUCCESS][TimeTaken: 1.19209289551e-06 secs ] set complex=patient_complex;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set patient_view=patient_newview;
-- [SUCCESS][TimeTaken: 9.53674316406e-07 secs ] set cna=cna_new;
-- [SUCCESS][TimeTaken: 0.0 secs ] set brca=brca_new;
-- [SUCCESS][TimeTaken: 2.14576721191e-06 secs ] set dstdb=sskrdemodstdb1_13;
-- [SUCCESS][OutputFile: demo-0-out.csv, TimeTaken: 8.83630990982 secs] select * from ${srcdb}.${patient_table};
-- [FAIL] SQL ERROR while below command [ Reason: u"\nextraneous input '`' expecting <EOF>(line 1, pos 45)\n\n== SQL ==\ndrop database if exists sskrtest2011 cascade `\n---------------------------------------------^^^\n"];
drop database if exists ${dstdb} cascade `;
create database if not exists ${dstdb} location 'dnax://';
create view ${dstdb}.patient_view as select * from ${srcdb}.patient;
select * from ${dstdb}.patient_view;
```

The report marks the line with the SQL error and includes the error message:

```sql
drop database if exists ${dstdb} cascade `;
```

Fix the failing query in the report file, then supply the report as the `sqlfile` input for a subsequent run; because successful commands are written as comments, execution resumes from where it stopped.
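Locating the failure point in a report can be sketched in Python, based on the report format shown above (hypothetical helper):

```python
def first_failure(report_text):
    """Return the first uncommented query following a '-- [FAIL]' marker
    in a SQL runner report, or None if every command succeeded."""
    failed = False
    for ln in report_text.splitlines():
        if ln.startswith("-- [FAIL]"):
            failed = True
            continue
        if failed and not ln.startswith("--"):
            return ln.strip()
    return None
```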
