Monitoring Executions

Learn how to get information on current and past executions, via both the UI and the CLI.

Monitoring an Execution via the UI

Getting Basic Information on an Execution

To get basic information on a job (the execution of an app or applet) or an analysis (the execution of a workflow):

  1. Click on Projects in the main Platform menu.

  2. On the Projects list page, find and click on the name of the project within which the execution was launched.

  3. Click on the Monitor tab to open the Monitor screen.

  4. On the Monitor screen, you'll see a list of executions launched within the project. By default, they're listed in reverse chronological order, with the most recently launched execution at the top.

  5. Find the row displaying information on the execution.

    • Note that for each analysis (the execution of a workflow), summary information is shown by default. To see information on each stage, click the "+" icon next to the status value for the analysis.

  6. To see additional information on an execution, click on its name to be taken to its details page.

Available Basic Information on Executions

In the list on the Monitor screen, you'll see the following information for each of the executions that is running or has been run within the project in question:

  • Status - This is the execution's status. Status values include:

    • "Waiting" - When initially launched, the execution's status will be "Waiting" until the Platform has allocated the resources required to run it, and, in some cases, until other executions on which it depends have finished.

    • "Running" - Once a job has started to run, its status will change from "Waiting" to "Running."

    • "In Progress" - Once an analysis has been launched, its status will change to "In Progress."

    • "Done" - If the execution completes with no errors, its status will change to "Done."

    • "Failed" - If the execution fails to complete due to errors, its status will change to "Failed." See Types of Errors for help in understanding why an execution failed.

    • "Partially Failed" - An analysis is in the "Partially Failed" state if one or more stages in the workflow have not finished successfully, and there is at least one stage that has not transitioned to a terminal state (either "Done," "Failed," or "Terminated").

    • "Terminated" - If the execution is terminated prior to completion, its status will change to "Terminated."

  • Name - The default name for an execution is the name of the app, applet, or workflow being run. When configuring an execution, you can give it a custom name, either via the UI or via the CLI. Note that the execution's name is used in Platform email alerts related to the execution. Note as well that clicking on the name in the executions list opens the execution details page, giving in-depth information about it.

  • Executable - The executable or executables run in the course of the execution. Note that if the execution is an analysis, each stage will be shown in a separate row, including the name of the executable run during the stage in question. Note as well that if there is an informational page giving details about the executable and how to configure and use it, the executable's name will be clickable, and clicking the name will display that page.

  • Launched By - The name of the user who launched the execution.

  • Started Running - The time at which the execution started running, if it has done so. Note that this is not always the same as its launch time, if it has to spend time waiting for resources to become available, before it can start running.

  • Duration - For jobs, this figure represents the time elapsed since the job entered the running state. For analyses, it represents the time elapsed since the analysis was created.

  • Price - This column is displayed only for users who are authorized to perform billable activities within the project, and only when billing has been set up, either for the individual user or for the org to which project activities are billed. If the column is displayed, the figure shown represents an estimate of the charges incurred so far (for a running execution) or the total cost incurred (for a completed execution).

  • Priority - The priority assigned to the execution - either "Low," "Normal," or "High" - when it was configured, either via the CLI or via the UI. This setting determines the scheduling priority of the execution, vis-a-vis other executions that are waiting to be launched.

  • Worker URL - If the execution is running an executable - such as DXJupyterLab - to which you can connect directly via a web URL, that URL will be shown here. Clicking the URL will open a connection to the executable, in a new browser tab.

Additional Basic Information

Additional basic information can be displayed for each execution. To do this:

  1. Click on the "table" icon at the right edge of the table header row.

  2. Select one or more of the entries in the list, to display an additional column or columns.

Available additional columns include:

  • Launched - The time at which the execution was launched. Note that for many executions, this will be earlier than the time displayed in the Started Running column, because many executions spend time waiting for resources to become available before they start running.

  • Output Folder - Clicking Output in the list will open a column labeled Output Folder. For each execution, the value displayed represents a path relative to the project's root folder. Clicking the value will open the folder in which the execution's outputs have been or will be stored.

  • Stopped Running - The time at which the execution stopped running.

Customizing the Executions List Display

To remove columns from the list, click on the "table" icon at the right edge of the table header row, then de-select one or more of the entries in the list, to hide the column or columns in question.

Filtering the Executions List

A filter menu above the executions list allows you to run a search that refines the list to display only executions meeting specific criteria.

By default, pills are displayed that allow you to set search criteria that will filter executions by one or more of the following attributes:

  • State - Execution state

  • Name - Execution name

  • ID - An execution's job ID or analysis ID

  • Launched By - The user who launched an execution or executions

  • Created - The time range within which executions were launched

Click the Filters button, above the right edge of the executions list, to display pills that allow filtering by additional execution attributes.

Search Scope

Note that by default, filters are set to display only root executions that meet the criteria defined in the filter. If you want the display to include all executions, including those run during individual stages of workflows, click the Search Scope button, above the left edge of the executions list. Then click All Executions.

Saving and Reusing Filters

To save a particular filter, click the Saved Filters button, above the right edge of the executions list, assign your filter a name, then click Save.

To apply a saved filter to the executions list, click the Saved Filters button, then select the filter from the list.

If you modify a saved filter and want to save the new version under its existing name, click the Saved Filters button, select the filter from the list, then click the Save icon.

Terminating an Execution from the Monitor Screen

As the launcher of a given execution and a contributor to the project within which the execution is running, you can terminate the execution from the list on the Monitor screen while it's in a non-terminal state. You can also terminate executions launched by other project members if you have project admin status.

To terminate an execution:

  1. Find the execution in the list, and move your mouse over the row displaying information on the execution. A red Terminate button will appear at the right end of the row.

  2. Click the Terminate button. A modal window will open, asking you to confirm that you want to terminate the execution. Click Terminate to confirm.

  3. The execution's status will show as "Terminating" as it is being terminated. Then its status will change to "Terminated."

Getting Detailed Information on an Execution via the UI

To get additional information on an execution, click on its name in the list on the Monitor screen. A new page will open.

Available Detailed Information on Executions

On the details page for an execution, you'll see a range of information, including:

  • High-level details - In the Execution Tree section, at the top of the screen, you'll see high-level information, including:

    • For a standalone execution - such as a job without children - you'll see a single entry that includes details on the state of the execution, when it started and stopped running, and how long it spent in the running state.

    • For an execution with descendants - such as an analysis with multiple stages - you'll see a list, with each row containing details on the execution run at each stage of the analysis. If the execution has descendants, you can click on the "+" icon next to its name to expand the row to view information on its descendants. To see a page displaying detailed information on a stage, click on its name in the list. To navigate back to the workflow's details page, click on its name in the "breadcrumb" navigation menu in the top right corner of the screen.

  • Execution state - In the Execution Tree section, each execution row includes a color bar that represents the execution's current state. For descendants within the same execution tree, the time visualizations are staggered, indicating their different start and stop times in relation to each other. The colors include:

    • Blue - A blue bar indicates that the execution is in the "Running" or "In Progress" state.

    • Green - A green bar indicates that the execution is in the "Done" state.

    • Red - A red bar indicates that the execution is in the "Failed" or "Partially Failed" state.

    • Grey - A grey bar indicates that the execution is in the "Terminated" state.

  • Execution start and stop times - Times are displayed in the header bar at the top of the Execution Tree section. These times run, from left to right, from the time at which the job started running, or when the analysis was created, to either the current time, or the time at which the execution entered a terminal state ("Done," "Failed," or "Terminated").

  • Inputs - In this section, you'll see a list of the inputs to the execution. If a direct link to the input file is available, the input's name will be hyperlinked to the file; clicking the link will open the project location containing the file. If the input was provided by another execution in a workflow, the execution's name will be hyperlinked; clicking the link will open the details page for the execution in question.

  • Outputs - In this section, you'll see a list of the execution's outputs. If a direct link to the output file is available, the output's name will be hyperlinked to the file; clicking the link will open the folder containing the file.

  • Log files - An execution's log file is useful in understanding details about, for example, the resources used by an execution, the costs it incurred, and the source of any delays it encountered. To access log files, and, as needed, download them in .txt format:

    • To access the log file for a job, click either the View Log button in the top right corner of the screen, or the View Log link in the Execution Tree section.

    • To access the log file for each stage in an analysis, click the View Log link next to the row displaying information on the stage in question, in the Execution Tree section.

  • Basic info - The Info pane, on the right side of the screen, displays a range of basic information on the execution, along with additional detail such as the execution's unique ID, and custom properties and tags assigned to it.

  • Reused results - If an execution reuses results from another execution, this information will be shown in a blue pane, above the Execution Tree section. To see details on the execution that generated these results, click on its name.

Getting Help with Failed Executions

If an execution failed, a Cause of Failure pane will display, above the Execution Tree section. The cause of failure is a system-generated error message. For assistance in diagnosing the failure and any related issues:

  1. Click the button labeled Send Failure Report to DNAnexus Support.

  2. A form will open in a modal window, with both the Subject and Message fields pre-populated with information that DNAnexus Support will use in diagnosing and resolving the issue.

  3. Click the button in the Grant Access section to give DNAnexus Support reps "View" access to the project in which the issue occurred. This enables Support reps to diagnose and resolve the issue more quickly.

  4. Click Send Report to send the report.

Launching a New Execution

To re-launch a job from the execution details screen:

  1. Click the Launch as New Job button in the upper right corner of the screen.

  2. A new browser tab will open, displaying the Run App / Applet form.

  3. Configure the run, then click Start Analysis.

To re-launch an analysis from the execution details screen:

  1. Click the Launch as New Analysis button in the upper right corner of the screen.

  2. A new browser tab will open, displaying the Run Analysis form.

  3. Configure the run, then click Start Analysis.

Saving a Workflow as a New Workflow

If you want to save a copy of a workflow along with its input configurations under a new name, from the execution details screen:

  1. Click the Save as New Workflow button in the upper right corner of the screen.

  2. In the Save as New Workflow modal window, give the workflow a name, and select the project in which you'd like to save it.

  3. Click Save.

Viewing Initial Tries for Restarted Jobs

As described in this documentation, jobs can be configured to restart automatically upon certain types of failures.

If you want to view the execution details for the initial tries for a restarted job:

  1. Click on the "Tries" link below the job name in the summary banner, or the "Tries" link next to the job name in the execution tree.

  2. A modal window will open.

  3. Click the name of the try for which you'd like to view execution details.

Note that you can only send a failure report for the most recent try, not for any previous tries.

Monitoring a Job via the CLI

You can use dx watch to view the log of a running job or any past jobs, which may have finished successfully, failed, or been terminated.

Monitoring a Currently Running Job

If you'd like to view the job's log stream while it runs, you can use dx watch. The log stream includes a log of stdout, stderr, and additional information the worker outputs as it executes the job.

$ dx watch job-xxxx
Watching job job-xxxx. Press Ctrl+C to stop.
* Sample Prints (sample_prints:main) (running) job-xxxx
  amy 2024-01-01 09:00:00 (running for 0:00:37)
2024-01-01 09:06:00 Sample Prints INFO Logging initialized (priority)
2024-01-01 09:06:37 Sample Prints INFO CPU: 4% (4 cores) * Memory: 547/7479MB * Storage: 74GB free * Net: 0↓/0↑MBps
2024-01-01 09:06:37 Sample Prints INFO Setting SSH public key
2024-01-01 09:06:37 Sample Prints STDOUT dxpy/0.365.0 (Linux-5.15.0-1050-aws-x86_64-with-glibc2.29) Python/3.8.10
2024-01-01 09:06:37 Sample Prints STDOUT Invoking main with {}
2024-01-01 09:06:37 Sample Prints STDOUT 0
...

Terminating a Job

If for some reason you need to terminate a job before it completes, use the command dx terminate.
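As a minimal sketch, the command below is shown as a dry run, with a leading echo standing in for actual submission; drop the echo to really terminate the job (this requires a logged-in dx session, and job-xxxx is a placeholder ID):

```shell
# Dry-run sketch of terminating a job; remove the leading `echo` to run it
# for real (requires a configured dx session; job-xxxx is a placeholder)
echo dx terminate job-xxxx
```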

Monitoring Past Jobs

If you'd like to view the log of a job that has finished running, you can also use the dx watch command. The log includes stdout, stderr, and additional information the worker output as it executed the job.

$ dx watch job-xxxx
Watching job job-xxxx. Press Ctrl+C to stop.
* Sample Prints (sample_prints:main) (running) job-xxxx
  amy 2024-01-01 09:00:00 (running for 0:00:37)
2024-01-01 09:06:00 Sample Prints INFO Logging initialized (priority)
2024-01-01 09:06:37 Sample Prints INFO CPU: 4% (4 cores) * Memory: 547/7479MB * Storage: 74GB free * Net: 0↓/0↑MBps
2024-01-01 09:06:37 Sample Prints INFO Setting SSH public key
2024-01-01 09:06:37 Sample Prints STDOUT dxpy/0.365.0 (Linux-5.15.0-1050-aws-x86_64-with-glibc2.29) Python/3.8.10
2024-01-01 09:06:37 Sample Prints STDOUT Invoking main with {}
2024-01-01 09:06:37 Sample Prints STDOUT 0
2024-01-01 09:06:37 Sample Prints STDOUT 1
2024-01-01 09:06:37 Sample Prints STDOUT 2
2024-01-01 09:06:37 Sample Prints STDOUT 3
* Sample Prints (sample_prints:main) (done) job-xxxx
  amy 2024-01-01 09:08:11 (runtime 0:02:11)
  Output: -

Finding Executions via the CLI

You can use dx find executions to return the ten most recent executions in your current project. You can specify the number of executions you wish to view by running dx find executions -n <specified number>. The output from dx find executions will be similar to the information shown in the "Monitor" tab on the DNAnexus web UI.

Below is an example of dx find executions; in this case, only two executions have been run in the current project: an individual job, DeepVariant Germline Variant Caller, and an analysis of a two-stage workflow, Variant Calling Workflow. A stage is represented by either another analysis (if it runs a workflow) or a job (if it runs an app(let)).

The job running the DeepVariant Germline Variant Caller executable has been running for 10 minutes and 28 seconds. The analysis running the Variant Calling Workflow consists of two stages: FreeBayes Variant Caller, which is waiting on input, and BWA-MEM FASTQ Read Mapper, which has been running for 10 minutes and 18 seconds.

$ dx find executions
* DeepVariant Germline Variant Caller (deepvariant_germline:main) (running) job-xxxx
  amy 2024-01-01 09:00:18 (running for 0:10:28)
* Variant Calling Workflow (in_progress) analysis-xxxx
│ amy 2024-01-01 09:00:18
├── * FreeBayes Variant Caller (freebayes:main) (waiting_on_input) job-yyyy
│     amy 2024-01-01 09:00:18
└── * BWA-MEM FASTQ Read Mapper (bwa_mem_fastq_read_mapper:main) (running) job-zzzz
      amy 2024-01-01 09:00:18 (running for 0:10:18)

Using dx find executions

By default, the dx find executions operation will search for jobs or analyses created when a user runs an app or applet. If a job is part of an analysis, the results will be returned in a tree representation linking all of the jobs in an analysis together.

By default, dx find executions will return up to ten of the most recent executions in your current project in order of execution creation time.

However, a user can also filter the returned executions by job type. Using the flag --origin-jobs in conjunction with the dx find executions command returns only original jobs, whereas the flag --all-jobs will also include subjobs.

Finding Analyses via the CLI

We can choose to monitor only analyses by running the command dx find analyses. Analyses are executions of workflows and consist of one or more app(let)s being run. When using dx find analyses, the command will return only the top-level analyses, not any of the jobs contained therein.

Below is an example of dx find analyses:

$ dx find analyses
* Variant Calling Workflow (in_progress) analysis-xxxx
  amy 2024-01-01 09:00:18

Finding Jobs via the CLI

Jobs are runs of individual app(let)s and are the building blocks of analyses. We can monitor jobs by running the command dx find jobs, which will return a flat list of jobs. If a job is in an analysis, all jobs within the analysis are also returned.

Below is an example of dx find jobs:

$ dx find jobs
* DeepVariant Germline Variant Caller (deepvariant_germline:main) (running) job-xxxx
  amy 2024-01-01 09:10:00 (running for 0:00:28)
* FreeBayes Variant Caller (freebayes:main) (waiting_on_input) job-yyyy
  amy 2024-01-01 09:00:18 
* BWA-MEM FASTQ Read Mapper (bwa_mem_fastq_read_mapper:main) (running) job-zzzz
  amy 2024-01-01 09:00:18 (running for 0:10:18)

Advanced CLI Monitoring Options

Searches for executions can be narrowed using additional options.

Viewing stdout and/or stderr from a Job Log

  • To extract only stdout from a job's log, run the command dx watch job-xxxx --get-stdout

  • To extract only stderr, run the command dx watch job-xxxx --get-stderr

  • To extract both stdout and stderr, run the command dx watch job-xxxx --get-streams

Below is an example of viewing the stdout and stderr lines of a job log:

$ dx watch job-xxxx --get-streams
Watching job job-xxxx. Press Ctrl+C to stop.
dxpy/0.365.0 (Linux-5.15.0-1050-aws-x86_64-with-glibc2.29) Python/3.8.10
Invoking main with {}
0
1
2
3
4
5
6
7
8
9
10

Viewing Subjobs

To view the entire job tree, including both main jobs and subjobs, use the command dx watch job-xxxx --tree.

Viewing the First n Messages of a Job Log

To view only the first n messages of a job log, use the command dx watch job-xxxx -n <number>, e.g. dx watch job-xxxx -n 8. If the job has already finished running, its output is displayed as well.

In the example below, the app Sample Prints doesn’t have any output.

$ dx watch job-xxxx -n 8
Watching job job-xxxx. Press Ctrl+C to stop.
* Sample Prints (sample_prints:main) (done) job-xxxx
  amy 2024-01-01 09:00:00 (runtime 0:02:11)
2024-01-01 09:06:00 Sample Prints INFO Logging initialized (priority)
2024-01-01 09:08:11 Sample Prints INFO CPU: 4% (4 cores) * Memory: 547/7479MB * Storage: 74GB free * Net: 0↓/0↑MBps
2024-01-01 09:08:11 Sample Prints INFO Setting SSH public key
2024-01-01 09:08:11 Sample Prints dxpy/0.365.0 (Linux-5.15.0-1050-aws-x86_64-with-glibc2.29) Python/3.8.10
* Sample Prints (sample_prints:main) (done) job-xxxx
  amy 2024-01-01 09:00:00 (runtime 0:02:11)
  Output: -

Finding and Examining Initial Tries for Restarted Jobs

Jobs can be configured to restart automatically upon certain types of failures as described in the Restartable Jobs section. To view initial tries of the restarted jobs along with execution subtrees rooted in those initial tries, use dx find executions --include-restarted. To examine job logs for initial tries, use dx watch job-xxxx --try X. An example of these commands is shown below.

$ dx run swiss-army-knife -icmd="exit 1" \
    --extra-args '{"executionPolicy": { "restartOn":{"*":2}}}'

$ dx find executions --include-restarted
* Swiss Army Knife (swiss-army-knife:main) (failed) job-xxxx tries
├── * Swiss Army Knife (swiss-army-knife:main) (failed) job-xxxx try 2
│     amy 2023-08-02 16:33:40 (runtime 0:01:45)
├── * Swiss Army Knife (swiss-army-knife:main) (restarted) job-xxxx try 1
│     amy 2023-08-02 16:33:40
└── * Swiss Army Knife (swiss-army-knife:main) (restarted) job-xxxx try 0
      amy 2023-08-02 16:33:40

$ dx watch job-xxxx --try 0
Watching job job-xxxx try 0. Press Ctrl+C to stop watching.
* Swiss Army Knife (swiss-army-knife:main) (restarted) job-xxxx try 0
  amy 2023-08-02 16:33:40
2023-08-02 16:35:26 Swiss Army Knife INFO Logging initialized (priority)

Searching Across All Projects

By default, dx find will restrict your search to only your current project context. To search across all the projects to which you have access, use the --all-projects flag.

$ dx find executions -n 3 --all-projects
* Sample Prints (sample_prints:main) (done) job-xxxx
  amy 2024-01-01 09:15:00 (runtime 0:02:11)
* Sample Applet (sample_applet:main) (done) job-yyyy
  ben 2024-01-01 09:10:00 (runtime 0:00:28)
* Sample Applet (sample_applet:main) (failed) job-zzzz
  amy 2024-01-01 09:00:00 (runtime 0:19:02)

Returning More Than Ten Results

By default, dx find will only return up to ten of the most recently launched executions matching your search query. To change the number of executions returned, you can use the -n option.

# Find the 100 most recently launched jobs in your project
$ dx find executions -n 100

Searching by Executable

A user can search for only executions of a specific app(let) or workflow based on its entity ID.

# Find most recent executions running app-deepvariant_germline in the current project
$ dx find executions --executable app-deepvariant_germline
* DeepVariant Germline Variant Caller (deepvariant_germline:main) (running) job-xxxx
  amy 2024-01-01 09:00:18 (running for 0:10:18)

Searching by Execution Creation Time

Users can also use the --created-before and --created-after options to search based on when an execution was created.

Searching by Date

# Find executions run on January 2, 2024
$ dx find executions --created-after=2024-01-02 --created-before=2024-01-03

Searching by Time

# Find executions created in the last 2 hours
$ dx find executions --created-after=-2h
# Find analyses created in the last 5 days
$ dx find analyses --created-after=-5d
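These options also accept absolute dates, so for scripting you can compute one from a relative window. A sketch using GNU date (an assumption; BSD date uses different flags), whose output could then be passed to --created-after:

```shell
# Compute the date five days ago in YYYY-MM-DD form (GNU coreutils `date`)
date -d '5 days ago' +%Y-%m-%d
```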

Searching by Execution State

Users can also restrict the search to a specific state, e.g. "done", "failed", "terminated".

# Find failed jobs in the current project 
$ dx find jobs --state failed
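The state filter composes with other flags for quick scripting, for example counting matches by piping one-ID-per-line output to wc -l. A minimal sketch, where the printf stands in for a real dx find jobs --state failed --brief call:

```shell
# Stand-in for: dx find jobs --state failed --brief
printf 'job-aaa\njob-bbb\njob-ccc\n' | wc -l
```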

Scripting

Delimiters

The --delim flag will tab-delimit the output. This allows the output to be passed into other shell commands.

$ dx find jobs --delim
* Cloud Workstation (cloud_workstation:main)    done    job-xxxx    amy    2024-01-07 09:00:00 (runtime 1:00:00)
* GATK3 Human Exome Pipeline (gatk3_human_exome_pipeline:main)    done    job-yyyy    amy    2024-01-07 09:00:00 (runtime 0:21:16)
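Because the fields are tab-separated, the output slots directly into tools such as cut and awk. A minimal sketch, using a captured sample line in place of live dx output (the field layout assumed here follows the example above):

```shell
# Stand-in for one line of `dx find jobs --delim` output
sample=$'* Cloud Workstation (cloud_workstation:main)\tdone\tjob-xxxx\tamy\t2024-01-07 09:00:00 (runtime 1:00:00)'
# Print just the state and job ID fields
printf '%s\n' "$sample" | awk -F'\t' '{print $2, $3}'
```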

Returning Only IDs

You can use the --brief flag to return only the object IDs for the objects returned by your search query. The --origin-jobs flag will omit the subjob information.

Below is an example usage of the --brief flag:

$ dx find jobs -n 3 --brief
job-xxxx
job-yyyy
job-zzzz
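Because --brief prints one ID per line, it fans out naturally to other commands with xargs. A sketch, with printf standing in for live dx find output and echo standing in for a real call such as dx terminate:

```shell
# Stand-in for: dx find jobs --state failed --brief | xargs -n1 dx terminate
printf 'job-aaa\njob-bbb\n' | xargs -n1 echo dx terminate
```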

Below is an example of using the flags --origin-jobs and --brief. In the example below, we describe the last job run in the current default project.

$ dx describe $(dx find jobs -n 1 --origin-jobs --brief)
Result 1:
ID                  job-xxxx
Class               job
Job name            BWA-MEM FASTQ Read Mapper
Executable name     bwa_mem_fastq_read_mapper
Project context     project-xxxx
Billed to           amy
Workspace           container-xxxx
Cache workspace     container-yyyy
Resources           container-zzzz
App                 app-xxxx
Instance Type       mem1_ssd1_x8
Priority            high
State               done
Root execution      job-zzzz
Origin job          job-zzzz
Parent job          -
Function            main
Input               genomeindex_targz = file-xxxx
                    reads_fastqgz = file-xxxx
                    [read_group_library = "1"]
                    [mark_as_secondary = true]
                    [read_group_platform = "ILLUMINA"]
                    [read_group_sample = "1"]
                    [add_read_group = true]
                    [read_group_id = {"$dnanexus_link": {"input": "reads_fastqgz", "metadata": "name"}}]
                    [read_group_platform_unit = "None"]
Output              -
Output folder       /
Launched by         amy
Created             Sun Jan  1 09:00:17 2024
Started running     Sun Jan  1 09:00:10 2024
Stopped running     Sun Jan  1 09:00:27 2024 (Runtime: 0:00:16)
Last modified       Sun Jan  1 09:00:28 2024
Depends on          -
Sys Requirements    {"main": {"instanceType": "mem1_ssd1_x8"}}
Tags                -
Properties          -

Rerunning Time-Specific Failed Jobs With Updated Instance Types

# Find failed jobs in the current project from a time period
$ dx find jobs --state failed --created-after=2024-01-01 --created-before=2024-02-01
* BWA-MEM FASTQ Read Mapper (bwa_mem_fastq_read_mapper:main) (failed) job-xxxx
  amy 2024-01-22 09:00:00 (runtime 0:02:12)
* BWA-MEM FASTQ Read Mapper (bwa_mem_fastq_read_mapper:main) (failed) job-yyyy
  amy 2024-01-07 06:00:00 (runtime 0:11:22)
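From here, a failed job can be rerun with its original inputs via --clone, optionally overriding the instance type. A dry-run sketch, with echo standing in for actual submission (drop the echo to really submit; the job ID and instance type name are placeholders, and a configured dx session is assumed):

```shell
# Rerun the failed job on a different instance type (placeholders shown);
# remove the leading `echo` to actually submit
echo dx run --clone job-xxxx --instance-type mem2_ssd1_x8
```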

Rerunning Failed Executions With an Updated Executable

# Find all failed executions of specified executable
$ dx find executions --state failed --executable app-bwa_mem_fastq_read_mapper
* BWA-MEM FASTQ Read Mapper (bwa_mem_fastq_read_mapper:main) (failed) job-xxxx
  amy 2024-01-01 09:00:00 (runtime 0:02:12)
# From within the app's source directory, rebuild the updated app, archiving the previous version
$ dx build -a
INFO:dxpy:Archived app app-xxxx to project-xxxx:"/.App_archive/bwa_mem_fastq_read_mapper (Sun Jan  1 09:00:00 2024)"
{"id": "app-yyyy"}
# Rerun job with updated app
$ dx run bwa_mem_fastq_read_mapper --clone job-xxxx

Forwarding Job Logs to Splunk for Analysis

A license is required to use this feature. Contact DNAnexus Sales for more information.

Job logs can be automatically forwarded to a customer's Splunk instance for analysis. See this documentation for more information on enabling and using this feature.
