References
This page is a reference for the most useful operations and features in the DNAnexus JupyterLab environment.


Download files from the project to the local execution environment

bash

You can download input data from a project using dx download in a notebook cell:
```bash
%%bash
dx download input_data/reads.fastq
```
The %%bash keyword turns the whole cell into a magic cell, which lets you run bash code in that cell without exiting the Python kernel. See more examples of magic commands in the IPython documentation. The ! prefix achieves the same result:
```bash
! dx download input_data/reads.fastq
```
Alternatively, the dx command can be executed from the terminal.

Python

To download data with Python in the notebook, you can use the download_dxfile function:
```python
import dxpy
dxpy.download_dxfile(dxid='file-xxxx',
                     filename='unique_name.txt')
```
Check dxpy helper functions for details on how to download files and folders.
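The filename argument above matters when several platform files share the same name. A small stdlib sketch for picking collision-free local names before downloading; `unique_local_names` is a hypothetical helper (not part of dxpy), and the dxpy call is shown commented out:

```python
import os
# import dxpy  # available inside a DNAnexus JupyterLab session

def unique_local_names(names):
    """Deduplicate download targets so two platform files with the
    same name do not overwrite each other locally."""
    seen = {}
    result = []
    for name in names:
        count = seen.get(name, 0)
        seen[name] = count + 1
        if count == 0:
            result.append(name)
        else:
            root, ext = os.path.splitext(name)
            result.append(f"{root}_{count}{ext}")
    return result

# for dxid, fname in zip(file_ids, unique_local_names(names)):
#     dxpy.download_dxfile(dxid=dxid, filename=fname)
```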

Upload data from the session to the project

bash

Any files from the execution environment can be uploaded to the project using dx upload:
```bash
%%bash
dx upload Readme.ipynb
```

Python

To upload data using Python in the notebook, you can use the upload_local_file function:
```python
import dxpy
dxpy.upload_local_file('variants.vcf')
```
Check dxpy helper functions for details on how to upload files and folders.
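To upload several result files at once, you can iterate over a glob pattern. A minimal stdlib sketch; the `*.vcf` pattern and `files_to_upload` helper are illustrative assumptions, and the dxpy call is shown commented out:

```python
import glob
# import dxpy  # available inside a DNAnexus JupyterLab session

def files_to_upload(directory, pattern="*.vcf"):
    """List local files matching `pattern` that would be uploaded."""
    return sorted(glob.glob(f"{directory}/{pattern}"))

# for path in files_to_upload("."):
#     dxpy.upload_local_file(path)
```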

Download and upload data to your local machine

By selecting a notebook or any other file on your computer and dragging it into the DNAnexus project file browser, you can upload the files directly to the project. To download a file, right-click on it and click Download (to local computer).
You may upload and download data to the local execution environment in a similar way, i.e. by dragging and dropping files to the execution file browser or by right-clicking on the files there and clicking Download.

Use the terminal

It is useful to have a terminal provided by JupyterLab at hand; it uses the bash shell by default and lets you execute shell scripts or interact with the platform via the dx toolkit. For example, the command:
```bash
$ dx pwd
MyProject:/
```
will confirm what the current project context is.
Running pwd will show you that the working directory of the execution environment is /opt/notebooks. The JupyterLab server is launched from this directory, which is also the default location of the output files generated in the notebooks.
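Because relative paths resolve against the kernel's working directory, a file written without an absolute path lands in /opt/notebooks (assuming a default session; the filename below is just an example):

```python
import os

# A relative write goes to the current working directory,
# which is /opt/notebooks in a default dxjupyterlab session.
with open("results.txt", "w") as fh:
    fh.write("done\n")

print(os.path.join(os.getcwd(), "results.txt"))  # absolute path of the new file
```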
To open a terminal window, go to File > New > Terminal or open it from the Launcher (using the "Terminal" box at the bottom). To open a Launcher, select File > New Launcher.

Install custom packages in the session environment

You can install packages with pip, conda, apt-get, and other package managers in the execution environment directly from the notebook:
```bash
%%bash
pip install torch
pip install torchvision
conda install -c conda-forge opencv
```
By creating a snapshot, you can start subsequent sessions with these packages pre-installed by providing the snapshot as input.
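After installing packages you can confirm that the running kernel sees them without restarting it. A small stdlib check; the package names simply mirror the install example above:

```python
import importlib.util

def is_installed(module_name):
    """Return True if the current kernel can import `module_name`."""
    return importlib.util.find_spec(module_name) is not None

for pkg in ["torch", "torchvision", "cv2"]:
    print(pkg, "available:", is_installed(pkg))
```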

Access public and private GitHub repositories from the JupyterLab terminal

You can access public GitHub repositories from the JupyterLab terminal using the git clone command. By placing a private ssh key that is registered with your GitHub account in /root/.ssh/id_rsa, you can clone private GitHub repositories using git clone and push changes back to GitHub using git push from the JupyterLab terminal.
Below is a screenshot of a JupyterLab session with a terminal displaying a script that:
  • sets up an ssh key to access a private GitHub repository and clones it,
  • clones a public repository,
  • downloads a JSON file from the DNAnexus project,
  • modifies an open-source notebook to convert the JSON file to CSV format,
  • saves the modified notebook to the private GitHub repository,
  • and uploads the results of the JSON-to-CSV conversion back to the DNAnexus project.
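The key-setup step above can be sketched in Python. `install_ssh_key` is a hypothetical helper (not part of dxpy), and the key path and repository URL are placeholders:

```python
import os
import stat

def install_ssh_key(key_text, key_path):
    """Write an ssh private key with owner-only permissions;
    ssh refuses keys that are group- or world-readable."""
    os.makedirs(os.path.dirname(key_path), exist_ok=True)
    with open(key_path, "w") as fh:
        fh.write(key_text)
    os.chmod(key_path, stat.S_IRUSR | stat.S_IWUSR)  # mode 600
    return key_path

# install_ssh_key(open("my_github_key").read(), "/root/.ssh/id_rsa")
# then, from the terminal: git clone git@github.com:user/private-repo.git
```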
This animation shows the first part of the script in action:

Run notebooks non-interactively

A command can be run in the JupyterLab Docker container without starting an interactive JupyterLab server. To do that, provide the cmd input and additional input files using the in input file array. The command will run in the directory where the JupyterLab server is started and notebooks are run, i.e. /opt/notebooks/. Any output files generated in this directory will be uploaded to the project and returned in the out output.
The cmd input makes it possible to use the papermill tool, pre-installed in the JupyterLab environment, which executes notebooks non-interactively. For example, to execute all the cells in a notebook and produce an output notebook:
```bash
my_cmd="papermill notebook.ipynb output_notebook.ipynb"
dx run dxjupyterlab -icmd="$my_cmd" -iin="notebook.ipynb"
```
where notebook.ipynb is the input notebook to papermill, which needs to be passed in the in input, and output_notebook.ipynb is the name of the output notebook that will store the result of executing the cells. The output will be uploaded to the project at the end of the app execution.
If either the imagename or the snapshot parameter is specified, cmd will be executed in the specified Docker container. The duration argument is ignored when running the app with cmd. To limit the runtime, run the app from the command line with the --extra-args flag, e.g. dx run dxjupyterlab --extra-args '{"timeoutPolicyByExecutable": {"app-xxxx": {"*": {"hours": 1}}}}'.
If cmd is not specified, the in parameter is ignored and the output of the app will be an empty array.
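When cmd itself contains spaces or quotes, building the dx run invocation programmatically avoids shell-quoting mistakes. A stdlib sketch; `dxjupyterlab_cmd` is a hypothetical helper, and the app and input names come from the example above:

```python
import shlex

def dxjupyterlab_cmd(notebook, output_notebook):
    """Build a `dx run dxjupyterlab` command line that executes
    `notebook` with papermill and writes `output_notebook`."""
    papermill = f"papermill {shlex.quote(notebook)} {shlex.quote(output_notebook)}"
    args = ["dx", "run", "dxjupyterlab", f"-icmd={papermill}", f"-iin={notebook}"]
    return " ".join(shlex.quote(a) for a in args)

print(dxjupyterlab_cmd("notebook.ipynb", "output_notebook.ipynb"))
```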