Singularity is a rapidly developing platform, so the version you run matters a lot. The version we will be using is the latest development HEAD of the 2.3 branch.
In [3]:
singularity --version
We maintain a base image on SingularityHub for running Jupyter (https://singularity-hub.org/collections/440/). This image contains the minimum dependencies and configuration needed to run containerized Notebooks (standalone or JupyterHub-spawned), and is intended to serve as a base for user-built software environments.
At this time, though, bootstrapping from SingularityHub is still an upcoming feature (https://github.com/singularityware/singularity/issues/833), so we will instead use the jupyter/base-notebook Docker container as a base image. The base-notebook image is provided by the Jupyter Docker Stacks project (https://github.com/jupyter/docker-stacks), which offers pre-built stacks ready to run standalone or behind JupyterHub.
In [1]:
singularity pull --name "jupyter-base.img" docker://jupyter/base-notebook:ae885c0a6226
There it is! Your container is good to go.
In [2]:
singularity exec -e jupyter-base.img jupyter -h
In [3]:
ls -lsah | grep jupyter-base.img
When pulling from a Docker registry, you can use the --size flag to specify the built image size. Notice that Singularity isn't grabbing Docker layers from the registry this time, because the specified commit (ae885c0a6226) has already been pulled. The Singularity Docker cache is located in $HOME/.singularity/docker.
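You can verify this by inspecting the cache directory yourself. A quick sketch (on a fresh machine the cache may simply be empty):

```shell
# List the Docker layers Singularity has cached from previous pulls.
# The path follows the default described above ($HOME/.singularity/docker).
CACHE="$HOME/.singularity/docker"
ls -sh "$CACHE" 2>/dev/null || echo "no Docker layers cached yet at $CACHE"
```

Each entry is a gzipped layer tarball keyed by its registry digest, which is why a second pull of the same commit needs no network round-trips for layers.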
In [9]:
singularity pull --size 3000 --name "jupyter-ext.img" docker://jupyter/base-notebook:ae885c0a6226
In [3]:
singularity exec -e jupyter-ext.img jupyter -h
By default, Singularity containers are mounted as read-only volumes, which means you won't be able to add content or install software (even as a privileged user), except in default or system-mounted paths. To add content, you must run your Singularity command with the --writable flag.
For an interactive shell into your container, use the shell subcommand. The command below also passes the -e flag, which tells Singularity to strip the host environment before entering the container.
In [ ]:
sudo singularity shell -e --writable jupyter-ext.img
Alternatively, you can use the exec subcommand to execute commands in your container without leaving your host environment.
In [10]:
singularity exec -e --writable jupyter-ext.img /opt/conda/bin/conda install -y matplotlib
singularity exec -e --writable jupyter-ext.img /opt/conda/bin/conda install -y seaborn
Now seaborn is installed in your image.
In [11]:
singularity exec -e jupyter-ext.img conda list | grep seaborn
Shelling into your container and making ad-hoc changes is excellent for debugging and initial development, but it is considered bad practice: the steps needed to construct your software environment are not captured and cannot be reproduced.
To make durable, reproducible changes, you need to write a spec file from which you can bootstrap your container. Bootstrapping must be done by a privileged user.
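To give a feel for the format before we look at the real file, here is a minimal Singularity 2.x spec written out from the shell. This is a sketch only: the package list is illustrative and not necessarily the contents of jupyter-bootstrapped.def.

```shell
# Write a minimal Singularity 2.x bootstrap spec: start from the same
# Docker base image and install extra packages in the %post section.
# (Illustrative sketch -- not necessarily the real jupyter-bootstrapped.def.)
cat > /tmp/example.def <<'EOF'
BootStrap: docker
From: jupyter/base-notebook:ae885c0a6226

%post
    /opt/conda/bin/conda install -y matplotlib seaborn
EOF
grep '^From:' /tmp/example.def
```

The %post section runs inside the container during bootstrap, so everything it installs is captured in the spec and can be rebuilt from scratch.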
In [19]:
cat jupyter-bootstrapped.def
In [17]:
singularity create --force --size 2500 jupyter-bootstrapped.img
sudo /usr/local/bin/singularity bootstrap jupyter-bootstrapped.img jupyter-bootstrapped.def
In [18]:
singularity exec -e jupyter-bootstrapped.img conda list | grep seaborn
IPython notebooks interface with the system via an abstraction called Kernels. A wide variety of languages are supported via Kernels, and they can be customized by editing the kernelspec JSON file that defines them. Here is the default Python 3 kernelspec for reference:
"argv": [
"python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3",
"language": "python"
}
The argv key in this JSON object is the list that Jupyter uses to construct the kernel command when a notebook is started.
Remember the singularity exec subcommand? We can leverage it here to start a kernel in our container from a notebook server running in our host environment. All we need to do is prepend the components of the exec command to the argv list:
"argv": [
"singularity",
"exec",
"-e",
"jupyter-bootstrapped.img",
"python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3",
"language": "python"
}
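If you would rather not hand-edit the JSON, the rewrite can be scripted. The sketch below works on a throwaway copy of a default kernelspec (the /tmp path is an assumption for illustration; the image name matches the one built above):

```shell
# Create a throwaway copy of a default Python kernelspec...
mkdir -p /tmp/demo-kernel
cat > /tmp/demo-kernel/kernel.json <<'EOF'
{"argv": ["python", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
 "display_name": "Python 3", "language": "python"}
EOF
# ...then prepend the singularity exec invocation to its argv list.
python3 - /tmp/demo-kernel/kernel.json <<'PYEOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    spec = json.load(f)
# Prepend the container invocation so the kernel starts inside the image.
spec["argv"] = ["singularity", "exec", "-e", "jupyter-bootstrapped.img"] + spec["argv"]
with open(path, "w") as f:
    json.dump(spec, f, indent=1)
PYEOF
```

Scripting the edit keeps the kernelspec change reproducible, in the same spirit as bootstrapping from a spec file rather than shelling in.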
In [21]:
ipython kernel install --prefix /tmp
Now edit your kernelspec. An example can be found in this repo at singularity-kernel.json. Make sure to rename the kernelspec directory to avoid conflicts with existing kernels.
In [6]:
mv /tmp/share/jupyter/kernels/python3 /tmp/share/jupyter/kernels/seaborn
# Then edit /tmp/share/jupyter/kernels/seaborn/kernel.json (in our case we'll just copy the example)
cp singularity-kernel.json /tmp/share/jupyter/kernels/seaborn/kernel.json
In [7]:
jupyter kernelspec install --user /tmp/share/jupyter/kernels/seaborn