Singularity: Running Containers on ACCRE

Singularity is a tool that allows containers (including those converted from Docker) to be run within a shared high-performance computing environment. This enables users to control the OS environment (including software libraries) that their jobs run within. For example, if a user wishes to run within an Ubuntu 16.04 environment, he or she may do so despite the fact that the OS on the ACCRE cluster is a completely different Linux distribution (i.e. CentOS)! Docker containers themselves cannot be run in a shared environment like the ACCRE cluster for security reasons. However, Singularity enables a user to convert a Docker container image into a Singularity container image, which can then be run on the cluster.

When running within a Singularity container, a user has the same permissions and privileges that he or she would have outside the container. A Singularity image generally must first be developed and built from a Linux machine where you have administrative access (i.e. a personal machine), although ACCRE makes standard images available to all cluster users at /scratch/singularity-images. If you do not have administrative access to a Linux machine, you can create a virtual Linux machine using a free tool like VirtualBox.
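For example, a quick check of your user identity inside and outside a container illustrates this (the image name and output here are illustrative):

[jill@vmps10 ~]$ id -un
jill
[jill@vmps10 ~]$ module load GCC Singularity
[jill@vmps10 ~]$ singularity exec ubuntu14-accre.img id -un
jill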

A user’s cluster storage may be accessed from within the Singularity container, but no operations (e.g. the installation of system software) that require root/sudo privileges are allowed within the context of the Singularity container when run from the cluster. If you are interested in using Singularity but need assistance creating a custom image to run on the cluster, please schedule an appointment via our Helpdesk. Below are some basic instructions for running Singularity on the cluster. The Singularity documentation is very helpful, so we suggest you invest some time reading through it as well.

Bootstrapping a Singularity Image

Once you have installed Singularity on your own Linux machine or virtual machine, you are ready to create your image. First, create a spec file called ubuntu14-accre.def that looks like the following:

BootStrap: debootstrap
OSVersion: trusty
MirrorURL: http://us.archive.ubuntu.com/ubuntu/

%runscript
    echo "This is what happens when you run the container..."

%post
    apt-get -y update
    apt-get -y install python3 python3-numpy python3-scipy
    # install any other software you need here...
    mkdir /scratch /data /gpfs21 /gpfs22 /gpfs23

In this file, we are telling Singularity we want to build an image based on the latest version of Ubuntu Trusty (version 14.04). The %runscript section defines the commands or tasks to execute each time you run a container of this image. The %post section contains one-time setup steps that are executed inside the image at build time; this is where you install any custom software. In this case, we use apt-get to install Python 3 with NumPy and SciPy included. Finally, if you want to access all your cluster storage from within the container, it's important to create the following directories inside the image (within the %post section): /scratch, /data, /gpfs21, /gpfs22, and /gpfs23. If you don't make these directories, you will get a warning message when you run a container of this image on the cluster. In particular, creating /gpfs21 will ensure you have access to all your /home space from within a container run on the cluster. To bootstrap this image, run the following commands on your Linux machine where you have admin rights:

# on your personal Linux machine
sudo singularity create ubuntu14-accre.img
sudo singularity bootstrap ubuntu14-accre.img ubuntu14-accre.def

The first command creates an empty image with a default size (768 MiB, at the time this page was written). If you need a larger image containing lots of custom software, it is prudent to pass the --size option and specify the size in MiB (e.g. sudo singularity create --size 2048 ubuntu14-accre.img). The second command builds your custom image based on the spec file you created above.
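Once the bootstrap completes, you can confirm that the %runscript section behaves as expected by running the container directly (the output shown here is illustrative):

# on your personal Linux machine
singularity run ubuntu14-accre.img
# prints: This is what happens when you run the container...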

Using the docker2singularity Container

Singularityware provides a Docker image expressly for converting Docker images to Singularity images. This is especially useful for non-Linux users who do have Docker installed on their local machine. To convert the ubuntu:14.04 image from DockerHub to a Singularity image, simply run

docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /path/to/my/singularity/images:/output \
  --privileged -t --rm \
  singularityware/docker2singularity \
  ubuntu:14.04
The resulting .img file can now be copied to the cluster for use.
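For example, you might copy it to your cluster home directory with scp (the image filename below is hypothetical, since docker2singularity generates names automatically, and vunetid stands in for your own login):

# run from your local machine; filename and login are placeholders
scp /path/to/my/singularity/images/ubuntu-14.04.img vunetid@login.accre.vanderbilt.edu:~/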

Running a Singularity Image on the Cluster

Once you have successfully created your image, you can shell into it with the following command:

# From the cluster
module load GCC Singularity
singularity shell ubuntu14-accre.img

The shell subcommand is useful for interactive work. Note that you can create new data inside the container that will persist outside the context of the container. For batch processing (e.g. within a SLURM job), you should use the exec subcommand instead. For example:

module load GCC Singularity
singularity exec /path/to/my/image.img script.sh

where script.sh contains the processing steps you want to run from within the batch job. You can also pass a generic Linux command to the exec subcommand, and pipes and redirected output are supported (a sample SLURM batch script is sketched after the demonstration that follows). Below is a quick demonstration showing the change in Linux distribution:

[jill@vmps10 ~]$ cat /etc/*release
CentOS release 6.8 (Final)
CentOS release 6.8 (Final)
CentOS release 6.8 (Final)
[jill@vmps10 ~]$ module load GCC Singularity
[jill@vmps10 ~]$ singularity shell ubuntu14-accre.img 
Singularity: Invoking an interactive shell within container...
Singularity.ubuntu14-accre.img> $ cat /etc/*release
VERSION="14.04, Trusty Tahr"
PRETTY_NAME="Ubuntu 14.04 LTS"

Notice that the command prompt changes when you are inside the container.

Singularity.ubuntu14-accre.img> $ exit
[jill@vmps10 ~]$
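As promised above, here is a minimal SLURM batch script that runs a container with the exec subcommand. The resource requests and file names are placeholders; adjust them for your own job:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=01:00:00
#SBATCH --mem=4G

# load Singularity and run the processing script inside the container,
# redirecting its output to a file (redirection works with exec)
module load GCC Singularity
singularity exec ubuntu14-accre.img ./script.sh > results.txt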

Building a Singularity Image with GPUs Enabled

From NVIDIA's GitHub:

Docker containers are often used to seamlessly deploy CPU-based applications on multiple machines. With this use case, Docker containers are both hardware-agnostic and platform-agnostic. This is obviously not the case when using NVIDIA GPUs since it is using specialized hardware and it requires the installation of the NVIDIA driver. As a result, Docker does not natively support NVIDIA GPUs with containers.

Update 2017-10-05: Singularity now supports passing an --nv option which instructs the container to use the native NVIDIA libraries. Previously, the CUDA libraries corresponding to the versions installed on the cluster had to be installed within the Singularity image.

Concretely, a bootstrap file for a TensorFlow image might look like:


# Note: the Docker tag below is an example; pick the TensorFlow GPU tag you need
Bootstrap: docker
From: tensorflow/tensorflow:latest-gpu

%runscript
  exec python "$@"

%post
  # Enables access to ACCRE storage
  mkdir /scratch /data /gpfs22 /gpfs23 /dors

%test
  python -V

The instantiation of a container from such an image would look like:

$ ml GCC Singularity CUDA
$ singularity exec --nv my-singularity-image.img python -c "import tensorflow"
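In a SLURM job, the same command would follow a GPU request in the preamble. A rough sketch, where the partition name, GPU count, and script name are placeholders (consult ACCRE's GPU documentation for the current values):

#!/bin/bash
#SBATCH --partition=pascal
#SBATCH --gres=gpu:1
#SBATCH --time=02:00:00

# load the toolchain and run TensorFlow inside the container with GPU support
ml GCC Singularity CUDA
singularity exec --nv my-singularity-image.img python my-tensorflow-script.py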

For more information, see the Singularity documentation.