Slurm Quickstart

Create an interactive bash session (srun will run bash in real-time, --pty connects its stdout and stderr to your current session).

res-login-1:~$ srun --pty bash -i
med0740:~$ echo "Hello World"
Hello World
med0740:~$ exit

Note that you probably want a longer running time for your interactive jobs. With the command below, your job can run for up to 28 days. It will automatically be routed into the long partition, as that is the only one that can fit it.

res-login-1:~$ srun --pty --time 28-00 bash -i
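The --time flag uses Slurm's days-hours syntax, so 28-00 above means 28 days and 0 hours. As a sketch of that arithmetic, here is a hypothetical helper function (not a Slurm command) for converting such a spec to seconds:

```shell
# Hypothetical helper (not part of Slurm): convert a "D-HH" time spec
# such as 28-00 into seconds, e.g. to sanity-check a limit.
slurm_dh_to_seconds() {
  local days="${1%%-*}" hours="${1##*-}"
  # force base 10 so values like "08" are not parsed as octal
  echo $(( (10#$days * 24 + 10#$hours) * 3600 ))
}

slurm_dh_to_seconds 28-00
```

For example, 28-00 works out to 2419200 seconds.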

Pro-Tip: Using Bash aliases for quick access.

res-login-1:~$ alias slogin="srun --pty bash -i"
res-login-1:~$ slogin
med0740:~$ exit
res-login-1:~$ cat >>~/.bashrc <<"EOF"
# Useful aliases for logging in via Slurm
alias slogin="srun --pty bash -i"
alias slogin-x11="srun --pty --x11 bash -i"
EOF
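The heredoc delimiter is quoted (<<"EOF") so that the shell writes the body verbatim instead of expanding anything inside it. A quick demonstration against a temporary file (the path /tmp/demo-bashrc is just for illustration):

```shell
# With <<"EOF" the body is taken literally; with an unquoted <<EOF,
# $-variables and $(...) substitutions would be expanded before writing.
cat > /tmp/demo-bashrc <<"EOF"
alias slogin="srun --pty bash -i"
EOF

# the alias line was written exactly as typed
grep -F 'alias slogin="srun --pty bash -i"' /tmp/demo-bashrc
```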

Create an interactive R session on the cluster (assuming conda is active and the environment my-r is created, e.g., with conda create -n my-r r).

res-login-1:~$ conda activate my-r
res-login-1:~$ srun --pty R
R version 3.6.2 (2019-12-12) -- "Dark and Stormy Night"
Copyright (C) 2019 The R Foundation for Statistical Computing
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> q()
Save workspace image? [y/n/c]:

Create an interactive iPython session on the cluster (assuming conda is active and the environment my-python is created, e.g., with conda create -n my-python python=3 ipython).

res-login-1:~$ conda activate my-python
res-login-1:~$ srun --pty ipython
Python 3.8.2 | packaged by conda-forge | (default, Mar  5 2020, 17:11:00)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.13.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import socket; socket.gethostname()
Out[1]: 'med0740'

In [2]: exit

Allocate 4 cores (the default is 1 core) and a total of 4 GB of RAM on one node (alternatively, use --mem-per-cpu to set RAM per CPU); sbatch accepts the same arguments.

res-login-1:~$ srun --cpus-per-task=4 --nodes=1 --mem=4G --pty bash
med0740:~$ export | grep SLURM_CPUS_ON_NODE
med0740:~$ your-parallel-script --threads 4
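Inside a job, Slurm exports SLURM_CPUS_ON_NODE, so a script can pick up its thread count from the allocation rather than hard-coding it. A sketch of that pattern, with a fallback for running outside Slurm (the variable name THREADS is illustrative):

```shell
# Use the Slurm allocation when inside a job; fall back to the machine's
# CPU count (nproc) so the same script also runs outside Slurm.
THREADS="${SLURM_CPUS_ON_NODE:-$(nproc)}"
echo "running with $THREADS threads"
# your-parallel-script --threads "$THREADS"
```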

Submit a bash script to the cluster in batch mode (sbatch schedules the job for later execution).

res-login-1:~$ cat >job-script.sh <<"EOF"
#!/bin/bash
echo "Hello, I'm running on $(hostname) and it's $(date)"
EOF
res-login-1:~$ sbatch job-script.sh
Submitted batch job 7

# Some time later:
res-login-1:~$ cat slurm-7.out
Hello, I'm running on med0740 and it's Fri Mar  6 07:36:42 CET 2020
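The resource flags shown for srun above can also be embedded in the script itself as #SBATCH directives, which sbatch reads so they need not be repeated on the command line. A sketch (the filename and resource values are illustrative):

```shell
#!/bin/bash
# #SBATCH lines are plain comments to bash, but sbatch parses them as
# command-line options. Values below are examples, not site defaults.
#SBATCH --job-name=hello
#SBATCH --cpus-per-task=4
#SBATCH --mem=4G
#SBATCH --time=01:00:00
#SBATCH --output=slurm-%j.out

echo "Hello, I'm running on $(hostname) and it's $(date)"
```

Submit it with sbatch as before; %j in --output expands to the job ID, which is why the output above landed in slurm-7.out.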

Last update: February 7, 2023