SLURM Cluster Usage
The BMRC cluster uses the SLURM workload manager to schedule jobs. Whilst you can use SLURM directly, fsl_sub provides a simplified interface that replaces most of the SLURM-specific options with generic ones, which are largely transferable to other sites that use different queuing software.
April 2026 Changes
fsl_sub 2.11 (installed March 31st 2026) has native support for the new GPU charging model. See our fsl_sub page for details.
To ensure efficient scheduling of jobs (and thus shorten your wait times) we STRONGLY recommend that you specify RAM (with fsl_sub's -R option) and time (-T); without these, all jobs will request the default resource sizes. These values are used to calculate job priority and to help fill empty space on the cluster, so they are very important. If your job exceeds the requested values it will be killed, so err on the side of overestimating. If in doubt, run an example job with high limits and then ask the cluster what your job actually used.
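As a sketch of this workflow (the job script name is a placeholder; -R takes RAM in GB and -T takes time in minutes, per fsl_sub's documented options, and sacct/seff are standard SLURM accounting tools):

```shell
# Submit a job requesting 16 GB of RAM and 90 minutes of runtime.
# "my_analysis.sh" is a hypothetical script - substitute your own command.
fsl_sub -R 16 -T 90 ./my_analysis.sh

# After the job finishes, ask SLURM what it actually used,
# then tighten -R and -T for future runs accordingly.
sacct -j <jobid> --format=JobID,MaxRSS,Elapsed,State

# seff gives a per-job summary of memory and CPU efficiency.
seff <jobid>
```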
If you have used other cluster software (for example Grid Engine) then you may be aware of 'parallel environments'. SLURM does not support these, but fsl_sub's '-s' option can be used to request multi-threaded jobs - pass a simple number: -s <number>. If you provide a parallel environment name it will be discarded, so existing scripts should continue to work as is.
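For example, a Grid Engine-style submission and its SLURM-era equivalent (script name hypothetical; the parallel environment name before the comma is simply ignored by current fsl_sub):

```shell
# Old Grid Engine style: parallel environment name plus slot count.
# The "shmem" name is discarded - only the thread count is used.
fsl_sub -s shmem,8 -R 32 -T 120 ./my_multithreaded_job.sh

# Equivalent on SLURM: just give the number of threads.
fsl_sub -s 8 -R 32 -T 120 ./my_multithreaded_job.sh
```

Note that -R specifies the total RAM for the job, shared across all requested threads.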
Interactive tasks are started in a completely different manner - see BMRC's documentation.
BMRC's documentation: https://www.medsci.ox.ac.uk/divisional-services/support-services-1/bmrc/using-the-bmrc-cluster-with-slurm
GPU hardware information and usage is available at: https://www.medsci.ox.ac.uk/for-staff/resources/bmrc/gpu-resources-slurm
