
# Run MRIqc on the cluster (WIP)

Written by CPP lab people

To contribute see here

## General tips

  • The more resources you request, the faster the job can run, but the longer it may wait in the queue.

  • To try things out, set --time=00:05:00 and --partition=debug so the job starts right away and you can check that it at least runs without problems (e.g. the singularity image runs, the data are BIDS compatible, the data folders are mounted properly). See the example below and the section Submit an MRIqc job via a slurm script.
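For a quick test you can also override the time limit and partition at submission time without editing the script (a minimal sketch, assuming the `cpp_mriqc.slurm` script described below and a subject `sub-01`):

```bash
# command-line options take precedence over the #SBATCH directives in the script
sbatch --time=00:05:00 --partition=debug cpp_mriqc.slurm sub-01
```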

## Prepare to run MRIqc on the cluster

  • have your data on the cluster and unlock them if they are managed by datalad
  • install datalad for your user (see here)
  • get the MRIqc singularity image as follows:

Here the example uses MRIqc version 24.0.0, but check for newer versions; the list of available MRIqc versions is here.

```bash
datalad install https://github.com/ReproNim/containers.git

cd containers

datalad get images/bids/bids-mriqc--24.0.0.sing
```
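Before fetching, you can check which MRIqc versions ship with the containers repo by listing the image files (a quick sketch, run from inside the `containers` folder):

```bash
# list all available MRIqc singularity images
ls images/bids/ | grep mriqc
```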

In case you installed the repo a while ago and you want to use a newer version of MRIqc, update the containers repo via:

```bash
# go to the repo folder
cd path/to/containers

datalad update --merge
```

Depending on the cluster, `unlock` may or may not be needed. It is not needed on `lemaitre4`.

```bash
datalad unlock containers/images/bids/bids-mriqc--24.0.0.sing
```
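To make sure the image actually runs on the cluster, a cheap sanity check is to ask it for its version (this assumes `singularity`, or `apptainer`, is available on the node, possibly after a `module load`):

```bash
# should print the MRIqc version without touching any data
singularity run --cleanenv containers/images/bids/bids-mriqc--24.0.0.sing --version
```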

## Submit an MRIqc job via a slurm script

  • pros:
    • easy to run for multiple subjects
  • cons:
    • the slurm script can be hard to edit from within the cluster in case of an error or a change of mind about the MRIqc options. You can edit it via vim, or edit it locally and then upload a new version (see the example below).
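A minimal sketch of the edit-locally-then-upload workflow; the user name, host name and destination path are placeholders to adapt to your cluster account:

```bash
# copy the locally edited script to the cluster, overwriting the old version
scp cpp_mriqc.slurm <user>@<cluster-login-node>:path/to/scripts/
```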

### Participant level

Content of the cpp_mriqc.slurm file (download and edit from here)

Warning

  1. Read the MRIqc documentation to know what you are doing and how the arguments of the run call affect the results.
  2. All the paths and the email address are set for Marco's user as a demonstration. Change them for your user.
  3. Edit the script from top to bottom with the info needed to run it for your user. Do not overlook the first "commented" chunk: it is not a real comment section, the `#SBATCH` lines are directives read by slurm (check the email, the job report path, the data paths, the username, etc.).
```bash
#!/bin/bash

#SBATCH --job-name=MRIqc
#SBATCH --time=4:00:00 # hh:mm:ss

#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=10000 # megabytes
#SBATCH --partition=batch

#SBATCH --mail-user=marco.barilari@uclouvain.be
#SBATCH --mail-type=ALL
#SBATCH --output=/home/ucl/cosy/marcobar/jobs_report/mriqc_job-%j.txt

#SBATCH --comment=project-name

#export OMP_NUM_THREADS=4
#export MKL_NUM_THREADS=4

## CPP MRIqc script for CECI cluster v0.3.0
#
# written by CPP people
#
# USAGE on cluster:
#
# sbatch cpp_mriqc.slurm <subjID>
#
# examples:
# - 1 subject
#
# sbatch cpp_mriqc.slurm sub-01
#
# submit all the subjects (1 per job) all at once
# read subj list to submit each to a job for all the tasks
# !!! to run from within `raw` folder
# ls -d sub* | xargs -n1 -I{} sbatch path/to/cpp_mriqc.slurm {}

# create jobs_report folder in case they don't exist
mkdir -p $HOME/jobs_report/

# fail whenever something is fishy
# -e exit immediately
# -x to get verbose logfiles
# -u to fail when using undefined variables
# https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html
set -e -x -u -o pipefail

module --force purge

subjID=$1

# "latest" or procide specific version number
MRIQC_VERSION="24.0.0"

# cluster paths
path_to_singularity_image="$HOME/tools/containers/images/bids/bids-mriqc--${MRIQC_VERSION}.sing"
scratch_dir=$GLOBALSCRATCH

# data paths
root_dir="$HOME/path-to-project-yoda-folder"
bids_dir="$root_dir/inputs/raw"
output_dir="$root_dir/outputs/derivatives/mriqc"

# make the scratch folder: space there is not limited and MRIqc can store intermediate files, so a crashed job does not have to start from zero again
mkdir -p "${scratch_dir}"/work-mriqc

# create the output folder in case it does not exist
mkdir -p "${output_dir}"

singularity run --cleanenv \
    -B "${scratch_dir}":/scratch_dir \
    -B "${bids_dir}":/bids_dir \
    -B "${output_dir}":/output \
    "${path_to_singularity_image}" \
    /bids_dir \
    /output \
    participant --participant-label "${subjID}" \
    --work-dir /scratch_dir/work-mriqc/"${subjID}" \
    --verbose-reports
```

On the cluster prompt, submit the jobs as:

```bash
# Submission command for Lemaitre4

# USAGE on cluster:

sbatch cpp_mriqc.slurm <subjID>

# examples:
# - 1 subject

sbatch cpp_mriqc.slurm sub-01

# submit all the subjects (1 per job) all at once
# read subj list to submit each to a job for all the tasks
# !!! to run from within the `raw` folder
ls -d sub* | xargs -n1 -I{} sbatch path/to/cpp_mriqc.slurm {}
```
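If you want to submit only a subset of participants, an alternative (just a sketch; `subjects.txt` is a hypothetical file with one subject label per line) is to loop over a list:

```bash
# submit one job per subject label listed in subjects.txt (e.g. sub-01, sub-03, ...)
while read -r subject; do
    sbatch path/to/cpp_mriqc.slurm "${subject}"
done < subjects.txt
```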

### Group level

Content of the cpp_mriqc_group.slurm file (download and edit from here)

Warning

  1. Read the MRIqc documentation to know what you are doing and how the arguments of the run call affect the results.
  2. All the paths and the email address are set for Marco's user as a demonstration. Change them for your user.
  3. Edit the script from top to bottom with the info needed to run it for your user. Do not overlook the first "commented" chunk: it is not a real comment section, the `#SBATCH` lines are directives read by slurm (check the email, the job report path, the data paths, the username, etc.).
```bash
#!/bin/bash

#SBATCH --job-name=MRIqc
#SBATCH --time=4:00:00 # hh:mm:ss

#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=10000 # megabytes
#SBATCH --partition=batch

#SBATCH --mail-user=marco.barilari@uclouvain.be
#SBATCH --mail-type=ALL
#SBATCH --output=/home/ucl/cosy/marcobar/jobs_report/mriqc_job-%j.txt

#SBATCH --comment=project-name

#export OMP_NUM_THREADS=4
#export MKL_NUM_THREADS=4

## CPP MRIqc script for CECI cluster v0.3.0
#
# written by CPP people
#
# Submission command for Lemaitre4 after running mriqc for each participant
#
# sbatch cpp_mriqc_group.slurm

# create jobs_report folder in case they don't exist
mkdir -p $HOME/jobs_report/

# fail whenever something is fishy
# -e exit immediately
# -x to get verbose logfiles
# -u to fail when using undefined variables
# https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html
set -e -x -u -o pipefail

module --force purge

# "latest" or procide specific version number
MRIQC_VERSION="24.0.0"

# cluster paths
path_to_singularity_image="$HOME/tools/containers/images/bids/bids-mriqc--${MRIQC_VERSION}.sing"
scratch_dir=$GLOBALSCRATCH

# data paths
root_dir="$HOME/path-to-project-yoda-folder"
bids_dir="$root_dir/inputs/raw"
output_dir="$root_dir/outputs/derivatives/mriqc"

# make the scratch folder: space there is not limited and MRIqc can store intermediate files, so a crashed job does not have to start from zero again
mkdir -p "${scratch_dir}"/work-mriqc

# create the mriqc output folder in case it does not exist
mkdir -p "${output_dir}"

singularity run --cleanenv \
    -B "${scratch_dir}":/scratch_dir \
    -B "${bids_dir}":/bids_dir \
    -B "${output_dir}":/output \
    "${path_to_singularity_image}" \
    /bids_dir \
    /output \
    group \
    --work-dir /scratch_dir/work-mriqc \
    --verbose-reports
```

On the cluster prompt, submit the jobs as:

```bash
# Submission command for Lemaitre4

# USAGE on cluster:

# no need to provide any input

sbatch cpp_mriqc_group.slurm
```
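Once the group job has finished, the group reports should appear in the MRIqc output folder; a quick way to check (the path mirrors the one set in the script, and the exact file names may vary across MRIqc versions):

```bash
# one HTML report per modality, e.g. group_T1w.html, group_bold.html
ls "$HOME/path-to-project-yoda-folder/outputs/derivatives/mriqc/"group_*.html
```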

## Tips

### Check your job

see here
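As a quick reference, a couple of standard slurm commands to keep an eye on your jobs (a sketch; replace `<jobID>` with the ID printed by `sbatch`):

```bash
# jobs you currently have queued or running
squeue -u "$USER"

# summary of a specific job: state, runtime, peak memory
sacct -j <jobID> --format=JobID,JobName,State,Elapsed,MaxRSS
```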

To contribute see here