# MPCDF Viper Configuration

All nf-core pipelines have been successfully configured for use on the HPCs at the Max Planck Computing and Data Facility (MPCDF).

> [!WARNING]
> These profiles are not officially supported by the MPCDF.

To run Nextflow, the `jdk` and `apptainer` modules must be loaded.
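A typical launch from a login node might look like the following sketch (the pipeline `nf-core/rnaseq`, the `test` profile, and the `--outdir` value are placeholders for illustration, not part of this config):

```shell
# Load Java and Apptainer as required to run Nextflow on Viper
module load jdk apptainer

# Launch an nf-core pipeline with the Viper profile
# (nf-core/rnaseq and the test profile are example placeholders)
nextflow run nf-core/rnaseq -profile mpcdf_viper,test --outdir results
```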

This `mpcdf_viper` config is for the Viper HPC. For Raven, see the `mpcdf` profile.

All profiles use Apptainer as the container engine. To avoid repeatedly downloading the same Apptainer image for every pipeline run, we recommend setting a cache location via the `$NXF_APPTAINER_CACHEDIR` environment variable in your `~/.bash_profile`.
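For example, the cache could be set up as follows (the path `$HOME/nxf_apptainer_cache` is only an illustration; any directory with sufficient quota works):

```shell
# Choose a cache directory for Apptainer images (path is an example)
export NXF_APPTAINER_CACHEDIR="$HOME/nxf_apptainer_cache"
mkdir -p "$NXF_APPTAINER_CACHEDIR"

# Persist the setting for future login sessions
echo 'export NXF_APPTAINER_CACHEDIR="$HOME/nxf_apptainer_cache"' >> ~/.bash_profile
```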

> [!TIP]
> If you have issues pulling the Apptainer image, with errors such as `apptainer unable to create new build:`, you may need to create the directory the error refers to (i.e., the directory with a `-temp` suffix).
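As a sketch, assuming your cache lives at `$NXF_APPTAINER_CACHEDIR`, the missing temp directory could be created like so (the exact path to create depends on what the error message reports):

```shell
# The error usually names a sibling directory with a "-temp" suffix;
# the cache path here is illustrative
NXF_APPTAINER_CACHEDIR="$HOME/nxf_apptainer_cache"
mkdir -p "${NXF_APPTAINER_CACHEDIR}-temp"
```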

> [!WARNING]
> Do not set `NXF_APPTAINER_LIBRARYDIR`; this will prevent images from being pulled correctly.

NB: Nextflow will need to submit jobs to the cluster via SLURM, so the commands above must be executed on one of the head nodes. Check the MPCDF documentation for details.

## Config file

See config file on GitHub

mpcdf_viper.config
```groovy
params {
    config_profile_description = 'MPCDF Viper HPC profiles (unofficially) provided by nf-core/configs.'
    config_profile_contact     = 'James Fellows Yates (@jfy133)'
    config_profile_url         = 'https://www.mpcdf.mpg.de/services/supercomputing'
}

cleanup = true

process {
    resourceLimits = [
        memory: 2300.GB,
        cpus: 256,
        time: 24.h
    ]
    beforeScript   = 'module load apptainer'
    clusterOptions = '--export=ALL'
    executor       = 'slurm'
}

executor {
    queueSize         = 30
    pollInterval      = '1 min'
    queueStatInterval = '5 min'
}

// Set $NXF_APPTAINER_CACHEDIR in your ~/.bash_profile
// to stop downloading the same image for every run
apptainer {
    enabled    = true
    autoMounts = true
}

params {
    max_memory = 2300.GB
    max_cpus   = 256
    max_time   = 24.h
}
```