SLURM Puppet module


4.7 quality score

Version information

  • 4.0.1 (latest)
  • 4.0.0
  • 3.2.0
  • 3.1.0
  • 3.0.0
  • 2.4.0
  • 2.3.0
  • 2.2.0
  • 2.1.0
  • 2.0.2
  • 2.0.1
  • 2.0.0
  • 1.0.0
  • 0.7.0
  • 0.6.2
  • 0.6.1
  • 0.6.0
  • 0.5.1
  • 0.5.0
  • 0.4.0
  • 0.3.0
  • 0.2.1
  • 0.2.0
  • 0.1.0
released Apr 16th 2024
This version is compatible with:
  • Puppet Enterprise 2023.7.x, 2023.6.x, 2023.5.x, 2023.4.x, 2023.3.x, 2023.2.x, 2023.1.x, 2023.0.x, 2021.7.x, 2021.6.x, 2021.5.x, 2021.4.x, 2021.3.x, 2021.2.x, 2021.1.x, 2021.0.x
  • Puppet >= 7.0.0 < 9.0.0
  • Tasks: reconfig

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'treydock-slurm', '4.0.1'
Learn more about managing modules with a Puppetfile

Add this module to your Bolt project:

bolt module add treydock-slurm
Learn more about using this module with an existing project

Manually install this module globally with Puppet module tool:

puppet module install treydock-slurm --version 4.0.1

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.

Tags: hpc, slurm, batch


treydock/slurm — version 4.0.1 Apr 16th 2024


Table of Contents

  1. Overview
  2. Usage - Configuration options
  3. Reference - Parameter and detailed reference to all options
  4. Limitations - OS compatibility, etc.


Overview

Manage SLURM.

Supported Versions of SLURM

This module is designed to work with SLURM 22.05.x, 23.02.x and 23.11.x.

SLURM Version        SLURM Puppet module versions
20.02.x              0.x
20.11.x              1.x
21.08.x & 22.05.x    2.x
23.02.x              3.x
23.11.x              4.x
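
For example, a site still running SLURM 23.02.x would pin the 3.x series of this module in its Puppetfile (3.2.0 being the newest 3.x release listed above):

```ruby
# Puppetfile: pin the module series that matches your SLURM release
mod 'treydock-slurm', '3.2.0'
```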


Usage

This module is designed so the majority of configuration changes are made through the slurm class directly.

In most cases, all that is needed to begin using this module is include slurm. The following usage examples all assume that the host already has include slurm applied and that the rest of the configuration is done via Hiera.

It's advisable to put as much of the Hiera data as possible in a location like common.yaml.
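
A minimal hiera.yaml hierarchy along those lines might look like the sketch below (layer names and paths are illustrative, not part of this module):

```yaml
# hiera.yaml (illustrative): most slurm::* keys live in common.yaml,
# with per-host overrides in nodes/<certname>.yaml
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Common data"
    path: "common.yaml"
```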


In order to use SLURM, the munge daemon must be configured. This module will include the munge class from treydock/munge but will not configure munge. The minimal configuration needed is to set the munge key source to a munge key stored in a module somewhere.

munge::munge_key_source: "puppet:///modules/profile/munge.key"

As of version 2.3.0 you can also provide the content of the munge key, for example if you're using EYAML in Hiera.

munge::munge_key_content: "supersecret"
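
Either way, a key file is needed somewhere. A minimal sketch for generating one (the filename is an example; 1024 bytes is a commonly used munge key size) is:

```shell
# Generate a 1024-byte munge key; store it in your control repo
# (e.g. under a profile module) so munge::munge_key_source can serve it
dd if=/dev/urandom of=munge.key bs=1 count=1024
chmod 0400 munge.key
```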


The following parameter changes can be made to avoid dependencies on several modules:

  • slurm::manage_firewall: false - Disable dependency on puppetlabs/firewall
  • slurm::use_nhc: false OR slurm::include_nhc: false - Disable dependency on treydock/nhc
  • slurm::manage_rsyslog: false OR slurm::use_syslog: false - Disable dependency on saz/rsyslog
  • slurm::manage_logrotate: false - Disable dependency on puppet/logrotate
  • slurm::source_install_manage_alternatives: false - When install_method is source and installing on a system without a default Python install, this will disable a dependency on puppet/alternatives
  • slurm::tuning_net_core_somaxconn: false - Disable dependency on herculesteam/augeasproviders_sysctl

NOTE: If use_syslog is set to true there is a soft dependency on saz/rsyslog NOTE: If use_nhc and include_nhc are set to true there is a soft dependency on treydock/nhc
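
As a sketch, a site that wants none of the optional integrations could put the opt-out settings described above together in common.yaml:

```yaml
# common.yaml: disable all optional module dependencies
slurm::manage_firewall: false
slurm::use_nhc: false
slurm::manage_rsyslog: false
slurm::manage_logrotate: false
slurm::tuning_net_core_somaxconn: false
```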


The following could be included in common.yaml. This assumes your site has access to SLURM RPMs.

slurm::repo_baseurl: "%{facts.os.release.major}/"
slurm::install_torque_wrapper: true
slurm::install_pam: true
slurm::slurm_group_gid: 93
slurm::slurm_user_uid: 93
slurm::slurm_user_home: /var/lib/slurm
slurm::manage_firewall: false
slurm::use_syslog: true
slurm::cluster_name: example
    auto_detect: nvml
slurm::slurmd_spool_dir: /var/spool/slurmd
    - gres/gpu
    - gres/gpu:tesla
    - license/ansys
    - ansys:2
  ReturnToService: 2
  SelectType: select/cons_tres
    - CR_CPU
    default: 'YES'
    def_mem_per_cpu: 1700
    max_mem_per_cpu: 1750
    nodes: slurmd01
    cpus: 4
    threads_per_core: 1
    cores_per_socket: 1
    sockets: 4
    real_memory: 7000


The behavior of this module is determined by six booleans that set the roles for a host.

  • client - When true, sets up the host as a SLURM client
  • slurmctld - When true, sets up the host to run slurmctld
  • slurmdbd - When true, sets up the host to run slurmdbd
  • database - When true, sets up the host to manage the slurmdbd MySQL database
  • slurmd - When true, sets up the host to run slurmd
  • slurmrestd - When true, sets up the host to run slurmrestd

NOTE: The only role enabled by default is client.

Role: slurmdbd and database

The following example will set up an instance of slurmdbd and export a database resource that can be collected by a database server:

slurm::client: true
slurm::slurmdbd: true
slurm::database: true
slurm::slurmdbd_storage_loc: slurm_acct_db
slurm::slurmdbd_storage_user: slurmdbd
slurm::slurmdbd_storage_pass: changeme
slurm::export_database: true
slurm::export_database_tag: "%{lookup('slurm::slurmdbd_storage_host')}"
  MaxQueryTimeRange: '90-00:00:00'
  MessageTimeout: '10'

The database server would have something like the following to collect the db resources:

Mysql::Db <<| tag == $facts['fqdn'] |>>
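
For instance, a hypothetical profile class on the MySQL host could combine the MySQL server setup with that collector (the class name is illustrative, not part of this module):

```puppet
# Hypothetical profile for the MySQL host backing slurmdbd
class profile::slurm_database {
  include mysql::server
  # Collect the Mysql::Db resource exported by the slurmdbd host,
  # matched by the tag set via slurm::export_database_tag
  Mysql::Db <<| tag == $facts['fqdn'] |>>
}
```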

The following example would avoid the PuppetDB dependency and instead requires including the slurm class on the MySQL server:

# common.yaml
slurm::slurmdbd_storage_loc: slurm_acct_db
slurm::slurmdbd_storage_user: slurmdbd
slurm::slurmdbd_storage_pass: changeme
# fqdn/
slurm::client: false
slurm::database: true
# fqdn/
slurm::slurmdbd: true
slurm::database: false

Role: slurmctld

The following enables a host to act as the slurmctld daemon with a remote slurmdbd.

slurm::client: true
slurm::slurmdbd: false
slurm::database: false
slurm::slurmctld: true

If you wish to enable configless SLURM:

slurm::enable_configless: true

Role: slurmd

The following enables a host to act as a slurmd compute node

slurm::client: true
slurm::slurmdbd: false
slurm::database: false
slurm::slurmctld: false
slurm::slurmd: true

To have slurmd pull configs via configless SLURM:

slurm::configless: true

Role: client

If the majority of your configuration is done in common.yaml then the default for slurm::client of true is sufficient to configure a host to act as a SLURM client.

Role: slurmrestd

First the common Hiera such as common.yaml should have something like the below. Setting auth_alt_types to include auth/jwt will activate the Puppet code to manage JWT resources where appropriate.

slurm::auth_alt_types:
  - auth/jwt
slurm::jwt_key_source: 'puppet:///modules/site_slurm/jwt.key'
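
A JWT signing key also has to exist at that source. A minimal sketch for generating one (the filename is an example; 32 bytes matches the HS256 key size commonly used with SLURM's auth/jwt) is:

```shell
# Generate a 32-byte HMAC key and store it where the
# slurm::jwt_key_source Puppet URL points
dd if=/dev/urandom of=jwt_hs256.key bs=32 count=1
chmod 0600 jwt_hs256.key
```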

For the host to run slurmrestd:

slurm::slurmrestd: true

slurm::conf usage

It's possible to deploy multiple slurm.conf files using this module.

The following example will deploy /etc/slurm/slurm-ascend.conf with only ClusterName and SlurmctldHost changed.

include slurm
$cluster_conf = {
  'ClusterName'   => 'ascend',
  'SlurmctldHost' => '',
}
slurm::conf { 'ascend':
  configs => $slurm::slurm_conf + $cluster_conf,
}



Limitations

This module has been tested on:

  • RedHat/CentOS 7 x86_64
  • RedHat/Rocky/AlmaLinux 8 x86_64
  • Debian 10 x86_64
  • Ubuntu 20.04 x86_64