Puppet Module to manage the Icinga Software Stack



Version information

  • 2.8.0 (latest), released Jul 26th 2022
This version is compatible with:
  • Puppet Enterprise 2021.7.x, 2021.6.x, 2021.5.x, 2021.4.x, 2021.3.x, 2021.2.x, 2021.1.x, 2021.0.x, 2019.8.x, 2019.7.x, 2019.5.x, 2019.4.x, 2019.3.x, 2019.2.x, 2019.1.x, 2019.0.x, 2018.1.x, 2017.3.x, 2017.2.x, 2016.4.x
  • Puppet >= 4.10.0 < 8.0.0

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'icinga-icinga', '2.8.0'
Learn more about managing modules with a Puppetfile

Add this module to your Bolt project:

bolt module add icinga-icinga
Learn more about using this module with an existing project

Manually install this module globally with Puppet module tool:

puppet module install icinga-icinga --version 2.8.0

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.

Tags: monitoring


icinga/icinga — version 2.8.0 Jul 26th 2022



Table of Contents

  1. Description
  2. Setup - The basics of getting started with icinga
  3. Usage - Configuration options and additional functionality
  4. Reference
  5. Release notes


This module provides several non-private helper classes for the official Icinga modules:

  • [icinga/icinga2]
  • [icinga/icingaweb2]
  • [icinga/icingadb]

Changes in v2.7.0

  • The class icinga::web now uses event instead of worker as the Apache MPM.
  • The class icinga::repos got a new parameter manage_powertools to manage the PowerTools repository on CentOS Stream, Rocky Linux and AlmaLinux.

Changes in v2.0.0

  • Previously, the manage_* parameters only enabled or disabled a repository, but the repository itself was still managed. Now they enable or disable the management itself, see Enable and Disable Repositories.


What the Icinga Puppet module supports

  • [icinga::repos] manages the repositories needed to install icinga2, icingadb and icingaweb2:
    • The Icinga Project repository for the stages: stable, testing or nightly builds
    • EPEL repository for RHEL and similar platforms
    • Backports repository for Debian and Ubuntu
    • NETWAYS extras repository for Icinga Web 2
    • NETWAYS plugins repository with some additional monitoring plugins
  • Classes to set up and manage an Icinga environment much more easily:
    • [icinga::server] sets up an Icinga 2 server including CA, config server, zones and workers aka satellites
    • [icinga::worker] installs an Icinga 2 worker aka satellite
    • [icinga::ido] configures the IDO backend including the database
    • [icinga::web] manages Icinga Web 2, an Apache and a PHP-FPM

Setup Requirements

The requirements depend on the class to be used.

Beginning with icinga

Add this declaration to your Puppetfile:

mod 'icinga',
  :git => '',
  :tag => 'v2.5.0'

Then run:

bolt puppetfile install

Or do a git clone by hand into your modules directory:

git clone icinga

Change to icinga directory and check out your desired version:

cd icinga
git checkout v2.5.0



icinga::repos

The class supports:

  • [puppet] >= 5.5 < 8.0

And requires:

  • [puppetlabs/stdlib] >= 5.1.0 < 9.0.0
  • [puppetlabs/apt] >= 6.0.0
  • [puppet/zypprepo] >= 2.2.1
  • [puppetlabs/yumrepo_core] >= 1.0.0
    • If Puppet 6 or 7 is used

By default, the upstream Icinga repository for stable releases is managed:

include ::icinga::repos

To setup the testing repository for release candidates use instead:

class { '::icinga::repos':
  manage_stable  => false,
  manage_testing => true,
}

Or the nightly builds:

class { '::icinga::repos':
  manage_stable  => false,
  manage_nightly => true,
}

Other possible needed repositories like EPEL on RHEL or the Backports on Debian can also be involved:

class { '::icinga::repos':
  manage_epel         => true,
  configure_backports => true,
}

The prefix configure means that the repository itself is not manageable by this module. Backports, however, can be configured through the class apt::backports, which is used by this module.

Enable and Disable Repositories

When manage is set to true for a repository, the resource is managed and the repository is enabled by default. To switch off a repository again, it still has to be managed and the corresponding parameter has to be set via Hiera. The module does a deep merge lookup for a hash named icinga::repos. Allowed keys are:

  • icinga-stable-release
  • icinga-testing-builds
  • icinga-snapshot-builds
  • epel (only on RHEL Enterprise platforms)
  • netways-plugins
  • netways-extras

An example for Yum or Zypper based platforms to change from the stable to the testing repo:

icinga::repos::manage_testing: true
icinga::repos:
  icinga-stable-release:
    enabled: 0

Or on Apt based platforms:

icinga::repos::manage_testing: true
icinga::repos:
  icinga-stable-release:
    ensure: absent

Installing from Non-Upstream Repositories

To change to a non-upstream repository, e.g. a local mirror, the repos can be customized via Hiera. The module does a deep merge lookup for a hash named icinga::repos. Allowed keys are:

  • icinga-stable-release
  • icinga-testing-builds
  • icinga-snapshot-builds
  • epel (only on RHEL Enterprise platforms)
  • netways-plugins
  • netways-extras

An example to configure a local mirror of the stable release:

icinga::repos:
  icinga-stable-release:
    baseurl: '$releasever/release/'

IMPORTANT: The configuration hash depends on the platform and requires one of the following resources: apt::source on Debian/Ubuntu, yumrepo on RHEL platforms or zypprepo on SLES.

The Backports repo on Debian can of course also be configured like the apt class itself, i.e. the class apt::backports can be configured via Hiera.

As an example, here is how to configure backports on Debian squeeze. For squeeze the repository has already been moved to the unsupported archive:

apt::confs:
  check-valid-until:   # the key name is arbitrary, chosen here for illustration
    content: 'Acquire::Check-Valid-Until no;'
    priority: 99
    notify_update: true
apt::backports::location: ''

icinga::server / icinga::worker / icinga::agent

The class supports:

  • [puppet] >= 5.5 < 8.0

And requires:

  • [icinga/icinga2] >= 2.0.0 < 4.0.0

Setting up an Icinga server with a CA that also stores configuration:

class { '::icinga::server':
  ca            => true,
  ticket_salt   => 'supersecret',
  config_server => true,
  workers       => { 'dmz' => { 'endpoints' => { '' => { 'host' => '' }}, }},
  global_zones  => [ 'global-templates', 'linux-commands', 'windows-commands' ],
}

In addition, a connection to a worker is configured. By default, the zone of the server is named main. When config_server is enabled, config directories are managed for all zones, including the worker and global zones.

IMPORTANT: An alphanumeric string has to be set as ticket_salt in Hiera to protect the CA! An alternative is to set icinga::ticket_salt in a common Hiera section for all agents, workers and servers.

The associated worker could look like this:

class { '::icinga::worker':
  ca_server        => '',
  zone             => 'dmz',
  parent_endpoints => { '' => { 'host' => '', }, },
  global_zones     => [ 'global-templates', 'linux-commands', 'windows-commands' ],
}

If the worker doesn't have a certificate, it sends a certificate request to the CA on the host ca_server. The default parent zone is main. Thus, only the associated endpoint has to be defined.

If icinga::ticket_salt is also set in Hiera for the worker, it is automatically issued a certificate. Otherwise the request is stored on the CA server and must be signed manually.

Both servers and workers can be operated with a partner in the same zone to share load. The endpoint of the respective partner is specified as an Icinga object in colocation_endpoints.

colocation_endpoints => { '' => { 'host' => '', } },

Of course, the second endpoint must also be specified in the respective parent_endpoints of the worker or agent.
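For illustration, the second worker of such a zone could be declared like this on its own node. This is only a sketch: all hostnames are invented placeholders, the original examples leave them blank.

```puppet
# Sketch only: all hostnames are invented placeholders.
class { '::icinga::worker':
  ca_server            => 'server.example.org',
  zone                 => 'dmz',
  parent_endpoints     => { 'server.example.org' => { 'host' => 'server.example.org' } },
  # partner worker in the same 'dmz' zone:
  colocation_endpoints => { 'worker1.example.org' => { 'host' => 'worker1.example.org' } },
}
```

A worker or agent below this zone would then list both worker endpoints in its parent_endpoints.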

An agent is very similar to a worker, only it has no parameter colocation_endpoints:

class { '::icinga::agent':
  ca_server        => '',
  parent_endpoints => { '' => { 'host' => '', }, },
  global_zones     => [ 'linux-commands' ],
}

NOTICE: To switch off the package installation via Chocolatey on Windows, icinga2::manage_packages must be set to false for the corresponding hosts in Hiera. That works only on Windows; on Linux the package installation is always used.
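Such a Hiera override might look like this; the file path and node name are hypothetical, chosen only to illustrate a node-level data file.

```yaml
# data/nodes/win-agent.example.org.yaml -- hypothetical node-level Hiera file
icinga2::manage_packages: false
```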


icinga::ido

The class supports:

  • [puppet] >= 5.5 < 8.0

And requires:

  • [puppetlabs/mysql] >= 6.0.0
  • [puppetlabs/postgresql] >= 7.0.0
  • [icinga/icinga2] >= 2.0.0 < 4.0.0

To activate and configure the IDO feature (usually on a server) do:

class { '::icinga::ido':
  db_type         => 'pgsql',
  db_host         => 'localhost',
  db_pass         => 'icinga2',
  manage_database => true,
}

Setting manage_database to true also sets up a database server as specified in db_type, including the database for the IDO. Supported are pgsql for PostgreSQL and mysql for MariaDB. By default, the database name and the user are both set to icinga2.
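A sketch that overrides these defaults, assuming the class also exposes db_name and db_user parameters; check the module reference before relying on them, as they are not shown in the text above.

```puppet
# Assumption: db_name/db_user parameters exist; all values are examples only.
class { '::icinga::ido':
  db_type         => 'mysql',
  db_host         => 'localhost',
  db_name         => 'ido',
  db_user         => 'ido',
  db_pass         => 'supersecret',
  manage_database => true,
}
```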


icinga::web

The class supports:

  • [puppet] >= 5.5 < 8.0

And requires:

  • [puppetlabs/mysql] >= 6.0.0
  • [puppetlabs/postgresql] >= 7.0.0
  • [puppetlabs/apache] >= 3.0.0
  • [puppet/php] >= 6.0.0
  • [icinga/icinga2] >= 2.0.0
  • [icinga/icingaweb2] >= 2.0.0

An Icinga Web 2 with Apache and PHP-FPM can be managed as follows:

class { '::icinga::web':
  backend_db_type => $icinga::ido::db_type,
  backend_db_host => $icinga::ido::db_host,
  backend_db_pass => $icinga::ido::db_pass,
  db_type         => 'pgsql',
  db_host         => 'localhost',
  db_pass         => 'supersecret',
  manage_database => true,
  api_pass        => $icinga::server::web_api_pass,
}

If Icinga Web 2 is operated on the same host as the IDO, the required user credentials can be accessed directly; otherwise they must be specified explicitly. With manage_database set to true, a database of the specified type is also installed here. It is used to store the settings of the Icinga Web 2 users.

IMPORTANT: If you plan to use icingacli as a plugin, e.g. for director health checks, businessprocess checks or vspheredb checks, set the parameter run_web => true for icinga::server on the same host on which icinga::web is declared. That puts the Icinga user into the group icingaweb2 and restarts the icinga2 process if necessary.


icinga::web::director

Installs and manages the famous Icinga Director and the required database, a graphical addon to manage your monitoring environment: hosts, services, notifications etc.

Here is an example with a PostgreSQL database on the same host:

class { '::icinga::web::director':
  db_type         => 'pgsql',
  db_host         => 'localhost',
  db_pass         => 'supersecret',
  manage_database => true,
  endpoint        => $::fqdn,
  api_host        => 'localhost',
  api_pass        => $icinga::server::director_api_pass,
}

In this example the Icinga server is running on the same host as the web interface and the director.


icinga::web::vspheredb

The class supports:

  • [puppet] >= 5.5 < 8.0

And requires, in addition to icinga::web:

  • [icinga/icingaweb2] >= 3.2.0

The following example sets up the vspheredb Icinga Web 2 module and the required database. At this time only MySQL/MariaDB is supported by the Icinga team, so this class also supports only mysql.

class { '::icinga::web::vspheredb':
  db_type         => 'mysql',
  db_host         => 'localhost',
  db_pass         => 'vspheredb',
  manage_database => true,
}



Release Notes

This code is a very early release and may still be subject to significant changes.