Start using this module
Add this module to your Puppetfile:
mod 'lcgdm-dpm', '1.15.1'
puppet-dpm module
Description
The puppet-dpm module has been developed to ease the setup of a DPM installation via Puppet.
It can be used to set up different DPM installations:
- DPM Headnode (with or without a local MySQL DB)
- DPM Disknode
- DPM Head+Disk Node (with or without a local MySQL DB)
Dependencies
It relies on several Puppet modules, some developed at CERN and others available from third parties.
The following modules are needed in order to use this module; they are automatically installed from Puppet Forge:
- lcgdm-gridftp
- lcgdm-dmlite
- lcgdm-lcgdm
- lcgdm-xrootd
- lcgdm-voms
- puppetlabs-stdlib
- puppetlabs-mysql
- saz-memcached
- CERNOps-bdii
- puppet-fetchcrl
- puppetlabs-firewall
- puppetlabs-translate
Installation
The puppet-dpm module can be installed from puppetforge via
puppet module install lcgdm-dpm
Prerequisites
The DPM components need an X509 host certificate and key (PEM format) installed on each host as /etc/grid-security/hostcert.pem and /etc/grid-security/hostkey.pem.
SELinux must be disabled on every host before the installation.
The local firewall is not managed by the module; please check the DPM wiki for information on which ports to open:
https://twiki.cern.ch/twiki/bin/view/DPM/DpmSetupManualInstallation#Firewall_Configuration
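As a minimal sketch, assuming the certificate and key are available from a hypothetical site_certs module on the Puppet fileserver, they could be put in place with plain file resources before applying the DPM classes:
# Sketch only: the source paths below are placeholders for your own
# certificate distribution mechanism; /etc/grid-security is assumed to exist.
file { '/etc/grid-security/hostcert.pem':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => 'puppet:///modules/site_certs/hostcert.pem',
}
file { '/etc/grid-security/hostkey.pem':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0400',
  source => 'puppet:///modules/site_certs/hostkey.pem',
}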
Usage
The tests folder of the module contains some examples. For instance, you can set up a DPM box acting as both head node and disk node with the following code snippet:
class{'dpm::head_disknode':
  configure_repos              => true,
  configure_default_pool       => true,
  configure_default_filesystem => true,
  localdomain                  => 'cern.ch',
  db_user                      => 'dpmdbuser',
  db_pass                      => 'PASS',
  db_host                      => 'localhost',
  disk_nodes                   => ["${::fqdn}"],
  mysql_root_pass              => 'ROOTPASS',
  token_password               => 'kwpoMyvcusgdbyyws6gfcxhntkLoh8jilwivnivel',
  xrootd_sharedkey             => 'A32TO64CHARACTERA32TO64CHARACTER',
  site_name                    => 'CNR_DPM_TEST',
  volist                       => ['dteam', 'lhcb'],
  new_installation             => true,
  mountpoints                  => ['/srv/dpm', '/srv/dpm/01'],
  pools                        => ['mypool:100M'],
  filesystems                  => ["mypool:${::fqdn}:/srv/dpm/01"],
}
The same parameters can be configured via Hiera (see the dpm::params class).
Having saved the code snippet in a file (e.g. dpm.pp), you just need to run:
puppet apply dpm.pp
to have the DPM box installed and configured.
Please note that it may be necessary to run the puppet apply command twice in order to have all the changes correctly applied.
Headnode
The Headnode configuration is performed via the dpm::headnode class, or via the dpm::head_disknode class in case of a Head+Disk node installation:
class{"dpm::headnode":
localdomain => 'cern.ch',
db_user => 'dpmdbuser',
db_pass => 'PASS',
db_host => 'localhost',
disk_nodes => ['dpm-disk01.cern.ch'],
local_db => true,
mysql_root_pass => 'MYSQLROOT',
token_password => 'kwpoMyvcusgdbyyws6gfcxhntkLoh8jilwivnivel',
xrootd_sharedkey => 'A32TO64CHARACTERA32TO64CHARACTER',
site_name => 'CNR_DPM_TEST',
volist => ['dteam', 'lhcb'],
new_installation => true,
pools => ['mypool:100M'],
filesystems => ["mypool:${fqdn}:/srv/dpm/01"],
#configure_legacy => false,
configure_dome => true,
configure_domeadapter => true,
host_dn => 'your headnode host cert DN',
http_macaroon_secret => 'your_secret_string_longer_then_64_chars_abcdefghijklmnopqrstuvwx',
#oidc_clientid => '< The OIDC Client ID for this service >',
#oidc_clientsecret => '< The OIDC Client Secret for this service >',
#oidc_passphrase => '< The OIDC crypto passphrase >',
#oidc_allowissuer => ['"/dpm/cern.ch/home/wlcg" "https://wlcg.cloud.cnaf.infn.it/"'],
#oidc_allowaudience => ['https://wlcg.cern.ch/jwt/v1/any', "${::fqdn}"],
}
Each pool and filesystem specified in the pools and filesystems parameters should have the following syntax:
- pools: 'poolname:defaultSize'
- filesystems: 'poolname:servername:filesystem_path'
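For example, a hypothetical pool named data_pool with a 100G default size, backed by a single filesystem on a disk node, could be expressed as follows (the pool name, host name and path are illustrative):
pools       => ['data_pool:100G'],
filesystems => ['data_pool:dpm-disk01.example.org:/srv/dpm/01'],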
DB configuration
Depending on the DB installation (local to the headnode or external) there are different configuration parameters to set:
In case of a local installation, the db_host parameter should be set to localhost and the local_db parameter to true, while for an external DB installation the local_db parameter should be set to false.
N.B. In case of an external DB installation, the root DB grants for the headnode should be added manually to the DB:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'HEADNODE' IDENTIFIED BY 'MYSQLROOT' WITH GRANT OPTION;
N.B. In case of an upgrade of an existing DPM installation, the new_installation parameter MUST be set to false.
The mysql_override_options parameter can be used to override the MySQL server configuration. In general the values provided by default by the module (via the $dpm::params::mysql_override_options variable) should be fine.
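As a hedged sketch, an external-DB headnode could combine these parameters as shown below; the database host name is a placeholder and the other mandatory parameters from the Headnode example above are omitted:
class{'dpm::headnode':
  # ... mandatory parameters as in the Headnode example above ...
  db_host  => 'dpm-db.example.org',  # hypothetical external DB host
  db_user  => 'dpmdbuser',
  db_pass  => 'PASS',
  local_db => false,                 # the DB server is not managed on this node
}
For a local installation, db_host stays at 'localhost' with local_db set to true, and mysql_override_options can optionally carry an override hash (assumed here to follow the puppetlabs-mysql override_options layout, e.g. {'mysqld' => {'max_connections' => '1000'}}) if the defaults from dpm::params need tuning.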
Xrootd configuration
The basic Xrootd configuration only requires specifying the xrootd_sharedkey, a 32 to 64 character string which must be the same on all the nodes of the cluster.
In order to configure the Xrootd federations and the Xrootd monitoring via the dpm_xrootd_fedredirs, xrd_report and xrd_monitor parameters, please refer to the DPM-Xrootd puppet guide:
https://twiki.cern.ch/twiki/bin/view/DPM/DPMComponents_Dpm-Xrootd#Puppet_Configuration
Other configuration
The Headnode is configured with the Memcached server and the related DPM plugin; to disable it, set the memcached_enabled parameter to false.
The same applies to the WebDAV frontend, which is installed and enabled by default but can be disabled by setting webdav_enabled to false.
Other parameters are:
- configure_bdii : enable/disable the configuration of the Resource BDII ( default = true)
- configure_star : enable/disable the configuration of APEL StAR accounting ( default = false)
- configure_default_pool : create the pools specified in the pools parameter ( default = false)
- configure_default_filesystem : create the filesystems specified in the filesystems parameter ( default = false)
See the Common Configuration section for the rest of the configuration options.
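A minimal sketch toggling these options on the headnode might look as follows (only the optional toggles are shown; the mandatory parameters from the Headnode example above are omitted):
class{'dpm::headnode':
  # ... mandatory parameters as in the Headnode example above ...
  memcached_enabled            => false,  # skip the Memcached server and DPM plugin
  webdav_enabled               => false,  # do not enable the WebDAV frontend
  configure_bdii               => false,  # no Resource BDII on this host
  configure_star               => true,   # enable APEL StAR accounting
  configure_default_pool       => true,   # create the pools listed in pools
  configure_default_filesystem => true,   # create the filesystems listed in filesystems
}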
Disknode
The Disknode configuration is performed via the dpm::disknode class, as follows:
class{'dpm::disknode':
  headnode_fqdn         => 'HEADNODE',
  disk_nodes            => ["${::fqdn}"],
  localdomain           => 'cern.ch',
  token_password        => 'TOKEN_PASSWORD',
  xrootd_sharedkey      => 'A32TO64CHARACTERKEYTESTTESTTESTTEST',
  volist                => ['dteam', 'lhcb'],
  mountpoints           => ['/data', '/data/01'],
  #configure_legacy     => false,
  configure_dome        => true,
  configure_domeadapter => true,
  host_dn               => 'your disknode host cert DN',
}
In particular, the mountpoints var should include the mountpoint paths for the filesystems and the related parent folders. See the Common Configuration section for the rest of the configuration options.
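For instance, a disk node serving two filesystems under /srv/dpm would list the parent directory and both mountpoints (paths are illustrative):
mountpoints => ['/srv/dpm', '/srv/dpm/01', '/srv/dpm/02'],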
Common configuration
VO list and mapfile
Both head and disk nodes should be configured with the list of the supported VOs and the configuration input to generate the mapfile.
The volist parameter is needed to specify the supported VOs, while the groupmap parameter specifies how to map VOMS users. By default the dteam VO mapping is given; an example covering the mappings of all the LHC VOs is as follows:
groupmap = {
"vomss://voms2.cern.ch:8443/voms/atlas?/atlas" => "atlas",
"vomss://lcg-voms2.cern.ch:8443/voms/atlas?/atlas" => "atlas",
"vomss://voms2.cern.ch:8443/voms/cms?/cms" => "cms",
"vomss://lcg-voms2.cern.ch:8443/voms/cms?/cms" => "cms",
"vomss://voms2.cern.ch:8443/voms/lhcb?/lhcb" => "lhcb",
"vomss://lcg-voms2.cern.ch:8443/voms/lhcb?/lhcb" => "lhcb",
"vomss://voms2.cern.ch:8443/voms/alice?/alice" => "alice",
"vomss://lcg-voms2.cern.ch:8443/voms/alice?/alice" => "alice",
"vomss://voms2.cern.ch:8443/voms/ops?/ops" => "ops",
"vomss://lcg-voms2.cern.ch:8443/voms/ops?/ops" => "ops",
"vomss://voms.hellasgrid.gr:8443/voms/dteam?/dteam" => "dteam",
"vomss://voms2.hellasgrid.gr:8443/voms/dteam?/dteam" => "dteam"
}
N.B. The VOMS configuration of VO names containing "." is not supported with this class (it will be ignored), therefore each VO of this type should be explicitly added to your manifest as follows:
voms{"voms::voname":}
and declared as a class as documented at https://forge.puppet.com/lcgdm/voms
Other configuration:
- configure_vos : enable/disable the configuration of the VOs ( default = true)
- configure_repos : configure the yum repositories specified in the repos parameter ( default = false)
- configure_gridmap : enable/disable the configuration of the gridmap file ( default = true)
- gridftp_redirect : enable/disable the GridFTP redirection functionality ( default = true)
- dpmmgr_user , dpmmgr_uid and dpmmgr_gid : the DPM user name, uid and gid ( default = dpmmgr, 151 and 151)
- debug : enable/disable debug logs and coredumps for xrootd ( default = false)
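As an illustration, some of these common options could be set on a disk node as follows (a sketch only; the mandatory parameters from the Disknode example above are omitted and the values are the documented defaults or illustrative choices):
class{'dpm::disknode':
  # ... mandatory parameters as in the Disknode example above ...
  configure_repos => true,      # manage the yum repositories given in the repos parameter
  dpmmgr_user     => 'dpmmgr',  # DPM account name (default)
  dpmmgr_uid      => 151,       # default uid
  dpmmgr_gid      => 151,       # default gid
  debug           => true,      # enable xrootd debug logs and coredumps
}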
Compatibility
The module can configure a DPM installation on SL6 and CentOS 7/SL7.
It has been tested with Puppet 5 and 6.
MySQL 5.1 and 5.5 are supported on SL6.
MariaDB 5.5 is supported on CentOS 7.
Changelog
2020-06-01 Petr Vokac petr.vokac@cern.ch
* drop support for puppet 4
* update dependencies
2019-09-05 Petr Vokac petr.vokac@cern.ch
* add StAR accounting
2019-05-09 Andrea Manzi andrea.manzi@cern.ch
* add http_macaroon_secret parameter
* remove default mapping
2019-02-11 Petr Vokac petr.vokac@cern.ch
* properly disable gridftp redirection
2019-02-11 Andrea Manzi andrea.manzi@cern.ch
* update BDII module version
2019-01-10 Andrea Manzi andrea.manzi@cern.ch
* add conf for xrootd delegation
* add conf to disable xrootd checksum
* remove installation of n2n plugin for atlas
2018-11-13 Andrea Manzi andrea.manzi@cern.ch
* aligned version to DPM version
2018-09-04 Andrea Manzi andrea.manzi@cern.ch
* update voms dependency
* update mysql tunings
2018-06-04 Andrea Manzi andrea.manzi@cern.ch
* added DPM 1.10.0 configuration
2018-03-23 Andrea Manzi andrea.manzi@cern.ch
* fix for LCGDM-2589
2018-02-21 Andrea Manzi andrea.manzi@cern.ch
* update dependencies
2018-01-18 Andrea Manzi andrea.manzi@cern.ch
* update dependencies
2017-11-23 Andrea Manzi andrea.manzi@cern.ch
* update dependencies
2017-06-26 Andrea Manzi andrea.manzi@cern.ch
* update dependencies
* fix Readme
2017-05-29 Andrea Manzi andrea.manzi@cern.ch
* update dependencies
2017-04-12 Andrea Manzi andrea.manzi@cern.ch
* don't configure gridftp redirection if DOME is enabled
2017-04-10 Andrea Manzi andrea.manzi@cern.ch
* force stdlib version to be <= 4.15
2017-02-15 Andrea Manzi andrea.manzi@cern.ch
* added admin_dn and db_manage parameters
* updated deps
2016-12-15 Andrea Manzi andrea.manzi@cern.ch
* added DPM 1.9.0 functionalities
2016-10-28 Andrea Manzi andrea.manzi@cern.ch
* use num2bool
2016-08-23 Andrea Manzi andrea.manzi@cern.ch
* updated DPM deps
2016-07-12 Andrea Manzi andrea.manzi@cern.ch
* updated DPM deps
2016-06-29 Andrea Manzi andrea.manzi@cern.ch
* update lcgdm-voms deps
2016-06-22 Andrea Manzi andrea.manzi@cern.ch
* moved to metadata.json
* removed stdlib strict deps
2016-05-31 Andrea Manzi andrea.manzi@cern.ch
* added new deps
* removed limits conf and deps
* raised memcached max memory
2016-05-25 Andrea Manzi andrea.manzi@cern.ch
* moved xrd_report and xrd_monitoring to the disknodes
* fixed configuration of repositories
2016-03-13 Andrea Manzi andrea.manzi@cern.ch
* added mountpoints, pools and filesystems vars
* added mysql_override var
* added pools var
* bug fixes
2016-02-23 Andrea Manzi andrea.manzi@cern.ch
* added gridftp redir support
* moved disknodes to array
* several bug fixes
2016-02-08 Andrea Manzi andrea.manzi@cern.ch
* puppet 4 support
2016-01-06 Andrea Manzi andrea.manzi@cern.ch
* fixed handling of multiple vos on disknode
2015-09-28 Andrea Manzi andrea.manzi@cern.ch
* fixed handling of multiple vos
Dependencies
- lcgdm/gridftp (>= 0.2.8)
- lcgdm/dmlite (>=1.15.0)
- lcgdm/lcgdm (>= 0.3.14)
- lcgdm/xrootd (>= 0.2.9)
- lcgdm/voms (>= 0.4.0)
- puppetlabs/stdlib (>= 4.25.0)
- puppetlabs/mysql (>= 3.4.0 <= 10.4.0)
- saz/memcached (>= 2.8.1 <= 3.5.0)
- CERNOps/bdii (>= 1.2.2 <= 1.3.0)