Version information
This version is compatible with:
- Puppet Enterprise >= 3.4.0
- Puppet >= 3.4.0
Start using this module
Add this module to your Puppetfile:
mod 'cesnet-hadoop', '2.1.0'
##Hadoop
####Table of Contents
- Module Description - What the module does and why it is useful
- Setup - The basics of getting started with hadoop
- Usage - Configuration options and additional functionality
- Reference - An under-the-hood peek at what the module is doing and how
- Limitations - OS compatibility, etc.
- Development - Guide for contributing to the module
##Module Description
Management of a Hadoop cluster with security based on Kerberos and with High Availability. All services can be separated across nodes or collocated on a single node as needed. Optionally, other features can be enabled:
- Security based on Kerberos
- HTTPS
- High availability for HDFS Name Node and YARN Resource Manager (requires zookeeper)
- YARN Resource Manager state-store
Puppet >= 3.x is required.
Supported are:
- Debian 7/wheezy: Cloudera distribution (tested with CDH 5.3.1/5.4.1, Hadoop 2.5.0/2.6.0)
- Fedora 21: native packages (tested with Hadoop 2.4.1)
- Ubuntu 14/trusty: Cloudera distribution (tested with CDH 5.3.1, Hadoop 2.5.0)
- RHEL 6 and clones: Cloudera distribution (tested with CDH 5.4.1, Hadoop 2.6.0)
There are some limitations on how to use this module. You should read the documentation, especially the Setup Requirements section.
##Setup
###What cesnet-hadoop module affects
- Packages: installs Hadoop packages (common packages, subsets for requested services, or the client)
- Files modified:
  - /etc/hadoop/ (or /etc/hadoop/conf/)
  - /etc/sysconfig/hadoop (or /etc/default/hadoop)
  - /etc/cron.d/hadoop-* (only when explicit key refresh or restarts are requested)
  - /usr/local/sbin/yellowmanager (not needed, only when the administrator manager script is requested by features)
- Alternatives:
  - alternatives are used for /etc/hadoop/conf in Cloudera
  - this module switches to the new alternative by default, so the original Cloudera configuration can be kept intact
- Services:
  - only the requested Hadoop services are set up and started
  - HDFS: namenode, journalnode, datanode, zkfc, nfs
  - YARN: resourcemanager, nodemanager
  - MAPRED: historyserver
- Data Files: Hadoop uses metadata and data in /var/lib/hadoop-* (or /var/lib/hadoop*/cache); for most of it a custom location can be set up (and it is recommended to use different hard drives), see http://wiki.apache.org/hadoop/DiskSetup.
- Helper Files:
  - /var/lib/hadoop-hdfs/.puppet-hdfs-*
- Secret Files (keytabs, certificates): some files are copied to the home directories of service users: ~hdfs/, ~yarn/, ~mapred/
###Setup Requirements
There are several known or intended limitations in this module.
Be aware of:
- Hadoop repositories:
  - neither Cloudera nor Hortonworks repositories are configured by this module (for Cloudera you can find the list and key files at http://archive.cloudera.com/cdh5/debian/wheezy/amd64/cdh/; Fedora has Hadoop as part of the distribution, ...)
  - Java is not installed by this module (openjdk-7-jre-headless is OK for Debian 7/wheezy)
  - for security, the package providing kinit is also needed (Debian: krb5-user or heimdal-clients; RedHat/Fedora: krb5-workstation)
- One-node Hadoop cluster (may be collocated on one machine): by default Hadoop replicates all data to at least 3 data nodes. For a one-node Hadoop cluster, use the property dfs.replication=1 in the properties parameter.
- No inter-node dependencies: a working HDFS (namenode + some data nodes) is required before the history server launch or for the resourcemanager state-store feature; some workarounds exist:
  - the helper parameter hdfs_deployed: when false, services dependent on HDFS are not launched (default: true)
  - administrators are encouraged to use any other way to solve inter-node dependencies (PuppetDB?)
  - or just repeat the setup on the historyserver and resourcemanager machines
  Note: a Hadoop cluster collocated on one machine is handled OK
- Secure mode: keytabs must be prepared in /etc/security/keytab/ (see the realm parameter)
- Fedora: 1) see RedHat Bug #1163892, you may use the repository at http://copr-fe.cloud.fedoraproject.org/coprs/valtri/hadoop/ 2) you need to enable ticket refresh and RM restarts (see the features module parameter)
- HTTPS:
  - prepare the CA certificate keystore and the machine certificate keystore in /etc/security/cacerts and /etc/security/server.keystore (the locations can be modified by the https_cacerts and https_keystore parameters), see the Enable HTTPS section
  - prepare the /etc/security/http-auth-signature-secret file (with any content)
  Note: some files are copied into the ~hdfs/, ~yarn/, and ~mapred/ directories
###Beginning with hadoop
By default, the main hadoop class does nothing except configure the hadoop puppet module. The main actions are performed by the included service and client classes.
Let's start with brief examples. Before beginning, you should read the Setup Requirements section above.
Example: The simplest setup is a one-node Hadoop cluster without security, with everything on a single machine:
class{"hadoop":
  hdfs_hostname => $::fqdn,
  yarn_hostname => $::fqdn,
  slaves        => [ $::fqdn ],
  frontends     => [ $::fqdn ],
  # security needs to be disabled explicitly by using empty string
  realm         => '',
  properties    => {
    'dfs.replication' => 1,
  }
}
node default {
  # HDFS
  include hadoop::namenode
  # YARN
  include hadoop::resourcemanager
  # MAPRED
  include hadoop::historyserver
  # slave (HDFS)
  include hadoop::datanode
  # slave (YARN)
  include hadoop::nodemanager
  # client
  include hadoop::frontend
}
For a full-fledged Hadoop cluster the following layout is recommended (services can be collocated):
- one HDFS namenode (or two for high availability, see below)
- one YARN resourcemanager (or two for high availability, see below)
- N slaves with HDFS datanode and YARN nodemanager
Modify $::fqdn and the node section(s) as needed. You can also remove the dfs.replication property when there are more data nodes.
Multiple HDFS namespaces are not supported for now (ask or send patches if you need them :-)).
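For illustration, a minimal multi-node layout without high availability might look like the sketch below. This is only a sketch: the hostnames are placeholders and the class parameters mirror the single-node example above.

```puppet
# placeholder hostnames; adjust to your site
class{'hadoop':
  hdfs_hostname => 'master.example.com',
  yarn_hostname => 'master.example.com',
  slaves        => ['node1.example.com', 'node2.example.com', 'node3.example.com'],
  frontends     => ['frontend.example.com'],
  realm         => '',
}

node 'master.example.com' {
  include hadoop::namenode        # HDFS
  include hadoop::resourcemanager # YARN
  include hadoop::historyserver   # MAPRED
}

node /node\d+\.example\.com/ {
  include hadoop::datanode        # slave (HDFS)
  include hadoop::nodemanager     # slave (YARN)
}

node 'frontend.example.com' {
  include hadoop::frontend        # client
}
```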
##Usage
###Enable Security
Security in Hadoop is based on Kerberos. Keytab files need to be prepared in the proper places before enabling security.
The following parameters are used for security (see also Module Parameters):
- realm (required parameter): Enable security and Kerberos realm to use. An empty string disables security. To enable security, the following are required:
  - installed Kerberos client (Debian: krb5-user/heimdal-clients; RedHat: krb5-workstation)
  - configured Kerberos client (/etc/krb5.conf, /etc/krb5.keytab)
  - /etc/security/keytab/dn.service.keytab (on data nodes)
  - /etc/security/keytab/jhs.service.keytab (on job history node)
  - /etc/security/keytab/nm.service.keytab (on node manager nodes)
  - /etc/security/keytab/nn.service.keytab (on name nodes)
  - /etc/security/keytab/rm.service.keytab (on resource manager node)
  - /etc/security/keytab/nfs.service.keytab (on nfs gateway node)
- authorization (empty hash by default)
It is also recommended to enable HTTPS when security is enabled. See Enable HTTPS.
Example: One-node Hadoop cluster with security (also add the node section from the single-node setup above):
class{"hadoop":
  hdfs_hostname => $::fqdn,
  yarn_hostname => $::fqdn,
  slaves        => [ $::fqdn ],
  frontends     => [ $::fqdn ],
  realm         => 'MY.REALM',
  properties    => {
    'dfs.replication' => 1,
  },
  features      => {
    #restarts => '00 */12 * * *',
    #krbrefresh => '00 */12 * * *',
  },
  authorization => {
    'rules' => 'limit',
    # more paranoid permissions to users in "hadoopusers" group
    #'security.service.authorization.default.acl' => ' hadoop,hbase,hive,hadoopusers',
  },
  # https recommended (and other extensions may require it)
  https => true,
  https_cacerts_password => '',
  https_keystore_keypassword => 'changeit',
  https_keystore_password => 'changeit',
}
Modify $::fqdn and add node sections as needed for a multi-node cluster.
Long running applications
For long-running applications such as Spark Streaming jobs, you may need to work around the 7-day maximum lifetime of the user's delegation tokens by setting these properties in the properties parameter:
'yarn.resourcemanager.proxy-user-privileges.enabled' => true,
'hadoop.proxyuser.yarn.hosts' => RESOURCE MANAGER HOSTS,
'hadoop.proxyuser.yarn.groups' => 'hadoop',
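For illustration only, these could be passed through the properties parameter of the hadoop class like this; the resource manager hostnames below are placeholders:

```puppet
class{'hadoop':
  # ... other parameters as in the examples above ...
  properties => {
    'yarn.resourcemanager.proxy-user-privileges.enabled' => true,
    # placeholders: list your resource manager host(s) here
    'hadoop.proxyuser.yarn.hosts'  => 'rm1.example.com,rm2.example.com',
    'hadoop.proxyuser.yarn.groups' => 'hadoop',
  },
}
```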
Auth to local mapping
You can consider changing or even removing the hadoop.security.auth_to_local property:
properties => {
  'hadoop.security.auth_to_local' => '::undef',
}
The default value is valid for principal names according to the Hadoop documentation at http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SecureMode.html.
The default value in the cesnet-hadoop module also contains mappings for the following Hadoop add-ons:
- HBase: hbase/<HOST>@<REALM> -> hbase
- Hive: hive/<HOST>@<REALM> -> hive
- Hue: hue/<HOST>@<REALM> -> hue
- Spark: spark/<HOST>@<REALM> -> spark
- Zookeeper: zookeeper/<HOST>@<REALM> -> zookeeper
- ... and helper principals:
- HTTP SPNEGO: HTTP/<HOST>@<REALM> -> HTTP
- Tomcat: tomcat/<HOST>@<REALM> -> tomcat
hadoop.security.auth_to_local is needed and can't be removed if:
- Kerberos principals and local user names are different
  - they differ in the official documentation: nn/_HOST vs hdfs, ...
  - they are the same in the Cloudera documentation: hdfs/_HOST vs hdfs, ...
- cross-realm authentication is needed
- support for more principals is needed (another Hadoop add-on, ...)
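For illustration, an additional mapping can be supplied by overriding the property in properties. The sketch below uses the standard Hadoop RULE syntax and a hypothetical presto principal; note that overriding the property replaces the whole mapping set generated by this module, so all needed rules have to be listed:

```puppet
class{'hadoop':
  # ... other parameters ...
  properties => {
    # hypothetical extra mapping; the full rule set must be provided when overriding
    'hadoop.security.auth_to_local' => 'RULE:[2:$1@$0](presto@.*MY\.REALM)s/.*/presto/
DEFAULT',
  },
}
```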
###Enable HTTPS
Hadoop is able to use the SPNEGO protocol (="Kerberos tickets through HTTPS"). This requires proper configuration of the browser on the client side and a valid Kerberos ticket.
HTTPS support requires:
- enabled security (realm => ...)
- /etc/security/cacerts file (https_cacerts parameter) - kept in place, only permissions changed if needed
- /etc/security/server.keystore file (https_keystore parameter) - copied for each daemon user
- /etc/security/http-auth-signature-secret file (any data, string or blob) - copied for each daemon user
- /etc/security/keytab/http.service.keytab - copied for each daemon user
Preparing the CA certificates store (/etc/security/cacerts):
# for each CA certificate in the chain
keytool -importcert -keystore cacerts -storepass changeit -trustcacerts -alias some-alias -file some-file.pem
# check
keytool -list -keystore cacerts -storepass changeit
# move to the right default location
mv cacerts /etc/security/
Preparing the certificates keystore (/etc/security/server.keystore):
# X509 -> pkcs12
# (enter some passphrase)
openssl pkcs12 -export -in /etc/grid-security/hostcert.pem \
-inkey /etc/grid-security/hostkey.pem \
-out server.p12 -name hadoop-dcv -certfile tcs-ca-bundle.pem
# pkcs12 -> java
# (the alias must be the same as the name above)
keytool -importkeystore \
-deststorepass changeit1 -destkeypass changeit2 -destkeystore server.keystore \
-srckeystore server.p12 -srcstoretype PKCS12 -srcstorepass some-passphrase \
-alias hadoop-dcv
# check
keytool -list -keystore server.keystore -storepass changeit1
# move to the right default location
chmod 0600 server.keystore
mv server.keystore /etc/security/
Preparing the signature secret file (/etc/security/http-auth-signature-secret):
dd if=/dev/random bs=128 count=1 > http-auth-signature-secret
chmod 0600 http-auth-signature-secret
mv http-auth-signature-secret /etc/security/
The following hadoop class parameters are used for HTTPS (see also Module Parameters):
- realm (required for HTTPS): Enable security and Kerberos realm to use. See Security.
- https (undef): Enable support for https.
- https_cacerts (/etc/security/cacerts): CA certificates file.
- https_cacerts_password (''): CA certificates keystore password.
- https_keystore (/etc/security/server.keystore): Certificates keystore file.
- https_keystore_password ('changeit'): Certificates keystore file password.
- https_keystore_keypassword (undef): Certificates keystore key password. If not specified, https_keystore_password is used.
Consider also checking POSIX ACL support in the system and enabling acl in the Hadoop module. It is useful for stricter permissions on the ssl-*.xml files, which need to be read by Hadoop add-ons (like HBase).
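A minimal sketch of combining the HTTPS parameters with acl (assuming the /etc/hadoop filesystem supports POSIX ACL; the passwords are placeholders):

```puppet
class{'hadoop':
  # ... hostnames, slaves, realm => 'MY.REALM', ...
  https                      => true,
  https_cacerts_password     => '',
  https_keystore_password    => 'changeit1',
  https_keystore_keypassword => 'changeit2',
  # tighter permissions on ssl-*.xml, while Hadoop add-ons can still read them
  acl                        => true,
}
```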
###Multihome Support
Multihome support doesn't work out-of-the-box in Hadoop 2.6.x (2015-01). Properties and bind hacks for multihome support can be enabled with multihome => true in features. You will also need to add the secondary IPs of the datanodes to datanode_hostnames (or slaves, which sets datanode_hostnames and nodemanager_hostnames):
class{"hadoop":
  hdfs_hostname => $::fqdn,
  yarn_hostname => $::fqdn,
  # for multi-home
  datanode_hostnames => [ $::fqdn, '10.0.0.2', '192.169.0.2' ],
  slaves => [ $::fqdn ],
  frontends => [ $::fqdn ],
  realm => '',
  properties => {
    'dfs.replication' => 1,
  },
  # for multi-home
  features => {
    multihome => true,
  }
}
The multihome feature enables the following properties:
- 'hadoop.security.token.service.use_ip' => false
- 'yarn.resourcemanager.bind-host' => '0.0.0.0'
- 'dfs.namenode.rpc-bind-host' => '0.0.0.0'
###High Availability
These additional daemons are needed for High Availability:
- Secondary Name Node (1) - there will be two Name Node servers
- Journal Node (>=3) - requires HTTPS when Kerberos security is enabled
- Zookeeper/Failover Controller (2) - on each Name Node
- Zookeeper (>=3)
Fresh installation
Setting up High Availability requires a precise order of all steps. For example, all zookeeper servers must be running before formatting zkfc (class hadoop::zkfc::service), and all journal nodes must be running during the initial formatting (class hadoop::namenode::config) or when converting an existing cluster to a cluster with high availability.
There are helper parameters to separate the overall cluster setup into more stages:
- zookeeper_deployed=false, hdfs_deployed=false: zookeeper quorum and journal node quorum
- zookeeper_deployed=true, hdfs_deployed=false: HDFS format and bootstrap (primary and secondary NN), setup and launch of the ZKFC and NN daemons
- zookeeper_deployed=true, hdfs_deployed=true: enable the History Server and the RM state-store feature, if enabled
These parameters are not required; the setup should converge when repeated. They may help with debugging problems though, because fewer things will fail if the setup is separated into several stages across the whole cluster.
Example:
$master1_hostname = 'master1.example.com'
$master2_hostname = 'master2.example.com'
$slaves = ['hadoop1.example.com', 'hadoop2.example.com', ...]
$frontends = ['frontend.example.com']
$quorum_hostnames = [$master1_hostname, $master2_hostname, 'master3.example.com']
$cluster_name = 'example'
$hdfs_deployed = true
$zookeeper_deployed = true

class{'hadoop':
  hdfs_hostname => $master1_hostname,
  hdfs_hostname2 => $master2_hostname,
  yarn_hostname => $master1_hostname,
  yarn_hostname2 => $master2_hostname,
  historyserver_hostname => $master1_hostname,
  slaves => $slaves,
  frontends => $frontends,
  journalnode_hostnames => $quorum_hostnames,
  zookeeper_hostnames => $quorum_hostnames,
  cluster_name => $cluster_name,
  realm => '',
  hdfs_deployed => $hdfs_deployed,
  zookeeper_deployed => $zookeeper_deployed,
}
node 'master1.example.com' {
  include hadoop::namenode
  include hadoop::resourcemanager
  include hadoop::historyserver
  include hadoop::zkfc
  include hadoop::journalnode
  class{'zookeeper':
    hostnames => $quorum_hostnames,
    realm => '',
  }
}
node 'master2.example.com' {
  include hadoop::namenode
  include hadoop::resourcemanager
  include hadoop::zkfc
  include hadoop::journalnode
  class{'zookeeper':
    hostnames => $quorum_hostnames,
    realm => '',
  }
}
node 'master3.example.com' {
  include hadoop::journalnode
  class{'zookeeper':
    hostnames => $quorum_hostnames,
    realm => '',
  }
}
node 'frontend.example.com' {
  include hadoop::frontend
  include hadoop::journalnode
  class{'zookeeper':
    hostnames => $quorum_hostnames,
    realm => '',
  }
}
node /hadoop\d+.example.com/ {
  include hadoop::datanode
  include hadoop::nodemanager
}
Note: Journalnode and Zookeeper are not resource-intensive daemons and can be collocated with other daemons. In this example the content of the master3.example.com node can be moved to some slave node or to the frontend.
Converting non-HA cluster
You can use the example above, but you will need to skip the bootstrap on the secondary Name Node before the setup:
touch /var/lib/hadoop-hdfs/.puppet-hdfs-bootstrapped
And activate HA on the secondary Name Node after the setup (as the hdfs user):
# when Kerberos is enabled:
#kinit -k -t /etc/security/keytab/nn.service.keytab nn/`hostname -f`
#
hdfs namenode -initializeSharedEdits
HDFS NFS Gateway
HDFS NFS Gateway provides limited support for direct access to HDFS. Beware, the NFS support is still problematic and unstable (tested with Hadoop 2.6.0/Cloudera 5.4.7).
The class hadoop::nfs will set up the daemon and mount HDFS locally to /hdfs. The resource hadoop::nfs::mount is used to perform the mounting. If mounting remotely, don't forget to add authorization access to the remote HDFS NFS server.
The HDFS NFS Gateway doesn't support any authentication, so we recommend filtering clients at least by hostnames/IPs. By default only the local machine is allowed to mount the NFS (nfs_exports parameter).
Useful properties:
- nfs.superuser: super-user name (not configured by default)
- nfs.metrics.percentiles.intervals: 100 will enable latency histogram in Nfs3Metrics
- nfs.port.monitoring.disabled: true to allow mounting from unprivileged users
Useful environments:
- HADOOP_NFS3_OPTS: JVM settings (heap, GC, ...)
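A hedged sketch of how these properties and the environment setting could be passed to the hadoop class; the values are illustrative only, and the full gateway examples follow below:

```puppet
class{'hadoop':
  # ... other parameters ...
  nfs_hostnames => ['hadoop-frontend.example.com'],
  properties    => {
    'nfs.superuser'                     => 'hdfs',
    'nfs.metrics.percentiles.intervals' => 100,
    'nfs.port.monitoring.disabled'      => true,
  },
  environment   => {
    # illustrative heap size for the NFS gateway daemon
    'HADOOP_NFS3_OPTS' => '-Xmx1024m',
  },
}
```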
Example 1: local HDFS NFS Gateway
class{"hadoop":
  ...
  #nfs_dumpdir => '/mnt/scratch/.hdfs-nfs',
  nfs_hostnames => ['hadoop-frontend.example.com'],
}
node 'hadoop-frontend.example.com' {
  include hadoop::nfs
}
Example 2: remote HDFS NFS Gateway
class{"hadoop":
  ...
  nfs_hostnames => ['hadoop-frontend.example.com', 'external-host.example.com'],
  #nfs_dumpdir => '/mnt/scratch/.hdfs-nfs',
  nfs_exports => "${::fqdn} rw; external-host.example.com rw",
}
node 'hadoop-frontend.example.com' {
  include hadoop::nfs
}
node 'external-host.example.com' {
  hadoop::nfs::mount { '/mnt/hadoop':
    nfs_hostname => 'hadoop-frontend.example.com',
  }
}
Security
The keytab file /etc/security/keytab/nfs.service.keytab is required. It must contain the principal for the HDFS NFS Gateway.
The principal must correspond to a valid system user (the auth_to_local rules provide the mapping). This system user will also be used as the Hadoop proxy user. The default value is 'nfs'.
Principals needed:
- host/<HOSTNAME>@<REALM>
- nfs/<HOSTNAME>@<REALM>
Authorization
The root user must be authorized for client access to be able to mount. By default nothing is needed (the authorization is '*'). See the authorization parameter.
Example of changing the Hadoop default ACL to stricter settings:
authorization => {
  'rules' => 'limit',
  'security.client.protocol.acl' => 'root hadoop,hbase,hive,spark,users',
  'security.service.authorization.default.acl' => ' hadoop,hbase,hive,spark,users',
}
Quick check
nfs_hostname=`hostname -f`
rpcinfo -p ${nfs_hostname}
showmount -e ${nfs_hostname}
Upgrade
The best way is to refresh the configuration from the new original (i.e. remove the old one) and re-run puppet on top of it.
For example:
alternative='cluster'
d='hadoop'
mv /etc/${d}/conf.${alternative} /etc/${d}/conf.cdhXXX
update-alternatives --auto ${d}-conf
# upgrade
...
puppet agent --test
#or: puppet apply ...
##Reference
###Classes
- hadoop: Main configuration class
  - common::hdfs::config
  - common::hdfs::daemon
  - common::mapred::config
  - common::mapred::daemon
  - common::yarn::config
  - common::yarn::daemon
  - common::config
  - common::install
  - common::postinstall
  - common::slaves
  - config
  - create_dirs
  - install
  - params
  - service
- datanode: HDFS Data Node
  - datanode::config
  - datanode::install
  - datanode::service
- frontend: Hadoop client and examples
  - frontend::config
  - frontend::install
  - frontend::service (empty)
- historyserver: MapReduce Job History Server
  - historyserver::config
  - historyserver::install
  - historyserver::service
- journalnode: HDFS Journal Node used for Quorum Journal Manager
  - journalnode::config
  - journalnode::install
  - journalnode::service
- namenode: HDFS Name Node
  - namenode::bootstrap
  - namenode::config
  - namenode::format
  - namenode::install
  - namenode::service
- nfs: HDFS NFS Gateway
  - nfs::config
  - nfs::install
  - nfs::service
- nodemanager: YARN Node Manager
  - nodemanager::config
  - nodemanager::install
  - nodemanager::service
- resourcemanager: YARN Resource Manager
  - resourcemanager::config
  - resourcemanager::install
  - resourcemanager::service
- zkfc: HDFS Zookeeper/Failover Controller
  - zkfc::config
  - zkfc::install
  - zkfc::service
###Resource Types
- kinit: Init credentials
- kdestroy: Destroy credentials
- mkdir: Creates a directory on HDFS
- nfs::mount: Mount NFS provided by the HDFS NFS gateway
hadoop
Parameters
#####acl
Determines whether the setfacl command is available and /etc/hadoop is on a filesystem supporting POSIX ACL. Default: undef.
It is used only when https is enabled, to set more restrictive permissions on ssl-server.xml.
#####alternatives
Switches the alternatives used for the configuration. Default: 'cluster' (Debian) or undef.
It can be used only when supported (for example with Cloudera distribution).
#####authorization
Hadoop service level authorization ACLs. Default: {}.
Authorization is enabled, and a predefined rule set and/or particular properties can be specified.
Each ACL is in the form of: (wildcard "*" is allowed)
- "USER1,USER2,... GROUP1,GROUP2"
- "USER1,USER2,..."
- " GROUP1,GROUP2,..." (notice the space character)
These properties are available:
- rules (limit, permit, false): predefined ACL sets from cesnet-hadoop puppet module
- security.service.authorization.default.acl: default ACL
- security.client.datanode.protocol.acl
- security.client.protocol.acl
- security.datanode.protocol.acl
- security.inter.datanode.protocol.acl
- security.namenode.protocol.acl
- security.admin.operations.protocol.acl
- security.refresh.usertogroups.mappings.protocol.acl
- security.refresh.policy.protocol.acl
- security.ha.service.protocol.acl
- security.zkfc.protocol.acl
- security.qjournal.service.protocol.acl
- security.mrhs.client.protocol.acl
- security.resourcetracker.protocol.acl
- security.resourcemanager-administration.protocol.acl
- security.applicationclient.protocol.acl
- security.applicationmaster.protocol.acl
- security.containermanagement.protocol.acl
- security.resourcelocalizer.protocol.acl
- security.job.task.protocol.acl
- security.job.client.protocol.acl
- ... and everything with .blocked suffix
ACL set: limit: policy tuned with minimal set of permissions:
- security.datanode.protocol.acl => ' hadoop'
- security.inter.datanode.protocol.acl => ' hadoop'
- security.namenode.protocol.acl => 'hdfs,nn,sn'
- security.admin.operations.protocol.acl => ' hadoop'
- security.refresh.usertogroups.mappings.protocol.acl => ' hadoop'
- security.refresh.policy.protocol.acl => ' hadoop'
- security.ha.service.protocol.acl => ' hadoop'
- security.zkfc.protocol.acl => ' hadoop'
- security.qjournal.service.protocol.acl => ' hadoop'
- security.resourcetracker.protocol.acl => 'yarn,nm,rm'
- security.resourcemanager-administration.protocol.acl => ' hadoop',
- security.applicationmaster.protocol.acl => '*',
- security.containermanagement.protocol.acl => '*',
- security.resourcelocalizer.protocol.acl => '*',
- security.job.task.protocol.acl => '*',
ACL set: permit defines this policy (it is the default):
- security.service.authorization.default.acl => '*'
See also Service Level Authorization Hadoop documentation.
You can use the limit rules. For stricter settings you can set security.service.authorization.default.acl to something different from '*':
authorization => {
  'rules' => 'limit',
  'security.service.authorization.default.acl' => ' hadoop,hbase,hive,spark,users',
}
Note: Beware that the ...acl.blocked properties are not used if the ...acl counterpart is defined.
Note 2: If not using wildcards in the permit rules, you should also enable access for Hadoop add-ons (as seen in the example).
Note 3: See also HDFS NFS Gateway: Authorization.
#####cluster_name
Name of the cluster. Default: 'cluster'.
Used during initial formatting of HDFS. For non-HA configurations it may be undef.
#####compress_enable
Enable compression of intermediate files by the snappy codec. Default: true.
This will set the following properties:
- mapred.compress.map.output: true
- mapred.map.output.compression.codec: "org.apache.hadoop.io.compress.SnappyCodec"
#####datanode_hostnames
Array of Data Node machines. Default: slaves.
#####descriptions
Descriptions for the properties. Default: see params.pp.
Just for cuteness.
#####environment
Environment to set for all Hadoop daemons. Default: undef.
environment => {'HADOOP_HEAPSIZE' => 4096, 'YARN_HEAPSIZE' => 4096}
#####features
Enable additional features. Default: {}.
Available features:
- rmstore: resource manager recovery using the state-store (YARN may depend on HDFS)
  - hdfs: store the state on HDFS; this requires HDFS datanodes already running and the /rmstore directory created ==> you may want to keep it disabled on initial setup. Requires hdfs_deployed to be true.
  - zookeeper: store the state on zookeepers. Requires zookeeper_hostnames to be specified. Warning: no authentication is used.
  - true: select zookeeper or hdfs automatically according to zookeeper_hostnames
- restarts: regular resource manager restarts (MIN HOUR MDAY MONTH WDAY); it should never need to be restarted, but it may be needed for refreshing Kerberos tickets
- krbrefresh: use and refresh the Kerberos credential cache (MIN HOUR MDAY MONTH WDAY); beware, there is a small race-condition during refresh
- yellowmanager: script in /usr/local to start/stop all daemons relevant for a given node
- multihome: enable properties required for multihome usage. You will also need to add secondary IP addresses to datanode_hostnames.
- aggregation: enable YARN log aggregation (recommended, but YARN will then depend on HDFS)
We recommend enabling rmstore, aggregation, and probably multihome (see the sketch below).
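For illustration, the recommended features could be enabled like this (the cron specification is a placeholder):

```puppet
class{'hadoop':
  # ... other parameters ...
  features => {
    'rmstore'     => true,  # zookeeper or hdfs selected by zookeeper_hostnames
    'aggregation' => true,  # YARN log aggregation (YARN will then depend on HDFS)
    'multihome'   => true,  # only with secondary IPs added to datanode_hostnames
    #'krbrefresh' => '00 */12 * * *',  # placeholder cron spec for ticket refresh
  },
}
```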
#####frontends
Array of frontend hostnames. Default: slaves.
#####ha_credentials
Zookeeper credentials for HA HDFS. Default: undef.
With high availability of HDFS enabled in a secured cluster, it is recommended to also secure zookeeper. The value is in the form USER:PASSWORD.
Set this to something like: hdfs-zkfcs:PASSWORD.
#####ha_digest
Digest version of ha_credentials. Default: undef.
You can generate it this way:
java -cp $ZK_HOME/lib/*:$ZK_HOME/zookeeper-*.jar org.apache.zookeeper.server.auth.DigestAuthenticationProvider hdfs-zkfcs:PASSWORD
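A minimal sketch of passing both values to the hadoop class; the password and the digest below are placeholders for the values you generate:

```puppet
class{'hadoop':
  # ... High Availability parameters as in the example above ...
  realm          => 'MY.REALM',
  ha_credentials => 'hdfs-zkfcs:PASSWORD',
  # placeholder: output of the DigestAuthenticationProvider command above
  ha_digest      => 'hdfs-zkfcs:DIGEST',
}
```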
#####hdfs_data_dirs
Directory prefixes to store the data on HDFS datanodes. Default: ["/var/lib/hadoop-hdfs"] or ["/var/lib/hadoop-hdfs/cache"].
- directory for DFS data blocks
- /${user.name}/dfs/datanode suffix is always added
- If there are multiple directories, data will be stored in all of them, typically on different devices (see the example below).
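For example, to spread the data blocks over two dedicated drives (the mount points are illustrative):

```puppet
class{'hadoop':
  # ... other parameters ...
  # per the suffix rule above, data should end up in
  # /data/1/hdfs/dfs/datanode and /data/2/hdfs/dfs/datanode
  hdfs_data_dirs => ['/data/1', '/data/2'],
}
```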
#####hdfs_deployed
Perform also actions requiring working HDFS (namenode + enough datanodes). Default: true.
You can set this to false during the initial installation and thus divide the setup into two separate stages. false will disable the following actions:
- starting MapReduce History Server
- enabling RM HDFS state-store feature (if enabled)
- starting NFS server and NFS mounts (if enabled)
A two-stage setup is not required, but it is recommended to avoid errors during the initial installation, as sketched below.
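A hedged sketch of the two-stage approach: keep hdfs_deployed false for the first puppet run everywhere, then flip it to true and re-run puppet once the namenode and enough datanodes are up.

```puppet
# stage 1: false; stage 2: switch to true and re-run puppet on the affected nodes
$hdfs_deployed = false

class{'hadoop':
  # ... hostnames, slaves, realm, ...
  hdfs_deployed => $hdfs_deployed,
}
```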
#####hdfs_hostname
Hadoop Filesystem Name Node machine. Default: $::fqdn.
#####hdfs_hostname2
Another Hadoop Filesystem Name Node machine for High Availability. Default: undef.
Used for High Availability. This parameter will activate the HDFS HA feature. See http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html.
If you're converting an existing Hadoop cluster without HA to a cluster with HA, you still need to initialize the journalnodes:
hdfs namenode -initializeSharedEdits
Zookeepers are required for automatic transitions.
If the Hadoop cluster is secured, it is recommended to also secure Zookeeper. See the ha_credentials and ha_digest parameters.
#####hdfs_journal_dirs
Directory prefixes to store journal logs on the journal nodes, if different from hdfs_name_dirs. Default: undef.
#####hdfs_name_dirs
Directory prefixes to store the metadata on the namenode. Default: ["/var/lib/hadoop-hdfs"] or ["/var/lib/hadoop-hdfs/cache"].
- directory for name table (fsimage)
- /${user.name}/dfs/namenode or /${user.name}/dfs/name suffix is always added
- If there are multiple directories, the name table is replicated in all of them, for redundancy.
- All directories need to be available for the namenode to work properly (==> good on mirrored RAID)
- Crucial data (==> good to keep at different physical locations)
When adding a new directory, you will need to replicate the contents from some of the other ones. Or set dfs.namenode.name.dir.restore to true and create NEW_DIR/hdfs/dfs/namenode with proper owners.
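A hedged sketch of the second option, adding a new metadata directory with the automatic restore property enabled (the new directory is a placeholder):

```puppet
class{'hadoop':
  # ... other parameters ...
  hdfs_name_dirs => ['/var/lib/hadoop-hdfs', '/mnt/meta2'],
  properties     => {
    # lets the namenode restore a previously failed or empty storage directory
    'dfs.namenode.name.dir.restore' => true,
  },
}
```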
#####hdfs_secondary_dirs
Directory prefixes to store metadata by secondary name nodes, if different from hdfs_name_dirs. Default: undef.
#####historyserver_hostname
History Server machine. Default: yarn_hostname.
#####https
Enable support for https. Default: undef.
Requires:
- enabled security (non-empty realm)
- /etc/security/cacerts file (https_cacerts parameter) - kept in place, only permissions changed if needed
- /etc/security/server.keystore file (https_keystore parameter) - copied for each daemon user
- /etc/security/http-auth-signature-secret file (any data, string or blob) - copied for each daemon user
- /etc/security/keytab/http.service.keytab - copied for each daemon user
#####https_cacerts
CA certificates file. Default: '/etc/security/cacerts'.
#####https_cacerts_password
CA certificates keystore password. Default: ''.
#####https_keystore
Certificates keystore file. Default: '/etc/security/server.keystore'.
#####https_keystore_keypassword
Certificates keystore key password. Default: undef.
If not specified, https_keystore_password is used.
#####https_keystore_password
Certificates keystore file password. Default: 'changeit'.
#####https_keytab
Keytab file for HTTPS. Default: '/etc/security/keytab/http.service.keytab'.
It will be copied for each daemon user, and the corresponding permissions and properties will be set.
#####journalnode_hostnames
Array of HDFS Journal Node machines. Default: undef.
Used in HDFS namenode HA.
#####keytab_datanode
Keytab file for HDFS Data Node. Default: '/etc/security/keytab/dn.service.keytab'.
This will also set the property dfs.datanode.keytab.file, if not specified directly.
#####keytab_jobhistory
Keytab file for Map Reduce Job History Server. Default: '/etc/security/keytab/jhs.service.keytab'.
This will also set the property mapreduce.jobhistory.keytab, if not specified directly.
#####keytab_journalnode
Keytab file for HDFS Journal Node. Default: '/etc/security/keytab/jn.service.keytab'.
This will also set the property dfs.journalnode.keytab.file, if not specified directly.
#####keytab_namenode
Keytab file for HDFS Name Node. Default: '/etc/security/keytab/nn.service.keytab'.
This will also set the property dfs.namenode.keytab.file, if not specified directly.
#####keytab_nfs
Keytab file for HDFS NFS Gateway. Default: '/etc/security/keytab/hdfs.service.keytab'.
This will also set the property nfs.keytab.file, if not specified directly.
#####keytab_nodemanager
Keytab file for YARN Node Manager. Default: '/etc/security/keytab/nm.service.keytab'.
This will also set the property yarn.nodemanager.keytab, if not specified directly.
#####keytab_resourcemanager
Keytab file for YARN Resource Manager. Default: '/etc/security/keytab/rm.service.keytab'.
This will also set the property yarn.resourcemanager.keytab, if not specified directly.
#####min_uid
Minimal permitted UID of Hadoop users. Default: autodetected by facter.
Used in Linux containers, when security is enabled.
#####nfs_dumpdir
Directory used to temporarily save out-of-order writes before writing to HDFS. Default: '/tmp/.hdfs-nfs'.
Enough space is needed (>= 1 GB).
#####nfs_exports
NFS host access privileges. Default: "${::fqdn} rw".
As the HDFS NFS Gateway doesn't have any authentication, we recommend limiting access by IP/hostname. Java regular expressions are used, entries are separated by ';'. Example: '192.168.0.0/22 rw ; \w*\.example\.com ; host1.test.org ro'.
#####nfs_hostnames
Array of HDFS NFS Gateway hostnames. Default: [].
#####nfs_mount
Default directory to mount HDFS NFS Gateway. Default: '/hdfs'.
The HDFS NFS Gateway is automatically mounted locally, but this can be disabled using an empty string. Mounts are handled by the hadoop::nfs::mount resource.
#####nfs_mount_options
Additional NFS mount options. Default: undef.
#####nfs_proxy_user
HDFS proxy user for NFS Gateway. Default: 'nfs' (secured cluster), nfs_system_user (without security).
This must be a system user. It is created automatically, if needed.
The Kerberos principal prefix from keytab_nfs must be the same as this user. If it is not, you need to ensure:
- Proper mapping from principal name to nfs_proxy_user must be specified in hadoop.security.auth_to_local property.
- Principal must be specified in nfs.kerberos.principal property.
#####nfs_system_user
System user for HDFS NFS Gateway server. Default: 'hdfs'.
The value must correspond to the packaging of the Hadoop distribution.
#####nodemanager_hostnames
Array of Node Manager machines. Default: slaves.
#####perform
Launch all installation and setup here, from hadoop class. Default: false.
#####properties
"Raw" properties for hadoop cluster. Default: see params.pp.
The "::undef" value will remove a given property set automatically by this module; an empty string sets an empty value.
#####realm
Enable security and Kerberos realm to use. Default: ''.
An empty string disables security.
With security, the following are required:
- installed Kerberos client (Debian: krb5-user/heimdal-clients; RedHat: krb5-workstation)
- configured Kerberos client (/etc/krb5.conf, /etc/krb5.keytab)
- /etc/security/keytab/dn.service.keytab (on data nodes)
- /etc/security/keytab/jhs.service.keytab (on job history node)
- /etc/security/keytab/nm.service.keytab (on node manager nodes)
- /etc/security/keytab/nn.service.keytab (on name nodes)
- /etc/security/keytab/rm.service.keytab (on resource manager node)
- /etc/security/keytab/nfs.service.keytab (on nfs gateway node)
If https is enabled, the cookie domain is set automatically to the lowercased realm. This may be overridden by http.authentication.cookie.domain in properties.
#####slaves
Array of slave node hostnames. Default: [$::fqdn].
#####yarn_hostname
Yarn machine (with Resource Manager and Job History services). Default: $::fqdn.
#####yarn_hostname2
Second YARN resourcemanager hostname for High Availability. This parameter will activate the YARN HA feature. See http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html.
Zookeepers are required (zookeeper_hostnames parameter).
Default: undef.
#####zookeeper_deployed
Perform also actions requiring working zookeeper and journal nodes. Default: true.
When true, launch ZKFC daemons and secondary namenode (if enabled). You can set this to false during initial installation when High Availability is enabled.
#####zookeeper_hostnames
Array of Zookeeper machines. Default: undef.
Used in HDFS namenode HA for automatic failover and for YARN resourcemanager state-store feature.
Without zookeepers, manual failover is needed for HDFS HA: the namenodes always start in standby mode and one of them would need to be activated manually.
##Limitations
The idea of this module is to do only one thing - set up a Hadoop cluster - and not limit its generic usage by doing other stuff. You can have your own repository with the Hadoop software, and you can select which Kerberos implementation or Java version to use.
On the other hand, this leads to some limitations, as mentioned in the Setup Requirements section, and usage is more complicated - you may need a site-specific puppet module together with this one.
Another limitation is poor support for synchronization across multiple machines. The setup will converge on repeated runs, but it is better to separate the setup into two (or more) stages.
##Development
- Repository: https://github.com/MetaCenterCloudPuppet/cesnet-hadoop
- Tests:
- basic: see .travis.yml
- vagrant: https://github.com/MetaCenterCloudPuppet/hadoop-tests
- Email: František Dvořák <valtri@civ.zcu.cz>
#Changelog
See https://github.com/MetaCenterCloudPuppet/cesnet-hadoop/commits/master.
#Incompatible changes
2.0.0:
- environments parameter:
- name changed: environments -> environment
- type changed: array -> hash
1.0.0:
- mkdir resource definition:
- parameters ordering changed
Dependencies
- puppetlabs/stdlib (>= 1.0.0 <5.0.0)
- adrien/alternatives (>= 0.3.0 <1.0.0)
The MIT License (MIT) Copyright (c) 2014,2015 CESNET Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.