Version information
This version is compatible with:
- Puppet Enterprise 2023.8.x, 2023.7.x, 2023.6.x, 2023.5.x, 2023.4.x, 2023.3.x, 2023.2.x, 2023.1.x, 2023.0.x, 2021.7.x, 2021.6.x, 2021.5.x, 2021.4.x, 2021.3.x, 2021.2.x, 2021.1.x, 2021.0.x
- Puppet >= 7.0.0 < 9.0.0
Start using this module
Add this module to your Puppetfile:
mod 'rjd1-kafka_connect', '3.1.0'
kafka_connect
Welcome to the kafka_connect Puppet module!
Table of Contents
- Description
- Setup - The basics of getting started with kafka_connect
- Usage - Configuration options and additional functionality
- Reference - An under-the-hood peek at what the module is doing and how
- Limitations - OS compatibility, etc.
- Development - Guide for contributing to the module
Description
Manages the setup of Kafka Connect.
Supports setup through either the Confluent package or the Apache .tgz archive.
Includes a Type, Provider, and helper class for management of individual KC connectors.
Setup
What kafka_connect affects
- Manages the KC installation, configuration, and system service.
- Manages the individual state of running connectors, plus their config and secret files.
Getting started with kafka_connect
For a basic Kafka Connect system setup with the default settings, declare the kafka_connect
class.
class { 'kafka_connect': }
Usage
See the manifest documentation for various examples.
Typical deployment
For a typical distributed-mode deployment, most of the default settings should be fine. However, a normal setup will involve connecting to a cluster of Kafka brokers, and the replication factor values for the storage topics should be increased. Here is a real-world example that also pins the package version and includes the Confluent JDBC plugin:
class { 'kafka_connect':
  config_storage_replication_factor => 3,
  offset_storage_replication_factor => 3,
  status_storage_replication_factor => 3,
  bootstrap_servers                 => [
    "kafka-01.${facts['networking']['domain']}:9092",
    "kafka-02.${facts['networking']['domain']}:9092",
    "kafka-03.${facts['networking']['domain']}:9092",
    "kafka-04.${facts['networking']['domain']}:9092",
    "kafka-05.${facts['networking']['domain']}:9092"
  ],
  confluent_hub_plugins             => [ 'confluentinc/kafka-connect-jdbc:10.7.4' ],
  package_ensure                    => '7.5.2-1',
  repo_version                      => '7.5',
}
Managing connectors through the helper class
The helper class is designed to work with connector data defined in hiera.
The main class needs to be included/declared. If only the connector management functionality is desired, there's a flag to skip the standard setup (the install, config, and service classes):
class { 'kafka_connect':
  manage_connectors_only => true,
}
The following sections provide examples of specific functionality through hiera data.
Add a Connector
The connector config data should be added to hiera with the following layout.
kafka_connect::connectors:
  my-connector.json:
    name: 'my-kc-connector'
    config:
      my.config.key: "value"
      my.other.config: "other_value"
Update an existing Connector
Simply make changes to the connector config hash as needed.
kafka_connect::connectors:
  my-connector.json:
    name: 'my-kc-connector'
    config:
      my.config.key: "new_value"
      my.other.config: "other_new_value"
Remove a Connector
There's a parameter, enable_delete, that defaults to false and must first be set to true to support this. Then use the optional ensure key in the connector data hash and set it to 'absent'.
kafka_connect::enable_delete: true

kafka_connect::connectors:
  my-connector.json:
    name: 'my-jdbc-connector'
    ensure: 'absent'
NOTE: be sure to remove the connector from the secrets connectors array as well, if present.
Pause a Connector
The provider supports ensuring the connector state is either running (default) or paused. Similar to removing, use the ensure key in the connector data hash and set it to 'paused'.
kafka_connect::connectors:
  my-connector.json:
    name: 'my-jdbc-connector'
    ensure: 'paused'
    config:
      my.config.key: "value"
      my.other.config: "other_value"
Remove the ensure line or set it to 'running' to unpause/resume.
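For example, to resume the paused connector above, the same data can be reapplied with ensure set back to 'running':

kafka_connect::connectors:
  my-connector.json:
    name: 'my-jdbc-connector'
    ensure: 'running'
    config:
      my.config.key: "value"
      my.other.config: "other_value"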
Managing Secrets Config Data
Support for Externalized Secrets is provided through kafka_connect::secrets. This enables values such as database passwords to be separated from the normal config and only loaded into memory when the connector starts.
The following is a basic DB connection example defined in YAML.
kafka_connect::connectors:
  my-connector.json:
    name: 'my-jdbc-connector'
    config:
      connection.url: "jdbc:postgresql://some-host.example.com:5432/db"
      connection.user: "my-user"
      connection.password: "${file:/etc/kafka-connect/my-jdbc-secret-file.properties:jdbc-sink-connection-password}"
The password is then added, preferably via EYAML, with the file and var names used in the config.
kafka_connect::secrets:
  my-jdbc-secret-file.properties:
    connectors:
      - 'my-jdbc-connector'
    key: 'jdbc-sink-connection-password'
    value: 'ENC[PKCS7,encrypted-passwd-value]'
To add multiple secrets to a single file, use the kv_data hash. Continuing with the example above, to instead have individual secret vars for each of the connection configs:
kafka_connect::secrets:
  my-jdbc-secret-file.properties:
    connectors:
      - 'my-jdbc-connector'
    kv_data:
      jdbc-sink-connection-url: 'ENC[PKCS7,encrypted-url-value]'
      jdbc-sink-connection-user: 'ENC[PKCS7,encrypted-user-value]'
      jdbc-sink-connection-password: 'ENC[PKCS7,encrypted-passwd-value]'
The connectors array should contain a list of connector names that reference the file in their config. This allows for automatic update/refresh (via REST API restart POST) if the password value is changed.
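For reference, the restart POST corresponds to the standard Kafka Connect REST API restart endpoint; a manual equivalent for the example connector (assuming the default port) would be:

$ curl -X POST http://localhost:8083/connectors/my-jdbc-connector/restart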
To later remove unused files, use the optional ensure hash key and set it to 'absent'.
kafka_connect::secrets:
  my-old-jdbc-secret-file.properties:
    ensure: 'absent'
Managing connectors directly through the resource type
WARNING: Breaking change in v2.0.0
In release v2.0.0 the type and provider were renamed from manage_connector to kc_connector. Usage and functionality remain the same.
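A minimal before/after sketch of the rename (hypothetical resource title); only the type name changes:

# v1.x
manage_connector { 'some-kc-connector-name':
  ensure => 'present',
}

# v2.0.0 and later
kc_connector { 'some-kc-connector-name':
  ensure => 'present',
}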
Examples
Ensure a connector exists and the running config matches the file config:
kc_connector { 'some-kc-connector-name':
  ensure      => 'present',
  config_file => '/etc/kafka-connect/some-kc-connector.properties.json',
  port        => 8084,
}
To pause:
kc_connector { 'some-kc-connector-name':
  connector_state_ensure => 'PAUSED',
}
To remove:
kc_connector { 'some-kc-connector-name':
  ensure        => 'absent',
  enable_delete => true,
}
Command to remove through the Puppet RAL:
$ puppet resource kc_connector some-kc-connector-name ensure=absent enable_delete=true
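The RAL can likewise be used to inspect the current state of a connector:

$ puppet resource kc_connector some-kc-connector-name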
Limitations
Tested with Confluent Platform 7.x and Apache Kafka 3.8.0 on the Operating Systems noted in metadata.json.
Known Issues
In certain situations, for example when a connector is set to absent and the enable_delete parameter is false (the default), Puppet will report a system state change when none has actually occurred (i.e., it lies). Warnings are output along with the change notices in these scenarios.
Development
The project is hosted on GitHub. Issue reports and pull requests are welcome.
Reference
Table of Contents
Classes
Public Classes
kafka_connect: Main kafka_connect class.
Private Classes
kafka_connect::config: Manages the Kafka Connect configuration.
kafka_connect::confluent_repo: Manages the Confluent package repository.
kafka_connect::confluent_repo::apt: Manages the Confluent apt package repository.
kafka_connect::confluent_repo::yum: Manages the Confluent yum package repository.
kafka_connect::install: Main class for the Kafka Connect installation.
kafka_connect::install::archive: Manages the Kafka Connect archive (.tgz) based installation.
kafka_connect::install::package: Manages the Kafka Connect package installation.
kafka_connect::manage_connectors: Class to manage individual Kafka Connect connectors and connector secrets.
kafka_connect::manage_connectors::connector: Class to manage individual Kafka Connect connectors.
kafka_connect::manage_connectors::secret: Class to manage individual Kafka Connect connector secrets.
kafka_connect::service: Manages the Kafka Connect service.
kafka_connect::user: Manages the KC user and group.
Defined types
kafka_connect::install::plugin: Defined type for Confluent Hub plugin installation.
Resource types
kc_connector: Manage running Kafka Connect connectors.
Data types
Kafka_connect::Connector: Validate the individual connector data.
Kafka_connect::Connectors: Validate the connectors data.
Kafka_connect::HubPlugin: Validate the Confluent Hub plugin.
Kafka_connect::HubPlugins: Validate the Confluent Hub plugins list.
Kafka_connect::LogAppender: Validate the log4j file appender.
Kafka_connect::Loglevel: Matches all valid log4j loglevels.
Kafka_connect::Secret: Validate the individual secret data.
Kafka_connect::Secrets: Validate the secrets data.
Classes
kafka_connect
Main kafka_connect class.
Examples
Basic setup.
include kafka_connect
Typical deployment with a 3 node Kafka cluster, S3 plugin, and Schema Registry config.
class { 'kafka_connect':
  config_storage_replication_factor   => 3,
  offset_storage_replication_factor   => 3,
  status_storage_replication_factor   => 3,
  bootstrap_servers                   => [ 'kafka-01:9092', 'kafka-02:9092', 'kafka-03:9092' ],
  confluent_hub_plugins               => [ 'confluentinc/kafka-connect-s3:10.5.7' ],
  value_converter_schema_registry_url => "http://schemaregistry-elb.${facts['networking']['domain']}:8081",
}
Custom logging options, with the Elasticsearch plugin.
class { 'kafka_connect':
  log4j_enable_stdout       => true,
  log4j_custom_config_lines => [ 'log4j.logger.io.confluent.connect.elasticsearch=DEBUG' ],
  confluent_hub_plugins     => [ 'confluentinc/kafka-connect-elasticsearch:latest' ],
}
Only manage connectors, not the full setup (i.e. without install/config/service classes).
class { 'kafka_connect':
  manage_connectors_only => true,
  connector_config_dir   => '/opt/kafka-connect/etc',
  rest_port              => 8084,
  enable_delete          => true,
}
Standalone mode with local Kafka and Zookeeper services.
class { 'kafka_connect':
  config_mode                   => 'standalone',
  run_local_kafka_broker_and_zk => true,
}
Apache archive source install.
class { 'kafka_connect':
  install_source        => 'archive',
  connector_config_dir  => '/opt/kafka/config/connectors',
  user                  => 'kafka',
  group                 => 'kafka',
  service_name          => 'kafka-connect',
  manage_user_and_group => true,
  manage_confluent_repo => false,
}
Parameters
The following parameters are available in the kafka_connect class:
manage_connectors_only
manage_confluent_repo
manage_user_and_group
include_java
repo_ensure
repo_enabled
repo_version
install_source
package_name
package_ensure
manage_schema_registry_package
schema_registry_package_name
confluent_rest_utils_package_name
confluent_hub_plugin_path
confluent_hub_plugins
confluent_hub_client_package_name
confluent_common_package_name
archive_install_dir
archive_source
config_mode
kafka_heap_options
kc_config_dir
config_storage_replication_factor
config_storage_topic
group_id
bootstrap_servers
key_converter
key_converter_schemas_enable
listeners
log4j_file_appender
log4j_appender_file_path
log4j_appender_max_file_size
log4j_appender_max_backup_index
log4j_appender_date_pattern
log4j_enable_stdout
log4j_custom_config_lines
log4j_loglevel_rootlogger
offset_storage_file_filename
offset_flush_interval_ms
offset_storage_topic
offset_storage_replication_factor
offset_storage_partitions
plugin_path
status_storage_topic
status_storage_replication_factor
status_storage_partitions
value_converter
value_converter_schema_registry_url
value_converter_schemas_enable
manage_systemd_service_file
service_name
service_ensure
service_enable
service_provider
run_local_kafka_broker_and_zk
user
group
user_and_group_ensure
owner
connector_config_dir
connector_config_file_mode
connector_secret_file_mode
hostname
rest_port
enable_delete
restart_on_failed_state
manage_connectors_only
Data type: Boolean
Flag for including the connector management class only.
Default value: false
manage_confluent_repo
Data type: Boolean
Flag for including the confluent repo class.
Default value: true
manage_user_and_group
Data type: Boolean
Flag for managing the service user & group.
Default value: false
include_java
Data type: Boolean
Flag for including class java.
Default value: false
repo_ensure
Data type: Enum['present', 'absent']
Ensure value for the Confluent package repo resource.
Default value: 'present'
repo_enabled
Data type: Boolean
Enabled value for the Confluent package repo resource.
Default value: true
repo_version
Data type: Pattern[/^(\d+\.\d+|\d+)$/]
Version of the Confluent repo to configure.
Default value: '7.7'
install_source
Data type: Enum['package', 'archive']
Installation source to use, either Confluent package or Apache archive.
Default value: 'package'
package_name
Data type: String[1]
Name of the main KC package to manage.
Default value: 'confluent-kafka'
package_ensure
Data type: String[1]
State of the package to ensure. Note that this may be used by more than one resource, depending on the setup.
Default value: '7.7.1-1'
manage_schema_registry_package
Data type: Boolean
Flag for managing the Schema Registry package (and REST Utils dependency package).
Default value: true
schema_registry_package_name
Data type: String[1]
Name of the Schema Registry package.
Default value: 'confluent-schema-registry'
confluent_rest_utils_package_name
Data type: String[1]
Name of the Confluent REST Utils package.
Default value: 'confluent-rest-utils'
confluent_hub_plugin_path
Data type: Stdlib::Absolutepath
Installation path for Confluent Hub plugins.
Default value: '/usr/share/confluent-hub-components'
confluent_hub_plugins
Data type: Kafka_connect::HubPlugins
List of Confluent Hub plugins to install. Each should be in the format author/name:semantic-version, e.g. 'acme/fancy-plugin:0.1.0'. Also accepts 'latest' in place of a specific version.
Default value: []
confluent_hub_client_package_name
Data type: String[1]
Name of the Confluent Hub Client package.
Default value: 'confluent-hub-client'
confluent_common_package_name
Data type: String[1]
Name of the Confluent Common package.
Default value: 'confluent-common'
archive_install_dir
Data type: Stdlib::Absolutepath
Install directory to use for Apache archive-based setup.
Default value: '/opt/kafka'
archive_source
Data type: Stdlib::HTTPUrl
Download source to use for Apache archive-based setup.
Default value: 'https://downloads.apache.org/kafka/3.8.0/kafka_2.13-3.8.0.tgz'
config_mode
Data type: Enum['distributed', 'standalone']
Configuration mode to use for the setup.
Default value: 'distributed'
kafka_heap_options
Data type: String[1]
Value to set for 'KAFKA_HEAP_OPTS' export.
Default value: '-Xms256M -Xmx2G'
kc_config_dir
Data type: Stdlib::Absolutepath
Configuration directory for KC properties files.
Default value: '/etc/kafka'
config_storage_replication_factor
Data type: Integer
Config value to set for 'config.storage.replication.factor'.
Default value: 1
config_storage_topic
Data type: String[1]
Config value to set for 'config.storage.topic'.
Default value: 'connect-configs'
group_id
Data type: String[1]
Config value to set for 'group.id'.
Default value: 'connect-cluster'
bootstrap_servers
Data type: Array[String[1]]
Config value to set for 'bootstrap.servers'.
Default value: ['localhost:9092']
key_converter
Data type: String[1]
Config value to set for 'key.converter'.
Default value: 'org.apache.kafka.connect.json.JsonConverter'
key_converter_schemas_enable
Data type: Boolean
Config value to set for 'key.converter.schemas.enable'.
Default value: true
listeners
Data type: Stdlib::HTTPUrl
Config value to set for 'listeners'.
Default value: 'HTTP://:8083'
log4j_file_appender
Data type: Kafka_connect::LogAppender
Log4j file appender type to use (RollingFileAppender or DailyRollingFileAppender).
Default value: 'RollingFileAppender'
log4j_appender_file_path
Data type: Stdlib::Absolutepath
Config value to set for 'log4j.appender.file.File'.
Default value: '/var/log/confluent/connect.log'
log4j_appender_max_file_size
Data type: String[1]
Config value to set for 'log4j.appender.file.MaxFileSize'. Only used if log4j_file_appender = 'RollingFileAppender'.
Default value: '100MB'
log4j_appender_max_backup_index
Data type: Integer
Config value to set for 'log4j.appender.file.MaxBackupIndex'. Only used if log4j_file_appender = 'RollingFileAppender'.
Default value: 10
log4j_appender_date_pattern
Data type: String[1]
Config value to set for 'log4j.appender.file.DatePattern'. Only used if log4j_file_appender = 'DailyRollingFileAppender'.
Default value: '\'.\'yyyy-MM-dd-HH'
log4j_enable_stdout
Data type: Boolean
Option to enable logging to stdout/console.
Default value: false
log4j_custom_config_lines
Data type: Optional[Array[String[1]]]
Option to provide additional custom logging configuration. Can be used, for example, to adjust the log level for a specific connector type. See: https://docs.confluent.io/platform/current/connect/logging.html#use-the-kconnect-log4j-properties-file
Default value: undef
log4j_loglevel_rootlogger
Data type: Kafka_connect::Loglevel
Config value to set for 'log4j.rootLogger'.
Default value: 'INFO'
offset_storage_file_filename
Data type: String[1]
Config value to set for 'offset.storage.file.filename'. Only used in standalone mode.
Default value: '/tmp/connect.offsets'
offset_flush_interval_ms
Data type: Integer
Config value to set for 'offset.flush.interval.ms'.
Default value: 10000
offset_storage_topic
Data type: String[1]
Config value to set for 'offset.storage.topic'.
Default value: 'connect-offsets'
offset_storage_replication_factor
Data type: Integer
Config value to set for 'offset.storage.replication.factor'.
Default value: 1
offset_storage_partitions
Data type: Integer
Config value to set for 'offset.storage.partitions'.
Default value: 25
plugin_path
Data type: Stdlib::Absolutepath
Config value to set for 'plugin.path'.
Default value: '/usr/share/java,/usr/share/confluent-hub-components'
status_storage_topic
Data type: String[1]
Config value to set for 'status.storage.topic'.
Default value: 'connect-status'
status_storage_replication_factor
Data type: Integer
Config value to set for 'status.storage.replication.factor'.
Default value: 1
status_storage_partitions
Data type: Integer
Config value to set for 'status.storage.partitions'.
Default value: 5
value_converter
Data type: String[1]
Config value to set for 'value.converter'.
Default value: 'org.apache.kafka.connect.json.JsonConverter'
value_converter_schema_registry_url
Data type: Optional[Stdlib::HTTPUrl]
Config value to set for 'value.converter.schema.registry.url', if defined.
Default value: undef
value_converter_schemas_enable
Data type: Boolean
Config value to set for 'value.converter.schemas.enable'.
Default value: true
manage_systemd_service_file
Data type: Boolean
Flag for managing systemd service unit file(s).
Default value: true
service_name
Data type: String[1]
Name of the service to manage.
Default value: 'confluent-kafka-connect'
service_ensure
Data type: Stdlib::Ensure::Service
State of the service to ensure.
Default value: 'running'
service_enable
Data type: Boolean
Value for enabling the service at boot.
Default value: true
service_provider
Data type: Optional[String[1]]
Backend provider to use for the service resource.
Default value: undef
run_local_kafka_broker_and_zk
Data type: Boolean
Flag for running local kafka broker and zookeeper services. Intended only for use with standalone config mode.
Default value: false
user
Data type: Variant[String[1], Integer]
User to run service as, set owner on config files, etc.
Default value: 'cp-kafka-connect'
group
Data type: Variant[String[1], Integer]
Group the service will run as.
Default value: 'confluent'
user_and_group_ensure
Data type: Enum['present', 'absent']
Value to set for ensure on user & group, if managed.
Default value: 'present'
owner
Data type: Optional[Variant[String[1], Integer]]
Owner to set on config files. Deprecated: use the 'user' parameter instead.
Default value: undef
connector_config_dir
Data type: Stdlib::Absolutepath
Configuration directory for connector properties files.
Default value: '/etc/kafka-connect'
connector_config_file_mode
Data type: Stdlib::Filemode
Mode to set on connector config file.
Default value: '0640'
connector_secret_file_mode
Data type: Stdlib::Filemode
Mode to set on connector secret file.
Default value: '0600'
hostname
Data type: String[1]
The hostname or IP of the KC service.
Default value: 'localhost'
rest_port
Data type: Stdlib::Port
Port to connect to for the REST API.
Default value: 8083
enable_delete
Data type: Boolean
Enable delete of running connectors. Required for the provider to actually remove when set to absent.
Default value: false
restart_on_failed_state
Data type: Boolean
Allow the provider to auto restart on FAILED connector state.
Default value: false
Defined types
kafka_connect::install::plugin
Defined type for Confluent Hub plugin installation.
Parameters
The following parameters are available in the kafka_connect::install::plugin defined type:
plugin
Data type: Kafka_connect::HubPlugin
Plugin to install, in the form 'author/name:(semantic-version|latest)'.
Default value: $title
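A minimal usage sketch, with the plugin identifier passed as the resource title (the main class's confluent_hub_plugins parameter normally wraps this for you):

kafka_connect::install::plugin { 'confluentinc/kafka-connect-jdbc:10.7.4': }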
Resource types
kc_connector
Manage running Kafka Connect connectors.
Properties
The following properties are available in the kc_connector type.
config_updated
Valid values: yes, no, unknown
Property to ensure running config matches file config.
Default value: yes
connector_state_ensure
Valid values: RUNNING, PAUSED
State of the connector to ensure.
Default value: RUNNING
ensure
Valid values: present, absent
The basic property that the resource should be in.
Default value: present
tasks_state_ensure
Valid values: RUNNING
State of the connector tasks to ensure. This is just used to catch failed tasks and should not be changed.
Default value: RUNNING
Parameters
The following parameters are available in the kc_connector type.
config_file
Fully qualified path to the connector config file.
enable_delete
Valid values: true, false, yes, no
Flag to enable delete, required for remove action.
Default value: false
hostname
The hostname or IP of the KC service.
Default value: localhost
name
namevar
The name of the connector resource you want to manage.
port
The listening port of the KC service.
Default value: 8083
provider
The specific backend to use for this kc_connector resource. You will seldom need to specify this; Puppet will usually discover the appropriate provider for your platform.
restart_on_failed_state
Valid values: true, false, yes, no
Flag to enable auto restart on FAILED connector state.
Default value: false
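For example, a hypothetical resource opting in to automatic restart handling:

kc_connector { 'some-kc-connector-name':
  restart_on_failed_state => true,
}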
Data types
Kafka_connect::Connector
Validate the individual connector data.
Alias of
Struct[{
  Optional['ensure'] => Enum['absent', 'present', 'running', 'paused'],
  'name'             => String[1],
  Optional['config'] => Hash[String[1], String],
}]
Kafka_connect::Connectors
Validate the connectors data.
Alias of Hash[String[1], Kafka_connect::Connector]
Kafka_connect::HubPlugin
Validate the Confluent Hub plugin.
Alias of Pattern[/^\w+\/[a-zA-z0-9]{1,}[a-zA-z0-9\-]{0,}:(\d+\.\d+\.\d+|latest)$/]
Kafka_connect::HubPlugins
Validate the Confluent Hub plugins list.
Alias of Array[Optional[Pattern[/^\w+\/[a-zA-z0-9]{1,}[a-zA-z0-9\-]{0,}:(\d+\.\d+\.\d+|latest)$/]]]
Kafka_connect::LogAppender
Validate the log4j file appender.
Alias of Enum['DailyRollingFileAppender', 'RollingFileAppender']
Kafka_connect::Loglevel
Matches all valid log4j loglevels.
Alias of Enum['TRACE', 'DEBUG', 'INFO', 'WARN', 'ERROR', 'FATAL']
Kafka_connect::Secret
Validate the individual secret data.
Alias of
Struct[{
  Optional['ensure']     => Enum['absent', 'present', 'file'],
  Optional['connectors'] => Array[String[1]],
  Optional['key']        => String[1],
  Optional['value']      => String[1],
  Optional['kv_data']    => Hash[String[1], String[1]],
}]
Kafka_connect::Secrets
Validate the secrets data.
Alias of Hash[String[1], Kafka_connect::Secret]
rjd1-kafka_connect changelog
Release notes for the rjd1-kafka_connect module.
Release 3.1.0
2024-12-04 - Update supported Operating Systems
- Add Amazon Linux 2023
- Add RedHat, Rocky 8 & 9
- Drop EOL CentOS
Release 3.0.0
2024-10-23 - Support Apache archive install source + other enhancements
- Add support for installing from archive source (Apache .tgz)
- Add optional management of user & group resources
- Add optional management of systemd service unit file(s)
- Use new defined type for hub plugin installs
- Remove deprecated $connectors_absent & $connectors_paused class params
- Deprecate $owner in favor of new $user param
- Set default package version to 7.7.1
- Add support for Ubuntu 24.04
- Update module dependencies
Release 2.5.0
2024-09-03 - Support standalone mode
- Add support for standalone mode setup
- Allow puppetlabs-java 11.x
Release 2.4.1
2024-06-19 - Update deprecated function
- Changed to_json() to the namespaced stdlib::to_json()
- Require puppetlabs-stdlib 9.x
- Improved yum clean command
- Moved wait exec into the service class
Release 2.4.0
2024-05-22 - Type/Provider enhancement
- Added tasks_state_ensure property and support for restarting individual failed tasks
Release 2.3.0
2024-05-02 - Class restructuring
- Added individual sub-classes for repo resources
- Split manage connector stuff into separate sub-classes
- Added an exec to flush the yum cache
- Added optional $service_provider parameter
- Various rspec test updates
Release 2.2.0
2024-04-16 - Improved secrets support
- Added support for multiple key/value pairs in each secret file
Release 2.1.0
2024-04-02 - Support Ubuntu
- Fixed resource ordering with apt repo
- Added support for Ubuntu 22.04
Release 2.0.0
2024-03-19 - Renaming libs
- Renamed the type and provider pair
- Various updates based on 'pdk validate' output
Release 1.4.0
2024-03-05 - Functionality & usage enhancements
- Added support for 'ensure' key in connector data
- Added data types aliases for connectors & secrets
- Improved logic with error checking and notifies
- Deprecated use of $kafka_connect::connectors_absent
- Deprecated use of $kafka_connect::connectors_paused
Release 1.3.0
2024-02-19 - Improved logging support
- Added support for time-based log rotation
- Added ability to enable sending logs to stdout/console
- Added support for setting custom log4j config lines
- Some various class updates
Release 1.2.0
2024-01-29 - Parameter updates & class enhancements
- Updated a bunch of data types to be more strict
- Converted a couple config params from String to Integer
- Added a pair of new params for setting file modes
- Added logic to set config file ensures dynamically
- Added use of contain()
- Updated Strings docs
Release 1.1.0
2024-01-15 - Support removing secret files
- Added ability to set secret files to absent via 'ensure' hash key
- Fixed a bug where the provider would improperly warn on restart
Release 1.0.1
2024-01-05 - Doc updates
- Fixed syntax error in class example
- A few updates to the README
- Some minor class cleanup
Release 1.0.0
2023-12-14 - Support for distributed mode setup
- Added confluent_repo, install, config, & service classes
- Added template configs
- Spec test adds & updates
- Updated docs
Release 0.2.3
2023-12-08 - Documentation updates
- Updated various portions of the docs
Release 0.2.2
2023-10-30 - Update to provider restart
- Updated provider restart POST to include failed tasks
Release 0.2.1
2023-10-19 - Removal of private class params
- Removed private class parameters, replaced with direct scoping
Release 0.2.0
2023-10-18 - Elimination of old dependency
- Removed hash_file resource in favor of to_json() content function
- Some minor manifest cleanup
Release 0.1.0
2023-07-24 - Initial version released as 0.1.0
- Initial release just containing connector management code
- Added type & provider pair, plus manage_connectors helper class
- Added spec tests, standard docs, etc.
Dependencies
- puppetlabs-apt (>= 4.5.1 < 10.0.0)
- puppet-archive (>= 5.0.0 < 8.0.0)
- puppetlabs-stdlib (>= 9.0.0 < 10.0.0)
License: Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0).