Puppet module to manage Kafka Connect.


Version information

  • 2.2.0 (latest)
  • 2.1.0
  • 2.0.0
  • 1.4.0
  • 1.3.0
  • 1.2.0
  • 1.1.0
  • 1.0.1
  • 1.0.0
  • 0.2.3
  • 0.2.2
  • 0.2.1
  • 0.2.0
  • 0.1.0
Released Apr 16th 2024
This version is compatible with:
  • Puppet Enterprise 2023.6.x, 2023.5.x, 2023.4.x, 2023.3.x, 2023.2.x, 2023.1.x, 2023.0.x, 2021.7.x, 2021.6.x, 2021.5.x, 2021.4.x, 2021.3.x, 2021.2.x, 2021.1.x, 2021.0.x
  • Puppet >= 7.0.0 < 9.0.0

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'rjd1-kafka_connect', '2.2.0'
Learn more about managing modules with a Puppetfile

Add this module to your Bolt project:

bolt module add rjd1-kafka_connect
Learn more about using this module with an existing project

Manually install this module globally with the Puppet module tool:

puppet module install rjd1-kafka_connect --version 2.2.0

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.



rjd1/kafka_connect — version 2.2.0 Apr 16th 2024



Welcome to the kafka_connect Puppet module!

Table of Contents

  1. Description
  2. Setup - The basics of getting started with kafka_connect
  3. Usage - Configuration options and additional functionality
  4. Reference - An under-the-hood peek at what the module is doing and how
  5. Limitations - OS compatibility, etc.
  6. Development - Guide for contributing to the module


Description

Manages the setup of Kafka Connect.

Includes a Type, Provider, and helper class for management of individual KC connectors.


Setup

What kafka_connect affects

  • Manages the KC installation, configuration, and system service.
  • Manages the individual state of running connectors, plus their config and secret files.

Getting started with kafka_connect

For a basic Kafka Connect system setup with the default settings, declare the kafka_connect class.

class { 'kafka_connect': }


Usage

Typical deployment

For a typical distributed-mode deployment, most of the default settings should be fine. However, a normal setup will involve connecting to a cluster of Kafka brokers, and the replication factor values for the storage topics should be increased. Here is a real-world example that also pins the package version and includes the Confluent JDBC plugin (the broker addresses below are placeholders):

  class { 'kafka_connect':
    config_storage_replication_factor => 3,
    offset_storage_replication_factor => 3,
    status_storage_replication_factor => 3,
    bootstrap_servers                 => [
      # placeholder broker addresses
      'kafka-01.example.com:9092',
      'kafka-02.example.com:9092',
      'kafka-03.example.com:9092',
    ],
    confluent_hub_plugins             => [ 'confluentinc/kafka-connect-jdbc:10.7.4' ],
    package_ensure                    => '7.5.2-1',
  }

Managing connectors through the helper class

The helper class is designed to work with connector data defined in hiera.

The main class must still be included or declared. If only the connector management functionality is desired, a flag excludes the standard setup (installation, configuration, and service management):

  class { 'kafka_connect':
    manage_connectors_only => true,
  }

The following sections provide examples of specific functionality through hiera data.

Add a Connector

The connector config data should be added to hiera with the following layout.

  kafka_connect::connectors:
    - name: 'my-kc-connector'
      config:
        my.config.key: "value"
        my.other.config: "other_value"

Update an existing Connector

Simply make changes to the connector config hash, as needed.

  kafka_connect::connectors:
    - name: 'my-kc-connector'
      config:
        my.config.key: "new_value"
        my.other.config: "other_new_value"

Remove a Connector

The enable_delete parameter defaults to false and must first be overridden to allow deletion. Then use the optional ensure key in the connector data hash and set it to 'absent'.

  kafka_connect::enable_delete: true

  kafka_connect::connectors:
    - name: 'my-jdbc-connector'
      ensure: 'absent'

NOTE: be sure to remove it from the secrets array list as well, if present.

Pause a Connector

The provider supports ensuring the connector state is either running (default) or paused. Similar to removing, use the ensure key in the connector data hash and set it to 'paused'.

  kafka_connect::connectors:
    - name: 'my-jdbc-connector'
      ensure: 'paused'
      config:
        my.config.key: "value"
        my.other.config: "other_value"

Remove the ensure line or set it to 'running' to unpause/resume.
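The explicit form of a resume is shown below; this is a sketch assuming the connector data lives under a top-level kafka_connect::connectors hiera key (the exact key layout may differ in your setup):

```yaml
kafka_connect::connectors:
  - name: 'my-jdbc-connector'
    ensure: 'running'   # explicit; same effect as omitting the key
    config:
      my.config.key: "value"
      my.other.config: "other_value"
```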

Managing Secrets Config Data

Support for Externalized Secrets is provided through kafka_connect::secrets. This allows sensitive values such as database passwords to be kept out of the main connector config and loaded into memory only when the connector starts.

The following is a basic DB connection example defined in YAML.

  kafka_connect::connectors:
    - name: 'my-jdbc-connector'
      config:
        connection.url: "jdbc:postgresql://"
        connection.user: "my-user"
        connection.password: "${file:/etc/kafka-connect/}"

The password is then added, preferably via EYAML, with the file and var names used in the config.

  kafka_connect::secrets:
    'mysecrets.properties':   # hypothetical file name
      connectors:
        - 'my-jdbc-connector'
      key: 'jdbc-sink-connection-password'
      value: 'ENC[PKCS7,encrypted-passwd-value]'

To add multiple secrets to a single file, use the kv_data hash. Continuing with the example above, to instead have individual secret vars for each of the connection configs:

  kafka_connect::secrets:
    'mysecrets.properties':   # hypothetical file name
      connectors:
        - 'my-jdbc-connector'
      kv_data:
        jdbc-sink-connection-url: 'ENC[PKCS7,encrypted-url-value]'
        jdbc-sink-connection-user: 'ENC[PKCS7,encrypted-user-value]'
        jdbc-sink-connection-password: 'ENC[PKCS7,encrypted-passwd-value]'

The connectors array should contain a list of connector names that reference it in the config. This allows for automatic update/refresh (via REST API restart POST) if the password value is changed.
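The restart the module triggers corresponds to the standard Kafka Connect REST API restart endpoint. Performed by hand it would look something like this (assuming the default REST port 8083 and a hypothetical connector name):

```shell
# Restart a connector via the Kafka Connect REST API
# (this is what the module automates when a referenced secret changes).
curl -X POST http://localhost:8083/connectors/my-jdbc-connector/restart

# Verify the connector and task state afterwards.
curl http://localhost:8083/connectors/my-jdbc-connector/status
```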

To later remove unused files, use the optional ensure hash key and set it to 'absent'.

  kafka_connect::secrets:
    'mysecrets.properties':   # hypothetical file name
      ensure: 'absent'

Managing connectors directly through the resource type

WARNING: Breaking change in v2.0.0

In release v2.0.0 the type and provider were renamed from manage_connector to kc_connector. Usage and functionality remain the same.


Ensure a connector exists and the running config matches the file config:

  kc_connector { 'some-kc-connector-name':
    ensure      => 'present',
    config_file => '/etc/kafka-connect/',
    port        => 8084,
  }

To pause:

  kc_connector { 'some-kc-connector-name':
    connector_state_ensure => 'PAUSED',
  }

To remove:

  kc_connector { 'some-kc-connector-name':
    ensure        => 'absent',
    enable_delete => true,
  }

Command to remove through the Puppet RAL:

$ puppet resource kc_connector some-kc-connector-name ensure=absent enable_delete=true


Limitations

Tested with Confluent 7.x on Amazon Linux 2 and Ubuntu 22.04.

Currently only distributed mode setup is supported.

Known Issues

When the enable_delete parameter is set to false and a connector is set to absent, Puppet still reports a removal even though nothing is deleted. A similar situation occurs with the config_updated property when both it and config_file are unspecified. Warnings are output along with the notices in these scenarios.


Development

The project is hosted on GitHub.

Issue reports and pull requests are welcome.