
provider-kafka

provider-kafka is a Crossplane Provider for managing Kafka resources.

Usage

  1. Create a provider credentials file (e.g. kc.json) containing JSON like the following; see the expected schema here:

    {
      "brokers":[
        "kafka-dev-0.kafka-dev-headless:9092"
       ],
       "sasl":{
         "mechanism":"PLAIN",
         "username":"user",
         "password":"<your-password>"
       }
    }
    
  2. Create a Kubernetes secret containing the above config:

    kubectl -n crossplane-system create secret generic kafka-creds --from-file=credentials=kc.json
    
  3. Create a ProviderConfig; see this example.
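
    A minimal sketch of such a ProviderConfig, assuming the kafka-creds secret created in step 2 (the apiVersion and field names here are assumptions; check the linked example for the actual schema):

    kubectl apply -f - <<EOF
    apiVersion: kafka.crossplane.io/v1alpha1
    kind: ProviderConfig
    metadata:
      name: default
    spec:
      credentials:
        source: Secret
        secretRef:
          namespace: crossplane-system
          name: kafka-creds
          key: credentials
    EOF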

  4. Create a managed resource; see this example, which creates a Kafka topic.
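
    A hypothetical Topic manifest, for illustration only (the group/version and forProvider fields are assumptions; consult the linked example for the actual schema):

    kubectl apply -f - <<EOF
    apiVersion: topic.kafka.crossplane.io/v1alpha1
    kind: Topic
    metadata:
      name: example-topic
    spec:
      forProvider:
        replicationFactor: 1
        partitions: 1
        config:
          "cleanup.policy": "delete"
      providerConfigRef:
        name: default
    EOF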

Development

Setting up a Development Kafka Cluster

The following instructions set up a development environment with a locally running Kafka installation (SASL/PLAIN enabled). To further customize your instance, see the available Helm parameters here.

  1. (Optional) Create a local kind cluster unless you want to develop against an existing k8s cluster.
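
    For example, with the kind CLI installed, a throwaway cluster can be created with (the cluster name is arbitrary):

    kind create cluster --name kafka-dev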

  2. Install the Kafka helm chart:

    helm repo add bitnami https://charts.bitnami.com/bitnami
    kubectl create ns kafka-cluster
    helm upgrade --install kafka-dev -n kafka-cluster bitnami/kafka \
      --version 20.0.5 \
      --set auth.clientProtocol=sasl \
      --set deleteTopicEnable=true \
      --set authorizerClassName="kafka.security.authorizer.AclAuthorizer" \
      --wait
    

    The username is "user"; obtain the password using the following command:

    kubectl -n kafka-cluster exec kafka-dev-0 -- cat /opt/bitnami/kafka/config/kafka_jaas.conf
    

    To create the Kubernetes secret, first add a JSON file called kc.json with the following contents:

    {
       "brokers": [
          "kafka-dev-0.kafka-dev-headless:9092"
       ],
       "sasl": {
          "mechanism": "PLAIN",
          "username": "user",
          "password": "<password-you-obtained-in-step-2>"
       }
    }

    Once this file is created, create the secret from it by running the following command:

    kubectl -n kafka-cluster create secret generic kafka-creds --from-file=credentials=kc.json
  3. Install kubefwd.
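
    One common way to install it, assuming Homebrew is available (see the kubefwd documentation for other install options):

    brew install txn2/tap/kubefwd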

  4. Run kubefwd for the kafka-cluster namespace, which will make internal k8s services locally accessible:

    sudo kubefwd svc -n kafka-cluster
    
  5. To run tests, export the KAFKA_PASSWORD environment variable using the password from step 2:

    export KAFKA_PASSWORD="<password-you-obtained-in-step-2>"
    
  6. (Optional) Install the kafka cli (kcl).
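
    If installing via Go, one possible way, assuming a recent Go toolchain (check the kcl project's own install instructions):

    go install github.com/twmb/kcl@latest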

  7. (Optional) Configure the kafka cli to talk to the local Kafka installation:

    1. Create a config file for the client at ~/.kcl/config.toml with the following content:

      seed_brokers = ["kafka-dev-0.kafka-dev-headless:9092"]
      timeout_ms = 10000
      
      [sasl]
      method = "plain"
      user = "user"
      pass = "<password-you-obtained-in-step-2>"
      
    2. Verify that the cli can talk to the Kafka cluster:

      export KCL_CONFIG_DIR=~/.kcl

      kcl metadata --all


Building and Running the Provider Locally

Run against a Kubernetes cluster:

make run

Build, push, and install:

make all

Build image:

make image

Push image:

make push

Build binary:

make build