Yaml Deployment Details

Eva Tuczai edited this page Aug 15, 2023 · 86 revisions

Kubeturbo Deploy via YAMLs

Review the prerequisites.

This document describes the resources you will create to deploy kubeturbo, and the values you may want to change for your deployment.

Use the sample yamls provided here. A template for a single yaml for all resources is available here.

Deploy KubeTurbo

Resources Overview

1. Create a Namespace

Use an existing namespace, or create one in which to deploy kubeturbo. The default and the yaml examples all use turbo.

kubectl create namespace turbo

2. Create a Service Account

The default and yaml examples all use a service account named turbo-user in the turbo namespace.

kubectl create sa turbo-user -n turbo

Yaml sample for SA

3. Create a Cluster Role Binding

Associate Kubeturbo with one of the following Cluster Roles, which governs its permissions to discover resources and execute actions:

  • Option 1: Use the default (cluster-admin) - By default, kubeturbo uses the builtin cluster-admin Cluster Role. Yaml sample for creating the default Cluster Role Binding using the cluster-admin Role.

  • Option 2: Use a least-privilege role that can execute actions - You can run with a custom Cluster Role that provides the minimum privileges needed to execute actions, which is detailed here

  • Option 3: Use a read-only role - You can run with a custom Cluster Role that provides read-only privileges, allowing discovery and metric collection but not action execution, which is detailed here
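For Option 1, the Cluster Role Binding is a small resource along these lines (a sketch; the binding name turbo-all-binding is illustrative, so use the name from the sample yamls, while the service account and namespace match the examples above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  # Binding name is illustrative; use the name from the sample yamls
  name: turbo-all-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: turbo-user
  namespace: turbo
```

For Options 2 and 3, the roleRef would instead point at the custom least-privilege or read-only Cluster Role described in the linked pages.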

4. Choose how to store the Turbonomic Server username and password for kubeturbo to use: in a Kubernetes secret or directly in the ConfigMap (one or the other, not both)
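If you choose a Kubernetes secret, the resource looks roughly like this (a sketch; the secret name turbonomic-credentials and the username/password keys follow the sample yamls, but verify the expected name and keys against the Turbonomic Server Credentials page):

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Name and keys must match what kubeturbo expects; confirm against the sample yamls
  name: turbonomic-credentials
  namespace: turbo
type: Opaque
stringData:
  username: administrator   # replace with your Turbo Server user
  password: changeme        # replace with your Turbo Server password
```

If you use the secret, omit the restAPIConfig username and password values from the ConfigMap in the next step.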

5. Create a ConfigMap. The ConfigMap resource provides the Turbonomic Server details to kubeturbo.

Yaml sample for ConfigMap

The ConfigMap serves three functions:

  1. Turbo Server Credentials: Defines how to connect to the Turbonomic Server.
  2. Node Role Policies: How to identify nodes by role and automatically create HA policies for them.
  3. Unique cluster identification to the Turbo Server. To distinguish between different Kubernetes clusters, supply a unique targetName value which will name the Kubernetes cluster groups created in Turbonomic.

Use the sample yaml, either the full single yaml or just the configMap resource, and modify the following parameters for your environment. For more information, refer to ConfigMap details.

  • serverMeta.version
  • serverMeta.turboServer
  • restAPIConfig.opsManagerUserName
  • restAPIConfig.opsManagerPassword
  • targetConfig.targetName
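Putting those parameters together, the ConfigMap looks roughly like this (a sketch; the resource name turbo-config and the turbo.config key follow the sample yamls, and all values shown are placeholders to replace with your own):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: turbo-config
  namespace: turbo
data:
  turbo.config: |-
    {
      "communicationConfig": {
        "serverMeta": {
          "version": "8.9",
          "turboServer": "https://<yourServerIPAddressOrFQN>"
        },
        "restAPIConfig": {
          "opsManagerUserName": "administrator",
          "opsManagerPassword": "<password>"
        }
      },
      "targetConfig": {
        "targetName": "Name_Your_Cluster"
      }
    }
```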

6. Create a deployment for kubeturbo.

Yaml sample for Deployment

KubeTurbo runs as a single replica that monitors and manages your entire cluster. The only value that may require a change in this resource is the image tag version for the KubeTurbo container. The image tag depends on, and should always match, your Turbonomic Server version.

NOTE: Classic Turbo (Turbo v6.4.x or CWOM 2.3.x) is no longer supported as of August 31, 2021. If you are on the latest Turbonomic platform (v8.x, or SaaS), use a kubeturbo image with a tag that matches the Turbonomic Server version you are running. Go here to find details on which versions to use: Turbonomic Server -> kubeturbo version.
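The relevant fragment of the Deployment spec looks roughly like this (a sketch; the registry path and tag are illustrative, so use the image location from the sample yaml and a tag matching your Server version):

```yaml
spec:
  replicas: 1
  template:
    spec:
      serviceAccountName: turbo-user
      containers:
      - name: kubeturbo
        # Tag must match your Turbonomic Server version
        image: icr.io/cpopen/turbonomic/kubeturbo:<serverVersion>
        imagePullPolicy: IfNotPresent
```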

7. Validate data in the Turbonomic UI. Now that you have successfully deployed the KubeTurbo probe, log into the Turbo Server to experience the goodness.

Other notes

ConfigMap details

The following table will help you populate the required values for your configMap.

| Property | Purpose | Required | Default Value | Accepted Values |
| --- | --- | --- | --- | --- |
| serverMeta.version | Turbo Server version | yes - all versions | none | x.x.x; from 6.3 on, only the major version is required |
| serverMeta.turboServer | Server URL | yes - all versions | none | https://{yourServerIPAddressOrFQN} |
| serverMeta.proxy | Proxy URL | optional | none | http://username:password@proxyserver:proxyport or http://proxyserver:proxyport |
| restAPIConfig.opsManagerUserName | User to log into Turbo | yes - all versions | none | Can use a k8s secret. See Turbo Server Credentials for more information |
| restAPIConfig.opsManagerPassword | Password to log into Turbo | yes - all versions | none | Can use a k8s secret. See Turbo Server Credentials for more information |
| targetConfig.targetName | Uniquely identifies k8s clusters | yes - all versions | "Name_Your_Cluster" | Upper/lower case; limited special characters ("-" or "_") |
| HANodeConfig.nodeRoles | Automates policies to keep nodes of the same role limited to 1 per ESX host or AZ | starting with 6.4.3+ | node-role.kubernetes.io/master | Any value for the label key node-role.kubernetes.io; values in quotes and comma separated: "master","worker","app", etc. See Node Role Policies for more information |
| annotationWhitelist | Provides a mechanism for discovering annotations on kubernetes objects. To collect annotations, provide a regular expression for each entity type for which annotations are desired | optional | none | Regular expressions in RE2 syntax (except \C) as defined at https://github.com/google/re2/wiki/Syntax |
| logging.level (in the turbo-autoreload.config section) | Changes the logging level without a pod restart | optional | 2 | Integer value greater than 0 |
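For example, the HANodeConfig setting sits alongside targetConfig in the turbo.config JSON (a sketch; check the exact placement against the sample yaml, and see Node Role Policies for details):

```json
{
  "HANodeConfig": {
    "nodeRoles": ["master", "worker"]
  }
}
```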

Working with a Private Repo

If you would like to pull required container images into your own repo, refer to this article here.

Kubeturbo Logging

For details on how to collect and configure Kubeturbo Logging go here.

Updating KubeTurbo

Updating KubeTurbo Image

You will update the KubeTurbo image because (1) you have updated the Turbo Server, or (2) you were instructed to by support. The KubeTurbo image tag you use should match the Server version. Also see Server Versions and KubeTurbo tag, and review Releases for more details.

Update your manifest:

  1. Use the deployment yaml that was applied to create the kubeturbo deployment resource
  2. Edit the image: tag
  3. Apply the change: kubectl apply -f {yourDeployment}.yaml -n turbo

Or update the running kubeturbo deployment:

  1. Edit the deployment via the k8s/OS dashboard, or with the kubectl or oc "edit deployment" CLI command. In the CLI example below, substitute your values for the kubeturbo deployment name (example "kubeturbo") and namespace/project (example "turbo"): kubectl edit deployment kubeturbo -n turbo
  2. Modify either image: or imagePullPolicy:. Use an imagePullPolicy of "Always" if the image location and tag have not changed, to force the newer image to be pulled. The default value is "IfNotPresent".
  3. Once edited, the kubeturbo pod should redeploy
  4. Repeat for every kubernetes / OpenShift cluster with a kubeturbo pod

There's no place like home... go back to the Turbonomic Wiki Home or the Kubeturbo Deployment Options.
