Yaml Deployment Details
Review the prerequisites.
This document describes the resources you will create to deploy kubeturbo, and values you would want to change for your deployment.
Use the sample yamls provided here. A template for a single yaml for all resources is available here.
1. Create a Namespace
Use an existing namespace, or create one in which to deploy kubeturbo. The default and yaml examples all use turbo.
kubectl create namespace turbo
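If you prefer to apply a manifest instead of the imperative command, a minimal Namespace resource equivalent to the command above looks like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: turbo
```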
2. Create a Service Account
The default and yaml examples will all use a service account named turbo-user in the turbo namespace.
kubectl create sa turbo-user -n turbo
Yaml sample for SA
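For reference, a minimal sketch of the ServiceAccount resource the command above creates, using the default names from this document:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: turbo-user
  namespace: turbo
```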
3. Create a Cluster Role Binding
You have the following options for the Cluster Role associated with kubeturbo, which governs its permissions to discover and execute actions:
- Option 1: Using the default (cluster-admin) - The default Cluster Role that kubeturbo will use is the built-in cluster-admin Role. Yaml sample for creating the default Cluster Role Binding using the cluster-admin Role (see the sketch after this list).
- Option 2: Using a least privilege role that allows kubeturbo to execute actions - You can choose to run with a custom Cluster Role that provides minimum privileges with the ability to execute actions, which is detailed here.
- Option 3: Using a read only role - You can choose to run with a custom Cluster Role that provides read-only privileges, which allows discovery and metrics collection but cannot execute actions, and is detailed here.
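For Option 1, a minimal sketch of a Cluster Role Binding that grants the built-in cluster-admin Role to the turbo-user service account. The binding name shown is illustrative; use the name from the sample yaml for your version:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: turbo-all-binding   # illustrative name; the sample yaml may use a different one
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin       # Option 1: built-in cluster-admin Role
subjects:
- kind: ServiceAccount
  name: turbo-user
  namespace: turbo
```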
4. Choose how to store the Turbonomic Server username and password for kubeturbo to use (one or the other, not both)
- Option 1: Kubernetes Secret (a minimal sketch follows this list)
- Option 2: Plain Text
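If you choose Option 1, a minimal sketch of a Kubernetes Secret holding the credentials. The secret name and key names shown (turbonomic-credentials, username, password) are assumptions based on the referenced samples; confirm them against the sample yaml for your version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: turbonomic-credentials   # assumed name; confirm against the sample yaml
  namespace: turbo
type: Opaque
stringData:
  username: <Turbo_admin_user>       # placeholder
  password: <Turbo_admin_password>   # placeholder
```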
5. Create a configMap. The configMap resource provides the Turbonomic Server details to kubeturbo.
Yaml sample for ConfigMap
The ConfigMap serves three functions:
- Turbo Server Credentials: Defines how to connect to the Turbonomic Server.
- Node Role Policies: How to identify nodes by role and automatically create HA policies for them.
- Unique cluster identification to the Turbo Server. To distinguish between different Kubernetes clusters, supply a unique targetName value which will name the Kubernetes cluster groups created in Turbonomic.
Use the sample yaml, either the full single yaml or just the configMap resource. Modify the following parameters for your environment (a sketch follows the list below). For more information, refer to ConfigMap details.
- serverMeta.version
- serverMeta.turboServer
- restAPIConfig.opsManagerUserName
- restAPIConfig.opsManagerPassword
- targetConfig.targetName
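A minimal sketch of the ConfigMap, assuming the structure used in the sample yamls. The resource name turbo-config and the turbo.config key are assumptions, and the values shown are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: turbo-config   # assumed name; confirm against the sample yaml
  namespace: turbo
data:
  turbo.config: |-
    {
      "communicationConfig": {
        "serverMeta": {
          "version": "8.9",
          "turboServer": "https://<yourServerIPAddressOrFQN>"
        },
        "restAPIConfig": {
          "opsManagerUserName": "<username>",
          "opsManagerPassword": "<password>"
        }
      },
      "targetConfig": {
        "targetName": "<Name_Your_Cluster>"
      }
    }
```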
6. Create a deployment for kubeturbo.
Yaml sample for Deployment
KubeTurbo runs as a single replica to monitor and manage your entire cluster. The only value that may require a change in this resource is the image tag version for the KubeTurbo container. The image tag used depends on, and should always match, your Turbonomic Server version. NOTE: If you are running Classic Turbo (Turbo v6.4.x or CWOM 2.3.x), that version is no longer supported as of August 31, 2021. If you are on the latest Turbonomic platform (v8.x, or SaaS), use a kubeturbo image with a tag that matches the Turbonomic Server version you are running. Go here to find details on which versions to use: Turbonomic Server -> kubeturbo version.
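A minimal sketch of the kubeturbo Deployment, based on the sample yaml referenced above. The image path, tag, and container arguments are assumptions for illustration; take the exact values from the sample Deployment yaml for your Turbonomic Server version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeturbo
  namespace: turbo
spec:
  replicas: 1                      # a single replica manages the whole cluster
  selector:
    matchLabels:
      app.kubernetes.io/name: kubeturbo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kubeturbo
    spec:
      serviceAccountName: turbo-user
      containers:
      - name: kubeturbo
        # tag should match your Turbonomic Server version; registry path is illustrative
        image: icr.io/cpopen/turbonomic/kubeturbo:<server_version>
        args:
        - --turboconfig=/etc/kubeturbo/turbo.config
        - --v=2
        volumeMounts:
        - name: turbo-volume
          mountPath: /etc/kubeturbo
          readOnly: true
      volumes:
      - name: turbo-volume
        configMap:
          name: turbo-config       # the ConfigMap created in the previous step
```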
7. Validate data in the Turbonomic UI. Now that you have successfully deployed the KubeTurbo probe, log into the Turbo Server to experience the goodness.
The following chart will help you populate the required values for your configMap.
Property | Purpose | Required | Default Value | Accepted Values
---|---|---|---|---
serverMeta.version | Turbo Server version | yes - all versions | none | x.x.x. After 6.3+, only the first version.major is required
serverMeta.turboServer | Server URL | yes - all versions | none | https://{yourServerIPAddressOrFQN}
serverMeta.proxy | Proxy URL | optional | none | http://username:password@proxyserver:proxyport or http://proxyserver:proxyport
restAPIConfig.opsManagerUserName | user to log into Turbo | yes - all versions | none | can use k8s secret. See Turbo Server Credentials for more information
restAPIConfig.opsManagerPassword | password to log into Turbo | yes - all versions | none | can use k8s secret. See Turbo Server Credentials for more information
targetConfig.targetName | uniquely identifies k8s clusters | yes - all versions | "Name_Your_Cluster" | upper and lower case, limited special characters "-" or "_"
HANodeConfig.nodeRoles | Used to automate policies to keep nodes of the same role limited to 1 per ESX host or AZ | starting with 6.4.3+ | node-role.kubernetes.io/master | any value for the label key node-role.kubernetes.io. Values in quotes and comma separated: "master", "worker", "app", etc. See Node Role Policies for more information
annotationWhitelist | Provides a mechanism for discovering annotations for Kubernetes objects. To collect annotations, provide a regular expression for each entity type for which annotations are desired. | optional | none | Regular expressions accept the RE2 syntax (except for \C) as defined here: https://github.com/google/re2/wiki/Syntax
logging.level (in the turbo-autoreload.config section) | Change the logging level without a pod restart | optional | 2 | Integer value greater than 0
If you would like to pull required container images into your own repo, refer to this article here.
For details on how to collect and configure Kubeturbo Logging go here.
The KubeTurbo image will need to be updated when 1) you have updated the Turbo Server, or 2) you are instructed to by support. The KubeTurbo image you use will have the same tag version as the Server. Also see Server Versions and KubeTurbo tag and review Releases for more details.
Update your manifest:
- Use the deployment yaml that was applied to create the kubeturbo deployment resource
- Edit the image: tag
- Apply the change:
kubectl apply -f {yourDeployment}.yaml -n turbo
Or update the running kubeturbo deployment:
- Edit the deployment via the k8s/OS dashboard or using the CLI kubectl or oc “edit deployment” command. In the CLI example below, substitute your values for kubeturbo deployment name (example “kubeturbo”) and namespace/project (example “turbo”):
kubectl edit deployment kubeturbo -n turbo
- Modify either the image: tag or the imagePullPolicy: . Use an imagePullPolicy of “Always” if the image location and tag have not changed, to force the newer image to be pulled. The default value is “IfNotPresent”.
- Once edited, the kubeturbo pod should redeploy
- Repeat for every kubernetes / OpenShift cluster with a kubeturbo pod
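As an alternative to interactive editing, the image tag can also be updated with a single command, assuming the container is named kubeturbo as in the sample deployment; the image path and tag below are placeholders, so substitute the values that apply to your cluster:

kubectl set image deployment/kubeturbo kubeturbo=icr.io/cpopen/turbonomic/kubeturbo:<new_tag> -n turbo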
There's no place like home... go back to the Turbonomic Wiki Home or the Kubeturbo Deployment Options.