Helm Deployment Details
This Helm deployment supports Helm 2 and Helm 3. Helm is a Kubernetes package manager that lets you more easily manage charts, which package all the resources associated with an application. Helm provides simple commands to package, deploy, and update, and lets you customize or update parameters of your resources without worrying about YAML formatting. For more info see: Helm: The Kubernetes Package Manager
To use this method, you must already have helm configured for your environment (the helm client, and if applicable a tiller server) and be familiar with how to use helm and chart repositories. Go to Helm Docs for an introduction and overview of Helm if needed.
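If you want to confirm which Helm you have before proceeding, a quick check (works for both Helm 2 and Helm 3):

```shell
# Confirm the helm client is installed and note whether it is Helm 2 or Helm 3
helm version --short
```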
The Helm Chart option requires you to clone this project to your local environment (the deployment will not work otherwise), then create a local repo for the chart in this directory here and index the local repo (a sketch of these steps follows the list below). Deploying kubeturbo will create the following resources in the cluster:
- Namespace or Project (default is turbo)
- Service Account and binding to the cluster-admin clusterrole (default is "turbo-user" with a "turbo-all-binding"-{My_Kubeturbo_name}-{My_Namespace} binding using a cluster-admin roleRef)
- Updated configMap containing the required info for kubeturbo to connect to the Turbonomic Server
- Deployment of kubeturbo
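A minimal sketch of the clone-and-index steps referenced above, assuming the public kubeturbo GitHub repo and the chart path kubeturbo/deploy/kubeturbo noted later on this page; adjust the paths if your layout differs:

```shell
# Clone the project locally (required for this deployment method)
git clone https://github.com/turbonomic/kubeturbo.git
cd kubeturbo/deploy

# Either point {HELM-CHART_LOCATION} in the commands below at ./kubeturbo directly,
# or package the chart and build a local repo index
helm package ./kubeturbo
helm repo index .
```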
Note:
- The kubeturbo image tag version used will depend on your Turbonomic Server version. The kubeturbo tag version being deployed should always match the Turbonomic Server version you are running. For more info see CWOM -> Turbonomic Server -> kubeturbo version and review the Releases.
- Review general kubeturbo prerequisites
- Helm 2 or Helm 3 installed (needed to deploy via helm)
- Git installed (needed to clone kubeturbo repo locally)
- Clone Kubeturbo repo (needed to use for deployment)
- Kubeturbo needs information to find and register with the Turbonomic Server. These Turbonomic Server details are stored in a configMap resource. You set these parameters via the helm install command detailed in the steps below. See the Values table below for an explanation of each parameter and what is required.
- Determine if you can use the default builtin Cluster Role cluster-admin or if you need to use a custom Cluster Role.
- Option 1: Execute Actions Role
- Option 2: Read-Only Role
- Choose where and how to store Turbonomic Server username and password for kubeturbo to use (one or the other, not both)
- Option 1: Kubernetes Secret
- Option 2: Plain Text
- Create a Kubernetes Secret for use in the deployment; reference the guide here if needed (a hedged example follows). If none exists, kubeturbo will fall back to the username and password provided in the configMap, which are set through the restAPIConfig.opsManagerUserName and restAPIConfig.opsManagerPassword parameters in Option 2 below. If neither exists or the credentials are invalid, kubeturbo will fail to add itself as a target to your Turbonomic Server.
- Helm 3 example command to perform a dry run first, to make sure there are no errors in the command; substitute your environment values where you see { }. Make sure to resolve any errors before proceeding to the next step.
**NOTE**: when using the default secret name of turbonomic-credentials you don't need to specify the parameter --set restAPIConfig.turbonomicCredentialsSecretName in the helm command; you only need to use it if you created a secret with a different name.
helm install --dry-run --debug {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --create-namespace --set serverMeta.turboServer={TURBOSERVER_URL} --set serverMeta.version={TURBOSERVER_VERSION} --set image.tag={KUBETURBO_VERSION} --set restAPIConfig.turbonomicCredentialsSecretName={YOUR_CUSTOM_SECRET_NAME} --set targetConfig.targetName={CLUSTER_DISPLAY_NAME}
- Helm 3 example command to run the install; substitute your environment values where you see { }.
helm install {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --create-namespace --set serverMeta.turboServer={TURBOSERVER_URL} --set serverMeta.version={TURBOSERVER_VERSION} --set image.tag={KUBETURBO_VERSION} --set restAPIConfig.turbonomicCredentialsSecretName={YOUR_CUSTOM_SECRET_NAME} --set targetConfig.targetName={CLUSTER_DISPLAY_NAME}
- Helm 3 example command to perform a dry run first, to make sure there are no errors in the command; substitute your environment values where you see { }. Resolve any errors before proceeding to the next step (this uses a plain text username and password).
helm install --dry-run --debug {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --create-namespace --set serverMeta.turboServer={TURBOSERVER_URL} --set serverMeta.version={TURBOSERVER_VERSION} --set image.tag={KUBETURBO_VERSION} --set restAPIConfig.opsManagerUserName={TURBOSERVER_ADMINUSER} --set restAPIConfig.opsManagerPassword={TURBOSERVER_ADMINUSER_PWD} --set targetConfig.targetName={CLUSTER_DISPLAY_NAME}
- Helm 3 example command to run the install; substitute your environment values where you see { } (this uses a plain text username and password).
helm install {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --create-namespace --set serverMeta.turboServer={TURBOSERVER_URL} --set serverMeta.version={TURBOSERVER_VERSION} --set image.tag={KUBETURBO_VERSION} --set restAPIConfig.opsManagerUserName={TURBOSERVER_ADMINUSER} --set restAPIConfig.opsManagerPassword={TURBOSERVER_ADMINUSER_PWD} --set targetConfig.targetName={CLUSTER_DISPLAY_NAME}
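As an alternative to long --set chains, the same parameters can be supplied through a values file. This is only a sketch: the nested keys mirror the dotted parameter names used above, and the values in { } are placeholders for your environment:

```shell
# Write an override file (Helm maps restAPIConfig.opsManagerUserName to restAPIConfig: / opsManagerUserName:)
cat > my-values.yaml <<'EOF'
serverMeta:
  turboServer: {TURBOSERVER_URL}
  version: "{TURBOSERVER_VERSION}"
image:
  tag: "{KUBETURBO_VERSION}"
restAPIConfig:
  opsManagerUserName: {TURBOSERVER_ADMINUSER}
  opsManagerPassword: {TURBOSERVER_ADMINUSER_PWD}
targetConfig:
  targetName: {CLUSTER_DISPLAY_NAME}
EOF

helm install {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --create-namespace -f my-values.yaml
```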
- Review the helm command output (if successful it will look like the example below)
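If the referenced example output is not visible here, note that a successful Helm 3 install typically prints the release name, namespace, STATUS: deployed, and REVISION: 1. You can also confirm the release at any time:

```shell
# List releases in the kubeturbo namespace; STATUS should show "deployed"
helm list -n {KUBETURBO_NAMESPACE}
```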
- Check that the kubeturbo pod was deployed and is running 1/1 (assuming you deployed into the turbo namespace):
kubectl get pods -n turbo
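If the pod is not yet 1/1, these additional checks can help narrow down why (shown for the default turbo namespace):

```shell
# Check the deployment status and recent events in the namespace
kubectl get deployment -n turbo
kubectl get events -n turbo --sort-by=.lastTimestamp
```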
- Review Turbonomic UI under Settings, Target Configurations, Cloud Native. You should have a new target added automatically that has your Cluster Name in the Target Name (if successful it will look like the example below)
- If the target does not show up in the Turbonomic Server as described above after about 5 minutes, there was probably an issue with the deployment that needs to be resolved. Review the kubeturbo logs (see here), as they will give you specific details about what the issue might be. If you cannot resolve the issue, please open a support ticket with IBM Turbonomic Support here.
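A simple way to capture the kubeturbo logs mentioned above for review or for a support ticket:

```shell
# Find the kubeturbo pod name, then save its logs to a file
kubectl get pods -n turbo
kubectl logs {KUBETURBO_POD_NAME} -n turbo > kubeturbo.log
```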
The following table shows the more commonly used values, which are also seen in the default values.yaml. Alternatively, you can modify the values.yaml file directly in the cloned repo, located at kubeturbo/deploy/kubeturbo/values.yaml.
Parameters that are default and/or required are noted.
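To see every available parameter with its default, you can dump the chart's values (Helm 3 syntax shown; Helm 2 uses helm inspect values):

```shell
# {HELM-CHART_LOCATION} is the local chart path or repo/chart reference used in the install commands above
helm show values {HELM-CHART_LOCATION} > kubeturbo-default-values.yaml
```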
Parameter | Default Value | Required / Opt to Change | Parameter Type |
---|---|---|---|
image.repository | icr.io/cpopen/turbonomic/kubeturbo (IBM Cloud Container Registry) | optional | path to repo |
image.tag | {currentVersion} | optional | kubeturbo tag |
image.pullPolicy | IfNotPresent | optional | |
image.busyboxRepository | busybox | optional | Busybox repository. This is overridden by cpufreqgetterRepository |
image.cpufreqgetterRepository | icr.io/cpopen/turbonomic/cpufreqgetter | optional | Repository used to get node cpufrequency. |
image.imagePullSecret | optional | Define the secret used to authenticate to the container image registry | |
roleName | cluster-admin | optional | Specify custom turbo-cluster-reader or turbo-cluster-admin role instead of the default cluster-admin role |
roleBinding | turbo-all-binding-{My_Kubeturbo_name}-{My_Namespace} | optional | Specify the name of clusterrolebinding |
serviceAccountName | turbo-user | optional | Specify the name of the serviceaccount |
serverMeta.version | 8.1 | required | number x.y that represents your Turbo Server version |
serverMeta.turboServer | required | https URL to log into Server | |
serverMeta.proxy | optional | Proxy URL http://username:password@proxyserver:proxyport or http://proxyserver:proxyport | |
restAPIConfig.opsManagerUserName | required or use k8s secret | Turbo Server user (local or AD) with admin role | |
restAPIConfig.opsManagerPassword | required or use k8s secret | Turbo Server user's password | |
restAPIConfig.turbonomicCredentialsSecretName | turbonomic-credentials | required only if using secret and not taking default secret name | secret that contains the turbo server admin user name and password |
targetConfig.targetName | "Your_k8s_cluster" | optional but required for multiple clusters | String, how you want to identify your cluster |
targetConfig.targetType | "Your_k8s_cluster" | optional - to be deprecated | String, to be used only for UI manual setup. |
args.logginglevel | 2 | optional | number |
args.kubelethttps | true | optional, change to false if k8s 1.10 or older | boolean |
args.kubeletport | 10250 | optional, change to 10255 if k8s 1.10 or older | number |
args.stitchuuid | true | optional, change to false if IaaS is VMM, Hyper-V | boolean |
args.pre16k8sVersion | false | optional | if Kubernetes version is older than 1.6, then add another arg for move/resize action |
args.cleanupSccImpersonationResources | true | optional | cleanup the resources for scc impersonation by default |
args.sccsupport | optional | required for OCP cluster, see here for more details | |
HANodeConfig.nodeRoles | "\"master\"" | Optional. Used to automate policies to keep nodes of same role limited to 1 instance per ESX host or AZ (starting with 6.4.3+) | regex used, values in quotes & comma separated: "\"master\"" (default), "\"worker\"", "\"app\"" etc |
daemonPodDetectors.daemonPodNamespaces1 and daemonPodNamespaces2 | daemonSet kinds are by default allowed for node suspension. Adding this parameter changes the default. | Optional but required to identify pods in the namespace to be ignored for cluster consolidation | regex used, values in quotes & comma separated: "kube-system","kube-service-catalog","openshift-.*" |
daemonPodDetectors.daemonPodNamePatterns | daemonSet kinds are by default allowed for node suspension. Adding this parameter changes the default. | Optional but required to identify pods matching this pattern to be ignored for cluster consolidation | regex used: .*ignorepod.* |
annotationWhitelist | optional | The annotationWhitelist allows users to define regular expressions to allow kubeturbo to collect matching annotations for the specified entity type. By default, no annotations are collected. These regular expressions accept the RE2 syntax (except for \C) as defined here: https://github.com/google/re2/wiki/Syntax | |
logging.level | 2 | optional | Changing the logging level here doesn't require a restart on the pod but takes about 1 minute to take effect |
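A hedged example of changing a couple of the values above on an existing release; the parameter names come from the table, while the release name, chart location, and namespace are placeholders:

```shell
# --reuse-values keeps everything you set at install time and only overrides the listed parameters
helm upgrade {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} \
  --reuse-values \
  --set logging.level=4 \
  --set roleName=turbo-cluster-admin
```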
For more on HANodeConfig go to Node Role Policies and view the default values.yaml. For more on daemonPodDetectors go to the YAMLs deploy option wiki page or YAMLS_README.md under kubeturbo/deploy/kubeturbo_yamls/.
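If you set HANodeConfig or daemonPodDetectors, a values file avoids the comma and quote escaping that --set requires. This is only a sketch: the exact nesting and quoting should be verified against the default values.yaml referenced above:

```shell
# Quoting of nodeRoles follows the format shown in the table above; verify against the chart defaults
cat > node-policy-values.yaml <<'EOF'
HANodeConfig:
  nodeRoles: "\"master\",\"worker\""
daemonPodDetectors:
  daemonPodNamespaces1: "kube-system"
  daemonPodNamespaces2: "openshift-.*"
EOF

helm upgrade {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --reuse-values -f node-policy-values.yaml
```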
Deprecated parameters
Parameter | Default Value | Required / Opt to Change | Parameter Type |
---|---|---|---|
masterNodeDetectors.nodeNamePatterns | node name includes .*master.* | Deprecated in kubeturbo v6.4.3+. Used in 6.3-6.4.2 to avoid suspending masters identified by node name. If no match, this is ignored. | string, regex used, example: .*master.* |
masterNodeDetectors.nodeLabels | any value for label key value node-role.kubernetes.io/master | Deprecated in kubeturbo v6.4.3+. Used in 6.3-6.4.2 to avoid suspending masters identified by node label key value pair. If no match, this is ignored. | regex used, specify the key as masterNodeDetectors.nodeLabelsKey such as node-role.kubernetes.io/master and the value as masterNodeDetectors.nodeLabelsValue such as .* |
If you would like to pull required container images into your own repo, refer to this article here.
For details on how to collect and configure Kubeturbo Logging go here.
When you update the "Release" of the Turbonomic or CWOM Server version, for example from 8.8.6 -> 8.9.6, you will also need to update the "Release" number in the configMap resource to reflect the "Release" version change, such as from 8.8 -> 8.9. Additionally, you may be instructed to update the kubeturbo pod image, or you might have upgraded your Turbonomic Server, which requires a kubeturbo image tag version change to match it. Determine which new tag version you will be using by going here: CWOM -> Turbonomic Server -> kubeturbo version and review Releases. You may be instructed by IBM Turbonomic Support to use a new image, or you may want to refresh the image to pick up a patch or new feature.
- After the update, obtain the new Turbonomic Server version. To get this from the UI, go to Settings -> Updates and use the numeric version such as “8.8.6” or “8.9.6” (Build details not required)
- You will update the values specific to your environment; substitute your values for { }:
helm upgrade {DEPLOYMENT_NAME} {HELM-CHART_LOCATION} --namespace {KUBETURBO_NAMESPACE} --set serverMeta.turboServer={TURBOSERVER_URL} --set serverMeta.version={TURBOSERVER_VERSION} --set image.tag={KUBETURBO_VERSION} --set restAPIConfig.turbonomicCredentialsSecretName={YOUR_CUSTOM_SECRET_NAME} --set targetConfig.targetName={CLUSTER_DISPLAY_NAME}
- The kubeturbo pod should have restarted to pick up the new values (see the check after these steps)
- Repeat for every Kubernetes / OpenShift cluster with a kubeturbo pod deployed
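A quick post-upgrade check that the pod restarted and is running the expected image tag:

```shell
# The IMAGES column of the wide deployment output should show the new kubeturbo tag
kubectl get pods -n {KUBETURBO_NAMESPACE}
kubectl get deployment -n {KUBETURBO_NAMESPACE} -o wide
```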
There's no place like home... go back to the Turbonomic Wiki Home or the Kubeturbo Deployment Options.