# Koordinator v1.6.0

## Configuration

Note that installing this chart directly means it will use the default template values for Koordinator.

You may need to set your own configuration when deploying into a production cluster, or when you want to configure feature-gates.

### Optional: chart parameters

The following table lists the configurable parameters of the chart and their default values.
| Parameter | Description | Default |
|----------------------------------------------------|--------------------------------------------------------------------|---------------------------------|
| `featureGates` | Feature gates for Koordinator; an empty string keeps all defaults | ` ` |
| `installation.namespace` | Namespace for the Koordinator installation | `koordinator-system` |
| `installation.createNamespace` | Whether to create `installation.namespace` | `true` |
| `imageRepositoryHost` | Image repository host | `ghcr.io` |
| `manager.log.level` | Log level printed by koord-manager | `4` |
| `manager.replicas` | Replicas of the koord-manager deployment | `2` |
| `manager.image.repository` | Repository for the koord-manager image | `koordinatorsh/koord-manager` |
| `manager.image.tag` | Tag for the koord-manager image | `v1.6.0` |
| `manager.resources.limits.cpu` | CPU resource limit of the koord-manager container | `1000m` |
| `manager.resources.limits.memory` | Memory resource limit of the koord-manager container | `1Gi` |
| `manager.resources.requests.cpu` | CPU resource request of the koord-manager container | `500m` |
| `manager.resources.requests.memory` | Memory resource request of the koord-manager container | `256Mi` |
| `manager.metrics.port` | Port on which koord-manager serves metrics | `8080` |
| `manager.webhook.port` | Port on which koord-manager serves the webhook | `9443` |
| `manager.nodeAffinity` | Node affinity policy for the koord-manager pod | `{}` |
| `manager.nodeSelector` | Node labels for the koord-manager pod | `{}` |
| `manager.tolerations` | Tolerations for the koord-manager pod | `[]` |
| `manager.resyncPeriod` | Resync period of the koord-manager informers; `0` disables resync | `0` |
| `manager.hostNetwork` | Whether the koord-manager pod should use `hostNetwork` | `false` |
| `scheduler.log.level` | Log level printed by koord-scheduler | `4` |
| `scheduler.replicas` | Replicas of the koord-scheduler deployment | `2` |
| `scheduler.image.repository` | Repository for the koord-scheduler image | `koordinatorsh/koord-scheduler` |
| `scheduler.image.tag` | Tag for the koord-scheduler image | `v1.6.0` |
| `scheduler.resources.limits.cpu` | CPU resource limit of the koord-scheduler container | `1000m` |
| `scheduler.resources.limits.memory` | Memory resource limit of the koord-scheduler container | `1Gi` |
| `scheduler.resources.requests.cpu` | CPU resource request of the koord-scheduler container | `500m` |
| `scheduler.resources.requests.memory` | Memory resource request of the koord-scheduler container | `256Mi` |
| `scheduler.port` | Port on which koord-scheduler serves metrics | `10251` |
| `scheduler.nodeAffinity` | Node affinity policy for the koord-scheduler pod | `{}` |
| `scheduler.nodeSelector` | Node labels for the koord-scheduler pod | `{}` |
| `scheduler.tolerations` | Tolerations for the koord-scheduler pod | `[]` |
| `scheduler.hostNetwork` | Whether the koord-scheduler pod should use `hostNetwork` | `false` |
| `koordlet.log.level` | Log level printed by koordlet | `4` |
| `koordlet.image.repository` | Repository for the koordlet image | `koordinatorsh/koordlet` |
| `koordlet.image.tag` | Tag for the koordlet image | `v1.6.0` |
| `koordlet.resources.limits.cpu` | CPU resource limit of the koordlet container | `500m` |
| `koordlet.resources.limits.memory` | Memory resource limit of the koordlet container | `256Mi` |
| `koordlet.resources.requests.cpu` | CPU resource request of the koordlet container | `0` |
| `koordlet.resources.requests.memory` | Memory resource request of the koordlet container | `0` |
| `koordlet.affinity` | Affinity policy for the koordlet pod | `{}` |
| `koordlet.runtimeClassName` | RuntimeClassName for the koordlet pod | ` ` |
| `koordlet.enableServiceMonitor` | Whether to enable a ServiceMonitor for koordlet | `false` |
| `webhookConfiguration.failurePolicy.pods` | The failurePolicy for pods in the mutating webhook configuration | `Ignore` |
| `webhookConfiguration.failurePolicy.elasticquotas` | The failurePolicy for elasticQuotas in all webhook configurations | `Ignore` |
| `webhookConfiguration.failurePolicy.nodeStatus` | The failurePolicy for node.status in all webhook configurations | `Ignore` |
| `webhookConfiguration.failurePolicy.nodes` | The failurePolicy for nodes in all webhook configurations | `Ignore` |
| `webhookConfiguration.timeoutSeconds` | The timeoutSeconds for all webhook configurations | `30` |
| `crds.managed` | Whether the chart installs and manages the Koordinator CRDs | `true` |
| `imagePullSecrets` | The list of image pull secrets for Koordinator images | `false` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install` or `helm upgrade`.

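Equivalently, parameters can be collected in a values file and passed with `-f`. A minimal sketch, using parameter names from the table above (the values shown are illustrative, not recommendations):

```yaml
# my-values.yaml -- example overrides for the Koordinator chart
installation:
  namespace: koordinator-system
manager:
  replicas: 3        # run an extra koord-manager replica
  log:
    level: 5         # more verbose logging
koordlet:
  enableServiceMonitor: true
```

Then install with `helm install koordinator <chart> -f my-values.yaml`; later `helm upgrade ... -f my-values.yaml` reuses the same overrides.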
### Optional: feature-gate

Feature-gates control some influential features in Koordinator:

| Name | Description | Default | Effect (if disabled) |
| ------------------------- | ---------------------------------------------------------------- | ------- | -------------------------------------- |
| `PodMutatingWebhook` | Whether to enable the mutating webhook for Pod **create** | `true` | koordinator.sh/qosClass and koordinator.sh/priority are not injected, Koordinator extended resources are not replaced, and so on |
| `PodValidatingWebhook` | Whether to enable the validating webhook for Pod **create/update** | `true` | Pods that do not conform to the Koordinator specification can be created, causing unpredictable problems |

If you want to configure a feature-gate, set the parameter when installing or upgrading. For example:

```bash
$ helm install koordinator https://... --set featureGates="PodMutatingWebhook=true\,PodValidatingWebhook=true"
```

If you want to enable all feature-gates, set the parameter as `featureGates=AllAlpha=true`.

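The same feature-gate string can also live in a values file; a small illustrative fragment, using the gate names from the table above:

```yaml
# my-values.yaml -- illustrative feature-gate override
featureGates: "PodMutatingWebhook=true,PodValidatingWebhook=true"
```

Note that the `\,` escaping is only needed when the comma-separated string is passed on the command line with `--set`; in a values file the commas are written literally.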
### Optional: install or upgrade specific CRDs

If you want to skip specific CRDs during installation or upgrade, set the parameter `crds.<crdPluralName>` to `false` and install or upgrade them manually.

```bash
# skip installing the CRD noderesourcetopologies.topology.node.k8s.io
$ helm install koordinator https://... --set crds.managed=true,crds.noderesourcetopologies=false
# only upgrade the specific CRDs recommendations, clustercolocationprofiles, elasticquotaprofiles, ..., podgroups
$ helm upgrade koordinator https://... --set crds.managed=false,crds.recommendations=true,crds.clustercolocationprofiles=true,crds.elasticquotaprofiles=true,crds.elasticquotas=true,crds.devices=true,crds.podgroups=true
```
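
The per-CRD switches can likewise be expressed in a values file. An illustrative fragment, equivalent to passing `crds.managed=true,crds.noderesourcetopologies=false` via `--set`:

```yaml
# my-values.yaml -- keep chart-managed CRDs, but skip one plural name
crds:
  managed: true
  noderesourcetopologies: false
```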

### Optional: local images for China

If you are in China and have trouble pulling images from the default registry (`ghcr.io`), you can use the registry hosted on Alibaba Cloud:

```bash
$ helm install koordinator https://... --set imageRepositoryHost=registry.cn-beijing.aliyuncs.com
```