diff --git "a/kubeadm/RockLinux+kubeadm+k8s-1.22.16\345\215\207\347\272\247\345\210\2601.22.17.md" "b/kubeadm/RockLinux+kubeadm+k8s-1.22.16\345\215\207\347\272\247\345\210\2601.22.17.md" index 428e0de..95dd90e 100644 --- "a/kubeadm/RockLinux+kubeadm+k8s-1.22.16\345\215\207\347\272\247\345\210\2601.22.17.md" +++ "b/kubeadm/RockLinux+kubeadm+k8s-1.22.16\345\215\207\347\272\247\345\210\2601.22.17.md" @@ -38,7 +38,7 @@ data: extraArgs: bind-address: 0.0.0.0 dns: - imageRepository: registry.hisun.netwarps.com/coredns + imageRepository: docker.io/coredns imageTag: 1.8.0 etcd: local: @@ -47,7 +47,7 @@ data: listen-client-urls: https://0.0.0.0:2379 listen-metrics-urls: http://0.0.0.0:2381 listen-peer-urls: https://0.0.0.0:2380 - imageRepository: registry.hisun.netwarps.com/google_containers + imageRepository: registry.netwarps.com/google_containers kind: ClusterConfiguration kubernetesVersion: v1.22.16 networking: @@ -73,23 +73,23 @@ kubeadm upgrade plan [upgrade] Fetching available versions to upgrade to [upgrade/versions] Cluster version: v1.22.16 [upgrade/versions] kubeadm version: v1.22.17 -I1124 15:36:20.229304 34486 version.go:255] remote version is much newer: v1.28.4; falling back to: stable-1.22 +I1204 15:00:41.772228 3962 version.go:255] remote version is much newer: v1.28.4; falling back to: stable-1.22 [upgrade/versions] Target version: v1.22.17 [upgrade/versions] Latest version in the v1.22 series: v1.22.17 Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': -COMPONENT CURRENT TARGET -kubelet 11 x v1.22.16 v1.22.16 +COMPONENT CURRENT TARGET +kubelet 14 x v1.22.16 v1.22.17 Upgrade to the latest version in the v1.22 series: -COMPONENT CURRENT TARGET +COMPONENT CURRENT TARGET kube-apiserver v1.22.16 v1.22.17 kube-controller-manager v1.22.16 v1.22.17 kube-scheduler v1.22.16 v1.22.17 kube-proxy v1.22.16 v1.22.17 -CoreDNS 1.8.0 v1.8.4 -etcd 3.5.0-0 3.5.0-0 +CoreDNS 1.8.0 v1.8.4 +etcd 3.5.0-0 3.5.6-0 You can now apply the upgrade by executing the following command: @@ -127,8 +127,6 @@ kubeadm upgrade apply v1.22.17 你的容器网络接口(CNI)驱动应该提供了程序自身的升级说明。 参阅[插件](https://v1-23.docs.kubernetes.io/zh/docs/concepts/cluster-administration/addons/)页面查找你的 CNI 驱动, 并查看是否需要其他升级步骤。 -如果 CNI 驱动作为 DaemonSet 运行,则在其他控制平面节点上不需要此步骤。 - **flannel v0.20.1 升级到 v0.22.3** 下载flannel.yml @@ -143,7 +141,7 @@ curl -o kube-flannel-v0.22.3.yaml https://raw.githubusercontent.com/flannel-io/ ... 
@@ -189,29 +187,31 @@ kubectl apply -f kube-flannel-v0.22.3.yaml
 
 **Compare the service configurations**
 
-The pre-upgrade configuration is backed up in `/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-23-16-50-42`. Service configurations may have been modified by hand; use kubeadm to re-apply those changes as appropriate. See: [Reconfiguring a kubeadm cluster](https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/)
+The pre-upgrade configuration is backed up in `/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-04-15-01-38`. Service configurations may have been modified by hand; use kubeadm to re-apply those changes as appropriate. See: [Reconfiguring a kubeadm cluster](https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure/)
 
 ```sh
-diff /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-23-16-50-42/kube-controller-manager.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml
+# cd /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-04-15-01-38
+
+[root@master1 tmp]# diff kube-controller-manager.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml
 32c32
 < image: registry.hisun.netwarps.com/google_containers/kube-controller-manager:v1.22.16
 ---
-> image: registry.hisun.netwarps.com/google_containers/kube-controller-manager:v1.22.17
-[root@master1 tmp]# diff /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-23-16-50-42/kube-scheduler.yaml /etc/kubernetes/manifests/kube-scheduler.yaml
+> image: registry.netwarps.com/google_containers/kube-controller-manager:v1.22.17
+[root@master1 tmp]# diff kube-scheduler.yaml /etc/kubernetes/manifests/kube-scheduler.yaml
 20c20
 < image: registry.hisun.netwarps.com/google_containers/kube-scheduler:v1.22.16
 ---
-> image: registry.hisun.netwarps.com/google_containers/kube-scheduler:v1.22.17
-[root@master1 tmp]# diff /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-23-16-50-42/kube-apiserver.yaml /etc/kubernetes/manifests/kube-apiserver.yaml
+> image: registry.netwarps.com/google_containers/kube-scheduler:v1.22.17
+[root@master1 tmp]# diff kube-apiserver.yaml /etc/kubernetes/manifests/kube-apiserver.yaml
 44c44
 < image: registry.hisun.netwarps.com/google_containers/kube-apiserver:v1.22.16
 ---
-> image: registry.hisun.netwarps.com/google_containers/kube-apiserver:v1.22.17
-[root@master1 tmp]# diff /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-23-16-50-42/etcd.yaml /etc/kubernetes/manifests/etcd.yaml
+> image: registry.netwarps.com/google_containers/kube-apiserver:v1.22.17
+[root@master1 tmp]# diff etcd.yaml /etc/kubernetes/manifests/etcd.yaml
 34c34
 < image: registry.hisun.netwarps.com/google_containers/etcd:3.5.0-0
 ---
-> image: registry.hisun.netwarps.com/google_containers/etcd:3.5.6-0
+> image: registry.netwarps.com/google_containers/etcd:3.5.6-0
 ```
 
 **For the other control plane nodes**
@@ -283,6 +283,8 @@ yum install -y kubeadm-1.22.17-0 --disableexcludes=kubernetes
 
 ### Drain the node
 
+- The official docs say the node should be drained; in testing, the cluster also kept working without draining (to be observed)
+
 - Mark the node as unschedulable and evict all workloads to prepare it for maintenance:
 
 ```shell
diff --git "a/logging/Elasticsearch\346\237\245\350\257\242\351\207\215\345\244\215\346\225\260\346\215\256.md" "b/logging/Elasticsearch\346\237\245\350\257\242\351\207\215\345\244\215\346\225\260\346\215\256.md"
new file mode 100644
index 0000000..9e24638
--- /dev/null
+++ "b/logging/Elasticsearch\346\237\245\350\257\242\351\207\215\345\244\215\346\225\260\346\215\256.md"
@@ -0,0 +1,28 @@
+# Querying duplicate data in Elasticsearch
+
+## Query duplicate data
+
+```sh
+GET /web-rss-data/_search
+{
+  "query": {
+    "bool": {
+      "must": [
+        {
+          "match_all": {}
+        }
+      ]
+    }
+  },
+  "aggs": {
+    "unique_records": {
+      "terms": {
+        "field": "cid",
+        "min_doc_count": 2,
+        "size": 100000
+      }
+    }
+  }
+}
+```
+
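
The aggregation above only returns the duplicated `cid` values, not the documents that carry them. To also see which documents share each value, a `top_hits` sub-aggregation can be nested inside the same `terms` aggregation; a sketch against the same `web-rss-data` index (the `size` limits and `_source` filter are illustrative):

```sh
GET /web-rss-data/_search
{
  "size": 0,
  "aggs": {
    "duplicate_cids": {
      "terms": {
        "field": "cid",
        "min_doc_count": 2,
        "size": 1000
      },
      "aggs": {
        "dup_docs": {
          "top_hits": {
            "size": 5,
            "_source": ["cid"]
          }
        }
      }
    }
  }
}
```

Each bucket then lists up to five hits (with their `_id`s) sharing that `cid`. Actually deleting the extras still has to be driven client-side, e.g. a bulk delete that keeps one `_id` per bucket, since `_delete_by_query` cannot express "keep one copy".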
"a/rancher/\350\247\243\345\206\263\346\233\264\346\226\260rancher2.6.13\345\220\216\346\212\245webhook\345\222\214fleet chart\347\211\210\346\234\254\344\270\215\346\224\257\346\214\201.md" "b/rancher/\350\247\243\345\206\263\346\233\264\346\226\260rancher2.6.13\345\220\216\346\212\245webhook\345\222\214fleet chart\347\211\210\346\234\254\344\270\215\346\224\257\346\214\201.md" new file mode 100644 index 0000000..ef0bdcc --- /dev/null +++ "b/rancher/\350\247\243\345\206\263\346\233\264\346\226\260rancher2.6.13\345\220\216\346\212\245webhook\345\222\214fleet chart\347\211\210\346\234\254\344\270\215\346\224\257\346\214\201.md" @@ -0,0 +1,123 @@ +# 解决更新rancher2.6.13后报webhook和fleet chart版本不支持 + +## 部署方式 + +模板渲染 + +```sh +helm template rancher ./rancher-2.6.13.tgz --output-dir . \ +--namespace cattle-system \ +--set hostname=rancher.example.com \ +--set replicas=2 \ +--set ingress.tls.source=secret \ +--set useBundledSystemChart=true \ +-f values.yaml +``` + +执行更新 + +``` +kubectl -n cattle-system apply -R -f ./rancher +``` + +## 报错信息 + +``` +2023/12/05 08:35:33 [ERROR] available chart version (100.0.2+up0.3.8) for fleet is less than the min version (100.2.3+up0.5.3) +2023/12/05 08:35:33 [ERROR] Failed to find system chart fleet will try again in 5 seconds: no chart name found +``` + +查看clusterrepos,发现这个版本 commit dc9ad74ba365f4ea15d173aac999f4e8134925f9 比较旧,不是最新的 + +```yaml +# kubectl get clusterrepos.catalog.cattle.io rancher-charts -o yaml + +apiVersion: catalog.cattle.io/v1 +kind: ClusterRepo +metadata: + creationTimestamp: "2021-12-31T04:06:55Z" + generation: 4 + name: rancher-charts + resourceVersion: "1199671909" + uid: 86962934-4176-477d-ad0a-d8c8c0af7469 +spec: + forceUpdate: "2022-10-19T10:49:54Z" + gitBranch: release-v2.6 + gitRepo: https://git.rancher.io/charts +status: + branch: release-v2.6 + commit: 5d21c199dc7db29b6a5c755558edf4f6343b4c2b + conditions: + - lastUpdateTime: "2022-10-19T10:49:55Z" + status: "True" + type: FollowerDownloaded + - lastUpdateTime: "2023-12-05T09:16:34Z" + status: "True" + type: Downloaded + downloadTime: "2023-12-05T09:16:34Z" + indexConfigMapName: rancher-charts-0-86962934-4176-477d-ad0a-d8c8c0af7469 + indexConfigMapNamespace: cattle-system + observedGeneration: 4 + url: https://git.rancher.io/charts +``` + + + +## 解决方法 + +删除 rancher 自动创建的 clusterrepos + +``` + kubectl delete clusterrepos.catalog.cattle.io rancher-charts + kubectl delete clusterrepos.catalog.cattle.io rancher-rke2-charts + kubectl delete clusterrepos.catalog.cattle.io rancher-partner-charts +``` + +重启rancher + +```sh +kubectl rollout restart deploy rancher -n cattle-system +``` + +查看 clusterrepo rancher-charts, 发现已经更新 + +```yaml +# kubectl get clusterrepos.catalog.cattle.io rancher-charts -o yaml + +apiVersion: catalog.cattle.io/v1 +kind: ClusterRepo +metadata: + creationTimestamp: "2023-12-05T09:28:51Z" + generation: 1 + name: rancher-charts + resourceVersion: "1199693562" + uid: 5d24e500-2a6f-4b68-bd31-f85928ca1d54 +spec: + gitBranch: release-v2.6 + gitRepo: https://git.rancher.io/charts +status: + branch: release-v2.6 + commit: dc9ad74ba365f4ea15d173aac999f4e8134925f9 + conditions: + - lastUpdateTime: "2023-12-05T09:28:51Z" + status: "True" + type: FollowerDownloaded + - lastUpdateTime: "2023-12-05T09:34:05Z" + status: "True" + type: Downloaded + downloadTime: "2023-12-05T09:34:05Z" + indexConfigMapName: rancher-charts-0-5d24e500-2a6f-4b68-bd31-f85928ca1d54 + indexConfigMapNamespace: cattle-system + observedGeneration: 1 + url: https://git.rancher.io/charts +``` + +查看rancher 
+
+
+
+## References
+
+https://github.com/harvester/harvester/blob/a9006087711e92415960801ceca611febd04e937/package/upgrade/upgrade_manifests.sh#L193-L199
+
+https://github.com/rancher/rancher/issues/36914
\ No newline at end of file