
Installing v0.8.0 on CentOS 8.4 fails #39

Open
xy2019devl opened this issue Oct 11, 2022 · 12 comments

@xy2019devl

Operating system:
[root@k8snode04 ~]# cat /etc/redhat-release
CentOS Linux release 8.4.2105
Kernel version:
[root@k8snode04 ~]# uname -a
Linux k8snode04 4.18.0-305.3.1.el8.x86_64 #1 SMP Tue Jun 1 16:14:33 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
kube-install version: kube-install-allinone-v0.8.0.tgz
Installation via the command line fails:
[root@k8snode04 kube-install]# ./kube-install -exec sshcontrol -sship "192.168.50.54,192.168.50.55,192.168.50.56,192.168.50.57,192.168.50.58" -sshport 22 -sshpass "1qaz2wsx"

Opening SSH tunnel, please wait...

2022/10/11 12:57:01 Error waiting for command execution: exit status 1......
[Error] 2022-10-11 12:57:01.462401159 +0800 CST m=+260.489429147 Failed to open the SSH channel. Please use "root" user to manually open the SSH channel from the local host to the target host, or try to open the SSH channel again after executing the following command on the target host:


sudo sed -i "/PermitRootLogin/d" /etc/ssh/sshd_config
sudo sh -c "echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config"
sudo sed -i "/StrictHostKeyChecking/s/^#//; /StrictHostKeyChecking/s/ask/no/" /etc/ssh/ssh_config
sudo systemctl restart sshd

(If the SSH port of the host is not "22", use the "-sshport" to specify the correct port.)

Failed to open SSH tunnel!

[root@k8snode04 kube-install]#

root@k8snode04 kube-install]# ./kube-install -exec install -master "192.168.50.54,192.168.50.55,192.168.50.56" -node "192.168.50.54,192.168.50.55,192.168.50.56,192.168.50.57,192.168.50.58" -k8sver "1.24" -ostype "centos8" -label "k8s_prod" -softdir /data/k8s


[Info] 2022-10-11 12:23:49.96783815 +0800 CST m=+0.047450820 Installing kubernetes cluster, please wait ...

Kubernetes Cluster Label: k8s_prod
Kubernetes Version: Kubernetes v1.24
Kubernetes Master: 192.168.50.54,192.168.50.55,192.168.50.56
Kubernetes Node: 192.168.50.54,192.168.50.55,192.168.50.56,192.168.50.57,192.168.50.58
SSH Operation Port: 22
CNI Plug-in Type: flannel
Operating System Type: centos8
Automatically Upgrade OS Kernel: Not Support
System User for Installation: root

PLAY [master,node] *************************************************************

TASK [/root/kube-install/data/output/k8s_prod/sys/0x0000000000base/genfile : 0.Distributing deployment files to target host, please wait...] ***
fatal: [192.168.50.55]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.50.55 port 22: Connection timed out", "unreachable": true}
fatal: [192.168.50.54]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.50.54 port 22: Connection timed out", "unreachable": true}
fatal: [192.168.50.57]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.50.57 port 22: Connection timed out", "unreachable": true}
fatal: [192.168.50.58]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.50.58 port 22: Connection timed out", "unreachable": true}
fatal: [192.168.50.56]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.50.56 port 22: Connection timed out", "unreachable": true}

PLAY RECAP *********************************************************************
192.168.50.54 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
192.168.50.55 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
192.168.50.56 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
192.168.50.57 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
192.168.50.58 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0

[Info] 2022-10-11 12:24:01.793613504 +0800 CST m=+11.873226174 Cleaning and detection after installation are in progress. Please wait ...

[Error] 2022-10-11 12:33:01.80195567 +0800 CST m=+551.881568381 Kubernetes cluster install failed! k8s_prod cluster status is unhealthy!


When installing through the graphical interface, the status shows Unknown and the installation fails.

@houseonline
Collaborator

houseonline commented Oct 12, 2022

The SSH error above is already clear: the passwordless SSH channel to your servers could not be created. Every company has different security compliance requirements and settings, so our "-exec sshcontrol" is provided only as a reference. If some system security configuration prevents the passwordless SSH channel from being created, you can try to open it manually.

Solution: manually create passwordless SSH channels from k8snode04 to 192.168.50.54, 192.168.50.55, 192.168.50.56, 192.168.50.57, and 192.168.50.58.

@houseonline
Collaborator

houseonline commented Oct 12, 2022

Example of manually creating a passwordless SSH channel:


Goal: host hostA can ssh to hostB without a password.

Steps:

1. Run ssh-keygen -t rsa

/root/.ssh/ now contains id_rsa and id_rsa.pub

2. Run cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys

The authorization file is created.

3. Run ssh-copy-id root@hostB

On success, hostB now has the /root/.ssh/authorized_keys file.

4. Run ssh hostB; the connection succeeds.


Manually creating a passwordless SSH channel is straightforward, and many guides can be found online.
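
For reference, here is the same procedure as a single shell sequence. This is a minimal sketch assuming the control host is k8snode04 and the targets are the five 192.168.50.x hosts from the original command; adjust the host list and key options to your environment.

# On the control host: generate a key pair once (no passphrase, for unattended use)
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

# Push the public key to every target host; each ssh-copy-id prompts for the root password once
for host in 192.168.50.54 192.168.50.55 192.168.50.56 192.168.50.57 192.168.50.58; do
    ssh-copy-id root@${host}
done

# Verify passwordless login works before re-running kube-install
ssh root@192.168.50.54 hostname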

@houseonline
Collaborator

Once the passwordless SSH channel has been created, you can proceed with a normal installation through the kube-install GUI or command line.

@xy2019devl
Author

xy2019devl commented Oct 12, 2022 via email

@xy2019devl
Author

xy2019devl commented Oct 12, 2022 via email

@xy2019devl
Author

Error log from the installation:

PLAY [master1] *****************************************************************

TASK [/root/kube-install/data/output/k8s_prod/sys/0x00000000addons : 0.Create addons directory] ***
changed: [192.168.3.161] => (item=coredns)
changed: [192.168.3.161] => (item=dashboard)
changed: [192.168.3.161] => (item=metrics-server)
changed: [192.168.3.161] => (item=heapster)
changed: [192.168.3.161] => (item=helm)
changed: [192.168.3.161] => (item=traefik)
changed: [192.168.3.161] => (item=registry)
changed: [192.168.3.161] => (item=temp)

TASK [/root/kube-install/data/output/k8s_prod/sys/0x00000000addons : 1.1 Create coredns.yaml file] ***
changed: [192.168.3.161]

TASK [/root/kube-install/data/output/k8s_prod/sys/0x00000000addons : 1.2 Deploy coredns] ***
fatal: [192.168.3.161]: FAILED! => {"changed": true, "cmd": "/usr/sbin/kubectl --kubeconfig=/etc/kubernetes/ssl/kube-install.kubeconfig apply -f /opt/kube-install/k8s/addons/coredns/coredns.yaml", "delta": "0:01:48.803434", "end": "2022-10-12 14:52:55.642280", "msg": "non-zero return code", "rc": 1, "start": "2022-10-12 14:51:06.838846", "stderr": "Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead\nWarning: spec.template.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: deprecated since v1.19, non-functional in v1.25+; use the "seccompProfile" field instead\nError from server (Timeout): error when retrieving current configuration of:\nResource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"\nName: "coredns", Namespace: "kube-system"\nfrom server for: "/opt/kube-install/k8s/addons/coredns/coredns.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts coredns)", "stderr_lines": ["Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead", "Warning: spec.template.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: deprecated since v1.19, non-functional in v1.25+; use the "seccompProfile" field instead", "Error from server (Timeout): error when retrieving current configuration of:", "Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"", "Name: "coredns", Namespace: "kube-system"", "from server for: "/opt/kube-install/k8s/addons/coredns/coredns.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts coredns)"], "stdout": "clusterrole.rbac.authorization.k8s.io/system:coredns created\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns created\nconfigmap/coredns created\ndeployment.apps/coredns created\nservice/kube-dns created", "stdout_lines": ["clusterrole.rbac.authorization.k8s.io/system:coredns created", "clusterrolebinding.rbac.authorization.k8s.io/system:coredns created", "configmap/coredns created", "deployment.apps/coredns created", "service/kube-dns created"]}

PLAY RECAP *********************************************************************
192.168.3.161 : ok=150 changed=112 unreachable=0 failed=1 skipped=0 rescued=0 ignored=7
192.168.3.162 : ok=97 changed=64 unreachable=0 failed=0 skipped=0 rescued=0 ignored=6
192.168.3.163 : ok=97 changed=64 unreachable=0 failed=0 skipped=0 rescued=0 ignored=6
192.168.3.164 : ok=77 changed=53 unreachable=0 failed=0 skipped=0 rescued=0 ignored=6
192.168.3.165 : ok=77 changed=61 unreachable=0 failed=0 skipped=0 rescued=0 ignored=6

[Error] 2022-10-12 14:52:55.945082336 +0800 CST m=+7285.287588335 Kubernetes install failed! There is an error in the process!


@xy2019devl
Author

PLAY [etcd] ********************************************************************

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000storage : 1.Cleaning up garbage files left in history] ***
ok: [192.168.3.161]
ok: [192.168.3.162]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000storage : 2.Decompress etcd software package] ***
changed: [192.168.3.162]
changed: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000storage : 3.Create /opt/kube-install/k8s/etcd data directory] ***
changed: [192.168.3.162]
changed: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000storage : 4.Create etcd cert directory] ***
ok: [192.168.3.162]
ok: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000storage : 5.Distribution etcd cert file] ***
changed: [192.168.3.162]
ok: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000storage : 6.Check and replace etcd member status] ***
changed: [192.168.3.162]
changed: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000storage : shell] ***
changed: [192.168.3.162]
changed: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000storage : 7.Create etcd service file] ***
changed: [192.168.3.162]
changed: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000storage : 8.Start etcd service] ***
changed: [192.168.3.161]
changed: [192.168.3.162]

PLAY [master,node] *************************************************************

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/kubectl : 1.Create /root/.kube directory] ***
ok: [192.168.3.162]
ok: [192.168.3.161]
ok: [192.168.3.164]
ok: [192.168.3.165]
ok: [192.168.3.163]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/kubectl : 2.Create local.kubeconfig file] ***
changed: [192.168.3.165]
changed: [192.168.3.164]
changed: [192.168.3.162]
changed: [192.168.3.163]
changed: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/kubectl : 3.Generate kubectl program file] ***
changed: [192.168.3.163]
changed: [192.168.3.165]
changed: [192.168.3.164]
changed: [192.168.3.162]
changed: [192.168.3.161]

PLAY [master] ******************************************************************

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/apiserver : 1.Distribution kube-apiserver cert] ***
ok: [192.168.3.161]
ok: [192.168.3.162]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/apiserver : 2.Create kube-apiserver service startup file] ***
changed: [192.168.3.162]
changed: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/apiserver : 3.Create /opt/kube-install/k8s/kubernetes/kube-apiserver directory] ***
changed: [192.168.3.162]
changed: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/apiserver : 4.Start kube-apiserver service] ***
changed: [192.168.3.161]
changed: [192.168.3.162]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/apiserver : 5.Set IPVS rules] ***
changed: [192.168.3.161]
changed: [192.168.3.162]

PLAY [master1] *****************************************************************

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/api-rbac : 1.Wait 30s] ***
ok: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/api-rbac : 2.Create clusterrolebinding] ***
fatal: [192.168.3.161]: FAILED! => {"changed": true, "cmd": "kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap", "delta": "0:00:09.843358", "end": "2022-10-12 23:24:15.230199", "msg": "non-zero return code", "rc": 1, "start": "2022-10-12 23:24:05.386841", "stderr": "error: failed to create clusterrolebinding: Post "https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?fieldManager=kubectl-create&fieldValidation=Strict\": unexpected EOF", "stderr_lines": ["error: failed to create clusterrolebinding: Post "https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?fieldManager=kubectl-create&fieldValidation=Strict": unexpected EOF"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/api-rbac : shell] ***
fatal: [192.168.3.161]: FAILED! => {"changed": true, "cmd": "kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes", "delta": "0:00:00.031075", "end": "2022-10-12 23:24:15.818367", "msg": "non-zero return code", "rc": 1, "start": "2022-10-12 23:24:15.787292", "stderr": "error: failed to create clusterrolebinding: Post "https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?fieldManager=kubectl-create&fieldValidation=Strict\": dial tcp 127.0.0.1:6443: connect: connection refused", "stderr_lines": ["error: failed to create clusterrolebinding: Post "https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 127.0.0.1:6443: connect: connection refused"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/api-rbac : 3.Get kubelet-api-admin role info] ***
fatal: [192.168.3.161]: FAILED! => {"changed": true, "cmd": "kubectl describe clusterrole system:node-bootstrapper", "delta": "0:00:00.433389", "end": "2022-10-12 23:24:16.603646", "msg": "non-zero return code", "rc": 1, "start": "2022-10-12 23:24:16.170257", "stderr": "The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?", "stderr_lines": ["The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []}
...ignoring

PLAY [master] ******************************************************************

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/controller-manager : 1.Create kube-controller-manager service startup file] ***
changed: [192.168.3.161]
changed: [192.168.3.162]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/controller-manager : 2.Create /opt/kube-install/k8s/kubernetes/kube-controller-manager directory] ***
changed: [192.168.3.161]
changed: [192.168.3.162]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/controller-manager : 3.Start kube-controller-manager service] ***
changed: [192.168.3.162]
changed: [192.168.3.161]

PLAY [master] ******************************************************************

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/scheduler : 1.Create kube-scheduler service startup file] ***
changed: [192.168.3.161]
changed: [192.168.3.162]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/scheduler : 2.Create /opt/kube-install/k8s/kubernetes/kube-scheduler directory] ***
changed: [192.168.3.161]
changed: [192.168.3.162]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x00000000master/scheduler : 3.Start kube-scheduler service] ***
changed: [192.168.3.161]
changed: [192.168.3.162]

PLAY [master1] *****************************************************************

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000network/flannel : 1.Create cniplugin directory] ***
changed: [192.168.3.161] => (item=flannel)
changed: [192.168.3.161] => (item=calico)
changed: [192.168.3.161] => (item=kuberouter)
changed: [192.168.3.161] => (item=weave)
changed: [192.168.3.161] => (item=cilium)

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000network/flannel : 2.Create flannel.yaml file] ***
changed: [192.168.3.161]

TASK [/root/kube-install/data/output/jzyz-k8s-prod/sys/0x0000000network/flannel : 3.Deploy flannel] ***

fatal: [192.168.3.161]: FAILED! => {"changed": true, "cmd": "/usr/sbin/kubectl --kubeconfig=/etc/kubernetes/ssl/kube-install.kubeconfig apply -f /opt/kube-install/k8s/cniplugin/flannel/flannel.yaml", "delta": "0:00:28.986573", "end": "2022-10-12 23:25:03.083417", "msg": "non-zero return code", "rc": 1, "start": "2022-10-12 23:24:34.096844", "stderr": "resource mapping not found for name: "psp.flannel.unprivileged" namespace: "" from "/opt/kube-install/k8s/cniplugin/flannel/flannel.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"\nensure CRDs are installed first\nresource mapping not found for name: "flannel" namespace: "" from "/opt/kube-install/k8s/cniplugin/flannel/flannel.yaml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1"\nensure CRDs are installed first\nresource mapping not found for name: "flannel" namespace: "" from "/opt/kube-install/k8s/cniplugin/flannel/flannel.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1"\nensure CRDs are installed first", "stderr_lines": ["resource mapping not found for name: "psp.flannel.unprivileged" namespace: "" from "/opt/kube-install/k8s/cniplugin/flannel/flannel.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"", "ensure CRDs are installed first", "resource mapping not found for name: "flannel" namespace: "" from "/opt/kube-install/k8s/cniplugin/flannel/flannel.yaml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1"", "ensure CRDs are installed first", "resource mapping not found for name: "flannel" namespace: "" from "/opt/kube-install/k8s/cniplugin/flannel/flannel.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1"", "ensure CRDs are installed first"], "stdout": "serviceaccount/flannel created\nconfigmap/kube-flannel-cfg created\ndaemonset.apps/kube-flannel-ds created", "stdout_lines": ["serviceaccount/flannel created", "configmap/kube-flannel-cfg created", "daemonset.apps/kube-flannel-ds created"]}

PLAY RECAP *********************************************************************
192.168.3.161 : ok=123 changed=91 unreachable=0 failed=1 skipped=0 rescued=0 ignored=9
192.168.3.162 : ok=84 changed=55 unreachable=0 failed=0 skipped=0 rescued=0 ignored=6
192.168.3.163 : ok=64 changed=38 unreachable=0 failed=0 skipped=0 rescued=0 ignored=6
192.168.3.164 : ok=64 changed=36 unreachable=0 failed=0 skipped=0 rescued=0 ignored=6
192.168.3.165 : ok=64 changed=36 unreachable=0 failed=0 skipped=0 rescued=0 ignored=6

[Info] 2022-10-12 23:25:03.184857691 +0800 CST m=+6901.265222912 Cleaning and detection after installation are in progress. Please wait ...

[Error] 2022-10-12 23:34:06.058730745 +0800 CST m=+7444.139095978 Kubernetes cluster install failed! jzyz-k8s-prod cluster status is unhealthy!


@houseonline
Collaborator

houseonline commented Oct 17, 2022

Possible causes and suggestions:
(1) If another Kubernetes has ever been installed in this environment, uninstall it and clean up first to avoid conflicts. If it was installed with kube-install, it can be removed through the GUI or with the kube-install -exec uninstall command line, regardless of whether the installation succeeded (a sketch of such an invocation follows below).
(2) Because k8s-master and etcd are deployed together, etcd's official recommendation is to use an odd number of nodes, e.g. 1, 3, or 5 k8s-master nodes; an even number of k8s-master nodes is not recommended. There is no such restriction on the number of k8s-node nodes.
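
Purely to illustrate suggestion (1): the exact flags of "-exec uninstall" are not shown in this thread, so the -label flag below is an assumption borrowed from the install command and should be verified against the kube-install documentation before use.

# Hypothetical cleanup invocation; -label is assumed, check the kube-install docs for the real flags
./kube-install -exec uninstall -label "k8s_prod"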

@xy2019devl
Author

These are freshly created VMs with CentOS 8.5 installed. I uninstalled through the GUI and tried reinstalling, but the problem is still the same.

@xy2019devl
Author

[root@localhost kube-install]# cat /etc/redhat-release
CentOS Linux release 8.5.2111
[root@localhost kube-install]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.3.165 master01
192.168.3.166 worker01
192.168.3.167 worker02
[root@localhost kube-install]#

[root@localhost kube-install]# ./kube-install -init -ostype "centos8"

Initialization in progress, please wait...

Notice: If you are prompted to enter the password below, please enter the root password again!
[email protected]'s password:

Initialization completed!

[root@localhost kube-install]# ./kube-install -exec sshcontrol -sship "192.168.3.165,192.168.3.166,192.168.3.167" -sshpass "1qaz2wsx"

Opening SSH tunnel, please wait...

[Info] 2022-10-24 14:02:12.643061575 +0800 CST m=+1.098110262 Successfully open the SSH channel from local host to the target host (192.168.3.165,192.168.3.166,192.168.3.167)!

The SSH tunnel is opened!

[root@localhost kube-install]#
[root@localhost kube-install]# ./kube-install -exec install -master "192.168.3.165" -node "192.168.3.166,192.168.3.167" -k8sver "1.23" -ostype "centos8" -label "k8s-prod"


[Info] 2022-10-24 14:09:15.602774641 +0800 CST m=+0.147188261 Installing kubernetes cluster, please wait ...

Kubernetes Cluster Label: k8s-prod
Kubernetes Version: Kubernetes v1.23
Kubernetes Master: 192.168.3.165
Kubernetes Node: 192.168.3.166,192.168.3.167
CNI Plug-in Type: flannel
Operating System Type: centos8
Automatically Upgrade OS Kernel: Not Support
System User for Installation: root

PLAY [master,node] *************************************************************

TASK [/opt/kube-install/data/output/k8s-prod/sys/0x0000000000base/genfile : 0.Distributing deployment files to target host, please wait...] ***
changed: [192.168.3.166]
changed: [192.168.3.167]
ok: [192.168.3.165]

TASK [/opt/kube-install/data/output/k8s-prod/sys/0x0000000000base/genfile : file] ***
changed: [192.168.3.166]
changed: [192.168.3.167]
changed: [192.168.3.165]

TASK [/opt/kube-install/data/output/k8s-prod/sys/0x0000000000base/genfile : copy] ***
changed: [192.168.3.165]
changed: [192.168.3.166]
changed: [192.168.3.167]

TASK [/opt/kube-install/data/output/k8s-prod/sys/0x0000000000base/genfile : copy] ***
changed: [192.168.3.166]
changed: [192.168.3.167]
changed: [192.168.3.165]

...... (intermediate output omitted) ......

TASK [/opt/kube-install/data/output/k8s-prod/sys/0x00000000action/pushimages : wait_for] ***
ok: [192.168.3.166]
ok: [192.168.3.167]

PLAY [node] ********************************************************************

TASK [/opt/kube-install/data/output/k8s-prod/sys/0x00000000finish/install : Create reboot config file] ***
changed: [192.168.3.167]
changed: [192.168.3.166]

TASK [/opt/kube-install/data/output/k8s-prod/sys/0x00000000finish/install : Congratulations, kubernetes cluster installation and deployment is successful! "The operating system will automatically restart in 10 seconds to take effect on the cluster configuration."] ***
ok: [192.168.3.167]
ok: [192.168.3.166]

TASK [/opt/kube-install/data/output/k8s-prod/sys/0x00000000finish/install : shell] ***
changed: [192.168.3.166]
changed: [192.168.3.167]

PLAY RECAP *********************************************************************
192.168.3.165 : ok=129 changed=114 unreachable=0 failed=0 skipped=0 rescued=0 ignored=6
192.168.3.166 : ok=91 changed=67 unreachable=0 failed=0 skipped=0 rescued=0 ignored=11
192.168.3.167 : ok=91 changed=67 unreachable=0 failed=0 skipped=0 rescued=0 ignored=11

[Info] 2022-10-24 14:46:12.195787518 +0800 CST m=+2216.740201120 Cleaning and detection after installation are in progress. Please wait ...

[Info] 2022-10-24 14:46:12.206554586 +0800 CST m=+2216.750968182 Kubernetes cluster install completed!


[root@localhost kube-install]# pwd
/opt/kube-install
[root@localhost kube-install]# ls -lh
total 46M
drwxr-xr-x 7 root root 75 Nov 11 2021 data
drwxr-xr-x 3 root root 4.0K Nov 25 2021 docs
drwxr-xr-x 7 root root 79 Oct 24 14:43 k8s
-rwxr-xr-x 1 root root 46M Dec 16 2021 kube-install
-rw-r--r-- 1 root root 15K Dec 13 2021 kube-install.go
-rw-r--r-- 1 root root 425 Oct 24 14:00 kube-install.service
drwxr-xr-x 2 root root 267 Nov 26 2021 lib
-rw-r--r-- 1 root root 12K Nov 11 2021 LICENSE
-rw-r--r-- 1 root root 1.5K Oct 24 14:45 loginkey.txt
-rw-r--r-- 1 root root 150 Nov 11 2021 Makefile
drwxr-xr-x 8 root root 277 Dec 13 2021 pkg
-rw-r--r-- 1 root root 4.1K Nov 11 2021 README0.1.md
-rw-r--r-- 1 root root 4.2K Nov 11 2021 README0.2.md
-rw-r--r-- 1 root root 6.9K Nov 11 2021 README0.3.md
-rw-r--r-- 1 root root 8.0K Nov 11 2021 README0.4.md
-rw-r--r-- 1 root root 8.9K Nov 11 2021 README0.5.md
-rw-r--r-- 1 root root 7.9K Nov 11 2021 README0.6.md
-rw-r--r-- 1 root root 18K Nov 25 2021 README0.7-jp.md
-rw-r--r-- 1 root root 14K Nov 25 2021 README0.7.md
-rw-r--r-- 1 root root 14K Nov 25 2021 README0.7-zh-hk.md
-rw-r--r-- 1 root root 14K Nov 25 2021 README0.7-zh.md
-rw-r--r-- 1 root root 14K Nov 25 2021 README.md
drwxr-xr-x 4 root root 34 Nov 11 2021 static
drwxr-xr-x 12 root root 246 Nov 11 2021 sys
drwxr-xr-x 2 root root 32 Nov 11 2021 yaml
[root@localhost kube-install]#

Does this mean the installation succeeded?

@houseonline
Collaborator

houseonline commented Oct 31, 2022

Did you set the directory where the install package is stored and the installation target directory to the same path? From your log, both appear to be /opt/kube-install, which makes the /opt/kube-install directory look rather cluttered.
I suggest separating the two: either do not put the install package in /opt/kube-install, or, if you do keep the package there, change the installation target path via a parameter (see the sketch below).
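
A sketch of what that separation might look like. The thread does not name the parameter the previous comment refers to; -softdir, which appears in the very first install command in this issue, looks like the candidate, so treat the flag and the paths below as assumptions to verify against the kube-install documentation.

# Keep the unpacked kube-install package outside the install target, e.g. under /root/kube-install,
# and point the installation at a separate directory (-softdir is taken from the earlier command
# in this thread; confirm its meaning before relying on it)
cd /root/kube-install
./kube-install -exec install -master "192.168.3.165" -node "192.168.3.166,192.168.3.167" \
    -k8sver "1.23" -ostype "centos8" -label "k8s-prod" -softdir /data/k8s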

@houseonline
Collaborator

(Quoting the previous comment:)
[root@localhost kube-install]# cat /etc/redhat-release
CentOS Linux release 8.5.2111
[root@localhost kube-install]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain
...... (intermediate output omitted) ......

Does this mean the installation succeeded?

There are no errors anywhere in the output, so the installation should have succeeded.
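
For a quick double-check independent of the installer (a generic sketch, not a kube-install feature), the cluster can be queried with kubectl from the master node. The kubeconfig path below is taken from the kubectl invocations in the logs earlier in this thread.

# Run on the master node (192.168.3.165)
kubectl --kubeconfig=/etc/kubernetes/ssl/kube-install.kubeconfig get nodes -o wide
kubectl --kubeconfig=/etc/kubernetes/ssl/kube-install.kubeconfig get pods -n kube-system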

@houseonline houseonline self-assigned this Nov 16, 2022
@houseonline houseonline added the question Further information is requested label Nov 16, 2022