- Kubernetes master:
curl -sSL https://get.kismatic.com/kubernetes/master.sh | sudo sh
- Kubernetes nodes:
curl -sSL https://get.kismatic.com/kubernetes/node.sh | sudo sh
- Requirements:
The quick start scripts above automatically add the kismatic repository as a yum source. Alternatively, you can add it manually with:
rpm -Uvh https://repos.kismatic.com/el/7/x86_64/kismatic-repo-el-7-1.x86_64.rpm
Manually Install Master: sudo yum install kubernetes-master
Manually Install Nodes: sudo yum install kubernetes-node
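To confirm the repository is configured (the exact repo id shown may differ):
yum repolist | grep -i kismatic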
Configure Kubernetes Master
To configure services, edit the configuration files in /etc/kubernetes/master
or /etc/kubernetes/node
To override a service's systemd unit, copy it from /lib/systemd/system
to /etc/systemd/system
and edit the copy there. This allows you to customize local services without worrying about losing changes as the result of a package upgrade.
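For example, to customize the kube-apiserver unit (unit file name assumed to match the service name used later in this guide):
sudo cp /lib/systemd/system/kube-apiserver.service /etc/systemd/system/kube-apiserver.service
sudo systemctl daemon-reload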
Enable HTTPS (optional)
By default, the apiserver is set up using insecure HTTP and assumes a firewall will provide enough security. To enable HTTPS support, certs must be generated. A basic key has already been generated during install at /srv/kubernetes/server.key. To generate certs:
cd /srv/kubernetes
curl -O -L https://raw.githubusercontent.com/kubernetes/kubernetes/v1.1.2/cluster/saltbase/salt/generate-cert/make-ca-cert.sh
chmod +x make-ca-cert.sh
# Set to your master IP and the service cluster IP range you have already chosen
master_ip=127.0.0.1
SERVICE_CLUSTER_IP_RANGE=10.254.0.0/16
# Assumes the cluster is named 'kubernetes' for DNS
./make-ca-cert.sh ${master_ip} IP:${master_ip},IP:${SERVICE_CLUSTER_IP_RANGE%.*}.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local;
For AWS, the external IP can be discovered automatically using:
./make-ca-cert.sh _use_aws_external_ip_ (other args...)
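Optionally, sanity-check the subject alternative names in the generated certificate with openssl (server.cert is the file referenced below):
openssl x509 -in /srv/kubernetes/server.cert -noout -text | grep -A1 'Subject Alternative Name'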
Update the Kubernetes configuration files with the newly generated certs and keys:
#kube-apiserver
--secure-port=443 --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key --client-ca-file=/srv/kubernetes/ca.crt
#kube-controller-manager
--root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key
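As a sketch of where these flags might live, assuming the packaged config files pass extra flags through variables named KUBE_API_ARGS and KUBE_CONTROLLER_MANAGER_ARGS (the variable names are assumptions; check the actual files under /etc/kubernetes/master):
# /etc/kubernetes/master/kube-apiserver.conf (variable name assumed)
KUBE_API_ARGS="--secure-port=443 --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key --client-ca-file=/srv/kubernetes/ca.crt"
# /etc/kubernetes/master/kube-controller-manager.conf (variable name assumed)
KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key"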
Start services:
sudo systemctl start kube-apiserver
sudo systemctl start kube-scheduler
sudo systemctl start kube-controller-manager
sudo systemctl start kubelet
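To have the services come back up after a reboot, you can also enable them:
sudo systemctl enable kube-apiserver
sudo systemctl enable kube-scheduler
sudo systemctl enable kube-controller-manager
sudo systemctl enable kubelet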
Test with:
curl -L http://<master ip>:8080/version
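The apiserver's /healthz endpoint is another quick check; a healthy server responds with "ok":
curl -L http://<master ip>:8080/healthz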
Set --master=<master ip> for a node in /etc/kubernetes/node/kube-proxy.conf and --api-servers=<master ip> in /etc/kubernetes/node/kubelet.conf
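A sketch of those edits, assuming the node config files pass flags through variables such as KUBE_PROXY_ARGS and KUBELET_ARGS (names assumed; check the files the package installed):
# /etc/kubernetes/node/kube-proxy.conf (variable name assumed)
KUBE_PROXY_ARGS="--master=http://<master ip>:8080"
# /etc/kubernetes/node/kubelet.conf (variable name assumed)
KUBELET_ARGS="--api-servers=http://<master ip>:8080"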
sudo systemctl start kube-proxy
sudo systemctl start kubelet
Check that services are running:
systemctl list-units -t service --all
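To narrow the output to the Kubernetes services:
systemctl list-units -t service --all | grep kube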
On Ubuntu or Debian, install docker and etcd.
To configure services, edit the file with the same name in /etc/default
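To see which of these files the packages installed:
ls /etc/default | grep kube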
Master
sudo service kube-apiserver start
sudo service kube-scheduler start
sudo service kube-controller-manager start
sudo service kubelet start
Configure Node
Set the --master for a node in /etc/default/kube-proxy and --api-servers in /etc/default/kubelet
sudo service kube-proxy start
sudo service kubelet start
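To confirm both services came up:
sudo service kube-proxy status
sudo service kubelet status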
- Make sure to have fpm installed along with rpmbuild
- Run:
K8S_VERSION=1.2.0 ./build_kubernetes.sh
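Before running the build, a quick check that fpm and rpmbuild (mentioned above) are on the PATH:
fpm --version
rpmbuild --version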
- CentOS
$ cd vagrant/centos; vagrant destroy && vagrant up
$ vagrant ssh master
$ sudo K8S_VERSION=1.2.0 /vagrant/test/master.sh
$ vagrant ssh node
$ sudo K8S_VERSION=1.2.0 /vagrant/test/node.sh
- Ubuntu
# Trusty
$ cd vagrant/ubuntu/trusty; vagrant destroy && vagrant up
$ vagrant ssh master
$ sudo K8S_VERSION=1.2.0 /vagrant/test/master.sh
$ vagrant ssh node
$ sudo K8S_VERSION=1.2.0 /vagrant/test/node.sh
# Vivid
$ cd vagrant/ubuntu/vivid; vagrant destroy && vagrant up
$ vagrant ssh master
$ sudo K8S_VERSION=1.2.0 /vagrant/test/master.sh
$ vagrant ssh node
$ sudo K8S_VERSION=1.2.0 /vagrant/test/node.sh
- ruby
- prerequisites:
gem install fpm