Merge pull request #5 from dougbtv/upgrade16
Upgrade to kube 1.6.1 beta
dougbtv authored Apr 5, 2017
2 parents 9e2b76b + 258fdb3 commit 7e5f383
Showing 17 changed files with 315 additions and 80 deletions.
72 changes: 72 additions & 0 deletions docs/scratch.rbac.md
@@ -0,0 +1,72 @@
# RBAC


[Here's the bible](https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions)

Help from [@liggitt](https://github.com/liggitt) (Jordan Liggitt)

```
liggitt [1:08 PM]
if you’re on 1.6 with RBAC, you’ll want to keep https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions close at hand (edited)
[1:10]
most things don’t define their own roles (which is fine, the default `view`, `edit`, `admin`, `cluster-admin` roles cover a ton of use cases), but very few apps explain the API permissions they require (some need none, some need read-only access, some assume they are root, etc)
dougbtv [1:12 PM]
awesome, appreciate the pointer, insightful on the roles. looking forward to getting my feet wet with RBAC, too
liggitt [1:12 PM]
handing out permissions to the service accounts you’re running apps with is part of running an app on your cluster. if you don’t care what has access, you can grant really broad permissions and be done with it. if you want to know what is doing what, you can get more granular. (edited)
```
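To make that concrete, here's a minimal sketch of handing out permissions on 1.6: bind one of the stock roles (`view` here) to the service account an app runs under. The `demo` namespace and `my-app` service account are placeholders, not anything from this repo.

```
# Sketch: grant the built-in "view" ClusterRole to an app's service account,
# scoped to a single namespace. "demo" and "my-app" are made-up names.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1  # RBAC is still v1beta1 on kube 1.6
metadata:
  name: my-app-view
  namespace: demo
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: demo
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```

`kubectl create -f` that, and pods running as `my-app` can read (but not modify) most resources in `demo`.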

## Flannel

[flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel-rbac.yml)
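Presumably that manifest gets applied before flannel itself; roughly like this (the raw URL is just the raw view of the file linked above; the `kube-cni` role below applies it from `/etc/flannel-rbac.yaml` instead):

```
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
```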

## Now having trouble with flannel: no network connectivity outside the cluster...

[WAN connectivity doesn't work](http://pasteall.org/338143)
[iptables FORWARD change didn't help](http://pasteall.org/338157)

Some ideas from Slack:

```
foxie [9:04 AM]
@dougbtv when has it started? Have you by any chance updated to docker 1.13.x or 1.17.x?
dougbtv [9:11 AM]
@foxie I'm on `Docker version 17.03.1-ce, build c6d412e`
[9:11]
good chance that when I was using kube 1.5 I was using 1.12.x
[9:14]
(I might try to reinstall the cluster with 1.12, that's a good possibility to eliminate! appreciate the brain cycles)
foxie [9:16 AM]
you may want to try
[9:16]
iptables -P FORWARD ACCEPT
[9:16]
they changed that with 1.13
dougbtv [9:21 AM]
awesome idea -- didn't exactly work for me, but, I think you're onto something, for some reason those iptables rules look like I'm missing something and I can't quite put my finger on it. fwiw, here's the results of giving that a try: http://pasteall.org/338157
```


Tried inserting the rule at the top for fun...
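Roughly what that looked like; a sketch, where "the rule" is taken to mean an explicit ACCEPT at position 1 of the FORWARD chain (on top of the policy change suggested above):

```
# Default-accept forwarded traffic, per the suggestion in the chat above
iptables -P FORWARD ACCEPT
# ...and/or insert an explicit ACCEPT rule at the top of the FORWARD chain,
# so it's evaluated before anything docker 1.13+ may have added
iptables -I FORWARD 1 -j ACCEPT
```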

## Roll-back to docker 1.12

Let's see what we can do...
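A rough sketch of what pinning back to 1.12 could look like on these CentOS VMs (package names and the 1.12.6 version are assumptions; check `yum list docker --showduplicates` for what's actually available):

```
# Guesswork, not from this repo: swap the 17.03 package for the distro 1.12 one
systemctl stop docker
yum remove -y docker-ce docker docker-common
yum install -y docker-1.12.6
systemctl enable docker
systemctl start docker
```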

[Following this issue-comment](https://github.com/kubernetes/kubeadm/issues/212#issuecomment-291413672)

We specifically need this part:

> In `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`, add `--cgroup-driver=systemd` at the end of the last line.
> This is because Docker uses systemd as its cgroup driver, while the kubelet defaults to cgroupfs.
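Concretely, the last line of that drop-in ends up looking roughly like this (the exact set of `$KUBELET_*` variables depends on the kubeadm package version; this mirrors what the `kube-install` role change below does with `lineinfile`):

```
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (edited ExecStart line)
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS --cgroup-driver=systemd
```

Then `systemctl daemon-reload` and restart the kubelet.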
12 changes: 6 additions & 6 deletions inventory/vms.inventory
@@ -1,7 +1,7 @@
kube-master ansible_host=192.168.122.65
kube-minion-1 ansible_host=192.168.122.56
kube-minion-2 ansible_host=192.168.122.62
kube-minion-3 ansible_host=192.168.122.239
kube-master ansible_host=192.168.122.227
kube-minion-1 ansible_host=192.168.122.17
kube-minion-2 ansible_host=192.168.122.216
kube-minion-3 ansible_host=192.168.122.41

[master]
kube-master
@@ -19,7 +19,7 @@ kube-minion-3

[all_vms:vars]
ansible_ssh_user=centos
ansible_become=true
ansible_become_user=root
# ansible_become=true
# ansible_become_user=root
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p [email protected]"'
ansible_ssh_private_key_file=/home/doug/.ssh/id_testvms
23 changes: 22 additions & 1 deletion kube-install.yml
@@ -1,21 +1,42 @@
---
- hosts: all_vms
become: true
become_user: root
vars_files:
- vars/all.yml
tasks: []
roles:
- { role: docker-install }
# - { role: docker-install }
- { role: kube-install }
- { role: multus-cni, when: pod_network_type == "multus" }

- hosts: master
become: true
become_user: root
vars_files:
- vars/all.yml
tasks: []
roles:
- { role: kube-init }
- { role: kube-template-cni }

# ---- placeholder: kube-cni
# without become.

- hosts: master
vars_files:
- vars/all.yml
vars:
kubectl_environment:
KUBECONFIG: "{{ kubectl_home }}/admin.conf"
tasks: []
roles:
- { role: kube-cni }


- hosts: minions
become: true
become_user: root
vars_files:
- vars/all.yml
pre_tasks:
2 changes: 2 additions & 0 deletions kube-teardown.yml
@@ -1,5 +1,7 @@
---
- hosts: all_vms
become: true
become_user: root
vars_files:
- vars/all.yml
tasks: []
59 changes: 59 additions & 0 deletions roles/kube-cni/tasks/main.yml
@@ -0,0 +1,59 @@
---

# ----------- flannel
- name: Apply the flannel RBAC
shell: >
kubectl create -f /etc/flannel-rbac.yaml
environment: "{{ kubectl_environment }}"
args:
creates: "{{ kubectl_home }}/.kubeadm-podnetwork-complete"
when: pod_network_type == "flannel"

- name: Apply the flannel podnetwork
shell: >
kubectl apply -f /etc/flannel.yaml > /tmp/podnetwork-apply.log
environment: "{{ kubectl_environment }}"
args:
creates: "{{ kubectl_home }}/.kubeadm-podnetwork-complete"
when: pod_network_type == "flannel"


# ----------- weave
- name: Apply the weave podnetwork
shell: >
kubectl apply -f https://git.io/weave-kube > /tmp/podnetwork-apply.log
environment: "{{ kubectl_environment }}"
args:
creates: "{{ kubectl_home }}/.kubeadm-podnetwork-complete"
when: pod_network_type == "weave"

# ----------- multus
- name: Apply the multus podnetwork
shell: >
kubectl apply -f /etc/multus.yaml > /tmp/podnetwork-apply.log
environment: "{{ kubectl_environment }}"
args:
creates: "{{ kubectl_home }}/.kubeadm-podnetwork-complete"
when: pod_network_type == "multus"

# ----------- all network types
- name: Mark podnetwork applied
file:
path: "{{ kubectl_home }}/.kubeadm-podnetwork-complete"
state: directory
when: pod_network_type != "none"

# --------------------------------------------------------------
# --------------- This didn't work because it's not scheduled.
# --------------- because no nodes were joined yet.
# --------------------------------------------------------------
# - name: Wait until the kube-dns pod is up and running
# shell: >
# kubectl get pods --all-namespaces | grep -P "kube-dns.+4/4.+Running"
# environment: "{{ kubectl_environment }}"
# register: kube_dns_result
# until: kube_dns_result.rc == 0
# retries: 60
# delay: 3
# ignore_errors: yes
# when: pod_network_type != "none"
19 changes: 0 additions & 19 deletions roles/kube-init/tasks/flannel.yml

This file was deleted.

45 changes: 22 additions & 23 deletions roles/kube-init/tasks/main.yml
@@ -18,7 +18,7 @@
# abandoned for now...
- name: Run kubeadm init
shell: >
kubeadm init {{ arg_pod_network }} > /etc/kubeadm.init.txt
kubeadm init {{ arg_pod_network }} > /var/log/kubeadm.init.log
args:
creates: /etc/.kubeadm-complete

@@ -29,36 +29,35 @@

- name: Get join command
shell: >
cat /etc/kubeadm.init.txt | grep "kubeadm join"
cat /var/log/kubeadm.init.log | grep "kubeadm join"
register: kubeadm_join_output

- name: Set fact with join command
set_fact:
kubeadm_join_command: "{{ kubeadm_join_output.stdout }}"

# --------------------------------------- end flannel.
# -------- Copy in admin.conf

- include: flannel.yml
when: pod_network_type == "flannel"
# ---- Kube 1.6, apparently you can't use kubectl as root? weird/awesome.

- include: weave.yml
when: pod_network_type == "weave"
# sudo cp /etc/kubernetes/admin.conf $HOME/
# sudo chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf

- include: multus.yml
when: pod_network_type == "multus"
- name: Copy admin.conf to kubectl user's home
shell: >
cp -f /etc/kubernetes/admin.conf {{ kubectl_home }}/admin.conf
args:
creates: "{{ kubectl_home }}/admin.conf"

- name: Mark podnetwork applied
- name: Set admin.conf ownership
file:
path: /etc/.kubeadm-podnetwork-complete
state: directory
when: pod_network_type != "none"

- name: Wait until the kube-dns pod is up and running
shell: >
kubectl get pods --all-namespaces | grep -P "kube-dns.+4/4.+Running"
register: kube_dns_result
until: kube_dns_result.rc == 0
retries: 60
delay: 3
ignore_errors: yes
when: pod_network_type != "none"
path: "{{ kubectl_home }}/admin.conf"
owner: "{{ kubectl_user }}"
group: "{{ kubectl_group }}"

- name: Add KUBECONFIG env for admin.conf to .bashrc
lineinfile:
dest: "{{ kubectl_home }}/.bashrc"
regexp: "KUBECONFIG"
line: "export KUBECONFIG={{ kubectl_home }}/admin.conf"
16 changes: 0 additions & 16 deletions roles/kube-init/tasks/multus.yml

This file was deleted.

10 changes: 0 additions & 10 deletions roles/kube-init/tasks/weave.yml

This file was deleted.

35 changes: 35 additions & 0 deletions roles/kube-install/tasks/kube-16-workaround.yml
@@ -0,0 +1,35 @@
- name: Stat the complete semaphore
stat:
path: /etc/.kube16workaround_complete
register: wsemaphor

- name: Remove kubelet
file:
path: /bin/kubelet
state: absent
when: not wsemaphor.stat.exists

- name: Remove kubeadm
file:
path: /bin/kubeadm
state: absent
when: not wsemaphor.stat.exists

- name: Curl kubelet binary
get_url:
url: https://storage.googleapis.com/kubernetes-release-dev/ci/v1.6.1-beta.0.12+018a96913f57f9/bin/linux/amd64/kubelet
dest: /bin/kubelet
mode: 0755
when: not wsemaphor.stat.exists

- name: Curl kubeadm binary
get_url:
url: https://storage.googleapis.com/kubernetes-release-dev/ci/v1.6.1-beta.0.12+018a96913f57f9/bin/linux/amd64/kubeadm
dest: /bin/kubeadm
mode: 0755
when: not wsemaphor.stat.exists

- name: Mark as complete
file:
path: /etc/.kube16workaround_complete
state: directory
22 changes: 21 additions & 1 deletion roles/kube-install/tasks/main.yml
@@ -26,15 +26,35 @@
name: "{{ item }}"
state: latest
with_items:
- docker
- kubelet
- kubeadm
- kubectl
- kubernetes-cni

- name: Include kube 1.6.0 work-around
include: kube-16-workaround.yml
when: kube_16_workaround

- name: Remove default kubeadm.conf ExecStart
lineinfile:
dest: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
regexp: 'KUBELET_EXTRA_ARGS$'
state: absent

- name: Add custom kubeadm.conf ExecStart
lineinfile:
dest: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
regexp: 'systemd$'
line: 'ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS --cgroup-driver=systemd'

- name: Reload systemd units after changing 10-kubeadm.conf
command: systemctl daemon-reload

- name: Start and enable services
service:
name: "{{ item }}"
state: started
state: restarted
enabled: yes
with_items:
- docker
