Tanzu Community Edition (TCE) on vSphere

Instructions for installing Tanzu Community Edition (TCE) on vSphere.

Download the OVA

Create a VM folder named "tanzu-community-edition".

Download a Kubernetes OVA from VMware Customer Connect: https://customerconnect.vmware.com/downloads/get-download?downloadGroup=TCE-0110 (I downloaded Photon v3 Kubernetes v1.22.5 OVA)

Import the OVA into vCenter in the "tanzu-community-edition" folder. I specified the following settings:

  • Storage: VMStorage
  • Network: vm-network-140 (this is my network with DHCP for TCE)

Once the OVA is uploaded, right-click the resulting VM and convert it to a template (Template -> Convert to Template).
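
If you prefer to script these steps, a rough equivalent with the govc CLI is sketched below. This is not part of the original walkthrough: it assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are already exported, a datacenter named "Datacenter", and an OVA/template name of photon-3-kube-v1.22.5 (adjust for your environment; the network mapping to vm-network-140 may still need to be set, e.g. via govc import.spec and -options):

# create the VM folder
govc folder.create /Datacenter/vm/tanzu-community-edition

# import the OVA into that folder on the VMStorage datastore
govc import.ova -ds=VMStorage -folder=tanzu-community-edition -name=photon-3-kube-v1.22.5 ./photon-3-kube-v1.22.5.ova

# convert the imported VM to a template
govc vm.markastemplate photon-3-kube-v1.22.5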

Create and Configure a Bootstrap VM

Create a VM for working with TCE. The VM I created has the following characteristics:

  • Ubuntu Desktop 20.04 (LTS)
  • 8 vCPU
  • 32 GB RAM
  • 256 GB Storage

I did a minimal install initially; we will add a few items below.

Install OpenSSH Server

SSH can be useful in many cases, so let's install it:

sudo apt-get update

sudo apt-get install openssh-server

sudo ufw allow ssh
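
To confirm the server is up (the service is named ssh on Ubuntu):

sudo systemctl status ssh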

Install Docker

Install Docker based on the instructions at https://docs.docker.com/engine/install/ubuntu/

Make sure to set up the docker group so you can run docker without sudo, as shown below.
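
For reference, the post-install steps from the Docker docs for running without sudo look roughly like this (log out and back in, or run newgrp, for the group change to take effect):

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker

# quick smoke test without sudo
docker run hello-world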

Install Homebrew

Install homebrew with instructions from here: https://brew.sh/

It is important to follow the post-install steps for downloading the brew dependencies and installing gcc!
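
For reference, the install one-liner from brew.sh and the typical post-install steps on Ubuntu look like this (the .profile path and brew prefix may differ on your system):

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.profile
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
sudo apt-get install build-essential
brew install gcc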

Install Kubectl

brew install kubectl

Install Carvel Tools

brew tap vmware-tanzu/carvel
brew install ytt kbld kapp imgpkg kwt vendir

Install Knative CLI

brew install kn

Install Kpack CLI

brew tap vmware-tanzu/kpack-cli
brew install kp

Install TCE

brew install vmware-tanzu/tanzu/tanzu-community-edition

/home/linuxbrew/.linuxbrew/Cellar/tanzu-community-edition/v0.11.0/libexec/configure-tce.sh
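
A quick check that the CLI and its plugins are in place (note that the Cellar path above is version-specific, so adjust it if brew installed a newer release than v0.11.0):

tanzu version
tanzu plugin list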

Create an SSH Key

ssh-keygen -t rsa -b 4096 -C "[email protected]"
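
The public key will be needed later (the management cluster wizard asks for an SSH public key, which ends up in VSPHERE_SSH_AUTHORIZED_KEY), so print it for copying:

cat ~/.ssh/id_rsa.pub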

Create a Management Cluster

(This will open a browser, so it cannot be done over SSH. Log in to the VM and open a terminal instead.)

tanzu management-cluster create --ui

Select the vSphere provider and follow the wizard. This was very straightforward; mainly pay attention to networking:

  • Network: vm-network-140
  • Control Plane Endpoint: 192.168.140.240
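
Once the wizard finishes, a quick sanity check from the bootstrap VM looks like this. The cluster and context names below are examples (shown here as mgmt-cluster); substitute whatever name you gave the management cluster:

tanzu management-cluster get
tanzu management-cluster kubeconfig get --admin
kubectl config use-context mgmt-cluster-admin@mgmt-cluster
kubectl get nodes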

Create a Workload Cluster

Copy the workload cluster configuration (note that the filename will differ between installs, but it is usually the only file in the directory after an initial install):

mkdir tce-config

cd tce-config

cp ~/.config/tanzu/tkg/clusterconfigs/z8c6uhzh1p.yaml .

mv z8c6uhzh1p.yaml workload-cluster.yaml

Edit the file and change the following settings at a minimum:

  • CLUSTER_NAME: workload-cluster
  • VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.140.241
  • WORKER_MACHINE_COUNT: "3"

Save the file, then create the cluster:

tanzu cluster create --file workload-cluster.yaml
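
Cluster creation takes several minutes; you can watch its status with:

tanzu cluster list
tanzu cluster get workload-cluster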

Once the cluster is created, gain access to it through its kubeconfig.

This will add the config to your context for easy use:

tanzu cluster kubeconfig get workload-cluster --admin

This will export the config so you can use it on different machines:

tanzu cluster kubeconfig get workload-cluster --admin --export-file workload-cluster-kubeconfig.yaml
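
After pulling the kubeconfig into your context, switch to it and confirm the nodes are ready (the admin context name should follow the <cluster>-admin@<cluster> pattern):

kubectl config use-context workload-cluster-admin@workload-cluster
kubectl get nodes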

Install MetalLB for LoadBalancing

For this install, I did not install NSX Advanced Load Balancer. Rather, I will use MetalLB for providing support for LoadBalancer services in a workload cluster. This post by William Lam is helpful: https://williamlam.com/2021/10/quick-tip-install-metallb-as-service-load-balancer-with-tanzu-community-edition-tce.html

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
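
Before creating the config, confirm the MetalLB controller and speaker pods are running:

kubectl get pods -n metallb-system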

Create a configuration file for MetalLB and save it as metallb-config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.140.220-192.168.140.239

Apply the config map:

kubectl apply -f metallb-config.yaml

Test it:

kubectl run kuard --restart=Never --image=gcr.io/kuar-demo/kuard-amd64:blue

kubectl expose pod kuard --type=LoadBalancer --port=80 --target-port=8080
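
The kuard service should be assigned an EXTERNAL-IP from the MetalLB pool defined above. Check it, browse to the IP, and clean up when done:

kubectl get service kuard

kubectl delete service kuard
kubectl delete pod kuard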

Useful Commands

Display the values schema for a package (from its Package resource):

kubectl get package knative-serving.community.tanzu.vmware.com.1.0.0 -n tanzu-package-repo-global -o yaml
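
The same schema can also be retrieved with the tanzu CLI (package name and version as used above; note the slash between name and version):

tanzu package available get knative-serving.community.tanzu.vmware.com/1.0.0 --values-schema -n tanzu-package-repo-global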