- When starting a new experiment on CloudLab, select the small-lan profile.
- In the profile parameterization page:
  - Set Number of Nodes to 11
  - Set OS image to Ubuntu 18.04
  - Set physical node type to c220g2
  - Check the Temp Filesystem Max Space box
  - Keep Temporary Filesystem Mount Point at the default (/mydata)
- We use `node-0` as the master node; `node-1` through `node-10` are used as worker nodes.
- On the master node and all worker nodes, run:

  ```bash
  sudo chown -R $(id -u):$(id -g) <mount point>  # the mount point to be used as extra storage
  cd <mount point>
  git clone https://github.com/ucr-serverless/mu-deployment.git
  cd <mount point>/mu-deployment
  ```

- Then run `export MYMOUNT=<mount point>`, substituting the mount point of the added storage.
- If your Temporary Filesystem Mount Point is the default (/mydata), run:

  ```bash
  sudo chown -R $(id -u):$(id -g) /mydata
  cd /mydata
  git clone https://github.com/ucr-serverless/mu-deployment.git
  cd /mydata/mu-deployment
  export MYMOUNT=/mydata
  ```
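  Optionally, sanity-check that the temporary filesystem is actually mounted with the space you requested (a quick check, not part of the repo's scripts):

  ```bash
  # Confirm the extra storage is mounted and shows the expected capacity.
  df -h "$MYMOUNT"
  mount | grep "$MYMOUNT"
  ```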
- Run `./100-docker_install.sh` without sudo on both the master node and the worker nodes.
- Run `source ~/.bashrc`.
- On the master node, run `./200-k8s_install.sh master <master node IP address>`.
- On each worker node, run `./200-k8s_install.sh slave`, and then join the k8s cluster using the `kubeadm join ...` command printed at the end of the previous step on the master node. Run the `kubeadm join` command with sudo; it generally has the shape sketched below.
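  The token and CA-cert hash below are placeholders; copy the exact command from the master node's output:

  ```bash
  # Placeholder values -- use the exact command printed by the master node.
  sudo kubeadm join <master node IP address>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
  ```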
- Run `echo 'source <(kubectl completion bash)' >> ~/.bashrc && source ~/.bashrc`.
- Run `./300-git_clone.sh`.
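  Once every worker has joined, an optional check confirms the 11-node cluster is healthy:

  ```bash
  # All 11 nodes (node-0 .. node-10) should be listed and eventually Ready.
  kubectl get nodes -o wide

  # Block until every node reports Ready (the 10-minute cap is arbitrary).
  kubectl wait --for=condition=Ready nodes --all --timeout=600s
  ```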
- If your system login name differs from your Docker Hub username, run `export DOCKER_USER=<docker name>`.
- On the master node, run `./400-prerequisite.sh`.
- On the master node, run `sudo docker login` to log in with your Docker Hub account.
- On the master node, run `${MYMOUNT}/istio/out/linux_amd64/istioctl manifest install -f istio-de.yaml` to set up the custom Istio. NOTE: we use the pre-built image in shixiongqi's Docker registry directly.
- Edit the resource usage of the `istio-ingressgateway` deployment: set CPU to 16 and memory to 40Gi (e.g. with the patch sketched below).
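  A non-interactive way to make that edit is a strategic merge patch; this is a sketch, assuming the gateway container is named `istio-proxy` (the usual name; verify with `kubectl -n istio-system get deploy istio-ingressgateway -o yaml`). Running `kubectl -n istio-system edit deployment istio-ingressgateway` works just as well:

  ```bash
  # Set requests and limits to CPU 16 / memory 40Gi on the gateway container.
  kubectl -n istio-system patch deployment istio-ingressgateway --patch '
  spec:
    template:
      spec:
        containers:
        - name: istio-proxy
          resources:
            requests: {cpu: "16", memory: "40Gi"}
            limits: {cpu: "16", memory: "40Gi"}'
  ```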
- Alternatively, to build the custom Istio yourself instead of using the pre-built image:
  - If your system login name differs from your Docker Hub username, run `export DOCKER_USER=<docker name>`.
  - On the master node, run `./400-prerequisite.sh`.
  - On the master node, run `sudo docker login` to log in with your Docker Hub account.
  - On the master node, run `./500-build_istio.sh` without `sudo`.
  - On the master node, hardcode the Docker Hub account in istio-de.yaml and then run `${MYMOUNT}/istio/out/linux_amd64/istioctl manifest install -f istio-de.yaml` to set up the custom Istio, or run `./501-install_custom_istio.sh`.
- To uninstall, run `${MYMOUNT}/istio/out/linux_amd64/istioctl x uninstall --purge` or `./502-uninstall_custom_istio.sh`.
- Apply the Placement Decision CRD definition and the API server permission:

  ```bash
  kubectl apply -f placementDecisionCrdDefinition.yaml
  kubectl apply -f metric_authority.yaml
  ```
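  An optional check that the CRD registered (the exact CRD name is whatever placementDecisionCrdDefinition.yaml defines, so this is a loose match):

  ```bash
  # The placement-decision CRD should now appear in the cluster's CRD list.
  kubectl get crd | grep -i placement
  ```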
- If you haven't done the above steps, please complete them before moving to step 2.
- On the master node, run `./600-ko_install.sh`. Please `source ~/.bashrc` after you run the script.
- On the master node, run `./601-go_dep_install.sh`.
- On the master node, run `sudo docker login` to log in to your Docker Hub account.
- Change the permissions on the Docker config directory so `ko` can use it:

  ```bash
  sudo chown -R $(id -u):$(id -g) /users/$(id -nu)/.docker
  sudo chmod g+rwx "/users/$(id -nu)/.docker" -R
  ```
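  `ko` pushes the images it builds to the registry named by `KO_DOCKER_REPO`. The install script may already set this; if it does not, pointing it at your Docker Hub account is one option (a sketch, assuming Docker Hub is the registry you logged in to):

  ```bash
  # DOCKER_USER falls back to the login name if it was not exported earlier.
  export KO_DOCKER_REPO=docker.io/${DOCKER_USER:-$(id -nu)}
  echo "export KO_DOCKER_REPO=${KO_DOCKER_REPO}" >> ~/.bashrc
  ```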
- Depending on the experiment (MU, RPS, or CC), modify the Knative source as instructed in the corresponding section below.
- On the master node, run `ko apply -f $GOPATH/src/knative.dev/serving/config/` to build and install Knative. To uninstall, run `ko delete -f $GOPATH/src/knative.dev/serving/config/`.
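  To verify the install, the Knative control-plane pods should come up in the `knative-serving` namespace:

  ```bash
  # Watch until the activator, autoscaler, controller, and webhook pods are Running.
  kubectl -n knative-serving get pods -w
  ```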
- The termination of the `knative-serving` namespace takes a long time. Please be patient until the `knative-serving` namespace is terminated.
- Run `ko delete -f $GOPATH/src/knative.dev/serving/config/` to kill all Knative pods. Wait until all the Knative pods are killed.
- Run `kubectl get ns` and wait until the `knative-serving` namespace is gone (or block on the deletion as sketched below).
- Switch back to the default controller manager.
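  Rather than re-running `kubectl get ns` by hand, `kubectl wait` can block until the namespace deletion completes (the 20-minute cap is arbitrary):

  ```bash
  # Returns once the knative-serving namespace has finished terminating.
  kubectl wait --for=delete namespace/knative-serving --timeout=1200s
  ```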
- Run `${MYMOUNT}/istio/out/linux_amd64/istioctl x uninstall --purge` or `./502-uninstall_custom_istio.sh` to uninstall Istio. Wait until all the Istio pods are killed.
- Run `./500-build_istio.sh` without `sudo`.
- Run `./501-install_custom_istio.sh`.
- Tip: if the binary cannot be built in /mydata/kubernetes/, download the customized repository to /users/sqi009/ and then compile again.
- Compile the customized controller manager:

  ```bash
  cd kubernetes/
  make WHAT=cmd/kube-controller-manager KUBE_BUILD_PLATFORMS=linux/amd64
  ```
- Terminate the kube-controller-manager Pod:

  ```bash
  sudo vim /etc/kubernetes/manifests/kube-controller-manager.yaml
  # Change `image: k8s.gcr.io/kube-controller-manager:v1.19.8` to
  # `image: shixiongqi/customized-kube-controller-manager:v1.1`.
  # `customized-kube-controller-manager:v1.1` will crash, which is an alternative
  # (if imperfect) way to terminate the `kube-controller-manager` Pod.
  # Save the changes to the default manifest.
  # Check whether the Pod crashes. If it does not, try scaling the deployment so that it crashes.
  ```
- Execute the kube-controller-manager binary:

  ```bash
  sudo ./_output/bin/kube-controller-manager --kubeconfig=/etc/kubernetes/admin.conf
  ```
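  The binary runs in the foreground and dies with your SSH session; running it under `nohup` (or inside tmux/screen) is one way to keep it alive. This is a convenience sketch, not part of the original instructions:

  ```bash
  # Keep the customized controller manager running after the SSH session ends.
  nohup sudo ./_output/bin/kube-controller-manager \
    --kubeconfig=/etc/kubernetes/admin.conf > kcm.log 2>&1 &
  tail -f kcm.log  # Ctrl-C stops the tail, not the controller manager
  ```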
- Clone loadtest from ucr-serverless and move into the loadtest directory:

  ```bash
  git clone https://github.com/ucr-serverless/mu-loadtest.git loadtest
  cd loadtest
  ```
- Install loadtest dependencies:

  ```bash
  curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
  sudo apt install -y nodejs
  npm install stdio log testing websocket confinode agentkeepalive https-proxy-agent
  ```
- Loadtest command for experiment-1 and experiment-2:

  ```bash
  node sample/knative-variable-rps1.js > Workload1LOG & node sample/knative-variable-rps2.js > Workload2LOG &
  ```
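  Both generators run in the background, so the prompt returns immediately; their progress can be followed in the redirected logs:

  ```bash
  # Ctrl-C stops the tail; the load generators keep running in the background.
  tail -f Workload1LOG Workload2LOG
  ```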
- Modify the service YAML file: change the CPU/Mem usage if necessary, change the autoscaling policy to `custom2`, and check the SLO value, target value, etc. (see the sketch after this list).
- Experiment preparation:
  - Re-build Knative if any changes have been made: `ko apply -f config/`
  - Apply the service YAML: `kubectl apply -f service.yaml`
  - Switch to the customized kube-controller-manager
  - Start VLOG in the autoscaler
  - Rename the loadtest log if needed
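  As a rough guide to where these knobs live, here is a hypothetical skeleton; the service name, image, and values are placeholders, and whether `custom2` is selected through the stock `autoscaling.knative.dev` annotations shown here depends on the Knative modifications above:

  ```bash
  # Hypothetical skeleton -- adapt names, image, and values to the real service.yaml.
  cat > service-sketch.yaml <<'EOF'
  apiVersion: serving.knative.dev/v1
  kind: Service
  metadata:
    name: helloworld                                # placeholder service name
  spec:
    template:
      metadata:
        annotations:
          autoscaling.knative.dev/metric: "custom2" # or "rps" / "concurrency" (next section)
          autoscaling.knative.dev/target: "100"     # placeholder target value
      spec:
        containers:
        - image: docker.io/<docker name>/helloworld # placeholder image
          resources:
            requests: {cpu: "1", memory: "1Gi"}     # CPU/Mem knobs
  EOF
  ```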
- Modify the service YAML file: change the CPU/Mem usage if necessary, change the autoscaling policy to `rps` or `concurrency`, and check the SLO value, target value, etc.
- Make sure you are using the `e1bd60b2e8cae46dec00d939c1860deb4b5f586c` commit in Knative. Otherwise, do `git checkout e1bd60b2e8cae46dec00d939c1860deb4b5f586c`.
- Do `git apply defaultChanges.go` after checking out `e1bd60b2e8cae46dec00d939c1860deb4b5f586c`.
- Experiment preparation:
  - Re-build Knative if any changes have been made: `ko apply -f config/`
  - Apply the service YAML: `kubectl apply -f service.yaml`
  - Switch to the default kube-controller-manager
  - Start VLOG in the autoscaler
  - Rename the loadtest log if needed