Kubezoo101 #189

Merged
merged 15 commits into from
Apr 7, 2024
2 changes: 1 addition & 1 deletion Keda101/keda-lab.md
@@ -234,4 +234,4 @@ With the above configuration, a new Keda job will start every time a message is

## Conclusion

This wraps up the lesson on KEDA. What we tried out was a simple demonstration of a MySQL scaler followed by a demonstration of using various authentication methods to connect and consume messages from AWS SQS. This is a good representation of what you can expect from other data sources. If you were considering using this with a different Kubernetes engine running on a different cloud provider, the concept would still work. Make sure you read through the authentication page, which contains different methods of authentication for different cloud providers. If you want to try out other scalers, make sure you check out the [official samples page](https://github.com/kedacore/samples).
55 changes: 55 additions & 0 deletions Kubezoo/kubezoo-lab.md
@@ -0,0 +1,55 @@
# Kubezoo Lab

Now that we have covered what Kubezoo is, let's take a look at how we can set it up in a standard cluster. You could go ahead and use [Minikube](https://minikube.sigs.k8s.io/docs/start/), or you could create a cluster using [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation). You can also use any Kubernetes cluster you have at the ready. Let's start by cloning the [KubeZoo repo](https://github.com/kubewharf/kubezoo.git):

```
git clone https://github.com/kubewharf/kubezoo.git
```

Now, go to the root of the repo you just cloned, and run the `make` command:

```
make local-up
```

This will get Kubezoo up and running on port 6443, provided the port is free. Check that the API resources are available:

```
kubectl api-resources --context zoo
```

Now, let's create a sample tenant. For this, we will be using the `config/setup/sample_tenant.yaml` file provided in the repo. If you take a look at the tenant yaml file, you will notice that it is a custom resource of type "tenant" and contains just a few lines specifying the type of resources this tenant requires. The name of the tenant is "111111". Since this is a regular Kubernetes resource, let's go ahead and deploy this tenant as we would any other yaml:

```
kubectl apply -f config/setup/sample_tenant.yaml --context zoo
```
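
For reference, the tenant manifest is only a handful of lines. The sketch below is an illustration based on the description above, not a copy of the repo file; the apiVersion and spec fields are assumptions, so treat `config/setup/sample_tenant.yaml` as the authoritative version:

```
# Illustrative only -- the exact API group/version and spec fields may differ
# from the real config/setup/sample_tenant.yaml in the KubeZoo repo.
apiVersion: tenant.kubezoo.io/v1alpha1   # assumed group/version
kind: Tenant
metadata:
  name: "111111"                         # the tenant name used throughout this lab
spec: {}                                 # a few fields describing the resources the tenant requires
```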

Check that the tenant has been set up:

```
kubectl get tenant 111111 --context zoo
```

Since this tenant is essentially a "cluster" in itself, it gets its own kubeconfig created for it. You can extract it using:

```
kubectl get tenant 111111 --context zoo -o jsonpath='{.metadata.annotations.kubezoo\.io\/tenant\.kubeconfig\.base64}' | base64 --decode > 111111.kubeconfig
```

You should now be able to deploy all sorts of resources to the tenant by specifying the kubeconfig. For example, if you were to deploy a file called "application.yaml" into the tenant, you would use:

```
kubectl apply -f application.yaml --kubeconfig 111111.kubeconfig
```
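
The contents of `application.yaml` can be any ordinary Kubernetes manifest. As a hypothetical example (the file name, pod name, and image below are placeholders, not something from the KubeZoo repo), a minimal nginx pod would look like this:

```
# Hypothetical application.yaml -- any regular manifest works here.
apiVersion: v1
kind: Pod
metadata:
  name: web               # placeholder name
spec:
  containers:
    - name: nginx
      image: nginx:1.25   # placeholder image
      ports:
        - containerPort: 80
```

Applying it with the tenant kubeconfig works just like applying it to a dedicated cluster.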

You can check the pods as the tenant by specifying the kubeconfig as before:

```
kubectl get po --kubeconfig 111111.kubeconfig
```

The pod will have been created in the namespace assigned to the tenant. If you have multiple tenants, you cannot see another tenant's pods as long as you only hold the kubeconfig of the tenant you are dealing with, which allows for better isolation. Using your regular kubeconfig as a cluster admin, listing all pods with `kubectl get po -A` shows the pods of all tenants, separated by namespace.

## Conclusion

This brings us to the end of the section on Kubezoo. Hopefully, by now, you understand what a multi-tenant system is, what the benefits of such a system are, and what challenges you could face when using one. You also know how Kubezoo can help alleviate these challenges, specifically when you have constraints such as a small development team and a large number of small clients. We also covered a lab on setting up Kubezoo in a kind cluster, deploying resources to a Kubezoo tenant, and interacting with multiple tenants as a cluster admin. This covers the basics of Kubezoo. If you want to learn more on the topic, the official Kubezoo [GitHub page](https://github.com/kubewharf/kubezoo) is the best place to start.
9 changes: 9 additions & 0 deletions Kubezoo/what-is-kubezoo.md
@@ -0,0 +1,9 @@
# Kubezoo

If you have a large number of small clients that all rely on various services you provide, it makes little sense to run a separate Kubernetes cluster for each of them. Every cluster incurs control-plane costs, and each one needs its own supporting resources running, which adds to the overall resource usage. Multi-tenancy solves this problem: a single cluster is split into multiple namespaces, each assigned to a client, or "tenant".

However, this solution comes with its own host of problems. The biggest issue is resource consumption. Imagine you have 4 nodes, each with a memory capacity of 16GB. If you have 3 tenants running in 3 namespaces within a cluster, one of those tenants may consume 75% of the memory with its workloads while the other two are left with only 25%. Whatever the distribution, the tenants will use different amounts of resources, and if each tenant pays the same amount, this leads to a disparity. It is therefore necessary to assign resources to each tenant individually, based on what they have requested, so that tenants don't bottleneck each other.
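
In a plain namespace-per-tenant setup, this is typically done with a `ResourceQuota` in each tenant's namespace. A minimal sketch, assuming a hypothetical tenant namespace called `tenant-a` (the namespace name and limits are illustrative, not from any KubeZoo configuration):

```
# Illustrative ResourceQuota for a hypothetical tenant namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a      # assumed tenant namespace
spec:
  hard:
    requests.cpu: "4"
    limits.cpu: "4"
    requests.memory: 16Gi  # the tenant's fair share of the 4 x 16GB nodes
    limits.memory: 16Gi
```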

Now take a different situation. Instead of 3 large clients, you have hundreds of small users. Each user needs to run workloads in their own private "cluster", and provisioning those needs to be fast and efficient. Without the proper tools, this situation would be practically impossible to manage. For an average-sized team, handling these kinds of rapid changes becomes infeasible from a manpower perspective.

This is where Kubezoo comes in. The solution it provides is Kubernetes API as a Service (KAaaS). Kubezoo lets you easily share your cluster among hundreds of tenants, sharing both the control plane and the data plane. This makes resource efficiency as high as simply having a namespace for each tenant, but unlike a namespace-only isolation approach, it also offers improved API compatibility and resource isolation. So while there are several multi-tenancy options to choose from, Kubezoo is one of the best when it comes to handling a large number of small tenants.