From 4250896fba2f8d789b8de5bb0b4372d87141a17f Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Wed, 13 Dec 2023 11:02:46 +0530
Subject: [PATCH 01/13] Kubezoo start

---
 Kubezoo/what-is-kubezoo.md | 3 +++
 1 file changed, 3 insertions(+)
 create mode 100644 Kubezoo/what-is-kubezoo.md

diff --git a/Kubezoo/what-is-kubezoo.md b/Kubezoo/what-is-kubezoo.md
new file mode 100644
index 00000000..cb7af288
--- /dev/null
+++ b/Kubezoo/what-is-kubezoo.md
@@ -0,0 +1,3 @@
+# Kubezoo
+
+If you have a large number of small clients that all rely on various services you provide, it makes little sense to have a separate Kubernetes cluster for each of them. Every cluster incurs its own control-plane costs, and each one needs its own set of supporting resources, all of which adds overhead. Multi-tenancy is the solution to this problem: we have a single cluster that we split into multiple namespaces, each of which is assigned to a client, or "tenant".
\ No newline at end of file

From 7646908f228665c649df68855d9abd21e3a2bb05 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Thu, 14 Dec 2023 11:37:13 +0530
Subject: [PATCH 02/13] Kubezoo cont.

---
 Kubezoo/what-is-kubezoo.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Kubezoo/what-is-kubezoo.md b/Kubezoo/what-is-kubezoo.md
index cb7af288..49cf18ab 100644
--- a/Kubezoo/what-is-kubezoo.md
+++ b/Kubezoo/what-is-kubezoo.md
@@ -1,3 +1,5 @@
 # Kubezoo
 
-If you have a large number of small clients that all rely on various services you provide, it makes little sense to have a separate Kubernetes cluster for each of them. Every cluster incurs its own control-plane costs, and each one needs its own set of supporting resources, all of which adds overhead. Multi-tenancy is the solution to this problem: we have a single cluster that we split into multiple namespaces, each of which is assigned to a client, or "tenant".
\ No newline at end of file
+If you have a large number of small clients that all rely on various services you provide, it makes little sense to have a separate Kubernetes cluster for each of them. Every cluster incurs its own control-plane costs, and each one needs its own set of supporting resources, all of which adds overhead. Multi-tenancy is the solution to this problem: we have a single cluster that we split into multiple namespaces, each of which is assigned to a client, or "tenant".
+
+However, this solution comes with its own host of problems. The biggest issue is resource consumption. Imagine you have 4 nodes, each with a memory capacity of 16GB. If you have 3 tenants running on 3 namespaces within a cluster, one of those 3 tenants may consume 75% of the memory with their workloads while the other 2 are left with only 25%. Whatever the distribution may be, there will be a difference in the amount of resources used, and if each tenant is paying the same amount, this will lead to a disparity. It is therefore necessary to individually assign resources to each tenant depending on the amount they have requested so that tenants don't bottleneck each other.
\ No newline at end of file

From 956dc0686a4ff065644d327f7b7a0ff75b5023a5 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Fri, 15 Dec 2023 10:59:58 +0530
Subject: [PATCH 03/13] Kubezoo cont.
---
 Kubezoo/what-is-kubezoo.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Kubezoo/what-is-kubezoo.md b/Kubezoo/what-is-kubezoo.md
index 49cf18ab..30119c55 100644
--- a/Kubezoo/what-is-kubezoo.md
+++ b/Kubezoo/what-is-kubezoo.md
@@ -2,4 +2,4 @@
 
 If you have a large number of small clients that all rely on various services you provide, it makes little sense to have a separate Kubernetes cluster for each of them. Every cluster incurs its own control-plane costs, and each one needs its own set of supporting resources, all of which adds overhead. Multi-tenancy is the solution to this problem: we have a single cluster that we split into multiple namespaces, each of which is assigned to a client, or "tenant".
 
-However, this solution comes with its own host of problems. The biggest issue is resource consumption. Imagine you have 4 nodes, each with a memory capacity of 16GB. If you have 3 tenants running on 3 namespaces within a cluster, one of those 3 tenants may consume 75% of the memory with their workloads while the other 2 are left with only 25%. Whatever the distribution may be, there will be a difference in the amount of resources used, and if each tenant is paying the same amount, this will lead to a disparity. It is therefore necessary to individually assign resources to each tenant depending on the amount they have requested so that tenants don't bottleneck each other.
\ No newline at end of file
+However, this solution comes with its own host of problems. The biggest issue is resource consumption. Imagine you have 4 nodes, each with a memory capacity of 16GB. If you have 3 tenants running on 3 namespaces within a cluster, one of those 3 tenants may consume 75% of the memory with their workloads while the other 2 are left with only 25%. Whatever the distribution may be, there will be a difference in the amount of resources used, and if each tenant is paying the same amount, this will lead to a disparity. It is therefore necessary to individually assign resources to each tenant depending on the amount they have requested so that tenants don't bottleneck each other.
\ No newline at end of file

From 6a954ab819affbda91e3f334a7f67d6cddea06a2 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Fri, 15 Dec 2023 12:09:01 +0530
Subject: [PATCH 04/13] Kubezoo cont.

---
 Kubezoo/what-is-kubezoo.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/Kubezoo/what-is-kubezoo.md b/Kubezoo/what-is-kubezoo.md
index 30119c55..a1334bfe 100644
--- a/Kubezoo/what-is-kubezoo.md
+++ b/Kubezoo/what-is-kubezoo.md
@@ -2,4 +2,8 @@
 
 If you have a large number of small clients that all rely on various services you provide, it makes little sense to have a separate Kubernetes cluster for each of them. Every cluster incurs its own control-plane costs, and each one needs its own set of supporting resources, all of which adds overhead. Multi-tenancy is the solution to this problem: we have a single cluster that we split into multiple namespaces, each of which is assigned to a client, or "tenant".
 
-However, this solution comes with its own host of problems. The biggest issue is resource consumption. Imagine you have 4 nodes, each with a memory capacity of 16GB. If you have 3 tenants running on 3 namespaces within a cluster, one of those 3 tenants may consume 75% of the memory with their workloads while the other 2 are left with only 25%.
\ No newline at end of file
+However, this solution comes with its own host of problems. The biggest issue is resource consumption. Imagine you have 4 nodes, each with a memory capacity of 16GB. If you have 3 tenants running on 3 namespaces within a cluster, one of those 3 tenants may consume 75% of the memory with their workloads while the other 2 are left with only 25%. Whatever the distribution may be, there will be a difference in the amount of resources used, and if each tenant is paying the same amount, this will lead to a disparity. It is therefore necessary to individually assign resources to each tenant depending on the amount they have requested so that tenants don't bottleneck each other.
+
+Now take a different situation. Instead of having 3 large clients, you have hundreds of small users. Each user needs to run workloads in their own private "cluster", and provisioning needs to be quick and efficient. Without the proper tools, this situation would be nearly impossible to manage: for an average-sized team, handling these kinds of quick changes becomes infeasible from a manpower perspective.
+
+This is where Kubezoo comes in. The solution it provides is Kubernetes API as a Service (KAaaS).
\ No newline at end of file

From 722a05c75f3a6de541f4415193e6557eeed79d56 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Sun, 17 Dec 2023 12:07:50 +0530
Subject: [PATCH 05/13] Started Lab

---
 Kubezoo/what-is-kubezoo.md | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/Kubezoo/what-is-kubezoo.md b/Kubezoo/what-is-kubezoo.md
index 64345371..5b9eaab4 100644
--- a/Kubezoo/what-is-kubezoo.md
+++ b/Kubezoo/what-is-kubezoo.md
@@ -10,4 +10,14 @@ This is where Kubezoo comes in. The solution it provides is Kubernetes API as a
 
 # Lab
 
-Now that we have covered what Kubezoo is, let's take a look at how we can set it up in a standard cluster.
\ No newline at end of file
+Now that we have covered what Kubezoo is, let's take a look at how we can set it up in a standard cluster. You could go ahead and use [Minikube](https://minikube.sigs.k8s.io/docs/start/), or you could create a cluster using [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation). You can also use any Kubernetes cluster you have at the ready. Let's start by cloning the [KubeZoo repo](https://github.com/kubewharf/kubezoo.git):
+
+```
+git clone https://github.com/kubewharf/kubezoo.git
+```
+
+Now, go to the root of the repo you just cloned, and run the `make` command:
+
+```
+make local-up
+```
\ No newline at end of file

From 491107597dc4d53c8145f1673555bf070c2818fd Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Tue, 19 Dec 2023 11:54:13 +0530
Subject: [PATCH 06/13] Kubezoo cont.
---
 Kubezoo/kubezoo-lab.md     | 19 +++++++++++++++++++
 Kubezoo/what-is-kubezoo.md | 16 +---------------
 2 files changed, 20 insertions(+), 15 deletions(-)
 create mode 100644 Kubezoo/kubezoo-lab.md

diff --git a/Kubezoo/kubezoo-lab.md b/Kubezoo/kubezoo-lab.md
new file mode 100644
index 00000000..b4b73385
--- /dev/null
+++ b/Kubezoo/kubezoo-lab.md
@@ -0,0 +1,19 @@
+# Kubezoo Lab
+
+Now that we have covered what Kubezoo is, let's take a look at how we can set it up in a standard cluster. You could go ahead and use [Minikube](https://minikube.sigs.k8s.io/docs/start/), or you could create a cluster using [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation). You can also use any Kubernetes cluster you have at the ready. Let's start by cloning the [KubeZoo repo](https://github.com/kubewharf/kubezoo.git):
+
+```
+git clone https://github.com/kubewharf/kubezoo.git
+```
+
+Now, go to the root of the repo you just cloned, and run the `make` command:
+
+```
+make local-up
+```
+
+This will get Kubezoo up and running on port 6443 as long as the port is free. Check to see if the API resources are up and running:
+
+```
+kubectl api-resources --context zoo
+```
\ No newline at end of file

diff --git a/Kubezoo/what-is-kubezoo.md b/Kubezoo/what-is-kubezoo.md
index 5b9eaab4..e9f4193d 100644
--- a/Kubezoo/what-is-kubezoo.md
+++ b/Kubezoo/what-is-kubezoo.md
@@ -6,18 +6,4 @@ However, this solution comes with its own host of problems. The biggest issue is
 
 Now take a different situation. Instead of having 3 large clients, you have hundreds of small users. Each user needs to run workloads in their own private "cluster", and provisioning needs to be quick and efficient. Without the proper tools, this situation would be nearly impossible to manage: for an average-sized team, handling these kinds of quick changes becomes infeasible from a manpower perspective.
 
-This is where Kubezoo comes in. The solution it provides is Kubernetes API as a Service (KAaaS). Kubezoo lets you easily share your cluster among hundreds of tenants, sharing both the control plane and the data plane. This makes its resource efficiency as high as simply having a namespace per tenant, but unlike namespace-based isolation, it also offers better API compatibility and resource isolation. So while there are several different multi-tenancy options to choose from, Kubezoo is one of the best when it comes to handling a large number of small tenants.
-
-# Lab
-
-Now that we have covered what Kubezoo is, let's take a look at how we can set it up in a standard cluster. You could go ahead and use [Minikube](https://minikube.sigs.k8s.io/docs/start/), or you could create a cluster using [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation). You can also use any Kubernetes cluster you have at the ready. Let's start by cloning the [KubeZoo repo](https://github.com/kubewharf/kubezoo.git):
-
-```
-git clone https://github.com/kubewharf/kubezoo.git
-```
-
-Now, go to the root of the repo you just cloned, and run the `make` command:
-
-```
-make local-up
-```
\ No newline at end of file
+This is where Kubezoo comes in. The solution it provides is Kubernetes API as a Service (KAaaS). Kubezoo lets you easily share your cluster among hundreds of tenants, sharing both the control plane and the data plane. This makes its resource efficiency as high as simply having a namespace per tenant, but unlike namespace-based isolation, it also offers better API compatibility and resource isolation. So while there are several different multi-tenancy options to choose from, Kubezoo is one of the best when it comes to handling a large number of small tenants.
\ No newline at end of file

From 5dfd1f20f9ac576ae7d859aac994f364b2fc2ce2 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Wed, 20 Dec 2023 12:13:07 +0530
Subject: [PATCH 07/13] Kubezoo cont.

---
 Kubezoo/kubezoo-lab.md | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/Kubezoo/kubezoo-lab.md b/Kubezoo/kubezoo-lab.md
index b4b73385..9eb5f043 100644
--- a/Kubezoo/kubezoo-lab.md
+++ b/Kubezoo/kubezoo-lab.md
@@ -16,4 +16,16 @@ This will get Kubezoo up and running on port 6443 as long as the port is free. C
 
 ```
 kubectl api-resources --context zoo
+```
+
+Now, let's create a sample tenant. For this, we will be using the config/setup/sample_tenant.yaml file provided in the repo. If you take a look at the tenant yaml file, you will notice that this is a custom resource of type "tenant", and contains just a few lines specifying the type of resources this tenant requires. The name of the tenant is "111111". Since this is a regular Kubernetes resource, let's go ahead and deploy this tenant as we would a normal yaml:
+
+```
+kubectl apply -f config/setup/sample_tenant.yaml --context zoo
+```
+
+Check that the tenant has been set up:
+
+```
+kubectl get tenant 111111 --context zoo
 ```
\ No newline at end of file

From 993a49c947a5d6bcd2ce60ea67774a6bab18d64f Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Thu, 21 Dec 2023 11:03:44 +0530
Subject: [PATCH 08/13] Kubezoo cont.

---
 Kubezoo/kubezoo-lab.md | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/Kubezoo/kubezoo-lab.md b/Kubezoo/kubezoo-lab.md
index 9eb5f043..661606e5 100644
--- a/Kubezoo/kubezoo-lab.md
+++ b/Kubezoo/kubezoo-lab.md
@@ -28,4 +28,16 @@ Check that the tenant has been set up:
 
 ```
 kubectl get tenant 111111 --context zoo
+```
+
+Since this tenant is basically a "cluster" in itself, it has its own kubeconfig that gets created for it. You can extract it using:
+
+```
+kubectl get tenant 111111 --context zoo -o jsonpath='{.metadata.annotations.kubezoo\.io\/tenant\.kubeconfig\.base64}' | base64 --decode > 111111.kubeconfig
+```
+
+You should now be able to deploy all sorts of resources to the tenant by specifying the kubeconfig. For example, if you were to deploy a file called "application.yaml" into the tenant, you would use:
+
+```
+kubectl apply -f application.yaml --kubeconfig 111111.kubeconfig
 ```
\ No newline at end of file

From 079d2683cc4bc887c4ca920db85781d32bd3bb1a Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Fri, 22 Dec 2023 12:27:50 +0530
Subject: [PATCH 09/13] Finishing kubezoo

---
 Kubezoo/kubezoo-lab.md | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/Kubezoo/kubezoo-lab.md b/Kubezoo/kubezoo-lab.md
index 661606e5..c78c20f8 100644
--- a/Kubezoo/kubezoo-lab.md
+++ b/Kubezoo/kubezoo-lab.md
@@ -40,4 +40,14 @@ You should now be able to deploy all sorts of resources to the tenant by specify
 
 ```
 kubectl apply -f application.yaml --kubeconfig 111111.kubeconfig
-```
\ No newline at end of file
+```
+
+You can check the pod as the tenant by specifying the kubeconfig as before:
+
+```
+kubectl get po --kubeconfig 111111.kubeconfig
+```
+
+The pod would have been created in the namespace that you assigned to the tenant. If you have multiple tenants, you will not be able to see the pods of other tenants as long as you only have the kubeconfig of the tenant you are dealing with, which allows for better isolation. Using your regular kubeconfig as a cluster admin, if you were to list all pods with `kubectl get po -A`, you would be able to see all the pods of all the tenants, separated by namespace.
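
For example, assuming KubeZoo's convention of prefixing upstream namespace names with the tenant ID (so this tenant's `default` namespace would show up as something like `111111-default`), the cluster admin could inspect that same pod from the host cluster side; the context name here is a placeholder that depends on your setup:

```
kubectl get po -n 111111-default --context <admin-context>
```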
+
+# Conclusion
\ No newline at end of file

From 97e31e85f47a41a069aae3d588baf8c6e419f489 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Sat, 23 Dec 2023 12:45:37 +0530
Subject: [PATCH 10/13] Kubezoo finished

---
 Kubezoo/kubezoo-lab.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Kubezoo/kubezoo-lab.md b/Kubezoo/kubezoo-lab.md
index c78c20f8..be2142c1 100644
--- a/Kubezoo/kubezoo-lab.md
+++ b/Kubezoo/kubezoo-lab.md
@@ -50,4 +50,6 @@ kubectl get po --kubeconfig 111111.kubeconfig
 
 The pod would have been created in the namespace that you assigned to the tenant. If you have multiple tenants, you will not be able to see the pods of other tenants as long as you only have the kubeconfig of the tenant you are dealing with, which allows for better isolation. Using your regular kubeconfig as a cluster admin, if you were to list all pods with `kubectl get po -A`, you would be able to see all the pods of all the tenants, separated by namespace.
 
-# Conclusion
\ No newline at end of file
+# Conclusion
+
+This brings us to the end of the section on Kubezoo. Hopefully, by now, you understand what a multi-tenant system is, what the benefits of such a system are, and what possible challenges you could face when using such a system. You also know what Kubezoo can do to help alleviate these challenges, specifically when you have constraints such as a smaller development team and a large number of small clients. We also covered a lab on setting up Kubezoo in a kind cluster, deploying items to a Kubezoo tenant, and interacting with multiple tenants as a cluster admin. This covers the basics of Kubezoo. If you want to learn more about the topic, the official Kubezoo [GitHub page](https://github.com/kubewharf/kubezoo) is the best place to start.
\ No newline at end of file

From d7841dca9a95a8e9e227eb9a25c0f5a55a461174 Mon Sep 17 00:00:00 2001
From: Phantom-Intruder
Date: Mon, 25 Dec 2023 12:03:45 +0530
Subject: [PATCH 11/13] Kubezoo finished

---
 README.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/README.md b/README.md
index e9299ac4..a4fef8e5 100644
--- a/README.md
+++ b/README.md
@@ -327,6 +327,10 @@ A Curated List of Kubernetes Labs and Tutorials
 - [What is Disaster Recovery](./DisasterRecovery101/what-is-dr.md)
 - [DR Lab](./DisasterRecovery101/dr-lab.md)
 
+## Kubezoo
+- [What is Kubezoo](./Kubezoo/what-is-kubezoo.md)
+- [Kubezoo lab](./Kubezoo/kubezoo-lab.md)
+
 ## For Node Developers
 
 - [Kubernetes for Node Developers](./nodejs.md)

From fb9a824792faaaa2d028a3bd5908c94331be9086 Mon Sep 17 00:00:00 2001
From: Mewantha Bandara
Date: Sun, 7 Apr 2024 12:12:31 +0530
Subject: [PATCH 12/13] Update keda-lab.md

---
 Keda101/keda-lab.md | 30 +-----------------------------
 1 file changed, 1 insertion(+), 29 deletions(-)

diff --git a/Keda101/keda-lab.md b/Keda101/keda-lab.md
index 4761d716..4ff9dd30 100644
--- a/Keda101/keda-lab.md
+++ b/Keda101/keda-lab.md
@@ -120,34 +120,6 @@ serviceAccount:
 
 The part that needs to be modified is the `annotations` section. So if you want to scale an EKS cluster based on SQS messages, then you first need an IAM role that has access to SQS, and you need to add this role arn as an annotation.
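
As a rough sketch, the annotation in the KEDA Helm chart's `serviceAccount` values block might look something like the following; the account ID and role name are placeholders for your own IAM role with SQS access:

```yaml
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<role-name>
```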
-<<<<<<< HEAD
-<<<<<<< HEAD
-<<<<<<< HEAD
-=======
->>>>>>> 862561a80f0cc61c79b3360b66436b6e72775f8c
-```
-annotations:
-  eks.amazonaws.com/role-arn: arn:aws:iam:::role/
-```
-
-Next, you need to change the ScaleObject resource. The mysql-hpa.yaml has the trigger specified as the mysql db. However, it does not have an option called `identityOwner`. This is becase we are not using authentication here, and therefore do not need such a thing. In order to add authentication, this key should be added and the value set to `operator`:
-
-```
-metadata:
-  ...
-  identityOwner: operator
-```
-
-And that's it! You only needed to modify two lines and you have full authorization among the cluster.
-
-While this is the easiest way to provide authentication, it is not the only way to do it. You could also change the `identityOwner` to `pod`, and create a `TriggerAuthentication` resource and feed in the AWS access keys (which isn't very secure), or have the keda service account assume a role that has access to the necessary resources (which is much more secure). There is a number of different ways to authorize, and these are covered in the [KEDA documentation](https://keda.sh/docs/1.4/concepts/authentication/).
-<<<<<<< HEAD
-=======
-=======
-
->>>>>>> 862561a80f0cc61c79b3360b66436b6e72775f8c
-=======
->>>>>>> 6a954ab819affbda91e3f334a7f67d6cddea06a2
 If you added the arn, then setting up authentication is a simple matter. While Keda provides resources specifically geared towards authentication, you won't need to use any of that. In the Keda authentication types, there exists a type called `operator`. This type allows the keda service account to directly acquire the role of the IAM arn you provided. As long as the arn has the permissions necessary, keda can function. The triggers will look like the following:
 
 ```yaml
@@ -262,4 +234,4 @@ With the above configuration, a new Keda job will start every time a message is
 
 ## Conclusion
 
-This wraps up the lesson on KEDA. What we tried out was a simple demonstration of a MySQL scaler followed by a demonstration of using various authentication methods to connect and consume messages from AWS SQS. This is a good representation of what you can expect from other data sources. If you were considering using this with a different Kubernetes engine running on a different cloud provider, the concept would still work. Make sure you read through the authentication page, which contains different methods of authentication for different cloud providers. If you want to try out other scalers, make sure you check out the [official samples page](https://github.com/kedacore/samples).
\ No newline at end of file
+This wraps up the lesson on KEDA. What we tried out was a simple demonstration of a MySQL scaler followed by a demonstration of using various authentication methods to connect and consume messages from AWS SQS. This is a good representation of what you can expect from other data sources. If you were considering using this with a different Kubernetes engine running on a different cloud provider, the concept would still work. Make sure you read through the authentication page, which contains different methods of authentication for different cloud providers. If you want to try out other scalers, make sure you check out the [official samples page](https://github.com/kedacore/samples).
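
The hunk above cuts off at the opening of the triggers block, so the trigger spec itself is not shown. For reference, a minimal sketch of an SQS trigger using `identityOwner: operator`, with field names as per KEDA's `aws-sqs-queue` scaler and placeholder queue URL, region, and queue length, might look like this:

```yaml
triggers:
  - type: aws-sqs-queue
    metadata:
      # URL of the SQS queue to scale on (placeholder)
      queueURL: https://sqs.<region>.amazonaws.com/<account-id>/<queue-name>
      # Target number of messages per replica
      queueLength: "5"
      awsRegion: <region>
      # Use the IAM role assumed by the KEDA operator, as described above
      identityOwner: operator
```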
From 259f54cd074221ce087a3d46db13a0d362d5fcf1 Mon Sep 17 00:00:00 2001
From: Mewantha Bandara
Date: Sun, 7 Apr 2024 12:39:49 +0530
Subject: [PATCH 13/13] Update README.md

---
 README.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/README.md b/README.md
index a4fef8e5..e9299ac4 100644
--- a/README.md
+++ b/README.md
@@ -327,10 +327,6 @@ A Curated List of Kubernetes Labs and Tutorials
 - [What is Disaster Recovery](./DisasterRecovery101/what-is-dr.md)
 - [DR Lab](./DisasterRecovery101/dr-lab.md)
 
-## Kubezoo
-- [What is Kubezoo](./Kubezoo/what-is-kubezoo.md)
-- [Kubezoo lab](./Kubezoo/kubezoo-lab.md)
-
 ## For Node Developers
 
 - [Kubernetes for Node Developers](./nodejs.md)