diff --git a/Autoscaler101/autoscaler-lab.md b/Autoscaler101/autoscaler-lab.md
index af3eb697..d3347e07 100644
--- a/Autoscaler101/autoscaler-lab.md
+++ b/Autoscaler101/autoscaler-lab.md
@@ -120,6 +120,7 @@ Watch the pods, and you will see that the resource limits are reached, after whi
 
 Now that we have gotten a complete look at the vertical pod autoscaler, let's take a look at the HPA. Create a file nginx-hpa.yml and paste the below contents into it.
 
+```
 apiVersion: autoscaling/v2beta2
 kind: HorizontalPodAutoscaler
 metadata:
@@ -177,4 +178,4 @@ You should be able to see the memory limit getting reached, after which the numb
 
 ## Conclusion
 
-That sums up the lab on autoscalers. In here, we discussed the two most commonly used in-built autoscalers: HPA and VPA. We also took a hands-on look at how the autoscalers worked. This is just the tip of the iceberg when it comes to scaling, however, and the subject of custom scalers that can scale based on metrics other than memory and CPU is vast. If you are interested in looking at more complicated scaling techniques, you could take a look at the [KEDA section](../Keda101/what-is-keda.md) to get some idea of the keda autoscaler.
\ No newline at end of file
+That sums up the lab on autoscalers. Here, we discussed the two most commonly used built-in autoscalers: the HPA and the VPA, and took a hands-on look at how they work. This is just the tip of the iceberg when it comes to scaling, however; the subject of custom scalers that can scale on metrics other than memory and CPU is vast. If you are interested in more advanced scaling techniques, take a look at the [KEDA section](../Keda101/what-is-keda.md) for an introduction to the KEDA autoscaler.
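For reference, the first hunk above only shows the opening lines of the nginx-hpa.yml manifest that the new code fence wraps. Below is a minimal sketch of what a complete `autoscaling/v2beta2` HorizontalPodAutoscaler scaling an nginx Deployment on memory could look like; the Deployment name (`nginx`), the replica bounds, and the 50% average memory utilization target are illustrative assumptions, not necessarily the values used in the lab's actual manifest.

```yaml
# Hypothetical sketch of a complete nginx-hpa.yml; names and thresholds are
# illustrative assumptions rather than the lab's exact manifest.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx              # assumed name of the Deployment created earlier in the lab
  minReplicas: 1              # assumed lower bound on replicas
  maxReplicas: 5              # assumed upper bound on replicas
  metrics:
  - type: Resource
    resource:
      name: memory            # scale on memory, matching the behavior described in the lab
      target:
        type: Utilization
        averageUtilization: 50   # assumed target: scale out above 50% average memory utilization
```

Note that `autoscaling/v2beta2` has since been superseded by `autoscaling/v2` and was removed in Kubernetes 1.26, so the apiVersion in the manifest may need adjusting depending on the cluster version used for the lab.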