Misc. doc changes for consistency and clarity. #496

Merged 1 commit on Aug 16, 2022.
6 changes: 3 additions & 3 deletions ADOPTERS.md
# KEDA HTTP Add-on Adopters

This page contains a list of organizations that are using KEDA's HTTP Add-on in production or at various stages of testing.

## Adopters


You can easily become an adopter by sending a pull request to this file.

These are the adoption statuses that you can use:

- ![production](https://img.shields.io/badge/-production-blue?style=flat)
- ![testing](https://img.shields.io/badge/-development%20&%20testing-green?style=flat)
67 changes: 37 additions & 30 deletions CONTRIBUTING.md
Please find it at [docs/developing.md](./docs/developing.md).

### Prerequisites:

- A running Kubernetes cluster with KEDA installed.
- [Mage](https://magefile.org/)
- [Helm](https://helm.sh/)
- [k9s](https://github.com/derailed/k9s) (_optional_)
- Set the required environment variables explained [here](https://github.com/kedacore/http-add-on/blob/main/docs/developing.md#required-environment-variables).

### Building:

- Fork & clone the repo:
```console
$ git clone https://github.com/<your-username>/http-add-on.git
```
- Change into the repo directory:
```console
$ cd http-add-on
```
- Use Mage to build with:
```console
$ mage build # build local binaries
$ mage dockerBuild # build docker images of the components
```
If the environment variables are not set up, the Docker build will fail, so remember to export the right values.
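For example, the export step might look like the following before running `mage dockerBuild` (the variable names below are illustrative, not the add-on's actual ones; see [docs/developing.md](./docs/developing.md) for the required variables):

```shell
# Hypothetical variable names for illustration only;
# consult docs/developing.md for the authoritative list.
export KEDAHTTP_OPERATOR_IMAGE=localhost:32000/keda-http-operator
export KEDAHTTP_INTERCEPTOR_IMAGE=localhost:32000/keda-http-interceptor
export KEDAHTTP_SCALER_IMAGE=localhost:32000/keda-http-scaler
```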

### Deploying:

Custom HTTP Add-on as an image

- Make your changes in the code
- Build and publish images with your changes; remember to set your environment variables for the images to match the registry of your choice, then run
```console
$ mage dockerBuild
```
If you want to deploy with Docker Hub or any other registry of your choice, use the right address when setting the images.

Some local clusters come with a local registry; in such cases, make sure to push your images to that local registry. In the case of MicroK8s, the address is `localhost:32000` and the helm install command would look like the following.

```console
$ helm repo add kedacore https://kedacore.github.io/charts
$ helm repo update
$ helm pull kedacore/keda-add-ons-http --untar --untardir ./charts
$ helm upgrade kedahttp ./charts/keda-add-ons-http \
--set image=localhost:32000/keda-http-operator \
--set images.scaler=localhost:32000/keda-http-scaler \
--set images.interceptor=localhost:32000/keda-http-interceptor
```
If you want to install the latest build of the HTTP Add-on, set version to `canary`:
```console
$ helm install http-add-on kedacore/keda-add-ons-http --create-namespace --namespace ${NAMESPACE} --set images.tag=canary
```
### Load testing with k9s:

K9s integrates Hey, a CLI tool to benchmark HTTP endpoints, similar to Apache Bench (`ab`). This preliminary feature currently supports benchmarking port-forwards and services. You can use it for load testing as follows:

- Install an application to scale; we use the provided sample:
```console
$ helm install xkcd ./examples/xkcd -n ${NAMESPACE}
```
- You'll need to clone the repository to get access to this chart. If you have your own Deployment and Service installed, you can go right to creating an HTTPScaledObject. We use the provided sample HTTPScaledObject:
```
$ kubectl create -n $NAMESPACE -f examples/v0.3.0/httpscaledobject.yaml
```
- Test your installation using k9s:
```
(e) Enter the port-forward and apply <CTRL+L> to start a benchmark

(f) You can enter the port-forward to see the run stat details and performance.
```
>You can also customize the benchmark in k9s; it's explained well [here](https://k9scli.io/topics/bench/).

## Developer Certificate of Origin: Signing your work
Signed-off-by: Random J Developer <[email protected]>

Git even has a `-s` command line option to append this automatically to your commit message:

```console
$ git commit -s -m 'This is my commit message'
```


No worries - You can easily replay your changes, sign them and force push them!

```console
$ git checkout <branch-name>
$ git reset $(git merge-base main <branch-name>)
$ git add -A
$ git commit -sm "one commit on <branch-name>"
$ git push --force
```
8 changes: 4 additions & 4 deletions README.md

| 🚧 **Project status: beta** 🚧|
|---------------------------------------------|
| ⚠ The HTTP Add-on currently is in [beta](https://github.com/kedacore/http-add-on/releases/latest). We can't yet recommend it for production usage because we are still developing and testing it. It may have "rough edges" including missing documentation, bugs and other issues. It is currently provided as-is without support.

## HTTP Autoscaling Made Simple

[KEDA](https://github.com/kedacore/keda) provides a reliable and well tested solution to scaling your workloads based on external events. The project supports a wide variety of [scalers](https://keda.sh/docs/latest/scalers/) - sources of these events, in other words. These scalers are systems that produce precisely measurable events via an API.

KEDA does not, however, include an HTTP-based scaler out of the box for several reasons:


## Adopters - Become a listed KEDA user!

We are always happy to list users who run KEDA's HTTP Add-on in production or are evaluating it. Learn more about it [here](ADOPTERS.md).

We welcome pull requests to list new adopters.

## Walkthrough

Although this is currently a **beta release** project, we have prepared a walkthrough document with instructions on getting started for basic usage.

See that document at [docs/walkthrough.md](./docs/walkthrough.md)

6 changes: 3 additions & 3 deletions RELEASE-PROCESS.md

## 1: Current and new versions

Please go to the [releases page](https://github.com/kedacore/http-add-on/releases) and observe what the most recent release is. Specifically, note what the _tag_ of the release is. For example, if [version 0.3.0](https://github.com/kedacore/http-add-on/releases/tag/v0.3.0) is the latest release (it is at the time of this writing), the tag for that is `v0.3.0`.

To determine the new version, follow [SemVer guidelines](https://semver.org). Most releases will increment the PATCH or MINOR version number.


The title of the release should be "Version 1.2.3", substituting `1.2.3` with the new version number, and the Git tag should be `v1.2.3`, again substituting `1.2.3` with your new version number.
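The title-and-tag convention above can be sketched in the shell (a minimal illustration, not part of the release tooling):

```shell
# Derive the release title and Git tag from a version number.
VERSION=1.2.3
TITLE="Version ${VERSION}"
TAG="v${VERSION}"
echo "$TITLE"  # prints: Version 1.2.3
echo "$TAG"    # prints: v1.2.3
```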

The release description should be a short to medium length summary of what has changed since the last release. The following link will give you a list of commits made since the `v0.3.0` tag: [github.com/kedacore/http-add-on/compare/v0.3.0...main](https://github.com/kedacore/http-add-on/compare/v0.3.0...main). Replace `v0.3.0` with your most recent tag to get the commit list, and base your release summary on that list.

After you create the new release, automation in a GitHub action will build and deploy new container images.

```yaml
images:
  tag: 1.2.3
```

>Note: The container images generated by CI/CD in step 2 will have the same tag as the tag you created in the release, minus the `v` prefix. You can always see what images were created by going to the container registry page for the [interceptor](https://github.com/orgs/kedacore/packages/container/package/http-add-on-interceptor), [operator](https://github.com/kedacore/http-add-on/pkgs/container/http-add-on-operator) or [scaler](https://github.com/kedacore/http-add-on/pkgs/container/http-add-on-scaler).
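The tag mapping described in the note (release tag minus the `v` prefix) can be illustrated with shell parameter expansion:

```shell
# A release tagged v1.2.3 produces images tagged 1.2.3.
RELEASE_TAG=v1.2.3
IMAGE_TAG="${RELEASE_TAG#v}"  # strip the leading "v"
echo "$IMAGE_TAG"             # prints: 1.2.3
```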


Once you've made changes to the chart, here's how to submit the change to the charts repository:
4 changes: 2 additions & 2 deletions docs/design.md
There are three major components in this system.

- [Operator](../operator) - This component listens for events related to `HTTPScaledObject`s and creates, updates or removes internal machinery as appropriate.
- [Interceptor](../interceptor) - This component accepts and routes external HTTP traffic to the appropriate internal application, as appropriate.
- [Scaler](../scaler) - This component tracks the size of the pending HTTP request queue for a given app and reports it to KEDA. It acts as an [external scaler](https://keda.sh/docs/latest/scalers/external-push/).
- [KEDA](https://keda.sh) - KEDA acts as the scaler for the user's HTTP application.

## Functionality Areas

#### The Horizontal Pod Autoscaler

The HTTP Add-on works with the Kubernetes [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details) (HPA) -- via KEDA itself -- to execute scale-up and scale-down operations (except for scaling between zero and non-zero replicas). The add-on furnishes KEDA with two metrics - the current number of pending requests for a host, and the desired number (called `targetPendingRequests` in the [HTTPScaledObject](./ref/v0.3.0/http_scaled_object.md)). KEDA then sends these metrics to the HPA, which uses them as the `currentMetricValue` and `desiredMetricValue`, respectively, in the [HPA Algorithm](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details).

The net effect is that the add-on scales up when your app grows to more pending requests than the `targetPendingRequests` value, and scales down when it has fewer than that value.
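To make the HPA arithmetic concrete, here is a worked example with illustrative numbers (2 current replicas, 100 pending requests, a `targetPendingRequests` of 20; these values are examples, not defaults):

```shell
# desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
# ceil(2 * 100 / 20) = 10, so the HPA would scale to 10 replicas.
awk 'BEGIN { v = 2 * 100 / 20; printf "%d\n", (v == int(v) ? v : int(v) + 1) }'
# prints: 10
```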

32 changes: 16 additions & 16 deletions docs/developing.md
The most useful and common commands from the root directory are listed below.

- `mage build`: Builds all the binaries for local testing.
- `mage test`: Tests the entire codebase
- `mage dockerBuild`: Builds all docker images
- Please see the below "Environment Variables" section for more information on this command
- `mage dockerPush`: Pushes all docker images, without building them first
- Please see the below "Environment Variables" section for more information on this command

### In the Operator Directory

To establish one, run the following command in a separate terminal window:

```console
kubectl proxy -p 9898
```


The second way to communicate with these services is almost the opposite of the previous one. Instead of bringing the API server to you with `kubectl proxy`, you'll be creating an execution environment closer to the API server.

First, launch a container with an interactive console in Kubernetes with the following command (substituting your namespace in for `$NAMESPACE`):

```console
kubectl run -it alpine --image=alpine -n $NAMESPACE
```


Run the following `curl` command to get the running configuration of the interceptor:

```console
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-interceptor-admin:9090/proxy/config
```

#### Routing Table

To prompt the interceptor to fetch the routing table and then print it out:

```console
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-interceptor-admin:9090/proxy/routing_ping
```

Or, to just ask the interceptor to print out its routing table:

```console
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-interceptor-admin:9090/proxy/routing_table
```

#### Queue Counts

To fetch the state of an individual interceptor's pending HTTP request queue:

```console
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-interceptor-admin:9090/proxy/queue
```

#### Deployment Cache

To fetch the current state of an individual interceptor's deployment cache:

```console
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-interceptor-admin:9090/proxy/deployments
```

Like the interceptor, the operator has an admin server that has HTTP endpoints against which you can run `curl` commands.

Run the following `curl` command to get the running configuration of the operator:

```console
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-operator-admin:9090/proxy/config
```

The operator has a similar `/routing_table` endpoint as the interceptor.

Fetch the operator's routing table with the following command:

```console
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-operator-admin:9090/proxy/routing_table
```

Like the interceptor, the scaler has an HTTP admin interface against which you can run `curl` commands.

#### Configuration

Run the following `curl` command to get the running configuration of the scaler:

```console
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-external-scaler:9091/proxy/config
```

The external scaler fetches pending queue counts from each interceptor in the system.

For convenience, the scaler also provides a plain HTTP server from which you can also fetch these metrics. Fetch the queue counts from this HTTP server with the following command:

```console
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-external-scaler:9091/proxy/queue
```

Alternatively, you can prompt the scaler to fetch counts from all interceptors, aggregate, store, and return counts:

```console
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-external-scaler:9091/proxy/queue_ping
```