This document provides a high-level overview of Actions Runner Controller (ARC). ARC enables running GitHub Actions runners on Kubernetes (K8s) clusters.
This overview gives you a foundation in the basic scenarios and prepares you to review more advanced topics.
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that automates your build, test, and deployment pipeline.
You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production. A workflow contains one or more jobs, which can run sequentially or in parallel. Each job runs inside its own runner and has one or more steps that either run a script you define or run an action, a reusable extension that can simplify your workflow. To learn more, see "Learn GitHub Actions."
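As a sketch of these concepts, here is a minimal workflow with two jobs; the names, repository layout, and `make` targets are illustrative, not taken from this document:

```yaml
# .github/workflows/ci.yml -- illustrative example
name: CI
on: [pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # a step that runs a reusable action
      - run: make build             # a step that runs a script you define
  test:
    needs: build                    # runs after 'build' finishes (sequential)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

Without the `needs: build` line, the two jobs would run in parallel.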
Runners execute the jobs assigned to them by a GitHub Actions workflow. There are two types of runners:
- GitHub-hosted runners - GitHub provides Linux, Windows, and macOS virtual machines to run your workflows. These virtual machines are hosted in the cloud by GitHub.
- Self-hosted runners - you can host your own runners in your own data center or cloud infrastructure. ARC deploys self-hosted runners.
Self-hosted runners offer more control of hardware, operating system, and software tools than GitHub-hosted runners. With self-hosted runners, you can create custom hardware configurations with the processing power or memory needed to run larger jobs, install software available on your local network, and choose an operating system not offered by GitHub-hosted runners.
Self-hosted runners can be physical, virtual, in a container, on-premises, or in a cloud.
- A traditional deployment uses a physical machine with an OS and applications installed on it. The runner runs on this machine and executes jobs. This approach carries the cost of owning and operating the hardware 24/7, even when it isn't in use the entire time.
- Virtualized deployments are simpler to manage. Each runner runs on a virtual machine (VM) that runs on a host, and multiple such VMs can run on the same host. VMs are complete operating systems and can take time to bring up every time a clean environment is needed to run workflows.
- Containerized deployments are similar to VMs, but instead of bringing up an entire VM, a container gets deployed. Kubernetes (K8s) provides a scalable and reproducible environment for containerized workloads. Containers are lightweight, loosely coupled, highly efficient, and can be managed centrally. There are advantages to using Kubernetes (outlined "here"), but it is more complicated and less widely understood than the other options. A managed provider makes this much simpler to run at scale.
Actions Runner Controller (ARC) makes it simpler to run self-hosted runners in K8s-managed containers.
ARC is a K8s controller that creates self-hosted runners on your K8s cluster. With a few commands, you can set up self-hosted runners that scale up and down based on demand. Because these runners can be ephemeral and container-based, new instances can be brought up rapidly and cleanly.
We have a quick start guide that demonstrates how to easily deploy ARC into your K8s environment. For more details, see "QuickStart Guide."
At its core, ARC is a set of custom resources. Deploying ARC means applying these custom resources to a K8s cluster. Once applied, ARC creates a set of pods with the GitHub Actions runner running inside them. GitHub can then treat these pods as self-hosted runners and allocate jobs to them.
ARC consists of several custom resource definitions (Runner, Runner Set, Runner Deployment, Runner Replica Set, and Horizontal Runner Autoscaler). For more information on CRDs, see "Kubernetes Custom Resources."
The helm command (in the QuickStart guide) installs the custom resources into the actions-runner-system namespace.
```shell
helm install -f custom-values.yaml --wait --namespace actions-runner-system \
  --create-namespace actions-runner-controller \
  actions-runner-controller/actions-runner-controller
```
Once the custom resources are installed, another command deploys ARC into your K8s cluster.
The "Deployment and Configure ARC" section in the Quick Start guide lists the steps to deploy ARC using a runnerdeployment.yaml file; here, we explain the details. For more information, see the "QuickStart Guide."
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  replicas: 1
  template:
    spec:
      repository: mumoshu/actions-runner-controller-ci
```
- kind: RunnerDeployment indicates that this is a custom resource of kind RunnerDeployment.
- replicas: 1 deploys one replica. Multiple replicas can also be deployed (more on that later).
- repository: mumoshu/actions-runner-controller-ci is the repository to link to when the pod comes up with the Actions runner. (Note: this can also be configured at the Enterprise or Organization level.)
When this configuration is applied with kubectl apply -f runnerdeployment.yaml, ARC creates one pod, example-runnerdeploy-[**], with two containers: runner and docker. The runner container has the GitHub runner component installed; the docker container has Docker installed.
The GitHub-hosted runners include a large amount of pre-installed software. For the complete list, see "Runner images."
ARC maintains a few runner images whose latest tag aligns with GitHub's Ubuntu version. These images do not contain all of the software installed on the GitHub runners; they contain a subset: basic CLI packages, git, docker, and build-essential. To install additional software, it is recommended to use the corresponding setup actions, for instance actions/setup-java for Java or actions/setup-node for Node.
Now all the setup and configuration is done. You can create a workflow in the same repository that targets the self-hosted runners created by ARC. The workflow needs runs-on: self-hosted so it targets the self-hosted pool. For more information on targeting workflows to run on self-hosted runners, see "Using self-hosted runners."
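To make this concrete, here is a minimal illustrative workflow that routes its job to the runners ARC created; the workflow name and step contents are hypothetical:

```yaml
# .github/workflows/selfhosted.yml -- illustrative example
name: selfhosted-ci
on: [push]
jobs:
  build:
    runs-on: self-hosted   # routes the job to the self-hosted runner pool
    steps:
      - uses: actions/checkout@v4
      - run: echo "running inside an ARC runner pod"
```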
With a small tweak to the replica count (for example, replicas: 2) in the runnerdeployment.yaml file, more runners can be created. One set of pods is created per replica, and as before, each pod contains the two containers.
ARC also allows for scaling the runners dynamically. There are two mechanisms for dynamic scaling: (1) webhook-driven scaling and (2) pull-driven scaling. This document describes the pull-driven scaling model.
You can enable scaling in three steps:
- Enable HorizontalRunnerAutoscaler - create a deployment.yaml file of kind HorizontalRunnerAutoscaler. The schema for this file is defined below.
- Set scaling parameters - minReplicas and maxReplicas indicate the minimum and maximum number of replicas to scale to.
- Set scaling metrics - ARC currently supports PercentageRunnersBusy as a metric type. PercentageRunnersBusy polls GitHub for the number of runners in the busy state in the RunnerDeployment's namespace and then scales depending on how you have configured the scale factors.
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-runner-deployment-autoscaler
spec:
  scaleTargetRef:
    # Your RunnerDeployment Here
    name: example-runnerdeploy
    kind: RunnerDeployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: PercentageRunnersBusy
    scaleUpThreshold: '0.75'
    scaleDownThreshold: '0.25'
    scaleUpFactor: '2'
    scaleDownFactor: '0.5'
```
For more details, see "Pull Driven Scaling."
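To make the threshold arithmetic concrete, here is a small shell sketch of the PercentageRunnersBusy decision using the thresholds and factors from the example above. This is illustrative only (the real decision happens inside the controller), and the function name and sample counts are hypothetical:

```shell
# Sketch of the PercentageRunnersBusy decision, not the controller's code.
# Usage: desired_replicas <busy-runner-count> <total-runner-count>
desired_replicas() {
  busy=$1
  total=$2
  pct=$(( busy * 100 / total ))     # integer percentage of busy runners
  if [ "$pct" -gt 75 ]; then        # scaleUpThreshold: '0.75'
    echo $(( total * 2 ))           # scaleUpFactor: '2'
  elif [ "$pct" -lt 25 ]; then      # scaleDownThreshold: '0.25'
    echo $(( total / 2 ))           # scaleDownFactor: '0.5'
  else
    echo "$total"                   # within thresholds: no change
  fi
}

desired_replicas 4 5    # 80% busy -> scale up:   prints 10
desired_replicas 0 4    # 0% busy  -> scale down: prints 2
desired_replicas 2 4    # 50% busy -> unchanged:  prints 4
```

The maxReplicas and minReplicas bounds would then clamp the result, so the example deployment above could never exceed 5 runners.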
The period between polls is defined by the controller's --sync-period flag. If the flag isn't provided, the controller defaults to a sync period of 1m. The period can be configured in seconds or minutes.
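For example, the flag is passed as an argument to the controller container; the excerpt below is an illustrative fragment of the controller's Deployment spec, not a complete manifest:

```yaml
# excerpt: controller Deployment spec (illustrative)
containers:
  - name: manager
    args:
      - "--sync-period=10m"   # poll GitHub every 10 minutes
```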
ARC supports several advanced configurations:
- Alternate runners: setting up runner pods with a Docker-in-Docker configuration.
- Runner groups: managing sets of runners with runner groups, making it easy to manage different groups within an enterprise.
- Webhook-driven scaling.
Please refer to the documentation in this repo for further details.
The solution supports both GHEC (GitHub Enterprise Cloud) and GHES (GitHub Enterprise Server) editions as well as regular GitHub. Both PAT (personal access token) and GitHub App authentication work for installations that deploy repository-level and/or organization-level runners. If you need to deploy enterprise-level runners, you are restricted to PAT-based authentication, as GitHub doesn't currently support GitHub App authentication for enterprise runners.
If you are deploying this solution into a GHES environment then you will need to be running version >= 3.6.0.
When deploying the solution for a GHES environment you need to provide an additional environment variable as part of the controller deployment:
```shell
kubectl set env deploy controller-manager -c manager GITHUB_ENTERPRISE_URL=<GHEC/S URL> --namespace actions-runner-system
```
Note: The repository maintainers do not have an enterprise environment (cloud or server). Support for the enterprise specific feature set is community driven and on a best effort basis. PRs from the community are welcome to add features and maintain support.
Cloud Tooling
The project supports being deployed on the various cloud Kubernetes platforms (e.g. EKS); it does not, however, aim to go beyond that. No cloud-specific tooling is bundled in the base runner; this is a deliberate decision to keep the maintenance overhead of the solution manageable.
Bundled Software
The GitHub hosted runners include a large amount of pre-installed software packages. GitHub maintains a list in README files at https://github.com/actions/virtual-environments/tree/main/images/linux.
This solution maintains a few Ubuntu-based runner images; these images do not contain all of the software installed on the GitHub runners. The images contain the following subset of packages from the GitHub runners:
- Some basic CLI packages
- Git
- Git LFS
- Docker
- Docker Compose
The virtual environments from GitHub contain many more software packages (different versions of Java, Node.js, Go, .NET, etc.) which are not provided in the runner image. Most of these have dedicated setup actions that allow the tools to be installed on demand in a workflow, for example actions/setup-java or actions/setup-node.
If there is a need to include packages in the runner image for which there is no setup action, this can be achieved by building a custom container image for the runner. The easiest way is to start with the summerwind/actions-runner image and install the extra dependencies directly in the Docker image:
```dockerfile
FROM summerwind/actions-runner:latest

RUN sudo apt-get update -y \
  && sudo apt-get install -y $YOUR_PACKAGES \
  && sudo rm -rf /var/lib/apt/lists/*
```
You can then configure the runner to use the custom Docker image by setting the image field of a RunnerDeployment or RunnerSet:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: custom-runner
spec:
  template:
    spec:
      repository: actions/actions-runner-controller
      image: YOUR_CUSTOM_RUNNER_IMAGE
```