
autoscaler-proxy (WIP)

This is a small server I created that proxies incoming TCP connections to a Hetzner Cloud server that is created and destroyed on demand.

The intended usage is running Docker containers on a more powerful (and more expensive) throwaway machine by simply setting the DOCKER_HOST environment variable. The proxy just forwards TCP traffic (here to unix:///var/run/docker.sock), so it can be used for anything, and it also lets you run containers from within a container without mounting the host's Docker socket.

The main motivation was to replace my drone.io setup, which uses a Hetzner autoscaler, with Gitea Actions, without too much work. Hopefully that will work by just setting DOCKER_HOST and running the Gitea Actions runner normally.

Test it with:

```shell
HCLOUD_TOKEN=your_token go run . example/docker/config.yml

# In a new terminal window:
DOCKER_HOST=tcp://127.0.0.1:8081 docker ps
```

This will create a server in Hetzner and let you run Docker commands on it. Creating the server currently takes about 30 seconds to a minute on the first request (or after the server has been scaled down). The server is deleted after being idle for a while, or when the proxy is stopped.

Communication with the server happens over SSH, using keys generated on startup. The server is configured at creation time using cloud-init.

Configuration

The proxy can read its configuration from a file. The default configuration is equivalent to this:

```yaml
autoscaler:
  connection_timeout: 10m0s
  scaledown_after: 15m0s
  server_name_prefix: autoscaler
  server_type: cpx31
  server_image: docker-ce
  cloud_init_template:
    groups:
      - docker
    ssh_keys:
      rsa_private: ${SERVER_RSA_PRIVATE}
      rsa_public: ${SERVER_RSA_PUBLIC}
    ssh_pwauth: false
    users:
      - default
      - groups: users,docker
        lock_passwd: true
        name: autoscaler
        ssh_authorized_keys:
          - ${AUTOSCALER_AUTHORIZED_KEY}
```

Any fields you don't provide fall back to the values shown here.
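For example, a minimal config file that only overrides the server type and idle timeout (the values here are illustrative) might look like:

```yaml
autoscaler:
  server_type: cpx41
  scaledown_after: 30m0s
```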

The configuration file supports basic templating for the following variables:

| Field | Description |
| --- | --- |
| `SERVER_RSA_PRIVATE` | Private SSH key for the server, generated by the autoscaler |
| `SERVER_RSA_PUBLIC` | Public SSH key for the server, generated by the autoscaler |
| `AUTOSCALER_AUTHORIZED_KEY` | Public key of the autoscaler |
| `env.[VARIABLE]` | The value of the named environment variable, taken from the process running the autoscaler |

You can also put key-value pairs in autoscaler.cloud_init_variables and reference them the same way. To add secrets, create a YAML file encrypted with sops and put its file name in autoscaler.cloud_init_variables_from.
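A hypothetical sketch of how these pieces might fit together (the variable names are made up, and it is an assumption that env.* references use the same ${...} syntax as the other template variables):

```yaml
autoscaler:
  cloud_init_variables:
    EXTRA_GROUP: ci
  cloud_init_variables_from: secrets.yml
  cloud_init_template:
    runcmd:
      - groupadd ${EXTRA_GROUP} || true
      - echo "started by ${env.USER}"
```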

The proxy supports multiple upstreams, which is useful if, for example, you want to use the server for both regular SSH access and Docker. Such a configuration file could look like this:

```yaml
autoscaler:
  cloud_init_template:
    users:
      - default
      - name: autoscaler
        groups: users,docker
        lock_passwd: true
        ssh_authorized_keys:
          - "${AUTOSCALER_AUTHORIZED_KEY}"
          - |
            YOUR PUBLIC KEY
listen_addr:
  "127.0.0.1:8081":
    net: unix
    addr: "/var/run/docker.sock"
  "127.0.0.1:8082":
    net: tcp
    addr: "127.0.0.1:22"
```

This way you can access the Docker daemon at 127.0.0.1:8081 and SSH at 127.0.0.1:8082, and connections to either port will scale the server up.

Another useful thing you can do is override the cloud-init template to run, for example, Tailscale at startup. One way to do this would be:

```yaml
autoscaler:
  cloud_init_template:
    runcmd:
      - curl -fsSL https://tailscale.com/install.sh | sh
      - tailscale up --authkey ${TS_AUTHKEY}
  cloud_init_variables_from: secrets.yml
  wait_for:
    net: tcp
    addr: some_tailscale_ip:22
```

This installs Tailscale and runs tailscale up with an authkey taken from an encrypted file (secrets.yml). The autoscaler waits until the server can reach some_tailscale_ip:22 before it starts proxying connections, so we know the server is fully configured and working as intended.

There is also the option to configure procs, which lets you run other processes when starting this program. These processes are started once the server is ready to receive connections, and stopped before the autoscaler shuts down. See the example in example/act_runner/config.yml.

Autoscaling Gitea runner

There is a working example of an autoscaling Gitea runner in example/act_runner/. It uses a slightly patched version of the runner, and scales a server up when running CI jobs and back down when idle.
