The Runnable staging environment is a version of the Runnable sandbox platform running within Runnable itself. It is a meta-environment that we use for the following tasks:

  1. To test and trial our own Sandbox product
  2. To demo and test new code before it is put into production
  3. To ensure that our product is capable of fully encapsulating a complex infrastructure deployment

As such, it is an important piece of our infrastructure: it provides a realistic perspective for both the product and engineering teams.

Running the Runnable infrastructure within a sandbox environment is technically complex and challenging to do correctly (it is by nature quite meta and hard to wrap one's head around). This document serves as the "source of truth" for how the environment works, how to keep it working, and what to do when it is down.

Infrastructure

Currently, due to limitations within our product, the staging environment has been separated into three major pieces. They are:

  1. CodeNow Staging Sandbox - hosts the projects that are able to run within the product itself.
  2. Staging Docks ASG - an auto-scaling group that contains docks for use within staging. Note: these are not the docks that our production sandbox runs on, but the docks used by the Runnable instance running within our production sandbox.
  3. alpha-stage-data instances - separate EC2 instances running within the VPC. These run the pieces of the infrastructure that require persistent data and/or persistent IP addresses. The goal is to have as few pieces of our infrastructure here as possible.

CodeNow Staging Sandbox

This is the sandbox, running in our production product, for the Runnable service itself. To manage the sandbox, simply use Runnable: runnable.io/CodeNow. Each of the repository-based projects running within the sandbox is configured to work with the others and with services running externally in the VPC (all via DNS mappings).
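
As a purely hypothetical illustration of what such a mapping looks like in practice (the variable names and hostnames below are illustrative, not the real entries), a project in the sandbox addresses its dependencies by hostname rather than by IP, whether they run as sibling containers or externally on alpha-stage-data:

# Hypothetical sketch - variable names and hostnames are illustrative only.
export API_HOST=api-staging-codenow.runnable.example
export RABBITMQ_HOST=rabbitmq.alpha-stage-data.example   # external, in the VPC
export REDIS_HOST=redis.alpha-stage-data.example         # external, in the VPC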

Staging Docks Auto-Scaling Group

The staging docks auto-scaling group (ASG) is a special set of docks that have been provisioned to work with the sandboxed staging environment. The docks within the group work just like production docks, but use the staging consul (running on alpha-stage-data) for initialization and deployment.
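
To sanity-check that a dock in this group is talking to the staging consul rather than the production one, consul's standard HTTP API can be queried (a sketch: 8500 is consul's default HTTP port, and the hostname here is a stand-in for the real alpha-stage-data address):

curl http://alpha-stage-data:8500/v1/status/leader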

The docks can be accessed, monitored, and mutated with the docks-cli tool just like any other environment. To get a list of the docks, use:

docks -e staging

alpha-stage-data

The docks cannot run within the sandbox, yet they require consistent access to specific data-stores that could. Unfortunately, because services in the sandbox run on docks, which are by our definition ephemeral, we cannot currently ensure that those services would retain their data and IP addresses over the long term.

To remedy this situation we have two auxiliary EC2 instances called alpha-stage-data and alpha-stage-data-2. These instances run the following services:

  1. consul and vault - used by dock-init; much easier to set up and maintain via Ansible at this time
  2. redis - data store for sauron and the dock service; requires a direct TCP connection
  3. rabbitmq - used by docker-listener and others; its IP cannot change
  4. swarm-manager - this will probably be OK to push back into the sandbox soon, but we are keeping it outside until things stabilize (easier to debug, etc.); a health-check sketch for all four follows below
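
A quick way to sanity-check these services from the alpha-stage-data instances themselves is each service's own standard CLI (a sketch: it assumes default ports and that the CLIs are installed where the services run):

# Sketch - assumes default ports; adjust hosts and credentials as needed.
consul members            # consul cluster membership
vault status              # vault seal/availability status
redis-cli ping            # expects PONG
sudo rabbitmqctl status   # rabbitmq node status
docker -H tcp://localhost:2375 info   # swarm-manager (the port is an assumption)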

Each of these services can be deployed and modified via the ansible-playbook command using the stage-hosts inventory. For example, to deploy rabbitmq one would simply run the following command from the devops-scripts/ansible directory:

ansible-playbook -i stage-hosts/ rabbitmq.yml
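
When trying out changes to a playbook, ansible-playbook's standard --check flag will report what would change without modifying the hosts (this just reuses the rabbitmq example above):

ansible-playbook -i stage-hosts/ rabbitmq.yml --check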

Each service is a direct analog of a service running in production. If you have any questions about the role of a particular service, ask a fellow engineer for an explanation.

For more information on setting up the alpha-stage-data node from scratch, view this guide.
