# Staging Handbook
The Runnable staging environment is a version of the Runnable sandbox platform running within Runnable itself. It is a meta-environment that we use for the following tasks:
- To test and trial our own Sandbox product
- To demo and test new code before it is put into production
- To ensure that our product is capable of fully encapsulating a complex infrastructure deployment
As such, it is an important piece of our infrastructure, providing a realistic perspective for both the product and engineering teams.
Running the runnable infrastructure within a sandbox environment is technically complex and challenging to do correctly (for it is by nature quite meta and hard to wrap one's head around). This document serves as the final "source of truth" for how the environment works, how to keep it working, and what to do when it is down.

Currently, due to limitations within our product, the staging environment has been separated into three major pieces. They are:
- CodeNow Staging Sandbox - used to host and run projects that can run within the product.
- Staging Docks ASG - an auto-scaling group that contains docks for use within staging. Note: these are not the docks that our production sandbox runs on, but the docks used by the runnable instance running within our production sandbox.
- alpha-staging-services - a separate EC2 instance running within the VPC. This runs the pieces of the infrastructure that require persistent data and/or persistent IP addresses. The goal is to keep as few pieces of our infrastructure here as possible.
## CodeNow Staging Sandbox

This is the sandbox running in our production product for the runnable service itself. To manage the sandbox, simply use runnable to do so: runnable.io/CodeNow. Each of the repository-based projects running within the sandbox is configured to work with the others and with services running externally in the VPC (all via DNS mappings).
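Because all of this wiring is DNS-based, a quick first step when debugging a broken connection is to check what a hostname resolves to from inside the affected container. A minimal sketch; the hostname and port below are placeholders, not real mappings from our configuration:

```sh
# From a shell inside the container that is failing to connect.
# "api-staging" is a hypothetical hostname; substitute whichever
# DNS mapping is configured for your project in runnable.io/CodeNow.
nslookup api-staging

# Then confirm the port is actually reachable at the resolved address.
nc -zv api-staging 80
```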
## Staging Docks ASG

The staging docks auto-scaling group (ASG) is a special set of docks that have been provisioned to work with the sandboxed staging environment. The docks within the group work just like production docks, but use the staging consul (running on alpha-stage-data) for initialization and deployment.
The docks can be accessed, monitored, and mutated with the `docks-cli` tool just like any other environment. To get a list of the docks use: `docks -e staging`.
## alpha-staging-services

The docks cannot run within the sandbox, but they require consistent access to specific data-stores that can. Unfortunately, because those services would be running on docks, which are by our definition ephemeral, we cannot currently ensure that they will retain their data and IP addresses over the long term.
To remedy this situation we have two auxiliary EC2 instances called alpha-stage-data and alpha-stage-data-2. These instances run the following services:
- `consul` and `vault` - dock-init uses these; much easier to set up and maintain via Ansible at this time
- `redis` - data store for sauron, dock service; requires TCP
- `rabbitmq` - docker-listener etc.; IP cannot switch
- `swarm-manager` - this will probably be OK to push back into the sandbox soon, but keeping it outside until things stabilize (easier to debug, etc.)
The rest of this section gives specific instructions on how to set up the data instances from scratch via Ansible.
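Before running any of the playbooks below, it can help to confirm that Ansible can actually reach the data instances through the `stage-hosts` inventory. A minimal sketch, assuming your SSH configuration already grants access to the hosts:

```sh
# From devops-scripts/ansible: ping every host in the stage-hosts
# inventory over SSH; each host should respond with "pong".
ansible -i stage-hosts/ all -m ping
```

### Consul and Vault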
- From your local `devops-scripts/ansible` directory, run the following command: `ansible-playbook -i stage-hosts/ consul.yml`
- The initial runs of this playbook will fail due to docker not being fully installed on the `alpha-stage-data*` hosts. To proceed, get to the point where it fails on `alpha-stage-data`. At this point restart that instance in EC2 and re-run the command with `-e restart=true`.
- Now that consul is installed, you will need to seed it with the data needed by dock-init. To do so, run: `ansible-playbook -i stage-hosts/ consul-values.yml -e write_values=yes`
- Next we must install vault. To do so, run: `ansible-playbook -i stage-hosts/ vault.yml`
- `ssh alpha-stage-data`
- `sudo docker exec -it $(sudo docker ps | grep 'vault' | awk '{print $1}') sh`
- `vault init -address http://127.0.0.1:8200`
- Record the output from the init command and set the appropriate variables in `devops-scripts/ansible/stage-hosts/varibles`
- `ansible-playbook -i stage-hosts/ vault-values.yml -e write_root_creds=yes`
- `ansible-playbook -i stage-hosts/ vault-values.yml -e write_values=yes`
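At this point it is worth sanity-checking both services before moving on. A minimal sketch; the ports below are Consul's and Vault's defaults and the `-address` flag mirrors the `vault init` invocation above, so adjust if the staging setup overrides them:

```sh
# On alpha-stage-data: consul should report a leader once it is up.
curl http://127.0.0.1:8500/v1/status/leader

# From the vault container shell opened above: vault should report
# that it is initialized. Note it remains sealed until unsealed with
# the keys recorded from `vault init`.
vault status -address http://127.0.0.1:8200
```

### Redis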
- Run `ansible-playbook -i stage-hosts/ redis.yml`. This playbook includes the role for installing docker on the alpha-stage-data host. The first run of the playbook will intentionally fail, as the first pass needs to install packages that require a system reboot before actually installing docker and, eventually, creating a redis container.
- In AWS, reboot the alpha-stage-data host manually.
- Once alpha-stage-data has rebooted, run the following: `ansible-playbook -i stage-hosts/ redis.yml -e restart=true`. The `-e restart=true` indicates to the docker role that we are ready to install docker after a reboot, and the playbook should run successfully.
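A quick way to confirm the container came up, assuming `redis-cli` is available where you run it and that redis is listening on its default port 6379:

```sh
# Should print PONG if the redis container is up and reachable.
redis-cli -h alpha-stage-data ping
```

### Swarm Manager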
- One and done: `ansible-playbook -i stage-hosts/ swarm-manager.yml`
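To verify the manager is up and can see the staging docks, you can point a docker client at it. A sketch only; the port here is classic Swarm's conventional 2375 and is an assumption, so substitute whatever port the playbook actually exposes:

```sh
# `docker info` against a (classic) swarm manager lists the nodes it
# knows about; each staging dock should appear in the output.
docker -H tcp://alpha-stage-data:2375 info
```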