DOC: (Almost) zero-configuration benchmarking with nested containers
Do you want to skip the usual packaging involved in setting up the workload?
While we don't recommend completely divorcing yourself from learning about the applications, we do have an established mechanism for doing so that is almost zero-configuration (aside from making sure that the credentials you're using for your cloud provider are operational and that your cloud itself is not broken).
We do this essentially the same way a container-management system does: with Docker. Instead of booting a pre-created VM snapshot with the workload already baked in, we pull a ready-to-run container from Docker Hub (or from a private registry), so there is nothing for you to prepare.
Advantages:
- You don't have to lift a finger for the workloads.
- Your base image becomes almost irrelevant and the workload should run on nearly any Linux distribution (knock on wood).
- You get more consistent application versions: because it's Docker, you can pin the application version.
Disadvantages:
- Startup times are MUCH longer, because the entire Docker image holding the workload has to be re-downloaded every single time. (You can work around this by snapshotting the VM, but that somewhat defeats the point.)
- You will likely have to learn how to run your own private Docker registry if you want to build newer versions of the application and pin them. We do not have the resources to customize these for you. This isn't so much a disadvantage as a new technology to learn.
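As a rough sketch of what running your own private registry involves (the address 192.168.0.1:5000 and the image path mirror the examples later on this page; they are placeholders you would adjust for your environment), assuming Docker's stock registry:2 image:

```shell
# Sketch: start a local, insecure (HTTP) registry on port 5000.
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Build your customized workload image and push it to that registry.
# The path below is a placeholder matching the example used later in this doc.
docker build -t 192.168.0.1:5000/python3/ubuntu_cb_nullworkload .
docker push 192.168.0.1:5000/python3/ubuntu_cb_nullworkload
```

Because the registry above speaks plain HTTP, the Docker daemon on each VM has to be told to trust it, which is what the NEST_CONTAINERS_INSECURE_REGISTRY option below handles for you.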
In your CloudBench configuration file cbtool/configs/XXXXX_cloud_definitions.txt, create (or append to) the following subsections:
[VM_DEFAULTS]
NEST_CONTAINERS_ENABLED = $True # will use CloudBench-owned Docker Hub images by default.
# Here is a private repo example, if you want to prepare your own Dockerfile or use ours:
# NEST_CONTAINERS_REPOSITORY = 192.168.0.1:5000/path/of/image # The port is not important; any port will do.
NEST_CONTAINERS_INSECURE_REGISTRY = $True # This allows the use of HTTP instead of HTTPS for local repositories
[VM_TEMPLATES]
# The cloud-init part is important and required. These are the dependencies needed to deploy the nested container via cloud-init.
# If your base image lacks cloud-init, this won't work unless you prepare a snapshot instead.
NEST_CONTAINERS_BASE_IMAGE = size:NA, imageids:1, imageid1:vanilla-ubuntu-or-other-image, cloudinit_packages:bc;jq;docker.io;python;redis-server;ntp
And that's pretty much it! The most important part is vanilla-ubuntu-or-other-image: this must be a reference to whatever empty, vanilla Linux distribution image you have available. (We have only tested Ubuntu, so if you run into trouble with other distributions, we would have to handle them via pull requests.)
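To make the template line's structure concrete: it is a comma-separated list of key:value pairs, and the cloudinit_packages value is itself a semicolon-separated package list. The following is a hypothetical helper (not part of CloudBench) that illustrates the format:

```python
# Hypothetical helper illustrating the NEST_CONTAINERS_BASE_IMAGE value format:
# comma-separated key:value pairs, with cloudinit_packages holding a
# semicolon-separated package list. Not CloudBench code, just an illustration.
def parse_template(value):
    fields = {}
    for pair in value.split(", "):
        key, _, val = pair.partition(":")  # split on the first colon only
        fields[key] = val
    packages = fields.get("cloudinit_packages", "").split(";")
    return fields, packages

fields, packages = parse_template(
    "size:NA, imageids:1, imageid1:vanilla-ubuntu-or-other-image, "
    "cloudinit_packages:bc;jq;docker.io;python;redis-server;ntp"
)
print(fields["imageid1"])  # vanilla-ubuntu-or-other-image
print(packages)  # ['bc', 'jq', 'docker.io', 'python', 'redis-server', 'ntp']
```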
Once you've got that configuration in place, here is a running example on DigitalOcean:
(DROPLETS2R5PY3) aiattach nullworkload
status: VM size "01-512" (vcpus-vmem GB) mapped to cloud-specific flavor "1gb".
status: Will instruct cloud-init to install: bc;jq;docker.io;python;redis-server;ntp;redis-tools on vm_1 (0E51593D-B731-5E5A-AA60-B2FDE813EC1A), part of ai_1 (B581B41D-6703-5F97-9B41-15DD913C7889)
status: Starting instance "cb-mariner-MYDIGITALOCEAN-vm1" on DigitalOcean Cloud, using the image "18.04.3 (LTS) x64" (65416) and size "1gb", container: ubuntu_cb_nullworkload, connected to network "private", under tenant "staging1" (userdata is "True").
status: Reservation ID for vm_1 (0E51593D-B731-5E5A-AA60-B2FDE813EC1A), part of ai_1 (B581B41D-6703-5F97-9B41-15DD913C7889) is: 4946110
status: Waiting for vm_1 (0E51593D-B731-5E5A-AA60-B2FDE813EC1A), part of ai_1 (B581B41D-6703-5F97-9B41-15DD913C7889), to start...
As you can see above, we are booting a vanilla "18.04.3 (LTS) x64" virtual machine in the cloud.
In the log message above, you should also see "container: ubuntu_cb_nullworkload". This is the name of the container image in the registry that will be pulled. So, if you're preparing your own registry, you'll want to either use the same naming scheme or override the name in the configuration file.
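Pointing CloudBench at your own registry reuses the NEST_CONTAINERS_REPOSITORY option shown earlier. The values below are placeholders matching the pull URL in the log further down; treat this as a sketch, since the exact way the workload name is combined with the repository path may depend on your CloudBench version:

```
[VM_DEFAULTS]
NEST_CONTAINERS_ENABLED = $True
# Placeholder path pointing at your own registry instead of Docker Hub
NEST_CONTAINERS_REPOSITORY = 192.168.0.1:5000/python3/ubuntu_cb_nullworkload
NEST_CONTAINERS_INSECURE_REGISTRY = $True
```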
Finally, you should see messages in /var/log/cloudbench/XXXX_remotescripts.log to the effect of:
$ tail -f /var/log/cloudbench/XXXXX_remotescripts.log
Starting generic VM post_boot configuration
..... snip ......
Executing "post_boot_steps" function
Restarting service "docker", with command "sudo systemctl restart docker", attempt 1 of 7...
Service "docker" was successfully restarted
Enabling service "docker", with command "sudo systemctl enable docker"...
Downloading container image from: 192.168.0.1:5000/python3/ubuntu_cb_nullworkload
Image pulled, starting container...
Stopping service "sshd" with command "sudo systemctl stop sshd"...
Disabling service "sshd" with command "sudo systemctl disable sshd"...
Container started, settling... (sudo docker run -u ${username} -it -d --name cbnested --privileged --net host --env CB_SSH_PUB_KEY="$(cat ~/.ssh/id_rsa.pub)" -v ~/:${NEST_EXPORTED_HOME} ${image} bash -c "sudo rm -f ${NEST_POST_BOOT_FLAG}; sudo mkdir ${userpath}/$username; sudo rsync -arz ${NEST_EXPORTED_HOME}/* ${NEST_EXPORTED_HOME}/.* ${userpath}/${username}/; sudo chmod 755 ${userpath}/${username}; sudo chown -R ${username}:${username} ${userpath}/${username}; (sudo bash /etc/my_init.d/inject_pubkey_and_start_ssh.sh &); cd; source ~/cbtool/scripts/common/cb_common.sh; syslog_netcat \"Running post_boot inside container...\"; ~/cbtool/scripts/common/cb_post_boot_container.sh; code=\$?; if [ \$code -gt 0 ] ; then echo post boot failed; exit \$code; fi; touch ${NEST_POST_BOOT_FLAG}; sleep infinity")
Return code: 2
Still waiting on container startup. Attempts left: 399
Return code: 2
Still waiting on container startup. Attempts left: 398
...... snip .....
Running post_boot inside container...
Return code: 2
Still waiting on container startup. Attempts left: 385
Inside the container now.
Copying application-specific scripts to the home directory
Creating symbolic link /usr/lib64 -> /usr/lib
Return code: 2
Stopping service "ganglia-monitor" with command "sudo service ganglia-monitor stop"...
Still waiting on container startup. Attempts left: 384
Disabling service "ganglia-monitor" with command "sudo update-rc.d -f ganglia-monitor remove"...
Stopping service "gmetad" with command "sudo service gmetad stop"...
Disabling service "gmetad" with command "sudo update-rc.d -f gmetad remove"...
Killing previously running ganglia monitoring processes on cb-mariner-DROPLETS2R5PY3-vm1
...... snip .....
As you can see above, we started an empty virtual machine and pulled the workload's Docker image on the fly into the VM, ready for startup.
At that point, everything else in CloudBench works the same way, except that the entire application benchmark runs inside of the container instead of the VM.
The container runs in host networking mode, and SSH access will have been transferred from the base virtual machine to the container. If you want to get back into the base VM, you'll need another way in (such as your cloud provider's console), since the VM's own sshd is stopped and disabled during setup.
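Once you have a shell on the base VM (for example via the cloud provider's console), the standard Docker CLI gets you into the workload container; cbnested is the container name visible in the docker run line in the log above:

```shell
# List running containers; the nested workload container is named "cbnested"
sudo docker ps

# Open an interactive shell inside it (plain Docker, nothing CloudBench-specific)
sudo docker exec -it cbnested bash
```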
Good luck!