# HOWTO: Preparing your cloud to be driven by CBTOOL
The CBTOOL Orchestrator Node (ON) will require access to every newly created instance (just once). If you are running the ON inside the cloud, this should not be a problem. If you are running your CBTOOL Orchestrator Node outside of the cloud, please follow these instructions.
For each cloud, you will need basically four pieces of information:
a) Access info (i.e., URL for the API endpoint)
b) Authentication info (i.e., username/password, tokens)
c) Location info (e.g., Region or Availability Zone)
d) The name or identifier of at least one base (unconfigured) Ubuntu or RHEL/CentOS/Fedora image to be used later for the creation of the workloads (e.g., "ami-a9d276c9" on EC2's "us-west-2" or "ubuntu-1604-xenial-v20161221" on Google Compute Engine). A good example of how to "override" the default parameters for "roles" (such as `imageid1`) directly in your private configuration file can be seen here.
- Amazon EC2
- OpenStack
- Google Compute Engine
- DigitalOcean
- Docker/Swarm (Parallel Docker Manager)
- LXD/LXC (Parallel Container Manager)
- Kubernetes
- Libvirt (Parallel Libvirt Manager)
- VMWare vCloud
- CloudStack
- SoftLayer
## Amazon EC2
- The CBTOOL Orchestrator Node (ON) is supposed to have network access to both the EC2 API and to the instantiated VMs (once created), through either their private or public IP addresses.
- Pieces of information needed for your [private configuration file](https://github.com/ibmcb/cbtool/wiki/FAQ-S#wiki-sq2):
  a) AWS access key (EC2_ACCESS)
  b) EC2 security group (EC2_SECURITY_GROUPS)
  c) AWS secret key (EC2_CREDENTIALS)
  d) EC2 Region, the default being us-east-1 (EC2_INITIAL_VMCS)
  e) The name of a user that CBTOOL will use to log in on the instances (EC2_LOGIN). In the case of EC2, if the user account does not already exist on the instance at boot time, it will be created by `cloud-init`.
- IMPORTANT: In the most current version of the code, SSH key pairs are automatically created and managed by CBTOOL. If you insist on using your own keys (**NOT RECOMMENDED**), then there are two additional parameters that will require changes:
  I) Create a new key pair on EC2 (e.g., cbkey) and download the private key file to ~/cbtool/credentials/ (don't forget to chmod 600) (EC2_KEY_NAME)
  II) Just repeat the key name from item I) (EC2_SSH_KEY_NAME). It is complicated :-) but it can be a lot more than really complicated :-) (it depends mostly on your cloud's capabilities and configurations).
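Putting the pieces above together, a minimal EC2 entry in the private configuration file might look like the following sketch, mirroring the CLOUDOPTION section format used in the OpenStack example further below (all values are placeholders):

```
[USER-DEFINED : CLOUDOPTION_MYEC2]
EC2_ACCESS = AKIAIOSFODNN7EXAMPLE                 # AWS access key (placeholder)
EC2_CREDENTIALS = wJalrXUtnFEMIK7MDENGEXAMPLEKEY  # AWS secret key (placeholder)
EC2_SECURITY_GROUPS = default                     # an already existing security group
EC2_INITIAL_VMCS = us-east-1                      # EC2 Region
EC2_LOGIN = ubuntu                                # login user; created by cloud-init if absent
```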
NEXT STEP: Proceed to the section Preparing a VM to be used with CBTOOL on a real cloud
## OpenStack
- The CBTOOL Orchestrator Node (ON) is supposed to have network access to both the nova API endpoints and to the instantiated VMs (once created), through their fixed IP addresses.
- Currently, we are transitioning from a "native" cloud adapter for OpenStack (OSK) to a "Libcloud"-based one (OS). For now, we recommend the use of the native adapter. More information can be found in the FAQ.
- Pieces of information needed for your private configuration file:
  a) IP address for the nova API endpoint (OSK_ACCESS). If the URL is reachable through a hostname that contains dashes (-), please replace those with the word _dash_. For instance, OSK_ACCESS=http://my-cloud-controller.mycloud.com:5000/v2.0/ should be rewritten as OSK_ACCESS=http://my_dash_cloud_dash_controller.mycloud.com:5000/v2.0/
  b) API username, password and tenant name (OSK_CREDENTIALS). Normally, this is simply a triple <user>-<password>-<tenant> (e.g., admin-temp4now-admin). If HTTPS access is required, then the parameter should be <user>-<password>-<tenant>-<cacert> (path to certificate) instead (e.g., OSK_CREDENTIALS = admin-abcdef-admin-/etc/openstack/openstack.crt).
  c) The name of an already existing security group, obtained with `openstack security group list` (OSK_SECURITY_GROUPS)
  d) The name of an already existing Region (usually, just "RegionOne") (OSK_INITIAL_VMCS)
  e) The name of a user that CBTOOL will use to log in on the instances (OSK_LOGIN). In the case of OpenStack, if the user account does not already exist on the instance at boot time, it will be created by `cloud-init`.
  f) The name of an already existing network, obtained with the command `openstack network list` (OSK_NETNAME)
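Putting a) through f) together, an OSK entry with explicit credentials might look like the following sketch (all values are placeholders):

```
[USER-DEFINED : CLOUDOPTION_MYOSK]
OSK_ACCESS = http://my_dash_cloud_dash_controller.mycloud.com:5000/v2.0/  # dashes replaced by _dash_
OSK_CREDENTIALS = admin-temp4now-admin   # <user>-<password>-<tenant>
OSK_SECURITY_GROUPS = default
OSK_INITIAL_VMCS = RegionOne
OSK_NETNAME = private                    # an already existing network (placeholder)
OSK_LOGIN = ubuntu
```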
- IMPORTANT: It is also possible for a user to just leverage an already functional "credentials file" (e.g., `~/.stackrc` for OpenStack). In this case, a) (OSK_ACCESS) and b) (OSK_CREDENTIALS) become much simpler. Here follows an example:
```
stack@osk4cbvm1:~$ cat ~/.stackrc
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=temp4now
export OS_AUTH_URL=http://100.100.0.10/identity/v3
export OS_NO_CACHE=1
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_ID=default
export OS_INTERFACE=public
```
And then in the private configuration file the user simply specifies:
```
[USER-DEFINED : CLOUDOPTION_MYOSK]
OSK_ACCESS = ~/.stackrc
OSK_CREDENTIALS = AUTO
OSK_INITIAL_VMCS = <same as above>
OSK_SECURITY_GROUPS = <same as above>
OSK_NETNAME = <same as above>
OSK_SSH_KEY_NAME = <same as above>
OSK_KEY_NAME = <same as above>
OSK_LOGIN = <same as above>
```
NEXT STEP: Proceed to the section Preparing a VM to be used with CBTOOL on a real cloud
## Google Compute Engine
- Execute `gcloud auth login --no-launch-browser`. This command will output a URL that has to be accessed from a browser. It will produce an authentication string that has to be pasted back at the command prompt.
- Execute `gcloud auth application-default login`. This command will also output a URL that has to be accessed from a browser. It will produce an authentication string that has to be pasted back at the command prompt.
- Execute `gcloud config set project YOUR-PROJECT-ID`, where YOUR-PROJECT-ID is the ID of the project.
- Test the success of the authentication configuration by running a command such as `gcloud compute machine-types list`.
- Pieces of information needed for your [private configuration file](https://github.com/ibmcb/cbtool/wiki/FAQ-S#wiki-sq2):
  a) The project name where instances will be effectively run, and the project name that houses the images, both as a comma-separated string pair (this can be obtained with "gcloud info") (GCE_ACCESS)
  b) Google Compute Engine zone, the default being us-east1-b (GCE_INITIAL_VMCS)
  c) The name of a user that CBTOOL will use to log in on the instances (GCE_LOGIN). In the case of GCE, if the user account does not already exist on the instance at boot time, it will be created by `cloud-init`.
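An assembled GCE entry might look like the following sketch (project names and login are placeholders; per a) above, GCE_ACCESS pairs the project where instances run with the project housing the images):

```
[USER-DEFINED : CLOUDOPTION_MYGCE]
GCE_ACCESS = my-instance-project,my-image-project
GCE_INITIAL_VMCS = us-east1-b
GCE_LOGIN = ubuntu        # created by cloud-init if absent
```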
NEXT STEP: Proceed to the section Preparing a VM to be used with CBTOOL on a real cloud
## DigitalOcean
TBD
NEXT STEP: Proceed to the section Preparing a VM to be used with CBTOOL on a real cloud
## Docker/Swarm (Parallel Docker Manager)
- The CBTOOL Orchestrator Node (ON) requires network access to all Docker daemons running on each host.
- Each Docker daemon should be listening on a TCP port, which allows a remote client (CBTOOL) to establish communication and issue commands to it. This is not the default configuration for a newly installed Docker engine (by default, it only listens on `/var/run/docker.sock`), and thus a change in the start configuration options will most likely be required. Basically, you need to make use of the option `-H`. For instance, if your Docker daemon is managed by systemd (Ubuntu/CentOS/Fedora), you will have to change the file `/lib/systemd/system/docker.service` and make sure that the `ExecStart` parameter contains a string like `-H tcp://0.0.0.0:2375`, as shown in the sketch below.
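A minimal sketch of the relevant change, assuming the stock Ubuntu unit file (the path and the remaining ExecStart flags may differ on your system):

```
# /lib/systemd/system/docker.service (excerpt)
[Service]
# keep the local socket and additionally listen on TCP port 2375
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
```

After editing the unit file, run `sudo systemctl daemon-reload` followed by `sudo systemctl restart docker` so that the new listener takes effect.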
- CBTOOL will attempt to SSH into the running Docker instances. This will require an image that has a running SSH daemon ready. Examples can be found at https://hub.docker.com/r/ibmcb/cbtoolbt-ubuntu/ and https://hub.docker.com/r/ibmcb/cbtoolbt-phusion/.
- If multiple Docker hosts are used, make sure that the containers can communicate through the (Docker) overlay network (https://docs.docker.com/engine/userguide/networking/get-started-overlay/).
- Pieces of information needed for your [private configuration file](https://github.com/ibmcb/cbtool/wiki/FAQ-S#wiki-sq2):
  a) Comma-separated connection URLs in the format tcp://<IP>:<PORT> (PDM_ACCESS)
  b) A "Docker Region", for now just set to "world" (PDM_INITIAL_VMCS)
  c) Docker network name (PDM_NETNAME)
  d) The name of a user already existing on the Docker images that CBTOOL will use to log in on these (PDM_LOGIN)
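Assembled into a CLOUDOPTION section, this might look like the following sketch (all values are placeholders):

```
[USER-DEFINED : CLOUDOPTION_MYPDM]
PDM_ACCESS = tcp://10.1.0.1:2375,tcp://10.1.0.2:2375  # one URL per Docker host
PDM_INITIAL_VMCS = world
PDM_NETNAME = mynetwork      # an existing Docker (overlay) network
PDM_LOGIN = ubuntu           # user baked into the images (assumption)
```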
- IMPORTANT: By default, during the first execution, the CBTOOL "PDM" Cloud Adapter will try to pull pre-configured Docker images from https://hub.docker.com/r/ibmcb/.
NEXT STEP: Proceed to the section Preparing a VM to be used with CBTOOL on a real cloud
## LXD/LXC (Parallel Container Manager)
- The CBTOOL Orchestrator Node (ON) requires network access to all LXD daemons running on each host. Install a relatively recent version of the lxd package (e.g., `sudo apt-get install lxd lxd-client` on Ubuntu 16.04 will install lxd version 2.X). Don't forget to run `lxd init` on all hosts after the initial installation.
- The CBTOOL ON will also require passwordless root access via SSH to all hosts running an LXD daemon. We fully agree that this is a far from acceptable situation, but until we have a "blessed" (by the LXD community) way of performing instance port mapping on the hosts, we have to rely on rinetd.
- The package rinetd (`sudo apt-get install rinetd` / `sudo yum install rinetd`) has to be installed on all hosts running an LXD daemon.
- CBTOOL will attempt to SSH into the running instances. This will require an image that has a running SSH daemon ready, such as `ubuntu:16.04` or `images:fedora/24` (e.g., execute `sudo lxc image copy ubuntu:16.04 local:` on each host), from https://us.images.linuxcontainers.org.
- If multiple LXD hosts are used, make sure that the containers can communicate through some type of overlay network (useful example: https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/).
- Pieces of information needed for your [private configuration file](https://github.com/ibmcb/cbtool/wiki/FAQ-S#wiki-sq2):
  a) Comma-separated connection URLs in the format https://<IP>:<PORT> (PCM_ACCESS)
  b) A password required to connect to each LXD host (PCM_CREDENTIALS)
  c) An "LXD Region", for now just set to "world" (PCM_INITIAL_VMCS)
  d) LXD network name, typically `lxdbr0` (PCM_NETNAME)
  e) The name of a user already existing on the container images that CBTOOL will use to log in on these (PCM_LOGIN)
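An assembled PCM section might look like the following sketch (addresses, port and password are placeholders; 8443 is LXD's customary API port):

```
[USER-DEFINED : CLOUDOPTION_MYPCM]
PCM_ACCESS = https://10.1.0.1:8443,https://10.1.0.2:8443
PCM_CREDENTIALS = temp4now   # LXD trust password (placeholder)
PCM_INITIAL_VMCS = world
PCM_NETNAME = lxdbr0
PCM_LOGIN = ubuntu           # user existing on the container images (assumption)
```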
NEXT STEP: Proceed to the section Preparing a VM to be used with CBTOOL on a real cloud
## Kubernetes
- The CBTOOL Orchestrator Node (ON) requires network access to all worker nodes on the Kubernetes cluster.
- By default, the (Docker) images used by the different Virtual Application types are pulled from the "ibmcb" repository on Docker Hub. This can be changed by altering the parameter `IMAGE_PREFIX` in the section `[VM_DEFAULTS : KUB_CLOUDCONFIG]` of your private configuration file, as sketched below.
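For instance, a hypothetical override pointing at your own registry could look like this (the registry path is a placeholder; only the parameter and section names come from the paragraph above):

```
[VM_DEFAULTS : KUB_CLOUDCONFIG]
IMAGE_PREFIX = myregistry.example.com/mycb
```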
- Pieces of information needed for your [private configuration file](https://github.com/ibmcb/cbtool/wiki/FAQ-S#wiki-sq2):
  a) Path to your kubeconfig file on the Orchestrator Node (KUB_ACCESS)
  b) Simply set the credentials to `NOTUSED` (KUB_CREDENTIALS)
  c) A "Region", for now just set to "world" (KUB_INITIAL_VMCS)
  d) A Kubernetes network name, for now just set to "default" (KUB_NETNAME)
  e) The name of a user already existing on the Docker images that CBTOOL will use to log in on these (KUB_LOGIN). By default, the images from the "ibmcb" account on Docker Hub use "ubuntu".
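An assembled KUB section might look like the following sketch (the kubeconfig path is a placeholder):

```
[USER-DEFINED : CLOUDOPTION_MYKUB]
KUB_ACCESS = /home/myuser/.kube/config
KUB_CREDENTIALS = NOTUSED
KUB_INITIAL_VMCS = world
KUB_NETNAME = default
KUB_LOGIN = ubuntu
```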
NEXT STEP: Proceed to the section Preparing a VM to be used with CBTOOL on a real cloud
## Libvirt (Parallel Libvirt Manager)
- The CBTOOL Orchestrator Node (ON) requires network access to all libvirt daemons running on each host. Install a relatively recent version of Libvirt and its dependencies (e.g., `sudo apt-get install libvirt-dev libvirt-bin python-libvirt libvirt-clients qemu-utils genisoimage` on Ubuntu 17.10 will install version 3.X).
- Make sure libvirt is properly configured, with all authentication disabled, and listening to TCP requests on port 16509 (the default). On Ubuntu, the following set of commands should take care of it:

```
sudo sed -i -e "s/#listen_tcp/listen_tcp/g" /etc/libvirt/libvirtd.conf
sudo sed -i -e "s/#listen_tls/listen_tls/g" /etc/libvirt/libvirtd.conf
sudo sed -i -e "s/#listen_addr/listen_addr/g" /etc/libvirt/libvirtd.conf
sudo sed -i -e "s/192.168.0.1/0.0.0.0/g" /etc/libvirt/libvirtd.conf
sudo sed -i -e "s/#auth_tcp/auth_tcp/g" /etc/libvirt/libvirtd.conf
sudo sed -i -e "s/#auth_unix_ro/auth_unix_ro/g" /etc/libvirt/libvirtd.conf
sudo sed -i -e "s/#auth_unix_rw/auth_unix_rw/g" /etc/libvirt/libvirtd.conf
sudo sed -i -e "s/0770/0777/g" /etc/libvirt/libvirtd.conf
sudo sed -i -e "s/sasl/none/g" /etc/libvirt/libvirtd.conf
sudo sed -i -e "s/#security_driver =.*/security_driver = \"none\"/g" /etc/libvirt/qemu.conf
sudo sed -i -e "s/#libvirtd_opts.*/libvirtd_opts=\"-l\"/g" /etc/default/libvirtd
sudo systemctl restart libvirtd
```
- Make sure that libvirt has at least one storage pool defined. By default, CBTOOL will attempt to use the `default` storage pool. In case it is not yet set, the following sequence of commands would take care of it:

```
sudo mkdir -p /var/lib/libvirt/images/
sudo virsh pool-define-as default dir - - - - /var/lib/libvirt/images/
sudo virsh pool-autostart default
sudo virsh pool-start default
```
- The CBTOOL ON will also require passwordless root access via SSH to all hosts running a Libvirt daemon. We fully agree that this is a far from acceptable situation, but until we have a "blessed" (by the Libvirt community) way of performing instance port mapping on the hosts, we have to rely on rinetd.
- The package rinetd (`sudo apt-get install rinetd` / `sudo yum install rinetd`) has to be installed on all hosts running a Libvirt daemon.
- CBTOOL will attempt to SSH into the running instances. For instance, download an Ubuntu cloud image or CentOS cloud image (e.g., in qcow2 format) and place it on the path of the libvirt storage pool (by default `/var/lib/libvirt/images`). Do it for each individual Libvirt host that you intend to use (please remember to restart the libvirt daemon after adding new images to its storage pool); see the sketch after this list.
- If multiple Libvirt hosts are used, make sure that the VMs can communicate through some type of overlay network (useful example: https://www.packtpub.com/mapt/book/networking_and_servers/9781784399054/9/ch09lvl1sec65/configuring-open-vswitch-tunnels-with-vxlan).
- If a single Libvirt host is used, just make sure that the NAT'ed virtual network `default` is defined and started (confirm with `virsh net-list --all`).
- Pieces of information needed for your private configuration file:
  a) Comma-separated connection URLs in the format qemu+tcp://<IP>/system (PLM_ACCESS)
  b) A "Libvirt Region", for now just set to "world" (PLM_INITIAL_VMCS)
  c) Libvirt network name, typically `default` (PLM_NETNAME)
  d) Libvirt storage pool name, typically `default` (PLM_POOLNAME)
  e) The name of a user already existing on the VM images that CBTOOL will use to log in on these (PLM_LOGIN)
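As an illustration of the image-staging step above, a hedged sketch (the image URL and release are assumptions; any Ubuntu or CentOS cloud image in qcow2 format will do):

```
# run on each Libvirt host; URL/release are placeholders
cd /var/lib/libvirt/images
sudo wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
sudo systemctl restart libvirtd   # let libvirt pick up the new image
```

And an assembled PLM section might look like the following sketch (host addresses and login are placeholders):

```
[USER-DEFINED : CLOUDOPTION_MYPLM]
PLM_ACCESS = qemu+tcp://10.1.0.1/system,qemu+tcp://10.1.0.2/system
PLM_INITIAL_VMCS = world
PLM_NETNAME = default
PLM_POOLNAME = default
PLM_LOGIN = ubuntu   # user existing on the VM images (assumption)
```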
NEXT STEP: Proceed to the section Preparing a VM to be used with CBTOOL on a real cloud
## VMWare vCloud
TBD
NEXT STEP: Proceed to the section Preparing a VM to be used with CBTOOL on a real cloud
## CloudStack
TBD
NEXT STEP: Proceed to the section Preparing a VM to be used with CBTOOL on a real cloud
## SoftLayer
- The CBTOOL Orchestrator Node (ON) is supposed to have network access to both the SoftLayer API and to the instantiated VMs (once created), through their backend IP addresses.
- Pieces of information needed for your [private configuration file](https://github.com/ibmcb/cbtool/wiki/FAQ-S#wiki-sq2):
  a) Username and API key (SLR_CREDENTIALS)
  b) SoftLayer Data Center, the default being dal05 (SLR_INITIAL_VMCS)
  c) The name of a user already existing on the VM images that CBTOOL will use to log in on these (SLR_LOGIN)
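An assembled SLR section might look like the following sketch (username and API key are placeholders; the separator between them is an assumption, analogous to the other adapters):

```
[USER-DEFINED : CLOUDOPTION_MYSLR]
SLR_CREDENTIALS = myuser-0123456789abcdef   # username and API key (placeholders)
SLR_INITIAL_VMCS = dal05
SLR_LOGIN = ubuntu                          # user existing on the VM images (assumption)
```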