Merge pull request #75 from mapuri/docs
Add write-up for using cluster manager in a baremetal environment
Showing 3 changed files with 70 additions and 8 deletions.
## Preparing baremetal or personal VM environments for Cluster Management

This document walks through the steps to prepare a baremetal or personal VM setup for cluster management.

**Note:**
- Unless explicitly mentioned, all the steps below are performed by logging into the same host, referred to as the *control host* below.
- Right now we test cluster manager on CentOS 7.2. Support for more OS variants will be added in the future.
### 0. Ensure the following prerequisites are met on the control host
- ansible 2.0 or higher is installed.
- git is installed.
- a management user has been created. Let's call that user **cluster-admin** from now on.
  - note that `cluster-admin` can be an existing user.
  - this user needs to have **passwordless sudo** access configured. You can use the `visudo` tool for this.
- an SSH key has been generated for `cluster-admin`. You can use the `ssh-keygen` tool for this.
- the public key of the `cluster-admin` user is added to all other hosts in your setup. You can use `ssh-copy-id cluster-admin@<hostname>` for this, where `<hostname>` is the name of the host in your setup where `cluster-admin` is being added as an authorized user. A sketch of these commands is shown after this list.
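
A minimal sketch of the sudo and SSH key steps above, assuming a sudoers drop-in file and the default key path; `node2` is a hypothetical hostname standing in for a host in your setup:
```
# As a user with sudo access: grant cluster-admin passwordless sudo via a
# drop-in file, then validate the syntax with visudo's check mode.
echo 'cluster-admin ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/cluster-admin
sudo visudo -cf /etc/sudoers.d/cluster-admin

# As cluster-admin: generate an SSH key pair at the default location.
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''

# Copy the public key to every other host in the setup (repeat per host).
ssh-copy-id cluster-admin@node2
```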

### 1. Download and install cluster manager on the control host
```
# Login as the `cluster-admin` user before running the following commands
git clone https://github.com/contiv/ansible.git
cd ansible
# Create an inventory file
echo "[cluster-control]" > /tmp/hosts
echo "node1 ansible_host=127.0.0.1" >> /tmp/hosts
# Install cluster manager
ansible-playbook --key-file=~/.ssh/id_rsa -i /tmp/hosts ./site.yml
```
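
Assuming the playbook run completed without errors, the `clusterm` daemon should now be installed as a systemd service (the same service restarted in the next step). A quick sanity check:
```
sudo systemctl status clusterm
```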

### 2. Set up the cluster manager configuration on the control host
Edit the cluster manager configuration file created at `/etc/default/clusterm/clusterm.conf` to set the user and playbook-location information. A sample is shown below. `playbook-location` needs to be set to the path of the ansible directory cloned in the previous step, `user` needs to be set to the name of the `cluster-admin` user, and `priv_key_file` is the location of the `id_rsa` file of the `cluster-admin` user.
```
# cat /etc/default/clusterm/clusterm.conf
{
    "ansible": {
        "playbook-location": "/home/cluster-admin/ansible/",
        "user": "cluster-admin",
        "priv_key_file": "/home/cluster-admin/.ssh/id_rsa"
    }
}
```
After the changes look good, restart cluster manager:
```
sudo systemctl restart clusterm
```
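
If the restart fails or cluster manager misbehaves later, the service logs are the first place to look; this assumes the systemd journal, which is present on CentOS 7.2:
```
sudo journalctl -u clusterm -e
```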

### 3. Provision the rest of the nodes for discovery from the control host
Cluster manager uses Serf as a discovery service for node-health monitoring and ease of management. Here we provision all the hosts to be added to the discovery service. The command takes `<host-ip>` as an argument: the IP address of the interface (also referred to as the control interface) connected to the subnet designated for carrying the traffic of infrastructure services like Serf, etcd, Swarm, and so on.
```
clusterctl discover <host-ip>
```

**Note**: Once the above command is run for a host, it will start showing up in the `clusterctl nodes get` output within a few minutes.
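
For example, for a node whose control interface has the hypothetical address `192.168.2.10`, the discovery and verification steps would look like:
```
clusterctl discover 192.168.2.10
# After a few minutes the node should appear in the list:
clusterctl nodes get
```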

### 4. Ready to rock and roll!
All set now! You can follow the cluster manager workflows as [described here](./README.md#get-list-of-discovered-nodes).