Merge pull request #75 from mapuri/docs
Add write up for using cluster manager in baremetal environment
mapuri committed Apr 1, 2016
2 parents 1d8e10b + 6302969 commit 565fc20
Showing 3 changed files with 70 additions and 8 deletions.
6 changes: 5 additions & 1 deletion management/README.md
@@ -1,5 +1,9 @@
## 3 steps to Contiv Cluster Management

If you are trying cluster manager with baremetal hosts or a personal VM setup, follow [this link](./baremetal.md) to set up the hosts. After that you can manage the cluster as described in [step 3](#3-login-to-the-first-node-to-manage-the-cluster) below.

To try it with the built-in vagrant-based VM environment, continue reading.

### 0. Ensure correct dependencies are installed
- docker 1.9 or higher
- vagrant 1.7.3 or higher
@@ -23,7 +27,7 @@ sudo usermod -a -G docker `id -un`

### 2. Launch three vagrant nodes.

**Note:** If you look at the project's `Vagrantfile`, you will notice that all the vagrant nodes (except for the first node) boot up with stock centos7.1 os and a `serf` agent running. `serf` is used as the node discovery service. This is intentional to meet the goal of limiting the amount of services that user needs to setup to start bringing up a cluster and hence making management easier.
**Note:** If you look at the project's `Vagrantfile`, you will notice that all the vagrant nodes (except for the first node) boot up with a stock CentOS 7.2 OS and a running `serf` agent. `serf` is used as the node discovery service. This is intentional, to meet the goal of limiting the number of services that a user needs to set up to start bringing up a cluster, and hence to make management easier.
```
cd ../..
CONTIV_NODES=3 vagrant up
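# Optional sanity check (a hypothetical example): once the VMs are up, the serf
# agents described in the note above should see each other. "cluster-node2" is a
# placeholder for whatever machine names the Vagrantfile assigns.
vagrant status
vagrant ssh cluster-node2 -c "serf members"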
14 changes: 7 additions & 7 deletions management/ROADMAP.md
@@ -12,27 +12,27 @@ have. They shall be tracked through github issues.
- [ ] cluster manager to allow changing some of the configuration without requiring
a restart.
- [ ] auto commission of nodes
- [ ] cluster manager to accept values for ansible specific group-vars etc as part of
- [x] cluster manager to accept values for ansible specific group-vars etc as part of
command line
- [ ] ability to assign roles/group to commissioned nodes both statically and dynamically
- [ ] harden the node lifecycle especially to deal with failures
- [ ] ansible playbooks to provision
- [ ] netmaster/netplugin service
- [x] netmaster/netplugin service
- [ ] host level network configuration like bundling NICs
- [ ] ovs
- [x] ovs
- [ ] volmaster/volplugin service
- [ ] ceph
- [x] etcd datastore
- [ ] consul datastore
- [ ] VIP service for high availability. haproxy??
- [x] VIP service for high availability.
- [x] docker stack including daemon, swarm
- [x] orca containers
- [ ] cluster manager
- [ ] collins
- [x] cluster manager
- [x] collins
- [ ] mysql over ceph storage
- [ ] what else?
- [ ] ansible playbooks for upgrade, cleanup and verify the above services
- [ ] add system-tests
- [x] add system-tests
- [x] configuration steps for control/first node. The first node is special, so we need a
special way to commission it. For instance, collins is started as a container on it;
we need to figure out a way to keep it running when the control node is commissioned
58 changes: 58 additions & 0 deletions management/baremetal.md
@@ -0,0 +1,58 @@
## Preparing baremetal or personal VM environments for Cluster Management

This document goes through the steps to prepare a baremetal or personal VM setup for cluster management.

**Note:**
- Unless explicitly mentioned, all the steps below are performed by logging in to the same host, referred to as the *control host* below.
- Right now cluster manager is tested on CentOS 7.2. More OS variations shall be added in the future.

### 0. Ensure the following prerequisites are met on the control host
- ansible 2.0 or higher is installed.
- git is installed.
- a management user has been created. Let's call that user **cluster-admin** from now on.
- note that `cluster-admin` can be an existing user.
- this user needs to have **passwordless sudo** access configured. You can use the `visudo` tool for this.
- an ssh key has been generated for `cluster-admin`. You can use the `ssh-keygen` tool for this.
- the public key of the `cluster-admin` user is added to all other hosts in your setup. You can use `ssh-copy-id cluster-admin@<hostname>` for this, where `<hostname>` is the name of a host in your setup where `cluster-admin` is being added as an authorized user. A consolidated sketch of these steps is shown below.
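
Below is a minimal sketch of the user-setup prerequisites above, assuming a fresh CentOS 7.2 host; `node2` is a placeholder hostname, so substitute the real host names in your setup.
```
# Create the management user (skip if reusing an existing user)
sudo useradd -m cluster-admin

# Grant passwordless sudo via a sudoers drop-in (same effect as editing with visudo)
echo 'cluster-admin ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/cluster-admin
sudo chmod 0440 /etc/sudoers.d/cluster-admin

# As cluster-admin: generate an ssh key and copy it to every other host
su - cluster-admin
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
ssh-copy-id cluster-admin@node2   # repeat for each host in the setup
```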

### 1. Download and install cluster manager on the control host
```
# Log in as the `cluster-admin` user before running the following commands
git clone https://github.com/contiv/ansible.git
cd ansible
# Create inventory file
echo [cluster-control] > /tmp/hosts
echo node1 ansible_host=127.0.0.1 >> /tmp/hosts
# Install cluster manager
ansible-playbook --key-file=~/.ssh/id_rsa -i /tmp/hosts ./site.yml
```
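
For reference, the two `echo` commands above leave `/tmp/hosts` with the following contents, placing the control host itself (reachable at `127.0.0.1`) in the `cluster-control` group:
```
[cluster-control]
node1 ansible_host=127.0.0.1
```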

### 2. Set up cluster manager configuration on the control host
Edit the cluster manager configuration file that is created at `/etc/default/clusterm/clusterm.conf` to set up the user and playbook-location information. A sample is shown below. `playbook-location` needs to be set to the path of the ansible directory cloned in the previous step, `user` needs to be set to the name of the `cluster-admin` user, and `priv_key_file` is the location of that user's `id_rsa` file.
```
# cat /etc/default/clusterm/clusterm.conf
{
    "ansible": {
        "playbook-location": "/home/cluster-admin/ansible/",
        "user": "cluster-admin",
        "priv_key_file": "/home/cluster-admin/.ssh/id_rsa"
    }
}
```
After the changes look good, restart cluster manager
```
sudo systemctl restart clusterm
```
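
To confirm the daemon came back up with the new configuration, standard systemd tooling can be used; this assumes the playbook installed `clusterm` as a systemd service, which the restart command above implies:
```
sudo systemctl status clusterm
# inspect recent logs, e.g. if a typo in clusterm.conf keeps the service from starting
sudo journalctl -u clusterm --since "5 minutes ago"
```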

### 3. Provision the rest of the nodes for discovery from the control host
Cluster manager uses serf as the discovery service for node-health monitoring and ease of management. Here we will provision all the hosts to be added to the discovery service. The command takes `<host-ip>` as an argument. This is the IP address of the interface (also referred to as the control interface) connected to the subnet designated for carrying the traffic of infrastructure services like serf, etcd, swarm etc.
```
clusterctl discover <host-ip>
```
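
With several hosts this can be wrapped in a small shell loop; the addresses below are placeholders for the control-interface IPs of the hosts in your own subnet:
```
# hypothetical control-interface IPs; substitute your own
for ip in 192.168.2.10 192.168.2.11 192.168.2.12; do
    clusterctl discover "$ip"
done
```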

**Note**: Once the above command has been run for a host, that host should start showing up in the `clusterctl nodes get` output within a few minutes.

### 4. Ready to rock and roll!
All set now; you can follow the cluster manager workflows [described here](./README.md#get-list-of-discovered-nodes).
