Commit

xxx
anandhakumarpalanisamy committed Aug 13, 2020
2 parents 549a87d + 46a890d commit 4f29218
Showing 14 changed files with 149 additions and 6 deletions.
8 changes: 8 additions & 0 deletions README.md
@@ -61,6 +61,14 @@ python version = 3.7.4 (default, Jul 9 2019, 18:13:23) [Clang 10.0.1 (clang-100

- GlusterFS is used as persistent storage for all Docker services hosted by an organization. It is required to have a separate GlusterFS cluster in order to run this project on each of the **remote machines** that will host the HLF. We have created an easily deployable package for creating a GlusterFS cluster. Please check [https://github.com/bityoga/mysome_glusterfs] and follow the ReadMe there!

## ansible-semaphore Setup Instructions

These instructions apply only when using ansible-semaphore for deployment.

**Refer:** [ansible-semaphore-setup-instructions](/wiki/semaphore_instructions/)

For the normal deployment process, ignore this section and follow the instructions below.

## Configuration
There are very few parameters to be configured currently. All configurations are made inside *group_vars/all.yml*.
- **GlusterFS Setup** !Required
Binary file added images/semaphore_1_create_key.png
Binary file added images/semaphore_2_import_repository.png
Binary file added images/semaphore_3_create_remote_inventory.png
Binary file added images/semaphore_4_all_other_tasks.png
Binary file added images/semaphore_4_mountfs_task.png
Binary file added images/semaphore_4_task_list.png
1 change: 1 addition & 0 deletions roles/hlf/peer/tasks/main.yml
@@ -97,6 +97,7 @@
- "CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS={{couchdb.name}}_{{item.name}}:5984"
- "CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME={{couchdb.name}}"
- "CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD={{couchdb.password}}"
- "CORE_CHAINCODE_BUILDER=hyperledger/fabric-ccenv:{{item.tag}}"
working_dir: "{{item.path}}"
placement:
constraints:
6 changes: 4 additions & 2 deletions roles/hlf_explorer/tasks/main.yml
@@ -104,7 +104,9 @@
when: hlf_explorer.switch == "on"


- shell: ls -tr /root/hlft-store/orgca/admin/msp/keystore/*_sk | tail -1
- name: Register admin secret key name
become: yes
shell: ls -tr /root/hlft-store/orgca/admin/msp/keystore/*_sk | tail -1
register: admin_secret_key_file

- name: Copy and Rename admin secret key '*_sk' file to a common name "admin_sk.key" which is specified in connection profile
@@ -346,4 +348,4 @@
# ignore_errors: yes
# raw:
# 'docker exec -it $(docker ps -qf "name=^{{hlf_explorer_db.name}}") "./createdb.sh" 2>&1'
# when: hlf_explorer.switch == "on" and inventory_hostname in groups.swarm_manager_prime
# when: hlf_explorer.switch == "on" and inventory_hostname in groups.swarm_manager_prime
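The `ls -tr … | tail -1` pattern in the hunk above selects the most recently modified `*_sk` key file in the keystore. A minimal standalone sketch of the same trick, using a temporary directory as a stand-in for `/root/hlft-store/orgca/admin/msp/keystore`:

```shell
# -t sorts by modification time (newest first), -r reverses to oldest-first,
# so 'tail -1' yields the newest matching file.
KEYSTORE=$(mktemp -d)                          # hypothetical keystore directory
touch -t 202001010000 "$KEYSTORE/older_sk"     # older key
touch -t 202001020000 "$KEYSTORE/newer_sk"     # newer key
NEWEST=$(ls -tr "$KEYSTORE"/*_sk | tail -1)
basename "$NEWEST"
```

This relies on modification times, so it picks whichever `*_sk` file the CA wrote last.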
71 changes: 71 additions & 0 deletions roles/mountfs/tasks/main.yml
@@ -0,0 +1,71 @@
---

# Setup GlusterFS as local mount on each node
# Install glusterfs-client
- name: Add new apt source
become: yes
apt_repository:
repo: ppa:gluster/glusterfs-{{ glusterd_version }}
filename: gluster.list
state: present
when: ansible_os_family == 'Debian'

# Install glusterfs-clients
- name: Add an RPM signing key, uses whichever key is at the URL
become: yes
rpm_key:
key: http://download.gluster.org/pub/gluster/glusterfs/{{ glusterd_version }}/rsa.pub
state: present
when: ansible_os_family == 'RedHat'

- name: Add new yum source
yum_repository:
name: glusterfs
description: glusterfs YUM repo
baseurl: https://download.gluster.org/pub/gluster/glusterfs/{{ glusterd_version }}/LATEST/RHEL/glusterfs-rhel8.repo
state: present
when: ansible_os_family == 'RedHat'

- name: Update repositories cache and install "glusterfs-client" and other required packages
become: yes
package:
name: "{{item}}"
update_cache: yes
state: present
loop:
- "glusterfs-client"


# Check if GlusterFS is already mounted, if yes, unmount it
- name: Unmount GlusterFS if already mounted
become: yes
mount:
path: /root/hlft-store
state: unmounted

# Remove the local mount point folder, if it exists, so that we get the right permissions every time
- name: Remove Mount Point
become: yes
file:
path: /root/hlft-store
state: absent

# Create the local mount point on each node for the GlusterFS mount
- name: Create Mount Point
become: yes
become_user: "root"
file:
path: /root/hlft-store
state: directory
owner: "root"
group: "root"
mode: 0750

# Mount the Gluster volume on /root/hlft-store
- name: Mount Gluster volume
become: yes
mount:
path: /root/hlft-store
src: "{{ gluster_cluster_host0 }}:/{{gluster_cluster_volume}}"
fstype: glusterfs
state: mounted
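The mountfs role above is roughly equivalent to the following manual steps on a Debian node. This is a sketch only: the host and volume values are placeholders, and the privileged commands are commented out because they require root and a reachable Gluster cluster.

```shell
# Placeholders standing in for gluster_cluster_host0 / gluster_cluster_volume
GLUSTER_HOST0="164.90.198.91"
GLUSTER_VOLUME="gfs0"
MOUNT_POINT="/root/hlft-store"
SRC="${GLUSTER_HOST0}:/${GLUSTER_VOLUME}"      # same src string the mount task builds
echo "$SRC"
# apt-get install -y glusterfs-client           # package task
# umount "$MOUNT_POINT" 2>/dev/null || true     # 'state: unmounted'
# rm -rf "$MOUNT_POINT"                         # 'state: absent' (reset permissions)
# install -d -o root -g root -m 0750 "$MOUNT_POINT"   # 'state: directory'
# mount -t glusterfs "$SRC" "$MOUNT_POINT"      # 'state: mounted'
```

The playbook additionally persists the mount in /etc/fstab, which the Ansible mount module handles for `state: mounted`.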
6 changes: 4 additions & 2 deletions roles/swarm/tasks/main.yml
@@ -3,6 +3,8 @@
# Setup and create a docker swarm, creating the relevant managers and slaves as described in inventory/hosts

- name: "Get docker info"
become: yes
become_user: "root"
shell: docker info
register: docker_info
changed_when: false
@@ -40,7 +42,7 @@
docker_swarm:
state: join
join_token: "{{ hostvars[groups.swarm_manager_prime[0]]['result'].swarm_facts.JoinTokens.Manager }}"
advertise_addr: "eth0:4567"
advertise_addr: "{{ ansible_host }}:4567"
remote_addrs: ["{{ hostvars[groups.swarm_manager_prime[0]]['ansible_host'] }}:2377" ]
retries: 3
delay: 15
@@ -53,7 +55,7 @@
docker_swarm:
state: join
join_token: "{{ hostvars[groups.swarm_manager_prime[0]]['result'].swarm_facts.JoinTokens.Worker }}"
advertise_addr: "eth0:4567"
advertise_addr: "{{ ansible_host }}:4567"
remote_addrs: "{{ hostvars[groups.swarm_manager_prime[0]]['ansible_host'] }}"
retries: 3
delay: 20
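The `advertise_addr` change above replaces the hard-coded `eth0` interface with each node's inventory address, so joins work on hosts whose primary interface is not `eth0`. In plain Docker CLI terms, the join these tasks perform looks roughly like this (a sketch; the token and addresses are placeholders, not values from this repository):

```shell
NODE_ADDR="10.0.0.12"          # placeholder for ansible_host of the joining node
MANAGER_ADDR="10.0.0.10"       # placeholder for the swarm_manager_prime host
JOIN_TOKEN="SWMTKN-placeholder"
JOIN_CMD="docker swarm join --token $JOIN_TOKEN --advertise-addr ${NODE_ADDR}:4567 ${MANAGER_ADDR}:2377"
echo "$JOIN_CMD"
# eval "$JOIN_CMD"   # requires a running Docker daemon and a real join token
```

Managers join via port 2377 on the prime manager, as the `remote_addrs` value in the task shows.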
4 changes: 2 additions & 2 deletions templates/hlf-explorer-network.json
@@ -50,7 +50,7 @@
"tlsCACerts": {
"path": "/tmp/crypto/peer1/tls-msp/tlscacerts/tls-tlsca-7054.pem"
},
"url": "grpcs://{{ ansible_default_ipv4.address }}:{{peer1.port}}",
"url": "grpcs://{{ hostvars[groups.swarm_manager_prime[0]]['ansible_host'] }}:{{peer1.port}}",
"grpcOptions": {
"ssl-target-name-override": "{{peer1.name}}"
}
@@ -59,7 +59,7 @@
"tlsCACerts": {
"path": "/tmp/crypto/peer2/tls-msp/tlscacerts/tls-tlsca-7054.pem"
},
"url": "grpcs://{{ ansible_default_ipv4.address }}:{{peer2.port}}",
"url": "grpcs://{{ hostvars[groups.swarm_manager_prime[0]]['ansible_host'] }}:{{peer2.port}}",
"grpcOptions": {
"ssl-target-name-override": "{{peer2.name}}"
}
59 changes: 59 additions & 0 deletions wiki/semaphore_instructions/README.md
@@ -0,0 +1,59 @@
# Semaphore config instructions
## Pre-requisites:
- Ensure that **ansible-semaphore is set up**. Please see [https://github.com/ansible-semaphore/semaphore] for further details on setting up ansible-semaphore. Once ansible-semaphore is installed, you can verify it by accessing its URL.

## ansible-semaphore Instructions:

- **1) Create Access keys** - Click "Key Store" in the side menu and click "create key" in the top-right corner.
    - **Github ssh key** - the ssh key of the github repository to be used
    - **Remote Machines ssh keys** - the ssh key used to access the resource machines of the cloud provider

![alt text](../../images/semaphore_1_create_key.png)

- **2) Import github repository** - Click "Playbook Repositories" in the side menu and click "create repository" in the top-right corner.
    - Import the **fabric_as_code** repository using the **Github ssh key** created in step 1.

![alt text](../../images/semaphore_2_import_repository.png)

- **3) Update Inventory** - Click "Inventory" in the side menu and click "create inventory" in the top-right corner.
    - Create one inventory for the **Remote machines** using the **Remote Machines ssh keys** created in step 1.

**Remote machine inventory example :**

![alt text](../../images/semaphore_3_create_remote_inventory.png)


- **Edit Remote machine inventory -** Click "edit inventory" on the **remote machine's inventory** and update the IP addresses of the machines.

![alt text](../../images/semaphore_3_edit_remote_machine_inventory.png)

- **4) Create Task Templates** - Click "Task Templates" in the side menu and click "new template" in the top-right corner.
    - Create 11 Task Templates for the 11 playbooks, as shown in the figure below:

![alt text](../../images/semaphore_4_task_list.png)


- **Extra CLI arguments for the 013.mount_fs.yml task -** ["-u","root","--extra-vars","gluster_cluster_host0='164.90.198.91' gluster_cluster_volume='gfs0'"]

    - Replace "root" with the username that has root access on the remote machines.
    - Replace "164.90.198.91" with the IP address of one of the GlusterFS machines.

![alt text](../../images/semaphore_4_mountfs_task.png)

- **Extra CLI arguments for all other tasks -** ["-u","root"] (replace "root" with the username that has root access on the remote machines)

![alt text](../../images/semaphore_4_all_other_tasks.png)
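The extra CLI arguments above are simply passed through by Semaphore to `ansible-playbook`. Running the mount task by hand would look roughly like this (a sketch; the user, IP, and volume name are the example values from above, and `eval` is commented out because it requires ansible and the repository checkout):

```shell
ANSIBLE_USER="root"
EXTRA_VARS="gluster_cluster_host0='164.90.198.91' gluster_cluster_volume='gfs0'"
CMD="ansible-playbook -u $ANSIBLE_USER --extra-vars \"$EXTRA_VARS\" 013.mount_fs.yml"
echo "$CMD"
# eval "$CMD"
```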

- **5) Run Tasks**
    - Run all 11 tasks one by one.
    - Click the green "run" button at the right of each task.
    - Click the green "run" button at the bottom-right corner of the "create task" pop-up.







