Commit 3db623a

Introduce docs directory
We want to add more workloads (statefulset, daemonset), more storage
(CephFS), and the new OCM discovered applications. Each variant requires
specific instructions or example commands, so it is best done using one
file per variant, where you can experiment with the complete disaster
recovery flow for a single variant.

- Move the content we had in the main README.md to
  `docs/initial-setup.md` and `docs/ocm-rbd.md`.
- Move the content we had in `workloads/kubevirt/README.md` to
  `docs/ocm-kubevirt.md`.
- Add `docs/create-environment.md` with short instructions on how to
  create a testing environment for playing with the samples in this
  repo.
- Change `README.md` to a short overview linking to all other documents.

The new docs directory can be used next to create more structured
documentation like https://docs.readthedocs.io.

Signed-off-by: Nir Soffer <[email protected]>
1 parent c9c4928 commit 3db623a

File tree

6 files changed: +749 −538 lines


README.md (+15 −162)
````diff
@@ -1,171 +1,24 @@
 # ocm-ramen-samples
 
-OCM Stateful application samples, including Ramen resources.
+OCM Stateful application samples, including *Ramen* resources.
 
-## Initial setup
-
-1. Clone this git repository to get started:
-
-   ```
-   git clone https://github.com/RamenDR/ocm-ramen-samples.git
-   cd ocm-ramen-samples
-   ```
-
-1. Switch kubeconfig to point to the OCM Hub cluster:
-
-   ```
-   kubectl config use-context hub
-   ```
-
-1. Create DRClusters and DRPolicy
-
-   When using the ramen testing environment this is not needed, but if
-   you are using your own Kubernetes clusters you need to create the
-   resources.
-
-   Modify the DRCluster and DRPolicy resources in the `ramen` directory
-   to match the actual cluster names in your environment, and apply
-   the kustomization:
-
-   ```
-   kubectl apply -k ramen
-   ```
-
-   This creates DRPolicy and DRCluster resources in the cluster
-   namespace that can be viewed using:
-
-   ```
-   kubectl get drcluster,drpolicy
-   ```
-
-1. Set up the common OCM channel resources on the hub:
-
-   ```
-   kubectl apply -k channel
-   ```
-
-   This creates a Channel resource in the `ramen-samples` namespace that
-   can be viewed using:
-
-   ```
-   kubectl get channel ramen-gitops -n ramen-samples
-   ```
-
-## Sample applications
-
-The workloads directory provides samples that can be deployed on
-Kubernetes and OpenShift:
-
-- deployment - busybox deployment
-- kubevirt
-  - vm-pvc - PVC based VM
-  - vm-dv - DataVolume based VM
-  - vm-dvt - DataVolumeTemplate based VM
-
-## Deploying a sample application
-
-In this example we use the busybox deployment for a Kubernetes regional
-DR environment using RBD storage:
-
-    subscription/deployment-k8s-regional-rbd
-
-This application is deployed in the `deployment-rbd` namespace on the
-hub and managed clusters.
-
-You can use other overlays to deploy on other cluster types or use a
-different storage class. You can also create your own overlays based on
-the examples.
-
-1. Deploy an OCM application subscription on the hub:
+## Create an environment
 
-   ```
-   kubectl apply -k subscription/deployment-k8s-regional-rbd
-   ```
+The easiest way to start is to use the *Ramen* testing environment
+created by the *drenv* tool. See
+[create environment](docs/create-environment.md) to learn how to
+quickly create your disaster recovery playground.
 
-   This creates the required Subscription, Placement, and
-   ManagedClusterSetBinding resources for the deployment in the
-   `deployment-rbd` namespace and can be viewed using:
-
-   ```
-   kubectl get subscription,placement -n deployment-rbd
-   ```
-
-1. Inspect subscribed resources from the channel created in the same
-   namespace on the ManagedCluster selected by the Placement.
-
-   The busybox deployment Placement `status` can be viewed on the hub
-   using:
-
-   ```
-   kubectl get placement placement -n deployment-rbd
-   ```
-
-   The busybox deployment subscribed resources, like the pod and the
-   PVC, can be viewed on the ManagedCluster using (example
-   ManagedCluster `dr1`):
-
-   ```
-   kubectl get pod,pvc -n deployment-rbd --context dr1
-   ```
-
-## Undeploying a sample application
-
-To undeploy an application delete the subscription overlay used to
-deploy the application:
-
-```
-kubectl delete -k subscription/deployment-k8s-regional-rbd
-```
-
-## Enable DR for a deployed application
-
-1. Change the Placement to be reconciled by Ramen:
-
-   ```
-   kubectl annotate placement placement -n deployment-rbd \
-       cluster.open-cluster-management.io/experimental-scheduling-disable=true
-   ```
-
-1. Deploy a DRPlacementControl resource for the OCM application on the
-   hub, for example:
-
-   ```
-   kubectl apply -k dr/deployment-k8s-regional-rbd
-   ```
-
-   This creates a DRPlacementControl resource for the busybox deployment
-   in the `deployment-rbd` namespace and can be viewed using:
-
-   ```
-   kubectl get drpc -n deployment-rbd
-   ```
-
-   At this point the placement of the application is managed by *Ramen*.
-
-## Disable DR for a DR enabled application
-
-1. Ensure the placement is pointing to the cluster where the workload is
-   currently placed to avoid data loss if OCM moves the application to
-   another cluster.
-
-   The sample `placement` does not require any change, but if you are
-   using an application created by the OpenShift Console, you may need
-   to change the cluster name in the placement.
-
-1. Delete the drpc resource for the OCM application on the hub:
-
-   ```
-   kubectl delete -k dr/deployment-k8s-regional-rbd
-   ```
+## Initial setup
 
-   This deletes the DRPlacementControl resource for the busybox
-   deployment, disabling replication and removing replicated data.
+Before experimenting with disaster recovery we need to configure the
+clusters. See [initial setup](docs/initial-setup.md) to learn how to set
+up your environment.
 
-1. Change the Placement to be reconciled by OCM:
+## Experiments
 
-   ```
-   kubectl annotate placement placement -n deployment-rbd \
-       cluster.open-cluster-management.io/experimental-scheduling-disable-
-   ```
+After setting up your environment you can experiment with various
+workloads and storage types:
 
-   At this point the application is managed again by *OCM*.
+- Experiment with *OCM* managed [deployment](docs/ocm-rbd.md)
+- Experiment with *OCM* managed [virtual machine](docs/ocm-kubevirt.md)
````
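The enable/disable DR steps removed from the README above pair naturally as a toggle. A minimal sketch, not part of this commit: the commands are taken verbatim from the diff, but the wrapper functions are invented here, and `KUBECTL` defaults to `echo kubectl` so the sketch only prints the commands instead of requiring a hub cluster.

```shell
#!/bin/sh
# Sketch only: wraps the enable/disable DR commands from the README diff.
# KUBECTL defaults to "echo kubectl" so this prints the commands; set
# KUBECTL=kubectl to run them against a real hub cluster.
KUBECTL="${KUBECTL:-echo kubectl}"

enable_dr() {
    # Hand placement scheduling over to Ramen, then create the DRPC.
    $KUBECTL annotate placement placement -n deployment-rbd \
        cluster.open-cluster-management.io/experimental-scheduling-disable=true
    $KUBECTL apply -k dr/deployment-k8s-regional-rbd
}

disable_dr() {
    # Delete the DRPC, then give scheduling back to OCM; the trailing
    # "-" removes the annotation.
    $KUBECTL delete -k dr/deployment-k8s-regional-rbd
    $KUBECTL annotate placement placement -n deployment-rbd \
        cluster.open-cluster-management.io/experimental-scheduling-disable-
}

enable_dr
disable_dr
```

Note the asymmetry: enabling DR annotates before creating the DRPC, while disabling deletes the DRPC before removing the annotation, so exactly one controller owns the placement at any time.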

docs/create-environment.md (+185)
````diff
@@ -0,0 +1,185 @@
+# Creating a test environment
+
+This page will help you to set up an environment for experimenting with
+disaster recovery.
+
+## What you'll need
+
+- Bare metal or virtual machine with nested virtualization enabled
+- 8 CPUs or more
+- 20 GiB of free memory
+- 100 GiB of free space
+- Internet connection
+- Linux - tested on *Fedora* 37, 38, and 39
+- Non-root user with sudo privileges (all instructions are for a
+  non-root user)
+
+## Setting up your machine
+
+### Install libvirt
+
+Install the `@virtualization` group - on Fedora you can use:
+
+```
+sudo dnf install @virtualization
+```
+
+Enable the libvirtd service:
+
+```
+sudo systemctl enable libvirtd --now
+```
+
+Add yourself to the libvirt group (required for the minikube kvm2
+driver):
+
+```
+sudo usermod -a -G libvirt $(whoami)
+```
+
+Log out and log in again for the change above to take effect.
+
+### Install required packages
+
+On Fedora you can use:
+
+```
+sudo dnf install git make golang helm podman
+```
+
+### Clone the *Ramen* source locally
+
+```
+git clone https://github.com/RamenDR/ramen.git
+```
+
+Enter the `ramen` directory - all the commands in this guide assume you
+are in the ramen root directory:
+
+```
+cd ramen
+```
+
+### Create a python virtual environment
+
+To keep the ramen tools separate from your host python, we create a
+python virtual environment:
+
+```
+make venv
+```
+
+To activate the environment use:
+
+```
+source venv
+```
+
+To exit the virtual environment use the `deactivate` command.
+
+### Install required tools
+
+The drenv tool requires various tools for deploying the testing
+clusters.
+
+#### minikube
+
+On Fedora you can use:
+
+```
+sudo dnf install https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
+```
+
+Tested with version v1.31.1.
+
+#### kubectl
+
+See [Install and Set Up kubectl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)
+for details.
+
+Tested with version v1.30.2.
+
+#### clusteradm
+
+See [Install clusteradm CLI tool](https://open-cluster-management.io/getting-started/installation/start-the-control-plane/#install-clusteradm-cli-tool)
+for details.
+
+Version v0.8.1 or later is required.
+
+#### subctl
+
+See [Submariner subctl installation](https://submariner.io/operations/deployment/subctl/)
+for details.
+
+Version v0.17.0 or later is required.
+
+#### velero
+
+See [Velero Basic Install](https://velero.io/docs/v1.12/basic-install/)
+for details.
+
+Tested with version v1.12.2.
+
+#### virtctl
+
+```
+curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/v1.2.1/virtctl-v1.2.1-linux-amd64
+sudo install virtctl /usr/local/bin
+rm virtctl
+```
+
+## Starting the test environment
+
+Before using the `drenv` tool to start a test environment, you need to
+activate the python virtual environment:
+
+```
+source venv
+```
+
+Available environment files:
+
+- `envs/regional-dr.yaml` - regional DR for testing workloads using RBD
+  and CephFS storage
+- `envs/regional-dr-kubevirt.yaml` - regional DR for testing virtual
+  machines using RBD storage
+
+To start a Regional-DR environment use:
+
+```
+(cd test; drenv start envs/regional-dr.yaml)
+```
+
+Starting the environment takes 8-15 minutes, depending on your machine
+and internet connection.
+
+## Build the ramen operator image
+
+Build the *Ramen* operator container image:
+
+```
+make docker-build
+```
+
+> [!NOTE]
+> Select `docker.io/library/golang:1.21` when prompted.
+
+This builds the image `quay.io/ramendr/ramen-operator:latest`.
+
+## Deploy and configure the ramen operator
+
+To deploy the *Ramen* operator in the test environment:
+
+```
+ramenctl deploy test/envs/regional-dr.yaml
+ramenctl config test/envs/regional-dr.yaml
+```
+
+Your environment is ready!
+
+See [initial setup](initial-setup.md) to learn how to set it up for
+experimenting with disaster recovery.
+
+## Deleting the environment
+
+To stop and delete the minikube clusters use `drenv delete` with the
+same environment file used to start the environment:
+
+```
+(cd test; drenv delete envs/regional-dr.yaml)
+```
````
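The new document above asks the reader to install several tools by hand. As a hedged convenience, a small POSIX shell loop (not part of the commit; the tool names are taken from the diff above) can report which ones are already on `PATH`:

```shell
#!/bin/sh
# Sketch only: report which of the tools required by drenv (as listed in
# docs/create-environment.md above) are already installed.
check_tools() {
    for tool in minikube kubectl clusteradm subctl velero virtctl; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "found:   $tool"
        else
            echo "missing: $tool"
        fi
    done
}

check_tools
```

Note that this only checks presence, not the minimum versions the document calls out (e.g. clusteradm v0.8.1+, subctl v0.17.0+).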
