diff --git a/content/modules/ROOT/pages/pxe-osv-01.adoc b/content/modules/ROOT/pages/pxe-osv-01.adoc index 64c1694..6345ee3 100644 --- a/content/modules/ROOT/pages/pxe-osv-01.adoc +++ b/content/modules/ROOT/pages/pxe-osv-01.adoc @@ -29,7 +29,7 @@ The username is `{openshift_cluster_admin_username}` and the password is `{opens Red Hat OpenShift uses the `oc` cli utility. This utility has a similar syntax to kubectl, but with some OpenShift specific extensions. -Let’s look at the nodes that make up our cluster: +Let's look at the nodes that make up our cluster: [source,sh,role=execute] ---- @@ -79,14 +79,14 @@ Grab our StorageCluster specification: [source,sh,role=execute] ---- -curl -o px-spec.yaml 'https://install.portworx.com/3.1.6?operator=true&mc=false&kbver=1.29.10&ns=portworx&b=true&iop=6&s=%22type%3Dgp3%2Csize%3D50%22%2C%22&ce=aws&r=17001&c=px-cluster-443e64d8-f2c7-47d2-b81b-295567465a84&osft=true&stork=true&csi=true&tel=false&st=k8s' +curl -o $HOME/px-spec.yaml 'https://install.portworx.com/3.1.6?operator=true&mc=false&kbver=1.29.10&ns=portworx&b=true&iop=6&s=%22type%3Dgp3%2Csize%3D50%22%2C%22&ce=aws&r=17001&c=px-cluster-443e64d8-f2c7-47d2-b81b-295567465a84&osft=true&stork=true&csi=true&tel=false&st=k8s' ---- Take a look at our current manifest by using the cat utility: [source,sh,role=execute] ---- -cat px-spec.yaml +cat $HOME/px-spec.yaml ---- * Line 9 tells Portworx that we are installing on an Openshift cluster @@ -101,11 +101,17 @@ We can now apply the specification: [source,sh,role=execute] ---- -oc apply -f px-spec.yaml +oc apply -f $HOME/px-spec.yaml ---- -The install can take about 5 minutes. We can watch the containers come -up by running: +==== +[NOTE] + +The Portworx cluster pods can take up to 10 minutes to start. During this time, you will see pods restart. +This is expected behavior. 
+====
+
+We can watch the containers come up by running:
 
 [source,sh,role=execute]
 ----
@@ -115,20 +121,13 @@ watch oc -n portworx get pods
 
 When all three of the `px-cluster` pods have a Ready status of `1/1` we
 can press `ctrl-c` to exit out of our watch command.
 
-====
-[NOTE]
-
-The Portworx cluster pods can take up to 10 minutes to start. During this time, you will see pods restart.
-This is expected behavior.
-====
-
 === Task: Check the Portworx cluster status
 
 Portworx ships with a `pxctl` CLI utility that you can use for managing
-Portworx. We’ll cover this utility more in the labs here in a little
+Portworx. We'll cover this utility more in the labs here in a little
 bit!
 
-For now, we’ll run `sudo pxctl status` via `oc` in one of the Portworx
+For now, we'll run `sudo pxctl status` via `oc` in one of the Portworx
 pods to check the StorageCluster status.
 
 First, setup the `PX_POD` environment variable:
@@ -149,7 +148,7 @@ oc exec -it $PX_POD -n portworx -- /opt/pwx/bin/pxctl status --color
 
 We now have a 3-node Portworx cluster up and running!
 
-Let’s dive into our cluster status: - All 3 nodes are online and use
+Let's dive into our cluster status:
+
+* All 3 nodes are online and use
 Kubernetes node names as the Portworx node IDs.
 * Portworx detected the block device media type as
@@ -164,6 +163,7 @@ pxctl:
 
 [source,sh,role=execute]
 ----
 echo "alias pxctl='PX_POD=\$(oc get pods -l name=portworx -n portworx --field-selector=status.phase==Running | grep \"1/1\" | awk \"NR==1{print \$1}\") && oc exec \$PX_POD -n portworx -- /opt/pwx/bin/pxctl'" >> ~/.bashrc
+
 source ~/.bashrc
 ----
 
@@ -174,7 +174,6 @@ Now test out the alias:
 pxctl status --color
 ----
 
-
 == Storage Classes and Storage Profiles in Openshift
 
 Storage Classes are a Kubernetes concept that allows an administrator
@@ -221,7 +220,8 @@ First, let's set the `gp3-csi` StorageClass to no longer be the default:
 
 [source,sh,role=execute]
 ----
-oc patch storageclass gp3-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
+oc patch storageclass gp3-csi \
+  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
 ----
 
 Run the following command to create a new yaml file for the block-based
@@ -262,8 +262,7 @@ We also specified that the volumeBindingMode should be
 volume. See the
-https://docs.portworx.com/portworx-enterprise/3.1/platform/openshift/ocp-bare-metal/operations/storage-operations/manage-kubevirt-vms)[Portworx
-Documentation^] for further details.
+https://docs.portworx.com/portworx-enterprise/3.1/platform/openshift/ocp-bare-metal/operations/storage-operations/manage-kubevirt-vms[Portworx Documentation^] for further details.
 
 Also note that the `provisioner` is set to `pxd.portworx.com`. This
 means that our storage class will be using CSI rather than the in-tree
@@ -272,19 +271,18 @@ provisioner.
 
-With our StorageClass created, we can now create move on to Storage
-Profiles.
+With our StorageClass created, we can now move on to Storage Profiles.
 
-
-
 == Install and Configure Openshift Virtualization
-
 === Task: Install the HyperConverged CR
 
+The OpenShift Virtualization operator has already been installed for our environment.
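+
+For reference, the pre-installed operator corresponds to an OLM Subscription similar to the following sketch (the exact channel and catalog source may differ in your environment):
+
+[source,yaml]
+----
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: kubevirt-hyperconverged
+  namespace: openshift-cnv
+spec:
+  channel: stable
+  name: kubevirt-hyperconverged
+  source: redhat-operators
+  sourceNamespace: openshift-marketplace
+----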
Now that the Portworx StorageCluster has been deployed and we have created the default storage class we can create the `HyperConverged` object that actually deploys OpenShift Virtualization to our cluster. We can install the HyperConverged CR using the following command: [source,sh,role=execute] ---- cat << EOF | oc apply -f - +--- apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: @@ -292,35 +290,35 @@ metadata: namespace: openshift-cnv spec: filesystemOverhead: - global: "0.08" + global: "0.08" EOF ---- -The installation can take a few moments. Verify that the HyperConverged object is running by monitoring the -pods in the `openshift-cnv` project: +The installation can take a few moments. Verify that the HyperConverged object is running by monitoring the +pods in the `openshift-cnv` project until all pods show in `Running` state and no new pods appear: [source,sh,role=execute] ---- watch oc -n openshift-cnv get pods ---- - ==== [NOTE] -It is possible to install the Operator and HyperConverged object using the Openshift UI. We have opted to use +It is also possible to install the Operator and HyperConverged object using the Openshift UI. We have opted to use the CLI to make the process more repeatable ==== === Task: Install Virtctl -Many functions we will use rely on a utility called `virtctl`. Virtctl allows us to interface with our virtual -machine through the control plane of Openshift. This means that we will not have to configure Openshift Networking -to interact with our virtual machines. +Many functions we will use rely on a utility called `virtctl`. Virtctl allows us to interface with our virtual +machine through the control plane of Openshift. This means that we will not have to configure Openshift Networking +to interact with our virtual machines. OpenShift Virtualization makes the matching version of `virtctl` tool available for download from our cluster. 
 [source,sh,role=execute]
 ----
 wget $(oc get consoleclidownload virtctl-clidownloads-kubevirt-hyperconverged -o json | jq -r '.spec.links[] | select(.text == "Download virtctl for Linux for x86_64") | .href')
+
 tar -xvf virtctl.tar.gz
 chmod +x virtctl
 sudo mv virtctl /usr/local/bin
@@ -351,4 +349,4 @@ https://docs.openshift.com/container-platform/4.16/virt/storage/virt-configuring
 documentation^].
 
-With Portworx and OSV installed, we are now ready to move on to the next lesson.
\ No newline at end of file
+With Portworx and OpenShift Virtualization installed and configured, we are now ready to move on to the next lab.
diff --git a/content/modules/ROOT/pages/pxe-osv-02.adoc b/content/modules/ROOT/pages/pxe-osv-02.adoc
index 6863e32..f50450d 100644
--- a/content/modules/ROOT/pages/pxe-osv-02.adoc
+++ b/content/modules/ROOT/pages/pxe-osv-02.adoc
@@ -18,10 +18,8 @@ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
 
 === Task 2: Create an Openshift Secret with our SSH key:
 
-
 [source,sh,role=execute]
 ----
-ID_RSA_PUB=$(cat ~/.ssh/id_rsa.pub)
 cat << EOF | oc apply -f -
 ---
 apiVersion: v1
@@ -90,18 +88,18 @@ EOF
 The above created a VM called `centos-stream9-example` that we will use
 for the rest of the labs.
 
-Once the VM is running we can log into this VM using this command (if you do so make sure to exit again by pressing `Ctrl-D` or typing `exit` followed by pressing `Enter`):
-
 ====
 [TIP]
-
-It can take a couple of minutes for the VM to provision and start.
+It can take a couple of minutes for the VM to provision and start. If the next command fails, wait a few seconds and try again until it succeeds.
 ====
+Once the VM is running, we can log in to this VM using this command (if you do so, make sure to exit again by pressing `Ctrl-D` or typing `exit` followed by pressing `Enter`):
+
 [source,sh,role=execute]
 ----
-virtctl ssh cloud-user@centos-stream9-example -i ~/.ssh/id_rsa -t "-o StrictHostKeyChecking=no"
+virtctl ssh cloud-user@centos-stream9-example \
+  -i ~/.ssh/id_rsa \
+  -t "-o StrictHostKeyChecking=no"
 ----
 
 == Step 2 - Deploy a Virtual Machine using the Openshift Console
@@ -135,10 +133,10 @@ image:create-vm-04.png[Select StorageClass]
 
 This will automatically start the virtual machine after a short
 provisioning process.
 
-____
-It can take a couple of minutes for our VM to boot the
-first time
-____
+====
+[NOTE]
+It can take a couple of minutes for our VM to boot for the first time.
+====
 
 Explore the tabs for this virtual machine. We can view metrics,
 configure snapshots, and even view the YAML configuration to make
@@ -146,9 +144,9 @@ automating easy.
 
 image:create-vm-06.png[Interact with VM]
 
-____
-The Virtual Machine name will be different in your
-environment
-____
+====
+[NOTE]
+The Virtual Machine name will be different in your environment.
+====
 
-Click `Next` to move on to the next challenge
+Click `Next` to move on to the next lab.
diff --git a/content/modules/ROOT/pages/pxe-osv-03.adoc b/content/modules/ROOT/pages/pxe-osv-03.adoc index d4c6793..2e8da7b 100644 --- a/content/modules/ROOT/pages/pxe-osv-03.adoc +++ b/content/modules/ROOT/pages/pxe-osv-03.adoc @@ -11,14 +11,20 @@ Let's write some data to a file: [source,sh,role=execute] ---- -virtctl ssh cloud-user@centos-stream9-example -i ~/.ssh/id_rsa -t "-o StrictHostKeyChecking=no" -c 'echo "Red Hat was here" > ~/text' +virtctl ssh cloud-user@centos-stream9-example \ + -i ~/.ssh/id_rsa \ + -t "-o StrictHostKeyChecking=no" \ + -c 'echo "Red Hat was here" > ~/text' ---- We can read our file at any time by running: [source,sh,role=execute] ---- -virtctl ssh cloud-user@centos-stream9-example -i ~/.ssh/id_rsa -t "-o StrictHostKeyChecking=no" -c 'cat ~/text' +virtctl ssh cloud-user@centos-stream9-example \ + -i ~/.ssh/id_rsa \ + -t "-o StrictHostKeyChecking=no" \ + -c 'cat ~/text' ---- == Step 2 - Test Live Migrations @@ -71,7 +77,10 @@ Let's check to make sure our data is still there: [source,sh,role=execute] ---- -virtctl ssh cloud-user@centos-stream9-example -i ~/.ssh/id_rsa -t "-o StrictHostKeyChecking=no" -c 'cat ~/text' +virtctl ssh cloud-user@centos-stream9-example \ + -i ~/.ssh/id_rsa \ + -t "-o StrictHostKeyChecking=no" \ + -c 'cat ~/text' ---- == Step 3 - Providing HA @@ -82,26 +91,27 @@ We will now induce a failure in our OpenShift cluster. It is important to understand that in OpenShift Virtualization, VirtualMachines run inside pods. As we learned above, our virtual -machine is actually running from inside a pod. Let’s find out which node +machine is actually running from inside a pod. Let's find out which node our virtual machine is running on: [source,sh,role=execute] ---- NODENAME=$(oc get pods -o wide | grep 'Running' | awk '{print $7}' | head -n 1) + echo "Your VM is running on node: ${NODENAME}" ---- -Let’s cause a reboot of this node. We will now debug the node: +Let's cause a reboot of this node. 
We will now debug the node: [source,sh,role=execute] ---- oc debug node/$NODENAME ---- -____ -Running `oc debug node/$NODENAME` can take a few seconds as -a pod needs to be attached to the node. -____ +==== +[NOTE] +Running `oc debug node/$NODENAME` can take a few seconds as a pod needs to be attached to the node. +==== Chroot to the host @@ -115,9 +125,16 @@ MongoDB pod: [source,sh,role=execute] ---- -sudo reboot +reboot ---- +==== +[WARNING] +It is possible that your `showroom` lab interface (where you are reading these instructions) happens to be running on the node that you just rebooted. In that case the terminal on the right will disconnect. You will need to refresh your window until OpenShift notices that the node is down and re-schedules the showroom pod to an available node. + +To speed up the re-creation of the pod you can also find the showroom pod in the showroom namespace and delete the currently running pod. +==== + Let's watch our VM and pod status in the default namespace: [source,sh,role=execute] @@ -135,7 +152,10 @@ Let's check to make sure our data is still there: [source,sh,role=execute] ---- -virtctl ssh cloud-user@centos-stream9-example -i ~/.ssh/id_rsa -t "-o StrictHostKeyChecking=no" -c 'cat ~/text' +virtctl ssh cloud-user@centos-stream9-example \ + -i ~/.ssh/id_rsa \ + -t "-o StrictHostKeyChecking=no" \ + -c 'cat ~/text' ---- -You are ready to move to the next challenge. +You are ready to move to the next lab. diff --git a/content/modules/ROOT/pages/pxe-osv-04.adoc b/content/modules/ROOT/pages/pxe-osv-04.adoc index ab4a170..534b344 100644 --- a/content/modules/ROOT/pages/pxe-osv-04.adoc +++ b/content/modules/ROOT/pages/pxe-osv-04.adoc @@ -48,7 +48,7 @@ image:snapshot-vm-02.png[Take Snapshot] We can accept the default and select `Save` ==== -[Note] +[NOTE] You will see a warning that our `cloudinitdisk` will not be included in this snapshot. The `cloudinitdisk` is only used to configure our virtual machine and provide customizations. 
 We can safely ignore
@@ -62,11 +62,14 @@ image:snapshot-vm-03.png[Save Snapshot]
 
 Let's switch back to the command line. To make a change, we are simply
 going to make a change to our running virtual machine.
 
-Let's accidently delete an important file:
+Let's "accidentally" delete an important file:
 
 [source,sh,role=execute]
 ----
-virtctl ssh cloud-user@centos-stream9-example -i ~/.ssh/id_rsa -t "-o StrictHostKeyChecking=no" -c 'sudo rm /etc/fstab'
+virtctl ssh cloud-user@centos-stream9-example \
+  -i ~/.ssh/id_rsa \
+  -t "-o StrictHostKeyChecking=no" \
+  -c 'sudo rm /etc/fstab'
 ----
 
 Oh no! `/etc/fstab` is an important file for the operation of our linux
 system.
@@ -74,10 +77,13 @@ We can verify that the file is indeed missing by running:
 
 [source,sh,role=execute]
 ----
-virtctl ssh cloud-user@centos-stream9-example -i ~/.ssh/id_rsa -t "-o StrictHostKeyChecking=no" -c 'cat /etc/fstab'
+virtctl ssh cloud-user@centos-stream9-example \
+  -i ~/.ssh/id_rsa \
+  -t "-o StrictHostKeyChecking=no" \
+  -c 'cat /etc/fstab'
 ----
 
-Let's fix our VM
+Let's fix our VM.
 
 === Task 4: Restore our snapshot
 
@@ -102,12 +108,13 @@ snapshot we created earlier and click
 
 image:snapshot-vm-05.png[restore VM]
 
-____
-Restoring a snapshot is a distructive operation as it discards all
-changes that were made to a virtual machine since the snapshot was
-taken. To avoid loosing data, it is possible to take a snapshot before
-restoring our virtual machine.
-____
+====
+[NOTE]
+Restoring a snapshot is a destructive operation as it discards all
+changes that were made to a virtual machine since the snapshot was
+taken. To avoid losing data, it is possible to take a snapshot before
+restoring our virtual machine.
+====
 
 Confirm the restore by clicking the `Restore` button.
 
@@ -119,14 +126,17 @@ image:snapshot-vm-06.png[start VM]
 
-We can check on the progress of our virtual machine’s boot by clicking
-on the `Console` or `Overview` tab.
+We can check on the progress of our virtual machine's boot by clicking
+on the `Console` or `Overview` tab.
 
-Task 5: Verify our restore
+=== Task 5: Verify our restore
 
 After a couple of minutes, our VM should be running.
-Let’s verify that our fstab file is back in place:
+Let's verify that our fstab file is back in place:
 
 [source,sh,role=execute]
 ----
-virtctl ssh cloud-user@centos-stream9-example -i ~/.ssh/id_rsa -t "-o StrictHostKeyChecking=no" -c 'cat /etc/fstab'
+virtctl ssh cloud-user@centos-stream9-example \
+  -i ~/.ssh/id_rsa \
+  -t "-o StrictHostKeyChecking=no" \
+  -c 'cat /etc/fstab'
 ----
 
-We can now see that the contents of the `/etc/fstab` file has been
-restored!
+We can now see that the contents of the `/etc/fstab` file have been
+restored!
diff --git a/content/modules/ROOT/pages/pxe-osv-05.adoc b/content/modules/ROOT/pages/pxe-osv-05.adoc
index 91499f2..d8bb059 100644
--- a/content/modules/ROOT/pages/pxe-osv-05.adoc
+++ b/content/modules/ROOT/pages/pxe-osv-05.adoc
@@ -93,7 +93,6 @@ apiVersion: autopilot.libopenstorage.org/v1alpha1
 kind: AutopilotRule
 metadata:
   name: volume-resize
-  namespace: vmtest
 spec:
 ##### selector filters the objects affected by this rule given labels
   selector:
@@ -137,7 +136,8 @@ We will apply that label to our virtual machine's PVC.
 
 [source,sh,role=execute]
 ----
-oc label pvc centos-stream9-autopilot-data-disk app=autopilot --overwrite
+oc label pvc centos-stream9-autopilot-data-disk \
+  app=autopilot --overwrite
 ----
 
 [source,sh,role=execute]
 ----
@@ -161,12 +161,16 @@ until virtctl ssh cloud-user@centos-stream9-autopilot -i ~/.ssh/id_rsa -t "-o St
 sleep 10
 done
 
-virtctl ssh cloud-user@centos-stream9-autopilot -i ~/.ssh/id_rsa -t "-o StrictHostKeyChecking=no" -c '(echo g; echo n; echo 1; echo ; echo ; echo w) | sudo fdisk /dev/vdb && sudo mkfs.ext4 /dev/vdb1 && sudo mkdir /data && sudo mount /dev/vdb1 /data'
+# Set up the filesystem and mount the disk as /data
+virtctl ssh cloud-user@centos-stream9-autopilot \
+  -i ~/.ssh/id_rsa \
+  -t "-o StrictHostKeyChecking=no" \
+  -c '(echo g; echo n; echo 1; echo ; echo ; echo w) | sudo fdisk /dev/vdb && sudo mkfs.ext4 /dev/vdb1 && sudo mkdir /data && sudo mount /dev/vdb1 /data'
 ----
 
 == Task 5: Add some storage space
 
-We will use the shred command to add some storage space to our virtual machine. 
+We will use the `shred` command to fill up storage space on our virtual machine's data disk.
 
-We could of course log in to our VM though the console, but that would require that we log in to the virtual machine with the supplied password.
+We could of course log in to our VM through the console, but that would require that we log in to the virtual machine with the supplied password.
 
@@ -174,11 +178,14 @@ One of the advantages of an extensible framework like Openshift is that much of
 
 === Task 6: Start filling the disk
 
-Let's execute a command inside of our virtual machinen using `oc exec`
+Let's execute a command to write data to the `/data` disk inside our virtual machine:
 
 [source,sh,role=execute]
 ----
-virtctl ssh cloud-user@centos-stream9-autopilot -i ~/.ssh/id_rsa -t "-o StrictHostKeyChecking=no" -c 'sudo touch /data/file; sudo shred -n 1 -s 900M /data/file'&
+virtctl ssh cloud-user@centos-stream9-autopilot \
+  -i ~/.ssh/id_rsa \
+  -t "-o StrictHostKeyChecking=no" \
+  -c 'sudo touch /data/file; sudo shred -n 1 -s 990M /data/file'
 ----
 
-=== Task 5: Observe the Portworx Autopilot events
+=== Task 7: Observe the Portworx Autopilot events
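+
+Portworx Autopilot records its progress as Kubernetes events on the rule object. A sketch of how to watch them — this assumes the `volume-resize` rule name from the earlier step, and the exact states reported may vary by Portworx version:
+
+[source,sh,role=execute]
+----
+oc get events --field-selector involvedObject.kind=AutopilotRule,involvedObject.name=volume-resize --all-namespaces --sort-by .lastTimestamp
+----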