Commit

update
Chris Crow committed Dec 10, 2024
2 parents 821e7c2 + baac0fa commit 848c8fc
Showing 5 changed files with 108 additions and 75 deletions.
62 changes: 30 additions & 32 deletions content/modules/ROOT/pages/pxe-osv-01.adoc
@@ -29,7 +29,7 @@ The username is `{openshift_cluster_admin_username}` and the password is `{opens
Red Hat OpenShift uses the `oc` CLI utility. This utility has a similar
syntax to `kubectl`, but with some OpenShift-specific extensions.

Let's look at the nodes that make up our cluster:

[source,sh,role=execute]
----
@@ -79,14 +79,14 @@ Grab our StorageCluster specification:

[source,sh,role=execute]
----
curl -o $HOME/px-spec.yaml 'https://install.portworx.com/3.1.6?operator=true&mc=false&kbver=1.29.10&ns=portworx&b=true&iop=6&s=%22type%3Dgp3%2Csize%3D50%22%2C%22&ce=aws&r=17001&c=px-cluster-443e64d8-f2c7-47d2-b81b-295567465a84&osft=true&stork=true&csi=true&tel=false&st=k8s'
----
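The long query string is simply the set of options for Portworx's spec generator; for instance, the percent-encoded `s` parameter carries the disk spec. As an illustration, the core of that parameter can be decoded locally with `sed` (only the escapes that actually appear are handled):

[source,sh,role=execute]
----
# Percent-decode the disk spec from the URL's `s` parameter
# (%22 = ", %3D = =, %2C = ,).
s='%22type%3Dgp3%2Csize%3D50%22'
decoded=$(echo "$s" | sed 's/%22/"/g; s/%3D/=/g; s/%2C/,/g')
echo "$decoded"
----

which prints `"type=gp3,size=50"`, i.e. Portworx will request 50 GiB gp3 EBS volumes.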

Take a look at our current manifest by using the cat utility:

[source,sh,role=execute]
----
cat $HOME/px-spec.yaml
----

* Line 9 tells Portworx that we are installing on an OpenShift cluster
@@ -101,11 +101,17 @@ We can now apply the specification:

[source,sh,role=execute]
----
oc apply -f $HOME/px-spec.yaml
----

====
[NOTE]
The Portworx cluster pods can take up to 10 minutes to start. During this time, you will see pods restart.
This is expected behavior.
====

We can watch the containers come up by running:

[source,sh,role=execute]
----
@@ -115,20 +121,13 @@ watch oc -n portworx get pods
When all three of the `px-cluster` pods have a Ready status of `1/1` we
can press `ctrl-c` to exit out of our watch command.

=== Task: Check the Portworx cluster status

Portworx ships with a `pxctl` CLI utility that you can use for managing
Portworx. We'll cover this utility more in the labs here in a little
bit!

For now, we'll run `sudo pxctl status` via `oc` in one of the Portworx
pods to check the StorageCluster status.

First, set up the `PX_POD` environment variable:
@@ -149,7 +148,7 @@ oc exec -it $PX_POD -n portworx -- /opt/pwx/bin/pxctl status --color

We now have a 3-node Portworx cluster up and running!

Let's dive into our cluster status:

* All 3 nodes are online and use Kubernetes node names as the Portworx node IDs.

* Portworx detected the block device media type as
@@ -164,6 +163,7 @@ pxctl:
[source,sh,role=execute]
----
echo "alias pxctl='PX_POD=\$(oc get pods -l name=portworx -n portworx --field-selector=status.phase==Running | grep \"1/1\" | awk \"NR==1{print \\\$1}\") && oc exec \$PX_POD -n portworx -- /opt/pwx/bin/pxctl'" >> ~/.bashrc
source ~/.bashrc
----

@@ -174,7 +174,6 @@ Now test out the alias:
pxctl status --color
----
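The alias works by grabbing the name of the first `Running` pod that is `1/1` Ready. Its `grep`/`awk` selection can be seen in isolation against simulated `oc get pods` output (the pod names below are made up; single quotes around the awk program keep `$1` from being expanded by the shell):

[source,sh,role=execute]
----
# Two healthy pods, as `oc get pods` might print them (names are hypothetical).
pods='px-cluster-abc12   1/1   Running   0   9m
px-cluster-def34   1/1   Running   0   9m'
# Keep Ready pods, then take the first column of the first match.
PX_POD=$(echo "$pods" | grep "1/1" | awk 'NR==1{print $1}')
echo "$PX_POD"
----

which prints `px-cluster-abc12`.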


== Storage Classes and Storage Profiles in OpenShift

Storage Classes are a Kubernetes concept that allows an administrator
@@ -221,7 +220,8 @@ First, let's set the `gp3-csi` StorageClass to no longer be the default:

[source,sh,role=execute]
----
oc patch storageclass gp3-csi \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
----

Run the following command to create a new yaml file for the block-based
@@ -262,8 +262,7 @@ We also specified that the volumeBindingMode should be
volume.

See the
https://docs.portworx.com/portworx-enterprise/3.1/platform/openshift/ocp-bare-metal/operations/storage-operations/manage-kubevirt-vms[Portworx Documentation^] for further details.

Also note that the `provisioner` is set to `pxd.portworx.com`. This
means that our storage class will be using CSI rather than the in-tree
@@ -272,55 +271,54 @@ provisioner.
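Putting those pieces together, a Portworx block StorageClass of the kind described above would look roughly like the following sketch (the class name and the `repl` parameter are illustrative, not necessarily the exact values in the lab's generated manifest):

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-csi-block                      # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: pxd.portworx.com             # the Portworx CSI provisioner
parameters:
  repl: "2"                               # illustrative replication factor
volumeBindingMode: WaitForFirstConsumer   # bind when the first consumer pod appears
----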
With our StorageClass created, we can now move on to Storage
Profiles.

== Install and Configure OpenShift Virtualization


=== Task: Install the HyperConverged CR

The OpenShift Virtualization operator has already been installed for our environment. Now that the Portworx StorageCluster has been deployed and we have created the default storage class, we can create the `HyperConverged` object that actually deploys OpenShift Virtualization to our cluster.

We can install the HyperConverged CR using the following command:

[source,sh,role=execute]
----
cat << EOF | oc apply -f -
---
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  filesystemOverhead:
    global: "0.08"
EOF
----
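The one setting we pass, `filesystemOverhead.global: "0.08"`, tells OpenShift Virtualization to hold back 8% of each filesystem-mode PVC for filesystem overhead rather than VM disk data. As a rough sanity check of what that leaves on a hypothetical 50 GiB PVC:

[source,sh,role=execute]
----
# 8% overhead on a hypothetical 50 GiB filesystem-mode PVC.
pvc_gib=50
usable=$(awk -v size="$pvc_gib" 'BEGIN { printf "%.1f", size * (1 - 0.08) }')
echo "${usable} GiB usable"
----

which prints `46.0 GiB usable`.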

The installation can take a few moments. Verify that the HyperConverged object is running by monitoring the
pods in the `openshift-cnv` project until all pods are in the `Running` state and no new pods appear:

[source,sh,role=execute]
----
watch oc -n openshift-cnv get pods
----


====
[NOTE]
It is also possible to install the Operator and HyperConverged object using the OpenShift UI. We have opted to use
the CLI to make the process more repeatable.
====

=== Task: Install Virtctl

Many functions we will use rely on a utility called `virtctl`. Virtctl allows us to interface with our virtual
machines through the control plane of OpenShift. This means that we will not have to configure OpenShift networking
to interact with our virtual machines. OpenShift Virtualization makes the matching version of the `virtctl` tool available for download from our cluster.

[source,sh,role=execute]
----
wget $(oc get consoleclidownload virtctl-clidownloads-kubevirt-hyperconverged -o json | jq -r '.spec.links[] | select(.text == "Download virtctl for Linux for x86_64") | .href')
tar -xvf virtctl.tar.gz
chmod +x virtctl
sudo mv virtctl /usr/local/bin
Expand Down Expand Up @@ -351,4 +349,4 @@ https://docs.openshift.com/container-platform/4.16/virt/storage/virt-configuring
documentation^].


With Portworx and OpenShift Virtualization installed and configured, we are now ready to move on to the next lab.
30 changes: 14 additions & 16 deletions content/modules/ROOT/pages/pxe-osv-02.adoc
@@ -18,10 +18,8 @@ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""

=== Task 2: Create an OpenShift Secret with our SSH key


[source,sh,role=execute]
----
ID_RSA_PUB=$(cat ~/.ssh/id_rsa.pub)
cat << EOF | oc apply -f -
---
apiVersion: v1
@@ -90,18 +88,18 @@ EOF

The above created a VM called `centos-stream9-example` that we will use for the rest of the labs.

====
[TIP]
It can take a couple of minutes for the VM to provision and start. If the next command fails, wait a few seconds and try again until it succeeds.
====

Once the VM is running we can log into this VM using this command (if you do so make sure to exit again by pressing `Ctrl-D` or typing `exit` followed by pressing `Enter`):

[source,sh,role=execute]
----
virtctl ssh cloud-user@centos-stream9-example \
-i ~/.ssh/id_rsa \
-t "-o StrictHostKeyChecking=no"
----

== Step 2 - Deploy a Virtual Machine using the OpenShift Console
@@ -135,20 +133,20 @@ image:create-vm-04.png[Select StorageClass]
This will automatically start the virtual machine after a short
provisioning process.

====
[NOTE]
It can take a couple of minutes for our VM to boot for the first time
====

Explore the tabs for this virtual machine. We can view metrics,
configure snapshots, and even view the YAML configuration to make
automation easy.

image:create-vm-06.png[Interact with VM]

====
[NOTE]
The Virtual Machine name will be different in your environment
====

Click `Next` to move on to the next lab.
44 changes: 32 additions & 12 deletions content/modules/ROOT/pages/pxe-osv-03.adoc
@@ -11,14 +11,20 @@ Let's write some data to a file:

[source,sh,role=execute]
----
virtctl ssh cloud-user@centos-stream9-example \
-i ~/.ssh/id_rsa \
-t "-o StrictHostKeyChecking=no" \
-c 'echo "Red Hat was here" > ~/text'
----

We can read our file at any time by running:

[source,sh,role=execute]
----
virtctl ssh cloud-user@centos-stream9-example \
-i ~/.ssh/id_rsa \
-t "-o StrictHostKeyChecking=no" \
-c 'cat ~/text'
----

== Step 2 - Test Live Migrations
@@ -71,7 +77,10 @@ Let's check to make sure our data is still there:

[source,sh,role=execute]
----
virtctl ssh cloud-user@centos-stream9-example \
-i ~/.ssh/id_rsa \
-t "-o StrictHostKeyChecking=no" \
-c 'cat ~/text'
----

== Step 3 - Providing HA
@@ -82,26 +91,27 @@ We will now induce a failure in our OpenShift cluster.

It is important to understand that in OpenShift Virtualization,
VirtualMachines run inside pods. As we learned above, our virtual
machine is actually running from inside a pod. Let's find out which node
our virtual machine is running on:

[source,sh,role=execute]
----
NODENAME=$(oc get pods -o wide | grep 'Running' | awk '{print $7}' | head -n 1)
echo "Your VM is running on node: ${NODENAME}"
----
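In case the pipeline above looks opaque: with `-o wide`, the node name is the seventh column of `oc get pods` output, which is what the `awk` picks out. Here is the same chain run against one simulated output line (the pod, IP, and node names are made up):

[source,sh,role=execute]
----
# One line of simulated `oc get pods -o wide` output (all names hypothetical).
line='virt-launcher-centos-stream9-example-abcde   2/2   Running   0   5m   10.128.2.14   worker-1   <none>   <none>'
NODENAME=$(echo "$line" | grep 'Running' | awk '{print $7}' | head -n 1)
echo "Your VM is running on node: ${NODENAME}"
----

which prints `Your VM is running on node: worker-1`.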

Let's cause a reboot of this node. We will now debug the node:

[source,sh,role=execute]
----
oc debug node/$NODENAME
----

====
[NOTE]
Running `oc debug node/$NODENAME` can take a few seconds as a pod needs to be attached to the node.
====

Chroot to the host:

@@ -115,9 +125,16 @@ MongoDB pod:

[source,sh,role=execute]
----
reboot
----

====
[WARNING]
It is possible that your `showroom` lab interface (where you are reading these instructions) happens to be running on the node that you just rebooted. In that case the terminal on the right will disconnect. You will need to refresh your window until OpenShift notices that the node is down and re-schedules the showroom pod to an available node.
To speed up the re-creation of the pod, you can also find the showroom pod in the `showroom` namespace and delete the currently running pod.
====

Let's watch our VM and pod status in the default namespace:

[source,sh,role=execute]
@@ -135,7 +152,10 @@ Let's watch our VM and pod status in the default namespace:

[source,sh,role=execute]
----
virtctl ssh cloud-user@centos-stream9-example \
-i ~/.ssh/id_rsa \
-t "-o StrictHostKeyChecking=no" \
-c 'cat ~/text'
----

You are ready to move to the next lab.
