Commit f0e0fd0: moved guides to blogs
brianinnes committed Oct 17, 2023 · 1 parent adf3d09
Showing 19 changed files with 138 additions and 50 deletions.
@@ -1,9 +1,16 @@
-# Implementing an Automated Installation Solution for OKD on vSphere with User Provisioned Infrastructure (UPI)
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

-## Introduction
+# Implementing an Automated Installation Solution for OKD on vSphere with User Provisioned Infrastructure (UPI)

It’s possible to completely automate the process of installing OpenShift/OKD on vSphere with User Provisioned Infrastructure by chaining together the various functions of OCT via a wrapper script.

+<!-- more -->
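
As an illustration of the idea only, such a wrapper might look like the sketch below. The `oct_step` helper and the step descriptions are hypothetical placeholders, not real OCT functions; only the `openshift-install wait-for` calls are standard installer commands, and the install directory name is an assumption.

```bash
#!/usr/bin/env bash
# Sketch only: oct_step is a hypothetical stand-in for the real OCT functions.
set -euo pipefail

oct_step() { echo "placeholder for OCT step: $1"; }

oct_step "deploy DNS, DHCP and load balancer"                     # prerequisites
oct_step "generate ignition configs for the nodes"
oct_step "clone FCOS templates into vSphere and attach ignition"

# Standard installer commands to block until the cluster is up
openshift-install wait-for bootstrap-complete --dir=install_dir --log-level=info
openshift-install wait-for install-complete --dir=install_dir
```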

## Steps

1. Deploy the DNS, DHCP, and load balancer infrastructure outlined in the Prerequisites section.
9 changes: 9 additions & 0 deletions docs/guides/aws-ipi.md → docs/blog/posts/guides/aws-ipi.md
@@ -1,10 +1,19 @@
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

# AWS IPI Default Deployment

<!--- cSpell:ignore xlarge -->

This describes the resources used by OpenShift after performing an installation
using the default options for the installer.

+<!-- more -->
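
For context, the default deployment described here is presumably what you get from running the installer with no customization; a minimal sketch (the directory name is an assumption):

```bash
# Answer the interactive prompts, then create the cluster with all defaults
openshift-install create install-config --dir=aws-cluster
openshift-install create cluster --dir=aws-cluster --log-level=info
```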

## Infrastructure

### Compute
@@ -1,7 +1,16 @@
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

# Azure IPI Default Deployment

This describes the resources used by OpenShift after performing an installation using the default options for the installer.

+<!-- more -->

## Infrastructure

### Compute
9 changes: 9 additions & 0 deletions docs/guides/gcp-ipi.md → docs/blog/posts/guides/gcp-ipi.md
@@ -1,9 +1,18 @@
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

# GCP IPI Default Deployment

<!--- cSpell:ignore subnetworks -->

This describes the resources used by OpenShift after performing an installation using the default options for the installer.

+<!-- more -->

## Infrastructure

### Compute
File renamed without changes
File renamed without changes
File renamed without changes
@@ -1,7 +1,18 @@
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

# Create a Single Node OKD (SNO) Cluster with Assisted Installer

<!--- cSpell:ignore Vadim configmap SCOS auths aWQ6cGFzcwo kubeconfig kubeadmin nsenter rootfs ostree kublet kubelet baremetal autoscaler apiserver Alertmanager Thanos packageserver thanos-querier -->

This guide outlines how to run the assisted installer locally, then use it to deploy a single node OKD cluster.

+<!-- more -->

## Reference Material

Information from the following sources was used to create this guide:
@@ -20,7 +31,7 @@ A single Node OKD cluster takes fewer resources than the full cluster deployment,
- Memory : 16 GB
- Storage (ideally fast storage, such as SSD) : 120GB

-These are the absolut minimum resources needed, depending on the workload(s) you want to run in the cluster you may need additional CPU, memory and storage.
+These are the absolute minimum resources needed; depending on the workload(s) you want to run in the cluster, you may need additional CPU, memory, and storage.

### Network

@@ -82,11 +93,11 @@ As the OKD cluster boots it will need to communicate with the Assisted Installer

For this example I will use IP **192.168.0.141** for the system running podman and hosting the Assisted Installer.

-You need to create the configuration file to run the Assisted Installer in podman. The base files are available in the assisted installer [git repo](https://github.com/openshift/assisted-service/tree/master/deploy/podman){: target=_blank}, but I have modified them and updated them to offer both FCOS (Fedore Core OS) and SCOS (CentOS Stream Core OS) options.
+You need to create the configuration file to run the Assisted Installer in podman. The base files are available in the assisted installer [git repo](https://github.com/openshift/assisted-service/tree/master/deploy/podman){: target=_blank}, but I have modified them and updated them to offer both FCOS (Fedora Core OS) and SCOS (CentOS Stream Core OS) options.

Create the file (sno.yaml) - this is the combined file for use with podman machine (will also work with Linux). You need to change all instances of 192.168.0.141 to the IP address of your system running podman and hosting the Assisted Installer:

-```yaml
+```yaml title="sno.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
@@ -116,10 +127,8 @@ data:
PUBLIC_CONTAINER_REGISTRIES: 'quay.io'
SERVICE_BASE_URL: http://192.168.0.141:8090
STORAGE: filesystem
-OS_IMAGES: '[{"openshift_version":"4.12","cpu_architecture":"x86_64","url":"https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20221127.3.0/x86_64/fedora-coreos-37.20221127.3.0-live.x86_64.iso","version":"37.20221127.3.0"},
-{"openshift_version":"4.12-scos","cpu_architecture":"x86_64","url":"https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20221127.3.0/x86_64/fedora-coreos-37.20221127.3.0-live.x86_64.iso","version":"37.20221127.3.0"}]'
-RELEASE_IMAGES: '[{"openshift_version":"4.12","cpu_architecture":"x86_64","cpu_architectures":["x86_64"],"url":"quay.io/openshift/okd:4.12.0-0.okd-2023-04-01-051724","version":"4.12.0-0.okd-2023-04-01-051724","default":true},
-{"openshift_version":"4.12-scos","cpu_architecture":"x86_64","cpu_architectures":["x86_64"],"url":"quay.io/okd/scos-release:4.12.0-0.okd-scos-2023-03-23-213604","version":"4.12.0-0.okd-scos-2023-03-23-213604","default":false}]'
+OS_IMAGES: '[{"openshift_version":"4.12","cpu_architecture":"x86_64","url":"https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20221127.3.0/x86_64/fedora-coreos-37.20221127.3.0-live.x86_64.iso","version":"37.20221127.3.0"},{"openshift_version":"4.12-scos","cpu_architecture":"x86_64","url":"https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/37.20221127.3.0/x86_64/fedora-coreos-37.20221127.3.0-live.x86_64.iso","version":"37.20221127.3.0"}]'
+RELEASE_IMAGES: '[{"openshift_version":"4.12","cpu_architecture":"x86_64","cpu_architectures":["x86_64"],"url":"quay.io/openshift/okd:4.12.0-0.okd-2023-04-01-051724","version":"4.12.0-0.okd-2023-04-01-051724","default":true},{"openshift_version":"4.12-scos","cpu_architecture":"x86_64","cpu_architectures":["x86_64"],"url":"quay.io/okd/scos-release:4.12.0-0.okd-scos-2023-03-23-213604","version":"4.12.0-0.okd-scos-2023-03-23-213604","default":false}]'
ENABLE_UPGRADE_AGENT: "false"
---
apiVersion: v1
@@ -161,7 +170,7 @@ spec:
restartPolicy: Never
```
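
To avoid editing every occurrence of the address by hand, you could substitute your host's IP in one step; a small sketch, assuming a Linux host where `hostname -I` lists the primary address first:

```bash
HOST_IP=$(hostname -I | awk '{print $1}')     # first address reported by the host
sed -i "s/192.168.0.141/${HOST_IP}/g" sno.yaml
```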
-You may want to modify this configuration to add https communication and persistant storage using information in the [Assisted Installer git repo](https://github.com/openshift/assisted-service/tree/master/deploy/podman){: target=_blank}.
+You may want to modify this configuration to add https communication and persistent storage using information in the [Assisted Installer git repo](https://github.com/openshift/assisted-service/tree/master/deploy/podman){: target=_blank}.
### Run the Assisted Installer
@@ -185,7 +194,7 @@ To stop a running Assisted Installer instance run (without the persistence option):

```
podman play kube --down sno.yaml
```
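
The matching start command sits in the collapsed part of this hunk; by symmetry with the stop command it is presumably the same invocation without `--down`:

```bash
podman play kube sno.yaml
```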

-Once the Assisted installer is runing you can access it on port 8080 (http) on the system hosting podman, [http://192.168.0.141:8080](http://192.168.0.141:8080){: target=_blank} (substitute your IP address) or if accessing from the machine hosting the service [http://localhost:8080](http://localhost:8080){: target=_blank}
+Once the Assisted Installer is running you can access it on port 8080 (http) on the system hosting podman: `http://192.168.0.141:8080` (substitute your IP address), or `http://localhost:8080` if accessing from the machine hosting the service.
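
A quick way to confirm the UI is reachable before opening a browser (IP assumed as in the example above):

```bash
curl -sSf http://192.168.0.141:8080/ >/dev/null && echo "Assisted Installer UI is up"
```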

## Create a cluster

@@ -224,18 +233,18 @@ When you have the Assisted Installer running locally you can use it to deploy a
All internal storage on the target system will be wiped and used for the cluster
6. Once the target system has booted from the ISO it will contact the Assisted Installer and then appear on the Assisted Installer **Host discovery** screen. After the target system appears and the status moves from **Discovering** to **Ready** you can press the **Next** button
7. On the **Storage** page you can configure the storage to use on the target system. The default should work, but you may want to modify it if your target system contains multiple disks. Once the storage settings are correct press **Next**
-8. On the **Networking** page you should be able to leave things at the default values. You may need to wait a short time while the host is initialising , When the status changes to **Ready** then press next
+8. On the **Networking** page you should be able to leave things at the default values. You may need to wait a short time while the host is initializing. When the status changes to **Ready**, press **Next**
9. On the **Review and create** page you may need to wait for the preflight checks to complete. When they are ready you can press **Install cluster** to start the cluster install.

You should be able to leave the installation to complete unattended. The target system will reboot twice and then the cluster will be installed and configured. The Assisted Installer screen will show the progress.

As the cluster is being installed you will be able to download the kubeconfig file for the cluster. It is important to download this before stopping the Assisted Installer, as by default the Assisted Installer storage does not persist across a shutdown.

-Once the cluster setup completes you will see the cluster console access details, uncluding the passwork for the kubeadmin password. Again, you need to capture this information before stopping the Assisted Installer as the information will be lost if you have not enabled persistence.
+Once the cluster setup completes you will see the cluster console access details, including the password for the kubeadmin account. Again, you need to capture this information before stopping the Assisted Installer as the information will be lost if you have not enabled persistence.
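
Once downloaded, the kubeconfig can be verified from any machine with the `oc` client; a sketch, assuming the file was saved to the current directory:

```bash
export KUBECONFIG=./kubeconfig
oc get nodes            # the single node should eventually report Ready
oc get clusterversion   # overall install status
```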

## Issues to be resolved

-Currently the generated clusters are not installed correctly, so some work needs to be done to correct the setup instructions or find issues with the Assisted Installer or OKD relese files.
+Currently the generated clusters are not installed correctly, so some work needs to be done to correct the setup instructions or find issues with the Assisted Installer or OKD release files.

### SCOS issue

9 changes: 9 additions & 0 deletions docs/guides/sno.md → docs/blog/posts/guides/sno.md
@@ -1,9 +1,18 @@
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

# Single Node OKD Installation

<!--- cSpell:ignore wildcard virt libguestfs epel devel -->

This document outlines how to deploy a single node OKD cluster using virt.

+<!-- more -->

## Requirements

- Host with a minimal CentOS Stream, Fedora, or CentOS-8 installed (*do not create a /home filesystem*)
9 changes: 9 additions & 0 deletions docs/guides/sri.md → docs/blog/posts/guides/sri.md
@@ -1,9 +1,18 @@
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

# Sri's Overkill Homelab Setup

<!--- cSpell:ignore Homelab Ryzen NVME baremetal helpernode ceph alertmanager grafana datacenter bitwarden jellyfin netbox quassel templating -->

This document lays out the resources used to create my completely-overkill homelab. This cluster provides all the compute and storage I think I'll need for the foreseeable future, and the CPU, RAM, and storage can all be scaled vertically independently of each other. Not that I think I'll need to do that for a while.

+<!-- more -->

More detail into the deployment and my homelab's Terraform configuration can be found [here](https://github.com/SriRamanujam/okd-deployment){: target=_blank}.

## Hardware
39 changes: 24 additions & 15 deletions docs/guides/upi-sno.md → docs/blog/posts/guides/upi-sno.md
@@ -1,24 +1,33 @@
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

# Single Node UPI OKD Installation

-<!--- cSpell:ignore wildcard virt libguestfs epel devel -->
+<!--- cSpell:ignore wildcard virt libguestfs epel devel VM's vm's ryzen dnsmasq mastersSchedulable virsh schedulable -->

This document outlines how to deploy a single node OKD cluster (the real hard way) using UPI, on bare metal or virtual machines.

+<!-- more -->

## Overview

A user provisioned infrastructure **(UPI)** install of an OKD 4.x single node cluster on bare metal or virtual machines.

**N.B.** Installer provisioned infrastructure **(IPI)** - this is the preferred method as it is much simpler,
-it automatically provisions and maintains the install for you, however it is targeted towards cloud and onprem services
+it automatically provisions and maintains the install for you; however, it is targeted towards cloud and on-prem services,
i.e. AWS, GCP, Azure, and also OpenStack, IBM, and vSphere.

If your install falls within these supported options then use IPI; if not, you will more than likely have to fall back on the UPI install method.

At the end of this document I have supplied a link to my repository. It includes some useful scripts and an example install-config.yaml.

## Requirements
-The base installation should have 7 VMs (for a full production setup) but for our home lab SNO
-we will use 2 vm’s (one for bootstrap and one for the master/worker node) with the following specs :
+The base installation should have 7 VMs (for a full production setup), but for our home lab SNO
+we will use 2 VMs (one for bootstrap and one for the master/worker node) with the following specs:

* Master/Worker Node/s
* CPU: 4 core
@@ -35,20 +44,20 @@ we will use 2 VMs (one for bootstrap and one for the master/worker node) with
## Architecture (this refers to a full high availability cluster)

The diagram below shows an install for a highly available, scalable solution.
-For our single node install we only need a **bootstrap** node and a **master/worker** node (2 bare metal servers or 2 vm’s)
+For our single node install we only need a **bootstrap** node and a **master/worker** node (2 bare metal servers or 2 VMs).

![pic](./img/OKD-UPI-Install.jpg){width=100%}


## Software

-For the UPI SNO I made use of FHCOS (Fedora CoreOS)
+For the UPI SNO I made use of FCOS (Fedora CoreOS)

-FHCOS
+FCOS

* For OKD https://getfedora.org/en/coreos/download?tab=metal_virtualized&stream=stable&arch=x86_64
* Download the ISO image
-* Downlaod the raw.tar.gz
+* Download the raw.tar.gz
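
The same images can presumably also be fetched from the command line with `coreos-installer`; a sketch using its standard stream/platform/format flags (the raw metal artifact is published as raw.xz, which may differ from the raw.tar.gz named above):

```bash
# Live ISO for booting bare metal or a VM
coreos-installer download -s stable -p metal -f iso
# Compressed raw metal image
coreos-installer download -s stable -p metal -f raw.xz
```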

OC Client & Installer

@@ -66,10 +75,10 @@ The following is a manual process of installing and configuring the infrastructure
* NFS
* Config for ocp install etc

-### Provision VMs (Optional) - Skip this step if you using bare metal servers
+### Provision VMs (Optional) - skip this step if you are using bare metal servers

-The use of VMs is optional, each node could be a bare metal server.
-As I did not have several servers at my disposal I used a NUC (ryzen9 with 32G of RAM) and created 2 VMs (bootstrap and master/worker)
+The use of VMs is optional; each node could be a bare metal server.
+As I did not have several servers at my disposal, I used a NUC (Ryzen 9 with 32 GB of RAM) and created 2 VMs (bootstrap and master/worker).

I used cockpit (Fedora) to validate the network and VM setup (from the scripts). Use the virtualization software that you prefer.
For the okd-svc machine I used the bare metal server and installed Fedora 37 (this hosted my 2 VMs).
@@ -83,14 +92,14 @@ Install virtualization:

```
sudo dnf install @virtualization
```
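
A sketch of creating the two VMs with `virt-install`; the names, sizing, and `--os-variant` value are assumptions based on the specs above, and the ISO path should point at the FCOS image downloaded earlier:

```bash
for vm in okd-bootstrap okd-master; do
  virt-install --name "$vm" --vcpus 4 --memory 16384 \
    --disk size=120 --cdrom ./fedora-coreos-live.x86_64.iso \
    --network network=default --os-variant fedora-coreos-stable \
    --noautoconsole
done
```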

-### Setup IP's and MAC addreses
+### Setup IPs and MAC addresses

Refer to the "Architecture Diagram" above to set up each VM.

Obviously the IP addresses will change according to your preferred setup (i.e. 192.168.122.x).
I have listed all servers, as it will be fairly easy to change the single node cluster to a fully-fledged HA cluster by changing the install-config.yaml.

-As a usefule example this is what I setup
+As a useful example, this is what I set up:

* Gateway/Helper : okd-svc 192.168.122.1
* Bootstrap : okd-bootstrap 192.168.122.253
@@ -569,7 +578,7 @@ $ sudo coreos-installer install /dev/sda --ignition-url http://192.168.122.1:808

**N.B.** If using Fedora CoreOS in a VM the device name would need to change, i.e. /dev/vda

-Once the vm’s are running with the relevant ignition files
+Once the VMs are running with the relevant ignition files

Issue the following commands:
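
The exact commands are in the collapsed portion of this file; as an assumption, the typical sequence at this point in a UPI install looks like:

```bash
# Wait for the bootstrap phase to finish
openshift-install wait-for bootstrap-complete --dir=install_dir --log-level=info

# Point oc at the new cluster and watch the node come up
export KUBECONFIG=install_dir/auth/kubeconfig
oc get nodes
oc get csr    # approve pending CSRs with: oc adm certificate approve <name>
```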

@@ -674,7 +683,7 @@ A typical flow would be (once all the dependencies have been installed):

```
./virt-env-install.sh okd-install install
```

-**N.B.** If there are any discrepencies or improvements please make note. PR's are most welcome !!!
+**N.B.** If there are any discrepancies or improvements please make a note. PRs are most welcome!


Screenshot of final OKD install
12 changes: 10 additions & 2 deletions docs/guides/vadim.md → docs/blog/posts/guides/vadim.md
@@ -1,9 +1,17 @@
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

# Vadim's homelab

<!--- cSpell:ignore Vadim's homelab loadbalancer ceph NVME dnsmasq helpernode autoprovision ostree grafana datasource datasources promtail gitops gitea minio nextcloud navidrome pleroma microblogging wallabag neptr -->

-This describes the resources used by OpenShift after performing an installation
-to make it similar to my homelab setup.
+This describes the resources used by OpenShift after performing an installation to make it similar to my homelab setup.

+<!-- more -->

## Compute

@@ -1,22 +1,31 @@
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

# OKD Virtualization on user provided infrastructure

<!--- cSpell:ignore baremetal rpcbind openvswitch kube Virt Hyperconverged hostpath machineconfig kubevirt -->

This guide shows how to set up OKD Virtualization.

+<!-- more -->

## Preparing the hardware

As a first step for providing an infrastructure for OKD Virtualization, you need to prepare the hardware:

* check that the [minimum hardware requirements for running OKD](https://docs.okd.io/latest/installing/installing_bare_metal/installing-bare-metal.html#minimum-resource-requirements_installing-bare-metal) are satisfied
* check that the [additional hardware requirements for running OKD Virtualization](https://docs.okd.io/latest/virt/install/preparing-cluster-for-virt.html#virt-cluster-resource-requirements_preparing-cluster-for-virt) are also satisfied.
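
A quick way to check that a host has the virtualization extensions these requirements imply (both are standard Linux/libvirt tools):

```bash
# Count CPU virtualization flags (Intel VT-x / AMD-V); non-zero means supported
grep -cE 'vmx|svm' /proc/cpuinfo

# Fuller validation of the host, if libvirt is installed
virt-host-validate qemu
```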


## Preparing the infrastructure

Once your hardware is ready and connected to the network, you need to configure your services, your network, and your DNS to allow the OKD installer to deploy the software.
You may also need to prepare in advance a few services you'll need during the deployment.
Carefully read the [Preparing the user-provisioned infrastructure](https://docs.okd.io/latest/installing/installing_bare_metal/installing-bare-metal.html#installation-infrastructure-user-infra_installing-bare-metal) section and ensure all the requirements are met.


## Provision your hosts

For the bastion / service host you can use CentOS Stream 8.
@@ -1,7 +1,16 @@
+---
+draft: false
+date: 2020-08-31
+categories:
+- Guide
+---

# vSphere IPI Deployment

This describes the resources used by OpenShift after performing an installation using the required options for the installer.

+<!-- more -->
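
Unlike the cloud IPI flows, vSphere IPI needs the required platform details in install-config.yaml before the cluster can be created; a minimal sketch where every value is a placeholder assumption (field names per the 4.x installer documentation):

```bash
# All values below are hypothetical; substitute your own vCenter details.
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: okd
platform:
  vsphere:
    vcenter: vcenter.example.com
    username: administrator@vsphere.local
    password: changeme
    datacenter: dc1
    defaultDatastore: datastore1
    network: "VM Network"
    apiVIP: 192.168.100.10
    ingressVIP: 192.168.100.11
pullSecret: '{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}'
sshKey: 'ssh-ed25519 AAAA... user@host'
EOF
openshift-install create cluster --dir=. --log-level=info
```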

## Infrastructure

### Compute