
Running a local cluster network operator for plugin development

Dan Winship edited this page Jul 12, 2019 · 22 revisions

Creating a Custom ovn-kubernetes Image

Create an image repository

You need somewhere to store your custom image so that the AWS cluster instances can pull it. The easiest options are Docker Hub or Quay.io. Create an account, then create a repository called "ovn-kubernetes".

Clone ovn-kubernetes

  1. git clone git@github.com:ovn-org/ovn-kubernetes.git
  2. make your changes to the source tree
  3. cd ovn-kubernetes/go-controller
  4. make

Build and push the custom ovn-kubernetes image

Assuming you are working with the upstream ovn-kubernetes git repo and using Docker Hub, you can build an image from your ovn-kubernetes changes using the following steps:

  1. cd dist/images/
  2. make fedora
  3. docker tag ovn-kube-f:latest docker.io/(docker hub username)/ovn-kubernetes:latest
  4. docker push docker.io/(docker hub username)/ovn-kubernetes:latest
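The build-and-push steps above can be run as one script. This is a sketch, run from the top of the ovn-kubernetes checkout; DOCKER_USER is a placeholder for your Docker Hub username:

```shell
# One-shot build and push, run from the top of the ovn-kubernetes checkout.
# DOCKER_USER is a placeholder -- substitute your own Docker Hub username.
set -e
DOCKER_USER=myuser
IMAGE=docker.io/${DOCKER_USER}/ovn-kubernetes:latest
cd dist/images
make fedora
docker tag ovn-kube-f:latest "$IMAGE"
docker push "$IMAGE"
```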

Download the OpenShift Installer

The installer handles cluster creation in AWS.

  1. Go to https://openshift-release.svc.ci.openshift.org/ and find an installer in the "4.2" stream that is shown in green (i.e., one that has passed CI). For example, https://openshift-release.svc.ci.openshift.org/releasestream/4.2.0-0.ci/release/4.2.0-0.ci-2019-06-25-135202
  2. Click the "Download the installer" link at the top of the page
  3. Wait a while
  4. Click the link for the "openshift-install-linux" tarball (e.g., "openshift-install-linux-4.2.0-0.ci-2019-06-25-135202.tar.gz") to download it
  5. Extract the tarball somewhere for later, like /tmp

Get your Pull Secrets

Pull Secrets are specific to your user and allow your cluster to download the container images for all the OpenShift components. There are two pull secrets to get: the internal-only OpenShift CI secret (used by some installer bootstrap images) and the general OpenShift developer secret. You probably only have to do this once (though the secrets do periodically expire).

Get the OpenShift CI pull secret

  1. Go to https://api.ci.openshift.org/console/
  2. Click the (?) in the upper right
  3. Click "Command Line Tools" in the menu that drops down
  4. Click the clipboard icon at the end of the "oc login https://..." box
  5. Paste that command from the clipboard into a terminal and run it
  6. Run oc registry login --to=/tmp/ci-pull-secret to dump the pull secret to a file

Your CI pull secret is now in /tmp/ci-pull-secret; next we'll combine it with the generic OpenShift pull secret and feed the result to the installer.

Get the Generic OpenShift pull secrets

  1. Go to the OpenShift portal secrets page
  2. Click the big "AWS" box
  3. Click the "Installer Provisioned Infrastructure" box
  4. Click the "Download Pull Secret" box and save the pull secret to /tmp/pull-secret

Combine your pull secrets

Once the pull secrets are combined into one file, you can pass the result to the installer when it asks for them.

  1. jq -nc "$(cat /tmp/pull-secret) * $(cat /tmp/ci-pull-secret)" > /tmp/combined-pull-secrets
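The jq expression above recursively merges the two JSON objects (* is jq's deep-merge operator), so the registry entries from both files end up under a single "auths" key. A toy illustration with fake registry names and values (not real secrets):

```shell
# Toy demonstration of jq's '*' (recursive merge) operator on two
# pull-secret-shaped objects -- fake data, not real credentials.
echo '{"auths":{"registry.example-a":{"auth":"AAA"}}}' > /tmp/demo-pull-secret
echo '{"auths":{"registry.example-b":{"auth":"BBB"}}}' > /tmp/demo-ci-pull-secret
jq -nc "$(cat /tmp/demo-pull-secret) * $(cat /tmp/demo-ci-pull-secret)"
```

Both registries survive the merge because * merges nested objects key by key instead of letting the second "auths" value replace the first.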

Clone and build OpenShift

  1. git clone git@github.com:openshift/origin.git
  2. cd origin; make

Clone and build the Cluster Network Operator

  1. git clone git@github.com:openshift/cluster-network-operator.git
  2. cd cluster-network-operator
  3. hack/build-go.sh

Get an AWS account

https://mojo.redhat.com/docs/DOC-1081313#jive_content_id_Amazon_AWS

Run hack/run-locally.sh to start a cluster with your custom image

The first time you run the installer it will ask a series of questions. When it is done, there will be an 'install-config.yaml.bak.XXXXXX' file in the cluster temporary directory that you give to hack/run-locally.sh. Copy this file somewhere safe and pass it to hack/run-locally.sh next time to skip the questions.

  1. First locate your 'oc' binary from the OpenShift origin build. It's in _output/local/bin/linux/amd64/oc
  2. Run PATH=/path/to/oc:$PATH hack/run-locally.sh -c (cluster temp dir) -i /path/to/openshift-install -n ovn -m docker.io/(docker hub username)/ovn-kubernetes:latest and substitute as necessary.
  3. hack/run-locally.sh runs the openshift-installer for you, so now you get to answer some questions
  4. SSH Public Key - this allows you to SSH to the bootstrap node for debugging; pick one.
  5. Platform - pick AWS
  6. Region - pick something close to you
  7. Base Domain - pick devcluster.openshift.com
  8. Cluster Name - pick something unique like "dcbw-ovntest"
  9. Pull Secret - Paste the contents of /tmp/combined-pull-secrets that we created earlier
  10. Your install-config.yaml.bak.XXXXXX file will now be in your (cluster temp dir). Copy this file somewhere for future use.
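Putting it together, a full first-run invocation might look like this. The cluster temp dir and Docker Hub username are hypothetical; substitute your own, and note the PATH entry points at the directory holding the oc binary from the origin build:

```shell
# Hypothetical first-run invocation; substitute your own paths, cluster
# temp dir, and Docker Hub username.
PATH=/path/to/origin/_output/local/bin/linux/amd64:$PATH \
  hack/run-locally.sh \
    -c /tmp/mycluster \
    -i /path/to/openshift-install \
    -n ovn \
    -m docker.io/myuser/ovn-kubernetes:latest
```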

Poking around your cluster

  1. You can tail -f /(cluster temp dir)/.openshift-install.log to watch progress.
  2. List pods: /path/to/oc --config /(cluster temp dir)/auth/kubeconfig get pods --all-namespaces and look for ovn-kubernetes related pods. You should see them running.
  3. Get logs from ovn-kubernetes: /path/to/oc --config /(cluster temp dir)/auth/kubeconfig logs -n openshift-ovn-kubernetes (ovn-kubernetes pod name). The pod has multiple containers, so oc will ask you to pick one; repeat the command with -c (container name) appended to get that container's logs.
  4. You can SSH to the bootstrap node if things don't seem to be coming up after a while. Look in /(cluster temp dir)/terraform.tfstate for the aws_instance resource named "bootstrap". About 20 lines below you'll see "public_ip"; copy that IP and run ssh core@(public IP). Then run journalctl -b -f -u bootkube.service and take a look at the errors.
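As a concrete sketch of fetching ovn-kubernetes logs, the sequence looks like this. The cluster temp dir, pod name, and container name are all illustrative; take the real ones from the get pods output and from oc's "choose a container" error message:

```shell
# Illustrative only: substitute your cluster temp dir, plus the pod and
# container names reported by 'oc get pods' and by oc's error message.
export KUBECONFIG=/tmp/mycluster/auth/kubeconfig   # same as --config above
oc get pods -n openshift-ovn-kubernetes
oc logs -n openshift-ovn-kubernetes ovnkube-node-abc12 -c ovn-controller
```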

Destroying your cluster

  1. Press Ctrl+C to stop hack/run-locally.sh
  2. Run /path/to/openshift-install --dir (cluster temp dir) destroy cluster
  3. Wait a long time

Subsequent cluster starts

Since you cached the install-config you can save yourself a lot of time. Now all you need to do is:

  1. Run PATH=/path/to/oc:$PATH hack/run-locally.sh -c (cluster temp dir) -i /path/to/openshift-install -n ovn -m docker.io/(docker hub username)/ovn-kubernetes:latest -f /path/to/install-config.yaml and substitute as necessary.