Running a local cluster network operator for plugin development
You need somewhere to store your custom image so that the AWS cluster instances can pull it. The easiest options are Docker Hub or Quay.io. Create an account, then create a repository called "ovn-kubernetes".
Assuming you are working with the upstream ovn-kubernetes git repo and using Docker Hub, you can build an image from your ovn-kubernetes changes using the following steps:
- cd dist/images/
- make fedora
- docker tag ovn-kube-f:latest docker.io/<docker hub username>/ovn-kubernetes:latest
- docker push docker.io/<docker hub username>/ovn-kubernetes:latest
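Put together, and assuming a hypothetical Docker Hub username of "myuser", the sequence looks roughly like this:

```sh
# Build, tag, and push a custom ovn-kubernetes image ("myuser" is a placeholder
# for your own Docker Hub username; 'docker login' must have been run already).
cd dist/images/
make fedora
docker tag ovn-kube-f:latest docker.io/myuser/ovn-kubernetes:latest
docker push docker.io/myuser/ovn-kubernetes:latest
```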
The installer handles cluster creation in AWS.
- Go to https://openshift-release.svc.ci.openshift.org/ and find a release in the "4.2" stream that is shown in green (i.e., it has passed CI). For example https://openshift-release.svc.ci.openshift.org/releasestream/4.2.0-0.ci/release/4.2.0-0.ci-2019-06-25-135202
- Click the "Download the installer" link at the top of the page
- Wait a while
- Click the link for the "openshift-install-linux" tarball (e.g. "openshift-install-linux-4.2.0-0.ci-2019-06-25-135202.tar.gz") to download it
- Extract the tarball somewhere for later, like /tmp
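For example, assuming the tarball named above was downloaded to the current directory (adjust the filename to whichever build you picked):

```sh
# Unpack the installer into /tmp and sanity-check it.
tar -xzf openshift-install-linux-4.2.0-0.ci-2019-06-25-135202.tar.gz -C /tmp
/tmp/openshift-install version
```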
Pull secrets are specific to your user and allow your cluster to download the container images for all the OpenShift components. There are two pull secrets to get: the internal-only OpenShift CI one (used by some installer bootstrap images) and the general OpenShift developer one. You probably only have to do this once (though the secrets do periodically expire).
- Go to https://api.ci.openshift.org/console/
- Click the (?) in the upper right
- Click on "Command Line Tools" in the menu that drops down
- Click the Clipboard icon at the end of the "oc login https://..." box
- Paste that command from the clipboard into a terminal and run it
- Run "oc registry login --to=/tmp/registry.auth" to dump the pull secret to a file
-
tr -d '\n ' < /tmp/registry.auth > /tmp/registry.auth
(to strip spaces and newlines)
Your CI pull secret is now in /tmp/registry.auth and we'll use its contents later as input to the Cluster Network Operator's hack/run-locally.sh script.
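Put together, the CI pull secret steps look roughly like this (the login token is a placeholder you copy from the "Command Line Tools" page):

```sh
# Log in to the CI cluster and dump its pull secret to a file.
oc login https://api.ci.openshift.org --token=<token from the web console>
oc registry login --to=/tmp/registry.auth
# Strip spaces and newlines, going through a temp file so the input isn't clobbered.
tr -d '\n ' < /tmp/registry.auth > /tmp/registry.auth.tmp
mv /tmp/registry.auth.tmp /tmp/registry.auth
```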
- Go to the OpenShift portal secrets page
- Click the big "AWS" box
- Click the "Installer Provisioned Infrastructure" box
- Click the "Download Pull Secret" box and save the pull secret somewhere; we'll use this later as input to the openshift-installer
- git clone git@github.com:openshift/origin.git
- cd origin; make
- git clone git@github.com:openshift/cluster-network-operator.git
- cd cluster-network-operator
- hack/build-go.sh
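A minimal sketch of those build steps, assuming you clone both repositories side by side in the same working directory:

```sh
# Build 'oc' from origin and the cluster-network-operator binaries.
git clone git@github.com:openshift/origin.git
(cd origin && make)
git clone git@github.com:openshift/cluster-network-operator.git
(cd cluster-network-operator && hack/build-go.sh)
# The freshly built 'oc' ends up in origin/_output/local/bin/linux/amd64/oc
```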
The first time you run the installer it will ask a series of questions. When it is done, there will be an 'install-config.yaml.bak.XXXXXX' file in the cluster temporary directory that you give to hack/run-locally.sh. You can copy this file and pass it to hack/run-locally.sh to save these steps next time.
- First locate your 'oc' binary from the OpenShift origin build. It's in _output/local/bin/linux/amd64/oc
- Install the 'jq' binary:
dnf install jq
- Run
PATH=/path/to/oc:$PATH hack/run-locally.sh -c <cluster temp dir> -i /path/to/openshift-install -n ovn -m docker.io/<docker hub username>/ovn-kubernetes:latest -s "$(cat /tmp/registry.auth)"
and substitute as necessary. Note that we pass the OpenShift CI pull secret in with the -s option.
- hack/run-locally.sh runs the openshift-installer for you, so now you get to answer some questions:
- SSH Public Key - this allows you to SSH to the bootstrap node for debugging; pick one.
- Platform - pick AWS
- Region - pick something close to you
- Base Domain - pick devcluster.openshift.com
- Cluster Name - pick something unique like "dcbw-ovntest"
- Pull Secret - paste the contents of the OpenShift generic pull secret file you downloaded earlier. Typing "?" and hitting Enter will also point you to the right place.
- Your install-config.yaml.bak.XXXXXX file will now be in your <cluster temp dir>. Copy this file somewhere for future use.
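For example (the destination path is arbitrary, and the random .bak.XXXXXX suffix has to be replaced with the one the installer actually generated):

```sh
# Save the generated install-config for the faster re-run described at the end.
cp /<cluster temp dir>/install-config.yaml.bak.XXXXXX ~/install-config.yaml
```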
- You can tail -f /<cluster temp dir>/.openshift-install.log to watch progress.
- List pods: /path/to/oc --config /<cluster temp dir>/auth/kubeconfig get pods --all-namespaces and look for ovn-kubernetes related pods. You should see them running.
- Get logs from ovn-kubernetes: /path/to/oc --config /<cluster temp dir>/auth/kubeconfig logs -n openshift-ovn-kubernetes <ovn-kubernetes pod name> and it will yell at you to pick a container. Pick one and repeat the previous command, adding -c <container name> to the end, to get the logs.
- You can SSH to the bootstrap node if things don't seem to be coming up after a while. Look in /<cluster temp dir>/terraform.tfstate for the aws_instance type with the name "bootstrap". About 20 lines below you'll see "public_ip"; copy that IP and ssh core@<public IP>. Then journalctl -b -f -u bootkube.service and take a look at the errors.
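A rough sketch of that bootstrap-node debugging flow; the grep is just an approximation of "look about 20 lines below", so widen the -A window if public_ip doesn't show up:

```sh
# Find the bootstrap node's public IP in the terraform state, then watch bootkube.
grep -A 30 '"bootstrap"' /<cluster temp dir>/terraform.tfstate | grep public_ip
ssh core@<public IP>
journalctl -b -f -u bootkube.service   # run this on the bootstrap node itself
```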
To destroy the cluster:
- Ctrl+C the hack/run-locally.sh script
- Run /path/to/openshift-install --dir <cluster temp dir> destroy cluster
- Wait a long time
Since you cached the install-config you can save yourself a lot of time. Now all you need to do is:
- Run
PATH=/path/to/oc:$PATH hack/run-locally.sh -c <cluster temp dir> -i /path/to/openshift-install -n ovn -m docker.io/<docker hub username>/ovn-kubernetes:latest -f /path/to/install-config.yaml
and substitute as necessary.