# CNF Certification Suite Operator

[![red hat](https://img.shields.io/badge/red%20hat---?color=gray&logo=redhat&logoColor=red&style=flat)](https://www.redhat.com)
[![openshift](https://img.shields.io/badge/openshift---?color=gray&logo=redhatopenshift&logoColor=red&style=flat)](https://www.redhat.com/en/technologies/cloud-computing/openshift)

## Description

Kubernetes/Openshift Operator (scaffolded with operator-sdk) running the
[CNF Certification Suite Container](https://github.com/test-network-function/cnf-certification-test).

The CNF Certification Suites provide a set of test cases for
Containerized Network Functions/Cloud Native Functions (CNFs) to verify whether
they follow best practices for deployment on Red Hat OpenShift clusters.

### How does it work?

The Operator registers two CRDs in the cluster:
`CnfCertificationSuiteRun` and `CnfCertificationSuiteReport`,
also informally referred to as the Run and Report CRDs.
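
Once the operator is installed, you can confirm that both CRDs are registered with a command along these lines (the CRD names are taken from the examples later in this README):

```sh
# Verify that both CRDs have been registered in the cluster
oc get crd cnfcertificationsuiteruns.cnf-certifications.redhat.com \
  cnfcertificationsuitereports.cnf-certifications.redhat.com
```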

To fire up the CNF Certification Suite, the user must create a
CnfCertificationSuiteRun CR, along with a ConfigMap
containing the CNF certification suite's configuration
and a Secret containing the preflight suite credentials.
**Note:** All of the resources mentioned above must be created in the operator's
installation namespace (`cnf-certsuite-operator` by default).

See the resources relationship diagram:

![run config](doc/uml/run_config.png)

When the CR is deployed, a new pod with two containers is created:

1. A container built from the CNF certification image, which runs the suites.
2. A sidecar container that creates a new CR representing the CNF Certification Suite
results, based on the claim file created by the previous container.

**See the diagram summarizing the process:**

![Use Case Run](doc/uml/use_case_run.png)
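
To see those two containers on a certification job pod (pod naming is covered in the "Review results" section below), a command such as the following can be used:

```sh
# Print the container names of a certification job pod
oc get pod cnf-job-run-1 -n cnf-certsuite-operator \
  -o jsonpath='{.spec.containers[*].name}'
```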

## Getting Started

You’ll need a Kubernetes cluster to run against.
You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for testing,
or run against a remote cluster.
**Note:** Your controller will automatically use the current context in your
kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
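
For example, to double-check which cluster the commands below will target:

```sh
# Show the kubeconfig context and cluster currently in use
kubectl config current-context
kubectl cluster-info
```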

### Install operator

#### Initial steps

1. Clone the CNF Certification Operator repo:

```sh
git clone https://github.com/greyerof/tnf-op.git
```

2. Install cert-manager:

```sh
kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml
```

#### Option 1: Use your own registry account

1. Export the image environment variables:

```sh
export IMG=<your-registry.com>/<your-repo>/cnf-certsuite-operator:<version>
export SIDECAR_IMG=<your-registry.com>/<your-repo>/cnf-certsuite-operator-sidecar:<version>
```

2. Build and upload the controller image to your registry account:

```sh
make docker-build docker-push
```

3. Build and upload the sidecar image to your registry account:

```sh
docker build -f cnf-cert-sidecar/Dockerfile -t $SIDECAR_IMG .
docker push $SIDECAR_IMG
```

4. Deploy the operator, using the previously uploaded controller image
and the built sidecar image:

```sh
make deploy
```

#### Option 2: Use local images

1. Export the image environment variables (optional):

```sh
export IMG=<your-cnf-certsuite-operator-image-name>
export SIDECAR_IMG=<your-sidecar-app-image-name>
```

**Note**: If the images aren't provided,
the scripts in the next steps will use these default images:

```sh
IMG=ci-cnf-op:v0.0.1-test
SIDECAR_IMG=ci-cnf-op-sidecar:v0.0.1-test
```

2. Build the controller and sidecar images:

```sh
scripts/ci/build.sh
```

3. Deploy the previously built images by preloading them into the kind cluster's nodes:

```sh
scripts/ci/deploy.sh
```

### Test it out

Use our samples to test out the CNF certification operator with the following command:

```sh
make deploy-samples
```

**Note**: The current sample CnfCertificationSuiteRun CR configures
the CNF Certification Suite to run only the "observability" test suite.
This can be changed by manually modifying the `labelsFilter` of the [sample CR](https://github.com/greyerof/tnf-op/blob/main/config/samples/cnf-certifications_v1alpha1_cnfcertificationsuiterun.yaml).
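
For illustration, the relevant part of the sample CR's spec might look roughly like this (a sketch only; see the linked sample CR for the actual file contents):

```yaml
spec:
  labelsFilter: observability   # change this label filter to select other test suites
```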

### How to customize the CNF Certification Suite run

1. Create Resources

To use the CNF certification suite operator,
you'll have to create YAML files for the following resources:

1. ConfigMap:\
Contains the CNF certification configuration file
content under the `tnf_config.yaml` key.\
(see the [CNF Certification configuration description](https://test-network-function.github.io/cnf-certification-test/configuration/))

2. Secret:\
Contains the CNF preflight suite credentials
under the `preflight_dockerconfig.json` key.\
(see the [Preflight Integration description](https://test-network-function.github.io/cnf-certification-test/runtime-env/#disable-intrusive-tests))

3. CnfCertificationSuiteRun CR:\
Contains the following Spec fields, which have to be filled in:
- **labelsFilter**: Label filter used to select the CNF certification test suites to run.
- **logLevel**: Desired log level for the CNF certification test suite run.\
Log level options: "info", "debug", "warn", "error".
- **timeout**: Desired timeout for the CNF certification tests.
- **configMapName**: Name of the ConfigMap defined in step 1.
- **preflightSecretName**: Name of the preflight Secret
defined in step 2.
- **enableDataCollection**: Set to "true" to enable data collection,
or "false" otherwise.\
**Note:** The current operator version **doesn't** support
setting enableDataCollection to "true".

See a [sample CnfCertificationSuiteRun CR](https://github.com/greyerof/tnf-op/blob/main/config/samples/cnf-certifications_v1alpha1_cnfcertificationsuiterun.yaml); a combined sketch of all three resources is also shown after this list.

2. Apply resources into the cluster

After creating all of the YAML files for the required resources,
use the following commands to apply them to the cluster:

```sh
make run
oc apply -f /path/to/config/map.yaml
oc apply -f /path/to/preflight/secret.yaml
oc apply -f /path/to/cnfCertificationSuiteRun.yaml
```

**NOTE:** You can also run this in one step by running: `make install run`
**Note**: The same ConfigMap and Secret can be reused
by different CnfCertificationSuiteRun CRs.
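
As a rough, unofficial sketch of the three resources described above (all names and most values are illustrative; the Spec field names come from the list above, and the linked sample CR is the authoritative example):

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cnf-certsuite-config            # hypothetical name
  namespace: cnf-certsuite-operator
data:
  tnf_config.yaml: |
    # CNF Certification Suite configuration content goes here
---
apiVersion: v1
kind: Secret
metadata:
  name: cnf-preflight-secret            # hypothetical name
  namespace: cnf-certsuite-operator
stringData:
  preflight_dockerconfig.json: |
    { "auths": {} }
---
apiVersion: cnf-certifications.redhat.com/v1alpha1
kind: CnfCertificationSuiteRun
metadata:
  name: cnfcertificationsuiterun-sample
  namespace: cnf-certsuite-operator
spec:
  labelsFilter: observability
  logLevel: info
  timeout: 1h                           # timeout format is illustrative
  configMapName: cnf-certsuite-config
  preflightSecretName: cnf-preflight-secret
  enableDataCollection: "false"
```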

### Review results

If all of the resources were applied successfully, the CNF certification suites
will run on a newly created `pod` in the `cnf-certsuite-operator` namespace.
The pod's name has the form `cnf-job-run-N`:

<!-- markdownlint-disable -->
```sh
$ oc get pods -n cnf-certsuite-operator
NAME READY STATUS RESTARTS AGE
cnf-certsuite-controller-manager-6c6bb6d965-jslmd 2/2 Running 0 21h
cnf-job-run-1 0/2 Completed 0 21h
```
<!-- markdownlint-enable -->

Check whether the pod creation and the CNF certification suites run were successful
by inspecting the CnfCertificationSuiteRun CR's status.
In the successful case, expect to see the following status:

```sh
$ oc get cnfcertificationsuiteruns.cnf-certifications.redhat.com -n cnf-certsuite-operator
NAME AGE STATUS
cnfcertificationsuiterun-sample 50m CertSuiteFinished
```

When the pod is completed, a new `CnfCertificationSuiteReport` will be created
under the same namespace.
The CNF certification suite results will be stored in different fields of the report CR's Status:

- Results: For every test case, contains its result and logs.
If the result is "skipped" or "failed", it also contains the skip/failure reason.

See example:

<!-- markdownlint-disable -->
```yaml
status:
  results:
    - logs: |
        INFO [Feb 15 13:05:50.749] [check.go: 263] [observability-pod-disruption-budget] Running check (labels: [common observability-pod-disruption-budget observability])
        INFO [Feb 15 13:05:50.749] [suite.go: 193] [observability-pod-disruption-budget] Testing Deployment "deployment: test ns: tnf"
        INFO [Feb 15 13:05:50.749] [suite.go: 206] [observability-pod-disruption-budget] PDB "test-pdb-min" is valid for Deployment: "test"
        INFO [Feb 15 13:05:50.749] [suite.go: 224] [observability-pod-disruption-budget] Testing StatefulSet "statefulset: test ns: tnf"
        INFO [Feb 15 13:05:50.749] [suite.go: 237] [observability-pod-disruption-budget] PDB "test-pdb-max" is valid for StatefulSet: "test"
        INFO [Feb 15 13:05:50.749] [checksdb.go: 115] [observability-pod-disruption-budget] Recording result "PASSED", claimID: {Id:observability-pod-disruption-budget Suite:observability Tags:common}
      result: passed
      testCaseName: observability-pod-disruption-budget
    - logs: |
        INFO [Feb 15 13:05:50.723] [checksgroup.go: 83] [operator-install-source] Skipping check operator-install-source, reason: no matching labels
        INFO [Feb 15 13:05:50.723] [checksdb.go: 115] [operator-install-source] Recording result "SKIPPED", claimID: {Id:operator-install-source Suite:operator Tags:common}
      reason: no matching labels
      result: skipped
      testCaseName: operator-install-source
    - logs: |
        INFO [Feb 15 13:05:50.749] [checksgroup.go: 83] [affiliated-certification-helmchart-is-certified] Skipping check affiliated-certification-helmchart-is-certified, reason: no matching labels
        INFO [Feb 15 13:05:50.749] [checksdb.go: 115] [affiliated-certification-helmchart-is-certified] Recording result "SKIPPED", claimID: {Id:affiliated-certification-helmchart-is-certified Suite:affiliated-certification Tags:common}
      reason: no matching labels
      result: skipped
      testCaseName: affiliated-certification-helmchart-is-certified
```
<!-- markdownlint-enable -->

- Summary: Summarizes the total number of test cases by their results.
- Verdict: Specifies the overall result of the CNF certification suites run.\
Possible verdicts: "pass", "skip", "fail", "error".
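
As a purely illustrative sketch of how these fields could appear in the report's Status (the field names below are hypothetical, not taken from the operator's API; check the actual report CR on your cluster for the real layout):

```yaml
status:
  # hypothetical field names, for illustration only
  summary:
    passed: 1
    skipped: 2
    failed: 0
  verdict: pass
```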

Run the following command to verify that the report was created:

```sh
$ oc get cnfcertificationsuitereports.cnf-certifications.redhat.com -n cnf-certsuite-operator
NAME AGE
cnf-job-run-1-report 21h
```

To review the test results, describe the created
`CnfCertificationSuiteReport` by running the following command:

```sh
oc describe cnfcertificationsuitereports.cnf-certifications.redhat.com \
-n cnf-certsuite-operator <report's name>
```

### Uninstall CRDs

To delete the CRDs from the cluster:

```sh
make uninstall
```

### Undeploy controller

Undeploy the controller from the cluster:

```sh
make undeploy
```

**NOTE:** Run `make --help` for more information on all potential `make` targets