This operator powers OpenShift-based deployments of Ploigos, a DevSecOps ecosystem modeled after the US DoD Enterprise DevSecOps Reference Design (DEDSORD). Pipeline steps are implemented using the ploigos-step-runner, a Python-based abstraction layer equipped with step implementers that make your pipeline agnostic to the underlying tools and services.
Two APIs are offered:

- `PloigosPlatform`: an all-in-one resource for provisioning pre-wired infrastructure such as a CI tool, static code analysis server, artifact repository, and other services that support a DevSecOps pipeline.
- `PloigosPipeline`: a resource for creating an end-to-end pipeline for your application's source code.
- Create a `CatalogSource` to import the RedHatGov operator catalog:

  ```shell
  oc apply -f - << EOF
  apiVersion: operators.coreos.com/v1alpha1
  kind: CatalogSource
  metadata:
    name: redhatgov-operators
    namespace: openshift-marketplace
  spec:
    sourceType: grpc
    image: quay.io/redhatgov/operator-catalog:latest
    displayName: Red Hat NAPS Community Operators
    publisher: RedHatGov
  EOF
  ```
- Create a project for your pipeline tooling to live in:

  ```shell
  export PLOIGOS_PROJECT=devsecops
  oc new-project $PLOIGOS_PROJECT
  ```
- Ploigos is hungry - delete any `LimitRange` that might have been created from project templates:

  ```shell
  oc delete limitrange --all -n $PLOIGOS_PROJECT
  ```
- Create a new `OperatorGroup` to support installation into the `$PLOIGOS_PROJECT` namespace:

  ```shell
  oc apply -f - << EOF
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    namespace: $PLOIGOS_PROJECT
    name: $PLOIGOS_PROJECT-og
  spec:
    targetNamespaces:
      - $PLOIGOS_PROJECT
  EOF
  ```
- Install this operator into your namespace:

  ```shell
  oc apply -f - << EOF
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: ploigos-software-factory-operator
    namespace: $PLOIGOS_PROJECT
  spec:
    channel: alpha
    installPlanApproval: Automatic
    name: ploigos-software-factory-operator
    source: redhatgov-operators
    sourceNamespace: openshift-marketplace
  EOF
  ```
- Create a `PloigosPlatform` to spin up your infrastructure:

  ```shell
  oc apply -f - << EOF
  apiVersion: redhatgov.io/v1alpha1
  kind: PloigosPlatform
  metadata:
    name: ploigosplatform
  spec:
    ploigosPlatform:
      services:
        continuousIntegration:
          jenkins:
            enabled: true
        sourceControl:
          gitea:
            enabled: true
        artifactRepository:
          nexusArtifacts:
            enabled: true
        staticCodeAnalysis:
          sonarqube:
            enabled: true
        continuousDeployment:
          argocd:
            enabled: true
        uat:
          selenium:
            enabled: true
        containerRegistry:
          nexusContainers:
            enabled: true
  EOF
  ```
- Then create a `PloigosPipeline` instance for our reference application. If you want to use your own application here, take a look at the ploigos-onboarding-demo to see how to wire it up.

  ```shell
  oc apply -f - << EOF
  apiVersion: redhatgov.io/v1alpha1
  kind: PloigosPipeline
  metadata:
    name: ploigospipeline-reference-app
  spec:
    appName: ref-quarkus-mvn
    appRepo:
      destinationRepoName: reference-quarkus-mvn
      sourceUrl: >-
        https://github.com/ploigos-reference-apps/reference-quarkus-mvn.git
    autoStartPipeline: true
    helmRepo:
      destinationRepoName: reference-quarkus-mvn-cloud-resources_workflow-typical
      sourceUrl: >-
        https://github.com/ploigos-reference-apps/reference-cloud-resources_operator.git
    serviceName: fruit
  EOF
  ```
- Watch the magic happen - pop into Jenkins and check out the pipeline:

  ```shell
  oc get route jenkins --template "https://{{.spec.host}}"
  ```
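If you'd also like to follow along from the command line, a sketch like the following should work, assuming the default names used in the steps above (the `ploigospipeline` resource name resolves through the CRD installed by the operator):

```shell
# Watch the platform services come up in the tooling project
oc get pods -n $PLOIGOS_PROJECT -w

# Inspect the pipeline custom resource created above
oc get ploigospipeline ploigospipeline-reference-app -n $PLOIGOS_PROJECT -o yaml
```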
You can specify a selection of services in the `ploigosPlatform.services` property of your `PloigosPlatform` object, leveraging either Fully Managed or External Services.

Fully Managed Services are deployed and configured by the Ploigos Software Factory Operator. To use a fully managed implementation for a given workflow function, add it to your `PloigosPlatform` CustomResource like this:
```yaml
ploigosPlatform:
  services:
    continuousIntegration:
      jenkins:
        enabled: true
```
To use a service that already exists, you must supply connection properties so the operator can configure it. This is done by adding the required options to the `externalProperties` sub-object. For example:
```yaml
ploigosPlatform:
  services:
    continuousIntegration:
      jenkins:
        enabled: true
        externalProperties:
          url: http://jenkins.example.com
          token: 12345678
```
Note that the applicable `externalProperties` differ depending on the service you're configuring. External Services can also be configured without the use of this operator by using the Ploigos Service Configs Collection directly.

See below for a list of supported implementations for each service, along with their applicable External Properties; a combined example follows the table:
| Service | Required? | Supported Implementations | External Properties |
| --- | --- | --- | --- |
| Single Sign-On (SSO) | | `rhsso` (Red Hat Single Sign-On) | (Not Supported) |
| Continuous Integration | ✅ | `jenkins` | |
| | | `gitlabCi` | |
| | | `tekton` | (Not Supported) |
| Source Control | ✅ | `gitea` | |
| | | `gitlab` | |
| Artifact Repository | ✅ | `nexusArtifacts` | |
| Static Code Analysis | ✅ | `sonarqube` | |
| Container Registry | ✅ | `nexusContainers` | |
| | | `quay` | |
| Continuous Deployment | ✅ | `argocd` | |
| User Acceptance Testing | ✅ | `selenium` | |
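As a sketch of how fully managed and external services fit together, the hypothetical `PloigosPlatform` below provisions a fully managed Gitea instance while pointing at an existing Jenkins server. The `url` and `token` properties mirror the earlier example; property names for other external services may differ:

```yaml
apiVersion: redhatgov.io/v1alpha1
kind: PloigosPlatform
metadata:
  name: ploigosplatform
spec:
  ploigosPlatform:
    services:
      sourceControl:
        gitea:
          enabled: true            # fully managed by the operator
      continuousIntegration:
        jenkins:
          enabled: true
          externalProperties:      # connection details for a Jenkins you already run
            url: http://jenkins.example.com
            token: 12345678
```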
The default `PloigosPlatform` deployment assumes that your OpenShift Router is equipped with a certificate signed by a well-known certificate authority. If your certificates are signed by a private CA instead, you can provide the name of a `ConfigMap` which holds your trusted CA bundle. The `ConfigMap` should have a single key named `ca-bundle.crt`, whose value is a collection of CA certificates. If the provided `ConfigMap` exists, it is used as-is. Otherwise, it is generated with the label `config.openshift.io/inject-trusted-cabundle=true` and populated by the Cluster Network Operator. For example:
```yaml
apiVersion: redhatgov.io/v1alpha1
kind: PloigosPlatform
metadata:
  name: ploigosplatform
spec:
  ploigosPlatform:
    tls:
      trustBundleConfigMap: trustedcabundle
```
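If you prefer to supply the bundle yourself rather than letting it be generated, a `ConfigMap` shaped like the following sketch should satisfy the expected layout; the name `trustedcabundle` matches the example above, and the certificate content is a placeholder:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: trustedcabundle
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    ...your private CA certificate(s) go here...
    -----END CERTIFICATE-----
```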
If you are using self-signed certs, but configuring your own private CA is for some reason not an option, you can instead disable TLS verification. This is not recommended, because it is less secure. To disable TLS verification, update your `PloigosPlatform` CR like this:
```yaml
apiVersion: redhatgov.io/v1alpha1
kind: PloigosPlatform
metadata:
  name: ploigosplatform
spec:
  ploigosPlatform:
    tls:
      verify: false
```
When using `tekton` as a `continuousIntegration` service, cluster and `Pipeline` assets are deployed using Helm charts served from the Helm repository specified by `ploigosPlatform.helmRepository`. This is particularly useful to override when operating in disconnected environments.
```yaml
apiVersion: redhatgov.io/v1alpha1
kind: PloigosPlatform
metadata:
  name: ploigosplatform
spec:
  ploigosPlatform:
    helmRepository: https://my.private.repo/charts
```
There is a script, `hack/operate.sh`, which will download the prerequisites (operator-sdk, etc.), build the operator artifacts from operator-sdk defaults, package and push the operator container image, deploy the artifacts to a Kubernetes cluster, and create a `kind: PloigosPlatform` CR to deploy an instance. Use the help page to see what the various options do, but for the most part, if you want to deploy a Ploigos Platform to a cluster directly from this repo, you can run `hack/operate.sh -d`.
Before running the script, make sure to update the location of the container image to a repository you have access to. If you decide to build your own container image for the operator, make sure to update `hack/operate.conf` with the updated container image location and add the `-p` flag to `operate.sh`.
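As a sketch, the build-your-own-image workflow might look like this; the registry and repository below are placeholders for one you control:

```shell
# Edit hack/operate.conf so that IMG points at your own repository, e.g.
#   IMG=registry.example.com/myorg/ploigos-operator:latest   (hypothetical location)

# Build and push your image (-p), then deploy the operator from this repo (-d)
hack/operate.sh -d -p
```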
The installation of the CustomResourceDefinition and ClusterRole requires cluster-admin privileges. After that, regular users with `admin` privileges on their projects (which is automatically granted to the user who creates a project) can provision the Ploigos Software Factory Operator in their projects and deploy PloigosPlatforms using the `ploigosplatform.redhatgov.io` Custom Resource. If you've installed the operator from the RedHatGov Operator Catalog Index on an OLM-enabled cluster, the Ploigos Software Factory Operator can be installed from the OperatorHub interface of the console.
Perform the following tasks as cluster-admin:
- Deploy the CustomResourceDefinition, ClusterRole, ClusterRoleBinding, ServiceAccount, and Operator Deployment:

  ```shell
  hack/operate.sh
  ```

- Once the Operator pod is running, the Operator is ready to start creating Ploigos Platforms.

- To deploy the above, and also one of the `config/samples/redhatgov_v1alpha1_ploigosplatform*.yaml` example CustomResources:

  ```shell
  hack/operate.sh --deploy-cr
  ```

- To install the operator with RBAC scoped to a specific namespace, deploying a Role and RoleBinding instead of a ClusterRole and ClusterRoleBinding:

  ```shell
  hack/operate.sh --overlay=namespaced --namespace=mynamespace
  ```
In case you wish to uninstall the Ploigos Software Factory Operator, simply delete the operator and its resources with:

```shell
hack/operate.sh -r
```

Uninstallation of OLM-based operators can be handled through the UI, or by deleting the `Subscription`.
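For example, if you subscribed using the manifest from the installation steps above, removing it from the command line might look like this:

```shell
# Remove the Subscription created during installation
oc delete subscription ploigos-software-factory-operator -n $PLOIGOS_PROJECT
```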
The Operator SDK makes heavy use of Kustomize for development and installation, but intends bundles to be generated for use in an operator catalog. This enables the Operator Lifecycle Manager, deployed onto your cluster, to install and configure operators with a simple `kind: Subscription` object, instead of a large collection of manifests.
If you are using a `registries.conf` change and/or an ImageContentSourcePolicy mirror that covers quay.io/redhatgov images, you should not have to change anything; a sketch of such a mirror policy follows the image list below. To change the image sources for all necessary images and deploy the operator without such a policy, you need to have the following images hosted in a container repository on your disconnected network:
- `quay.io/redhatgov/ploigos-operator:latest`
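If you take the mirror route instead, an ImageContentSourcePolicy along these lines is one way to redirect pulls of the quay.io/redhatgov images; the mirror registry below is a placeholder for your internal registry:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: redhatgov-mirror
spec:
  repositoryDigestMirrors:
    - source: quay.io/redhatgov
      mirrors:
        - registry.example.com/redhatgov   # hypothetical internal mirror
```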
If you intend on using `hack/operate.sh`, it expects you to be in a development environment, so operator installation from this script expects access to the internet. This comes with one extra concern: if `kustomize` isn't in your path, the script tries to download it from the internet and save it locally into a `.gitignore`'d folder. If you intend on using `hack/operate.sh` to install the operator, you should also bring `kustomize` and place it in the `$PATH` of the user who will be running the script. Additionally, in order to install the operator with `hack/operate.sh`, you'll need to make the following change:
- `hack/operate.conf`: `IMG` should point to the ploigos-operator image in your environment
Please see the Contributing Documentation.
Please see the Lifecycle Documentation.