Setup OpenShift logging to capture all logs
Closes #409

Co-authored-by: Alexander Schwartz <[email protected]>
ryanemerson and ahus1 authored Jul 5, 2023
1 parent 655fc55 commit 16686ff
Showing 7 changed files with 165 additions and 0 deletions.
(The three remaining changed files are binary images and cannot be displayed in the diff view.)
18 changes: 18 additions & 0 deletions doc/kubernetes/modules/ROOT/pages/installation-openshift.adoc
Original file line number Diff line number Diff line change
Expand Up @@ -67,6 +67,24 @@ http://grafana.apps.**<domain name>**
NOTE: This is a shared resource for all users using the OpenShift cluster.
While the next section describes how to install multiple Keycloaks in different namespaces, this doesn't apply to the Grafana instance.

== OpenShift Logging

OpenShift logging is enabled by default when the cluster is provisioned with this project's scripts.
All application and infrastructure pod logs are stored in a non-replicated Elasticsearch instance in the `openshift-logging` namespace.

Logs can be queried in the Kibana UI, which can be accessed via the *Application Launcher*
image:installation-openshift/application-launcher.png[]
-> *Logging* in the OpenShift UI:

image::installation-openshift/application-launcher-logs.png[]

In addition, when viewing the logs of a pod, use the *Show in Kibana* link to search the logs of that specific pod:

image::installation-openshift/show-in-kibana.png[]

On initial login to Kibana, create an index pattern `*` with the timestamp field `@timestamp` to be able to query logs.
See the https://docs.openshift.com/container-platform/4.13/logging/cluster-logging-visualizer.html[OpenShift docs] for more details.
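The index pattern can also be created programmatically via Kibana's saved-objects API instead of clicking through the UI. This is a sketch only: the API path follows upstream Kibana documentation, `KIBANA_URL` is a placeholder you must replace with your cluster's Kibana route (look it up with `oc -n openshift-logging get route kibana`), and the `curl` call is left commented out because it requires a live cluster:

```shell
# Sketch: create the "*" index pattern via Kibana's saved-objects API.
# KIBANA_URL below is an assumption -- substitute your cluster's route.
KIBANA_URL="https://kibana-openshift-logging.apps.example.com"
payload='{"attributes":{"title":"*","timeFieldName":"@timestamp"}}'
# curl -k -X POST "$KIBANA_URL/api/saved_objects/index-pattern" \
#      -H "kbn-xsrf: true" -H "Content-Type: application/json" \
#      -H "Authorization: Bearer $(oc whoami -t)" \
#      -d "$payload"
echo "$payload"
```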

[[sharing-cluster-with-multiple-users]]
== Sharing one OpenShift cluster with other users

Expand Down
2 changes: 2 additions & 0 deletions provision/aws/rosa_create_cluster.sh
Original file line number Diff line number Diff line change
Expand Up @@ -95,3 +95,5 @@ rosa create machinepool -c "${CLUSTER_NAME}" --instance-type m5.4xlarge --max-re
# cryostat operator depends on certmanager operator
./rosa_install_certmanager_operator.sh
./rosa_install_cryotstat_operator.sh

./rosa_install_openshift_logging.sh
106 changes: 106 additions & 0 deletions provision/aws/rosa_install_openshift_logging.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,106 @@
#!/bin/bash

set -eo pipefail

if [[ "$RUNNER_DEBUG" == "1" ]]; then
set -x
fi

# Wait for k8s resource to exist. See: https://github.com/kubernetes/kubernetes/issues/83242
waitFor() {
  xtrace=$(set +o|grep xtrace); set +x
  local ns=${1?namespace is required}; shift
  local type=${1?type is required}; shift

  echo "Waiting for $type $*"
  until oc -n "$ns" get "$type" "$@" -o=jsonpath='{.items[0].metadata.name}' >/dev/null 2>&1; do
    echo "Waiting for $type $*"
    sleep 1
  done
  eval "$xtrace"
}

oc apply -f - << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-operators-redhat
spec:
  channel: stable
  installPlanApproval: Automatic
  name: elasticsearch-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable
  installPlanApproval: Automatic
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

waitFor default crd clusterloggings.logging.openshift.io
oc wait --for condition=established --timeout=60s crd/clusterloggings.logging.openshift.io

oc apply -f - << EOF
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      fluentd: {}
      type: fluentd
  logStore:
    elasticsearch:
      nodeCount: 1
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      resources:
        limits:
          memory: 4Gi
        requests:
          memory: 4Gi
      storage:
        size: 200G
    retentionPolicy:
      application:
        maxAge: 1d
      audit:
        maxAge: 7d
      infra:
        maxAge: 7d
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      replicas: 1
    type: kibana
EOF

# Enable the logging view plugin in the OpenShift console
# (console.operator is cluster-scoped, so no namespace flag is needed)
oc patch console.operator cluster --type json -p '[{"op": "add", "path": "/spec/plugins", "value": ["logging-view-plugin"]}]'
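The `waitFor` helper in the script above saves and restores the shell's xtrace option so its polling loop stays quiet even when the script was started with `RUNNER_DEBUG=1` (which turns on `set -x`). The idiom in isolation, as a runnable sketch:

```shell
#!/bin/bash
# Capture the current xtrace setting as a restorable command string.
xtrace=$(set +o | grep xtrace)   # yields "set +o xtrace" (or "set -o xtrace")
set +x                           # silence tracing for a noisy section
echo "doing quiet work"
eval "$xtrace"                   # restore whatever tracing state was in effect
```

Because `set +o` prints the commands needed to recreate the current option state, `eval`-ing the captured line restores tracing exactly as the caller had it, rather than unconditionally re-enabling it.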
39 changes: 39 additions & 0 deletions provision/openshift/monitoring/templates/openshift-logging.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,39 @@
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      fluentd: {}
      type: fluentd
  logStore:
    elasticsearch:
      nodeCount: 1
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      resources:
        limits:
          memory: 4Gi
        requests:
          memory: 4Gi
      storage:
        size: 200G
    retentionPolicy:
      application:
        maxAge: 1d
      audit:
        maxAge: 7d
      infra:
        maxAge: 7d
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      replicas: 1
    type: kibana
