## Aggregated Logging

> OpenShift's built-in logging is deployed as an operator and uses LokiStack as the log store. By default it collects everything containers write to standard out, so no logging needs to be configured explicitly in the application. Logs are gathered by a collector running on each node and forwarded to LokiStack, where they are indexed as JSON in a time series. OpenShift provides a built-in log visualisation UI, and you can also use an external Grafana.
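Each collected line is forwarded as a structured JSON record. The exact schema depends on the collector configuration, but a record for an application log line looks roughly like the sketch below; the field names match the labels used in the queries later in this exercise, while the values are purely illustrative:

```json
{
  "@timestamp": "2023-01-01T12:00:00.000Z",
  "log_type": "application",
  "kubernetes_namespace_name": "my-team-test",
  "kubernetes_pod_name": "pet-battle-5d4f9c-abcde",
  "message": "app started"
}
```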

1. Observe logs from any given container:

By default, these logs are not stored in a database, but there are a number of reasons to store them (e.g. troubleshooting, legal obligations...).

2. OpenShift provides a great way to collect logs across services: anything written to `STDOUT` or `STDERR` is collected and added to LokiStack. This makes indexing and querying logs very easy. Let's take a look at the OpenShift Logs UI now.
7. Let's filter the information: look for the logs of the pet-battle apps running in the test namespace. Click `Show Query`, paste the below into the query bar and then hit `Run Query`.

   ```bash
   { log_type="application", kubernetes_pod_name=~"pet-battle-.*", kubernetes_namespace_name="<TEAM_NAME>-test" }
   ```
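The query language here is LogQL, Loki's query language: the curly braces form a stream selector that matches on indexed labels, where `=` is an exact match and `=~` is a regular-expression match. As an illustrative variation (assuming the same labels exist on your streams), you could drop the pod matcher and instead filter line content, e.g. surfacing only lines containing `ERROR` across the whole namespace:

   ```bash
   { log_type="application", kubernetes_namespace_name="<TEAM_NAME>-test" } |= `ERROR`
   ```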

8. Container logs are ephemeral, so once a container dies you'd lose its logs unless they're aggregated and stored somewhere. Let's generate some messages and query them from the UI. Connect to the pod via rsh and generate logs.

   ```bash
   oc project ${TEAM_NAME}-test
   ```

9. Back in the Logs UI we can filter and find these messages with another query:

   ```bash
   { log_type="application", kubernetes_pod_name=~".*mongodb.*", kubernetes_namespace_name="<TEAM_NAME>-test" } |= `🦄🦄🦄🦄` | json
   ```
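In that query, the `|=` stage acts like a literal `grep` over the selected streams, and `| json` then parses each line's JSON fields so later stages can filter on them. A rough local analogy of the line-filter stage, in plain POSIX shell with nothing OpenShift-specific:

```shell
# Three sample log lines; keep only the one containing the unicorn marker,
# just as the LogQL line filter keeps matching lines from the stream.
printf '%s\n' 'db starting' '🦄🦄🦄🦄 marker line' 'db ready' | grep '🦄🦄🦄🦄'
# -> prints: 🦄🦄🦄🦄 marker line
```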