diff --git a/docs/en/observability/categorize-logs.asciidoc b/docs/en/observability/categorize-logs.asciidoc index 9e13aef5b8..4162f3d128 100644 --- a/docs/en/observability/categorize-logs.asciidoc +++ b/docs/en/observability/categorize-logs.asciidoc @@ -6,8 +6,8 @@ log messages are the same or very similar, so classifying them can reduce millions of log lines into just a few categories. Within the {logs-app}, the *Categories* page enables you to identify patterns in -your log events quickly. Instead of manually identifying similar logs, the logs -categorization view lists log events that have been grouped based on their +your log events quickly. Instead of manually identifying similar logs, the logs +categorization view lists log events that have been grouped based on their messages and formats so that you can take action quicker. NOTE: This feature makes use of {ml} {anomaly-jobs}. To set up jobs, you must @@ -25,47 +25,44 @@ more details, refer to {ml-docs}/setup.html[Set up {ml-features}]. Create a {ml} job to categorize log messages automatically. {ml-cap} observes the static parts of the message, clusters similar messages, classifies them into -message categories, and detects unusually high message counts in the categories. - -[role="screenshot"] -image::images/log-create-categorization-job.jpg[Configure log categorization job] +message categories, and detects unusually high message counts in the categories. // lint ignore ml -1. Select *Categories*, and you are prompted to use {ml} to create +1. Select *Categories*, and you are prompted to use {ml} to create log rate categorizations. -2. Choose a time range for the {ml} analysis. By default, the {ml} job analyzes +2. Choose a time range for the {ml} analysis. By default, the {ml} job analyzes log messages no older than four weeks and continues indefinitely. -3. Add the indices that contain the logs you want to examine. -4. Click *Create ML job*. The job is created, and it starts to run. It takes a few - minutes for the {ml} robots to collect the necessary data. After the job +3. Add the indices that contain the logs you want to examine. By default, {ml-cap} analyzes messages in all log indices that match the patterns set in the *log sources* advanced setting. Update this setting by going to *Stack Management* → *Advanced Settings* and searching for _log sources_. +4. Click *Create ML job*. The job is created, and it starts to run. It takes a few + minutes for the {ml} robots to collect the necessary data. After the job has processed the data, you can view the results. [discrete] [[analyze-log-categories]] == Analyze log categories -The *Categories* page lists all the log categories from the selected indices. -You can filter the categories by indices. The screenshot below shows the +The *Categories* page lists all the log categories from the selected indices. +You can filter the categories by indices. The screenshot below shows the categories from the `elastic.agent` log. [role="screenshot"] image::images/log-categories.jpg[Log categories] -The category row contains the following information: +The category row contains the following information: * message count: shows how many messages belong to the given category. * trend: indicates how the occurrence of the messages changes over time. -* category name: it is the name of the category and is derived from the message +* category name: the name of the category, derived from the message text. * datasets: the name of the datasets where the categories are present.
* maximum anomaly score: the highest anomaly score in the category. -To view a log message under a particular category, click -the arrow at the end of the row. To further examine a message, it +To view a log message under a particular category, click +the arrow at the end of the row. To examine a message further, it can be viewed in the corresponding log event on the *Stream* page or displayed in its context. [role="screenshot"] image::images/log-opened.png[Opened log category] For more information about categorization, go to -{ml-docs}/ml-configuring-categories.html[Detecting anomalous categories of data]. \ No newline at end of file +{ml-docs}/ml-configuring-categories.html[Detecting anomalous categories of data]. \ No newline at end of file diff --git a/docs/en/observability/configure-logs-sources.asciidoc b/docs/en/observability/configure-logs-sources.asciidoc index 0e3a712fad..84ca27ac39 100644 --- a/docs/en/observability/configure-logs-sources.asciidoc +++ b/docs/en/observability/configure-logs-sources.asciidoc @@ -4,9 +4,8 @@ Specify the source configuration for logs in the {kibana-ref}/logs-ui-settings-kb.html[{logs-app} settings] in the {kibana-ref}/settings.html[{kib} configuration file]. -By default, the configuration uses the `filebeat-*` index pattern to query the data. -The configuration also defines field settings for things like timestamps -and container names, and the default columns displayed in the logs stream. +By default, the configuration uses the index patterns stored in the {kib} *log sources* advanced setting to query the data. +The configuration also defines the default columns displayed in the logs stream. If your logs have custom index patterns, use non-default field settings, or contain parsed fields that you want to expose as individual columns, you can override the @@ -20,32 +19,22 @@ default configuration settings. + . Click *Settings*. + -|=== +|=== -| *Name* | Name of the source configuration. +| *Name* | Name of the source configuration. -| *{ipm-cap}* | {kib} index patterns or index name patterns in the {es} indices -to read log data from. - -Each log source now integrates with {kib} index patterns which support creating and -querying {kibana-ref}/managing-data-views.html[runtime fields]. You can continue -to use log sources configured to use an index name pattern, such as `filebeat-*`, -instead of a {kib} index pattern. However, some features like those depending on -runtime fields may not be available. +| *{kib} log sources advanced setting* | Use index patterns stored in the {kib} *log sources* advanced setting, which provides a centralized place to store and query log index patterns. +Update this setting by going to *Stack Management* → *Advanced Settings* and searching for _log sources_. -Instead of entering an index pattern name, -click *Use {kib} index patterns* and select the `filebeat-*` log index pattern. - -| *{data-source-cap}* | This is a new configuration option that can be used -instead of index pattern. The Logs UI can now integrate with {data-sources} to +| *{data-source-cap} (deprecated)* | The Logs UI integrates with {data-sources} to configure the used indices by clicking *Use {data-sources}*. -| *Fields* | Configuring fields input has been deprecated. You should adjust your indexing using the -<>, which use the {ecs-ref}/index.html[Elastic Common Schema (ECS) specification]. +| *Log indices (deprecated)* | {kib} index patterns or index name patterns in the {es} indices +to read log data from.
| *Log columns* | Columns that are displayed in the logs *Stream* page. -|=== +|=== + . When you have completed your changes, click *Apply*. @@ -63,16 +52,16 @@ with other data source configurations. By default, the *Stream* page within the {logs-app} displays the following columns. -|=== +|=== -| *Timestamp* | The timestamp of the log entry from the `timestamp` field. +| *Timestamp* | The timestamp of the log entry from the `timestamp` field. | *Message* | The message extracted from the document. The content of this field depends on the type of log message. If no special log message type is detected, the {ecs-ref}/ecs-base.html[Elastic Common Schema (ECS)] base field, `message`, is used. -|=== +|=== 1. To add a new column to the logs stream, select *Settings > Add column*. 2. In the list of available fields, select the field you want to add. diff --git a/docs/en/observability/explore-logs.asciidoc b/docs/en/observability/explore-logs.asciidoc index a8522e3e8a..f2ffdaa7cb 100644 --- a/docs/en/observability/explore-logs.asciidoc +++ b/docs/en/observability/explore-logs.asciidoc @@ -22,7 +22,9 @@ Viewing data in Logs Explorer requires `read` privileges for *Discover* and *Int [[find-your-logs]] == Find your logs -By default, Logs Explorer shows all of your logs. +By default, Logs Explorer shows all of your logs, according to the index patterns set in the *log sources* advanced setting. +Update this setting by going to *Stack Management* → *Advanced Settings* and searching for _log sources_. + If you need to focus on logs from a specific integration, select the integration from the logs menu: [role="screenshot"] diff --git a/docs/en/observability/inspect-log-anomalies.asciidoc b/docs/en/observability/inspect-log-anomalies.asciidoc index 8790f716bc..985b586086 100644 --- a/docs/en/observability/inspect-log-anomalies.asciidoc +++ b/docs/en/observability/inspect-log-anomalies.asciidoc @@ -35,7 +35,7 @@ Create a {ml} job to detect anomalous log entry rates automatically. 1. Select *Anomalies*, and you'll be prompted to create a {ml} job which will carry out the log rate analysis. 2. Choose a time range for the {ml} analysis. -3. Add the Indices that contain the logs you want to analyze. +3. Add the indices that contain the logs you want to examine. By default, {ml-cap} analyzes messages in all log indices that match the patterns set in the *log sources* advanced setting. Update this setting by going to *Stack Management* → *Advanced Settings* and searching for _log sources_. 4. Click *Create {ml-init} job*. 5. You're now ready to explore your log partitions.
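All four files in this diff direct readers to update the *log sources* advanced setting through the UI. For anyone verifying these changes against a running stack, a scripted version of that step can save repeated clicking. The sketch below is an illustration under stated assumptions, not part of the documented workflow: it assumes a local {kib} at `http://localhost:5601`, basic-auth credentials, and `observability:logSources` as the underlying setting key — confirm the key and the expected value format for your {kib} version before relying on it.

[source,python]
----
import requests

# Assumptions, not taken from this diff: a local Kibana instance, basic-auth
# credentials, and "observability:logSources" as the advanced-setting key.
KIBANA_URL = "http://localhost:5601"
AUTH = ("elastic", "changeme")

# Kibana's advanced-settings endpoint takes a {"changes": {...}} body, and
# write requests must carry the kbn-xsrf header. Depending on version, the
# setting may expect a comma-separated string or an array of index patterns.
response = requests.post(
    f"{KIBANA_URL}/api/kibana/settings",
    auth=AUTH,
    headers={"kbn-xsrf": "true", "Content-Type": "application/json"},
    json={"changes": {"observability:logSources": "logs-*,filebeat-*"}},
)
response.raise_for_status()
print(response.json())  # echoes the updated settings on success
----

The UI path described in the text (*Stack Management* → *Advanced Settings*, search for _log sources_) remains the canonical route; the script is only a convenience for repeated test runs.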