
universal-itengineer (Member)

Description

Fixed alerts:

  • alert: D8VirtualizationAPIPodIsNotReady
  • alert: D8InternalVirtualizationCDIDeploymentPodIsNotReady
  • alert: D8InternalVirtualizationCDIAPIServerPodIsNotReady
  • alert: D8InternalVirtualizationVirtAPIPodIsNotReady
  • alert: D8InternalVirtualizationVirtControllerPodIsNotReady
  • alert: D8InternalVirtualizationCDIOperatorPodIsNotReady
  • alert: D8VirtualizationControllerPodIsNotReady
  • alert: D8InternalVirtualizationVirtOperatorPodIsNotReady

Why do we need it, and what problem does it solve?

What is the expected result?

Checklist

  • The code is covered by unit tests.
  • e2e tests passed.
  • Documentation updated according to the changes.
  • Changes were tested in the Kubernetes cluster manually.

Changelog entries

section: observability
type: fix
summary: fix PodIsNotReady alerts


sourcery-ai bot commented Oct 9, 2025

Reviewer's Guide

This PR refactors the Prometheus alert rules for several PodIsNotReady alerts, replacing the aggregated min readiness checks with boolean expressions that join kube_pod_status_ready == 0 and kube_pod_status_phase == 1 via and on(namespace,pod), so the alerts fire only for pods in the Running or Succeeded phase.
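
Concretely, here is a representative before/after, taken from the cdi-deployment rule changed in this PR (the two expr keys are shown side by side for comparison, not as valid YAML):

```yaml
# Before: fires whenever the per-pod minimum of the readiness metric is not 1,
# including for pods in Pending or Failed phases, where "not ready" is expected.
expr: min by (pod) (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"cdi-deployment-.*"}) != 1

# After: fires only when a pod is not ready AND its phase is Running or Succeeded.
expr: |
  (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"cdi-deployment-.*"} == 0)
  and on (namespace,pod)
  (kube_pod_status_phase{namespace="d8-virtualization", phase=~"Running|Succeeded", pod=~"cdi-deployment-.*"} == 1)
```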

Flow diagram for updated PodIsNotReady alert logic

```mermaid
flowchart TD
    A["kube_pod_status_ready == 0 for pod"] --> B["kube_pod_status_phase == 1 for pod (phase=Running|Succeeded)"]
    B --> C["AND on (namespace, pod)"]
    C --> D["PodIsNotReady alert fires"]
```

File-Level Changes

Refactor PodIsNotReady alert expressions to use boolean readiness and phase metrics:

  • Replaced min by (pod) ... != 1 with a kube_pod_status_ready == 0 boolean comparison
  • Joined readiness and phase metrics via and on(namespace,pod) (see the matching-semantics sketch below)
  • Filtered kube_pod_status_phase by `phase=~"Running|Succeeded"`
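
For context, `and on (namespace,pod)` is PromQL set intersection with explicit matching labels: a sample from the left-hand vector is kept only if the right-hand vector contains a sample with the same (namespace, pod) pair; labels not listed in `on(...)`, such as `condition` and `phase`, are ignored for matching. A minimal sketch of the semantics (generic, without the pod-name filters used in the actual rules):

```promql
# Left side: per-pod "ready" condition metric, filtered to pods that are not ready.
# Right side: per-pod phase metric, filtered to pods in the Running or Succeeded phase.
# "and on (namespace, pod)" keeps a left-hand sample only when the same
# (namespace, pod) pair also appears on the right-hand side.
kube_pod_status_ready{condition="true"} == 0
and on (namespace, pod)
kube_pod_status_phase{phase=~"Running|Succeeded"} == 1
```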


sourcery-ai bot left a comment

Hey there - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:

## Individual Comments

### Comment 1
<location> `monitoring/prometheus-rules/internal-virtualization-cdi-apiservier.yaml:5-9` </location>
<code_context>
     - alert: D8InternalVirtualizationCDIAPIServerPodIsNotReady
-      expr: min by (pod) (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"cdi-apiserver-.*"}) != 1
+      expr: |
+        min by (pod) (
+        kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"cdi-apiserver-.*"}
+        and on(namespace,pod)
+        kube_pod_status_phase{phase="Succeeded|Running", namespace="d8-virtualization"}
+        ) != 1
       labels:
         severity_level: "6"
</code_context>

<issue_to_address>
**issue:** The use of 'Succeeded|Running' as a value for the 'phase' label may not match Prometheus label matching semantics.

Use 'phase=~"Succeeded|Running"' for regex matching to ensure correct selection of pods in either phase, consistent with other files.
</issue_to_address>
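
Applied to the rule above, the fix is to switch the equality matcher to a regex matcher; a sketch with only the operator changed:

```yaml
expr: |
  min by (pod) (
  kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"cdi-apiserver-.*"}
  and on(namespace,pod)
  kube_pod_status_phase{phase=~"Succeeded|Running", namespace="d8-virtualization"}
  ) != 1
```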

### Comment 2
<location> `monitoring/prometheus-rules/internal-virtualization-cdi-deployment.yaml:7` </location>
<code_context>
     - alert: D8InternalVirtualizationCDIDeploymentPodIsNotReady
-      expr: min by (pod) (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"cdi-deployment-.*"}) != 1
+      expr: |
+        (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"cdi-deployment-.*"} == 0)
+        and on (namespace,pod)
+        (kube_pod_status_phase{namespace="d8-virtualization", phase=~"Running|Succeeded", pod=~"cdi-deployment-.*"} == 1)
       labels:
         severity_level: "6"
</code_context>

<issue_to_address>
**suggestion:** The alert triggers only if exactly one pod matches the phase condition, which may miss cases with multiple pods.

Consider replacing '== 1' with '> 0' or a suitable aggregation to handle scenarios with multiple pods in the specified phases.

```suggestion
        (kube_pod_status_phase{namespace="d8-virtualization", phase=~"Running|Succeeded", pod=~"cdi-deployment-.*"} > 0)
```
</issue_to_address>

### Comment 3
<location> `monitoring/prometheus-rules/internal-virtualization-cdi-operator.yaml:4-7` </location>
<code_context>
     - alert: D8InternalVirtualizationCDIOperatorPodIsNotReady
-      expr: min by (pod) (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"cdi-operator-.*"}) != 1
+      expr: |
+        (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"cdi-operator-.*"} == 0)
+        and on (namespace,pod)
+        (kube_pod_status_phase{namespace="d8-virtualization", phase=~"Running|Succeeded", pod=~"cdi-operator-.*"} == 1)
       labels:
         severity_level: "6"
</code_context>

<issue_to_address>
**suggestion:** Consider using consistent aggregation for pod readiness and phase checks.

Aggregating over pods will ensure the alert works correctly when multiple pods match the pattern, preventing missed readiness or phase issues.

```suggestion
      expr: |
        (sum by (pod) (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"cdi-operator-.*"}) == 0)
        and on (namespace,pod)
        (sum by (pod) (kube_pod_status_phase{namespace="d8-virtualization", phase=~"Running|Succeeded", pod=~"cdi-operator-.*"}) == 1)
```
</issue_to_address>

### Comment 4
<location> `monitoring/prometheus-rules/internal-virtualization-virt-api.yaml:4-7` </location>
<code_context>
     - alert: D8InternalVirtualizationVirtAPIPodIsNotReady
-      expr: min by (pod) (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"virt-api-.*"}) != 1
+      expr: |
+        (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"virt-api-.*"} == 0)
+        and on (namespace,pod)
+        (kube_pod_status_phase{namespace="d8-virtualization", phase=~"Running|Succeeded", pod=~"virt-api-.*"} == 1)
       labels:
         severity_level: "6"
</code_context>

<issue_to_address>
**suggestion:** The alert logic may not handle multiple pods correctly due to equality checks.

Consider replacing strict equality checks with aggregation or range-based logic to ensure the alert accurately reflects pod readiness in deployments with multiple pods.

```suggestion
      expr: |
        (
          sum(
            kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"virt-api-.*"}
          ) by (namespace)
          == 0
        )
        and
        (
          sum(
            kube_pod_status_phase{namespace="d8-virtualization", phase=~"Running|Succeeded", pod=~"virt-api-.*"}
          ) by (namespace)
          > 0
        )
```
</issue_to_address>

### Comment 5
<location> `monitoring/prometheus-rules/internal-virtualization-virt-controller.yaml:4-7` </location>
<code_context>
     - alert: D8InternalVirtualizationVirtControllerPodIsNotReady
-      expr: min by (pod) (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"virt-controller-.*"}) != 1
+      expr: |
+        (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"virt-controller-.*"} == 0)
+        and on (namespace,pod)
+        (kube_pod_status_phase{namespace="d8-virtualization", phase=~"Running|Succeeded", pod=~"virt-controller-.*"} == 1)
       labels:
         severity_level: "6"
</code_context>

<issue_to_address>
**suggestion (bug_risk):** Equality checks for pod readiness and phase may not be robust for deployments with multiple pods.

Consider updating the logic to aggregate pod readiness, ensuring the alert triggers if any pod is not ready, rather than relying on equality checks.

```suggestion
      expr: |
        (
          sum by (namespace, pod) (kube_pod_status_phase{namespace="d8-virtualization", phase=~"Running|Succeeded", pod=~"virt-controller-.*"})
          >
          sum by (namespace, pod) (kube_pod_status_ready{condition="true", namespace="d8-virtualization", pod=~"virt-controller-.*"})
        )
```
</issue_to_address>



universal-itengineer marked this pull request as draft on October 9, 2025 12:21
Signed-off-by: Nikita Korolev <[email protected]>

fix phase cdi-apiserver

Signed-off-by: Nikita Korolev <[email protected]>
Isteb4k added this to the v1.1.1 milestone on Oct 14, 2025
nevermarine modified the milestones: v1.1.1, v1.2.0 on Oct 16, 2025