
OCPBUGS-14346: Fix when DNS operator reports Degraded #373

Open · candita wants to merge 2 commits into master from OCPBUGS-14346-progressingDegraded

Conversation

@candita commented Jul 21, 2023

Fix when the DNS operator reports Degraded. Incorporate expected conditions and a grace period that allows the operator to be faulty for a tolerated duration (40s) before transitioning to Degraded. Don't allow the cluster operator status to be Progressing while Degraded.

Add packages like those used in the ingress controller to compare expected conditions and to use retryable errors.

Use the same heuristics for the node-resolver pod count as for the DNS pod count.

Add a unit test for computing the Degraded condition. Fix unit tests that expect Degraded to be true while Progressing is true, making sure that some of them observe a sense of time by adding variable previous conditions.
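
To make the grace-period logic concrete, here is a minimal, stdlib-only Go sketch. It is an illustration, not the PR's actual code: the condition type, computeDegraded, and the faultSince parameter are hypothetical; only the 40s tolerance and the rule that Degraded is suppressed while Progressing come from the description above.

package main

import (
	"fmt"
	"time"
)

// condition is a pared-down stand-in for an operator status condition
// (hypothetical; the real controller uses the OpenShift operator API types).
type condition struct {
	Type               string
	Status             string // "True" or "False"
	LastTransitionTime time.Time
}

// degradedGracePeriod is the tolerated fault duration from the PR
// description: the operator may be faulty for 40s before going Degraded.
const degradedGracePeriod = 40 * time.Second

// computeDegraded returns the new Degraded condition. faultSince is the
// time the current fault was first observed (zero when healthy). A fault
// yields Degraded=True only once it has outlasted the grace period, and
// Degraded is suppressed entirely while the operator is Progressing.
func computeDegraded(prev condition, faultSince time.Time, progressing bool, now time.Time) condition {
	faulty := !faultSince.IsZero()
	status := "False"
	if faulty && !progressing && now.Sub(faultSince) >= degradedGracePeriod {
		status = "True"
	}
	if prev.Status == status {
		return prev // no transition; keep the original timestamp
	}
	return condition{Type: "Degraded", Status: status, LastTransitionTime: now}
}

func main() {
	now := time.Now()
	prev := condition{Type: "Degraded", Status: "False", LastTransitionTime: now.Add(-time.Hour)}

	// Faulty for 10s: still inside the 40s grace period, so not Degraded.
	fmt.Println(computeDegraded(prev, now.Add(-10*time.Second), false, now).Status) // False

	// Faulty for 60s: the grace period has passed.
	fmt.Println(computeDegraded(prev, now.Add(-60*time.Second), false, now).Status) // True

	// Faulty for 60s but Progressing: Degraded stays false.
	fmt.Println(computeDegraded(prev, now.Add(-60*time.Second), true, now).Status) // False
}

In the real controller the fault signal would come from comparing the observed DaemonSet state against the expected conditions, with a retryable-error requeue to re-evaluate once the grace period expires; this sketch only shows the time-based transition rule.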

@candita changed the title from "OCPBUGS-14346 Fix when DNS operator reports Degraded." to "OCPBUGS-14346: Fix when DNS operator reports Degraded." Jul 21, 2023
@openshift-ci-robot added the jira/severity-moderate and jira/valid-reference labels Jul 21, 2023
@openshift-ci-robot commented:

@candita: This pull request references Jira Issue OCPBUGS-14346, which is invalid:

  • expected the bug to target the "4.14.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

The DNS DaemonSet had maxUnavailable=0, so maxUnavailable was removed from the Degraded-condition computation. If more pods are still desired than are available, only mark Degraded=true once the waiting period has passed. Update unit tests, making sure they observe a sense of time by adding variable previous conditions.

When computing the Degraded condition in the DNS controller, first check whether an upgrade is in progress and, if so, don't mark Degraded=true. Add a unit test for computing the Degraded condition.


@openshift-ci-robot added the jira/invalid-bug label Jul 21, 2023
openshift-ci bot requested review from frobware and Miciah July 21, 2023 00:09
openshift-ci bot commented Jul 21, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from candita. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@candita force-pushed the OCPBUGS-14346-progressingDegraded branch from e077885 to e6cc811 on July 21, 2023 00:35
@candita commented Jul 21, 2023

/jira refresh

@openshift-ci-robot commented:

@candita: This pull request references Jira Issue OCPBUGS-14346, which is invalid:

  • expected the bug to target the "4.14.0" version, but it targets "4.14" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

/jira refresh


@candita commented Jul 21, 2023

/jira refresh

@openshift-ci-robot added the jira/valid-bug label and removed the jira/invalid-bug label Jul 21, 2023
@openshift-ci-robot commented:

@candita: This pull request references Jira Issue OCPBUGS-14346, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.14.0) matches configured target version for branch (4.14.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @melvinjoseph86

In response to this:

/jira refresh


@candita force-pushed the OCPBUGS-14346-progressingDegraded branch 5 times, most recently from ad5a5f1 to bfd73b8 on July 25, 2023 22:18
@openshift-ci-robot commented:

@candita: This pull request references Jira Issue OCPBUGS-14346, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.14.0) matches configured target version for branch (4.14.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @melvinjoseph86

In response to this:

The DNS DaemonSet is configured with the rolling-update strategy maxUnavailable=0, so maxUnavailable was removed from the Degraded-condition computation, and the maxUnavailable validation was removed as well. If more pods are still desired than are available, only mark Degraded=true once the waiting period has passed. Update unit tests, making sure they observe a sense of time by adding variable previous conditions.

When computing the Degraded condition in the DNS controller, first check whether an upgrade is in progress and, if so, don't mark Degraded=true. Also, mark Degraded=true only if the condition hasn't recently transitioned. Add a unit test for computing the Degraded condition.


@candita commented Jul 26, 2023

/assign @gcs278

@candita force-pushed the OCPBUGS-14346-progressingDegraded branch from bfd73b8 to 950992b on July 26, 2023 21:29
@gcs278 left a comment

I'm following discussion in #wg-operator-degraded-condition, but wanted to add this one comment initially.

pkg/operator/controller/dns_status.go (review thread; outdated, resolved)
@candita commented Jul 26, 2023

The e2e-aws-ovn-upgrade operators-timeline reports the amount of time the dns operator spent in Degraded/Progressing:

Before the changes, 2m50s Degraded and 10m36s Progressing:
[image: e2e-aws-ovn-upgrade-operators-timeline-pre]

After the changes, 0s Degraded and 8m23s Progressing:
[image: e2e-aws-ovn-upgrade-operators-timeline-post]

@gcs278 left a comment

Still trying to wrap my head around the issue

pkg/operator/controller/dns_status_test.go (review thread; resolved)
pkg/operator/controller/dns_status.go (review thread; outdated, resolved)
@candita changed the title from "OCPBUGS-14346: Fix when DNS operator reports Degraded." to "[WIP] OCPBUGS-14346: Fix when DNS operator reports Degraded." Aug 3, 2023
@candita commented Apr 19, 2024

/remove-lifecycle rotten

openshift-ci bot removed the lifecycle/rotten label Apr 19, 2024
@candita commented May 15, 2024

/jira refresh

@openshift-ci-robot added the jira/invalid-bug label and removed the jira/valid-bug label May 15, 2024
@openshift-ci-robot commented:

@candita: This pull request references Jira Issue OCPBUGS-14346, which is invalid:

  • expected the bug to target either version "4.16." or "openshift-4.16.", but it targets "4.15.z" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

/jira refresh


@candita commented May 15, 2024

/tide refresh

@candita commented May 15, 2024

/test e2e-aws-ovn-techpreview

@melvinjoseph86 commented:

@candita I did pre-merge testing and here is the result.
When I tried an upgrade, the DNS operator did not report 'Degraded=True' and 'Progressing=True' at the same time, which is what we expected.
But when I tried to simulate a DNS degraded-only scenario with the steps below:

  1. oc label node test=abc
  2. nodePlacement.nodeSelector set to test: abc
  3. oc adm taint node test:NoSchedule
  4. delete the only DNS pod, so that no DNS pod remains.

I saw different behavior between the latest 4.16 nightly and a cluster built with this PR.

  1. Using the 4.16 nightly
melvinjoseph@mjoseph-mac Downloads % oc get po -n openshift-dns
NAME                  READY   STATUS    RESTARTS   AGE
dns-default-rb7n8     2/2     Running   0          67s
node-resolver-bsbq7   1/1     Running   0          116m
node-resolver-gv5dk   1/1     Running   0          108m
node-resolver-kmgnj   1/1     Running   0          108m
node-resolver-x8gvw   1/1     Running   0          116m
node-resolver-z7c6d   1/1     Running   0          108m
node-resolver-zqx4k   1/1     Running   0          116m
melvinjoseph@mjoseph-mac Downloads % oc get co                 
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.16.0-0.nightly-2024-06-02-202327   True        False         False      98m     
baremetal                                  4.16.0-0.nightly-2024-06-02-202327   True        False         False      116m    
cloud-controller-manager                   4.16.0-0.nightly-2024-06-02-202327   True        False         False      121m    
cloud-credential                           4.16.0-0.nightly-2024-06-02-202327   True        False         False      124m    
cluster-autoscaler                         4.16.0-0.nightly-2024-06-02-202327   True        False         False      116m    
config-operator                            4.16.0-0.nightly-2024-06-02-202327   True        False         False      117m    
console                                    4.16.0-0.nightly-2024-06-02-202327   True        False         False      103m    
control-plane-machine-set                  4.16.0-0.nightly-2024-06-02-202327   True        False         False      111m    
csi-snapshot-controller                    4.16.0-0.nightly-2024-06-02-202327   True        False         False      117m    
dns                                        4.16.0-0.nightly-2024-06-02-202327   False       False         True       47s     DNS "default" is unavailable.

Then I removed the DNS pod.

melvinjoseph@mjoseph-mac Downloads % oc delete po dns-default-rb7n8  -n openshift-dns
pod "dns-default-rb7n8" deleted
melvinjoseph@mjoseph-mac Downloads % oc get po -n openshift-dns                      
NAME                  READY   STATUS    RESTARTS   AGE
node-resolver-bsbq7   1/1     Running   0          117m
node-resolver-gv5dk   1/1     Running   0          109m
node-resolver-kmgnj   1/1     Running   0          109m
node-resolver-x8gvw   1/1     Running   0          117m
node-resolver-z7c6d   1/1     Running   0          109m
node-resolver-zqx4k   1/1     Running   0          117m
melvinjoseph@mjoseph-mac Downloads % oc get co                                       
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.16.0-0.nightly-2024-06-02-202327   False       False         False      16s     OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.mjoseph-gcp1.qe.gcp.devcluster.openshift.com/healthz": dial tcp: lookup oauth-openshift.apps.mjoseph-gcp1.qe.gcp.devcluster.openshift.com on 172.30.0.10:53: read udp 10.130.0.18:40439->172.30.0.10:53: read: connection refused
baremetal                                  4.16.0-0.nightly-2024-06-02-202327   True        False         False      117m    
cloud-controller-manager                   4.16.0-0.nightly-2024-06-02-202327   True        False         False      122m    
cloud-credential                           4.16.0-0.nightly-2024-06-02-202327   True        False         False      125m    
cluster-autoscaler                         4.16.0-0.nightly-2024-06-02-202327   True        False         False      117m    
config-operator                            4.16.0-0.nightly-2024-06-02-202327   True        False         False      118m    
console                                    4.16.0-0.nightly-2024-06-02-202327   False       True          False      14s     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.mjoseph-gcp1.qe.gcp.devcluster.openshift.com): Get "https://console-openshift-console.apps.mjoseph-gcp1.qe.gcp.devcluster.openshift.com": dial tcp: lookup console-openshift-console.apps.mjoseph-gcp1.qe.gcp.devcluster.openshift.com on 172.30.0.10:53: read udp 10.128.0.41:36108->172.30.0.10:53: read: connection refused
control-plane-machine-set                  4.16.0-0.nightly-2024-06-02-202327   True        False         False      112m    
csi-snapshot-controller                    4.16.0-0.nightly-2024-06-02-202327   True        False         False      118m    
dns                                        4.16.0-0.nightly-2024-06-02-202327   False       False         True       107s    DNS "default" is unavailable.
  2. Using this PR
melvinjoseph@mjoseph-mac Downloads % oc get po -n openshift-dns
NAME                  READY   STATUS    RESTARTS   AGE
dns-default-qkpf4     2/2     Running   0          52s
node-resolver-4wkfp   1/1     Running   0          56m
node-resolver-7q46v   1/1     Running   0          56m
node-resolver-rjr8j   1/1     Running   0          64m
node-resolver-sxpbc   1/1     Running   0          64m
node-resolver-sz56z   1/1     Running   0          64m
node-resolver-tkxgz   1/1     Running   0          56m
melvinjoseph@mjoseph-mac Downloads % oc get co                                                                   
NAME                                       VERSION                                                   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      42m     
baremetal                                  4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      64m     
cloud-controller-manager                   4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      68m     
cloud-credential                           4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      71m     
cluster-autoscaler                         4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      64m     
config-operator                            4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      65m     
console                                    4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      44m     
control-plane-machine-set                  4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      60m     
csi-snapshot-controller                    4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      65m     
dns                                        4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        True          False      63m     DNS "default" reports Progressing=True: "Have 0 up-to-date DNS pods, want 1."
etcd                                       4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      63m 
melvinjoseph@mjoseph-mac Downloads % oc delete po dns-default-qkpf4  -n openshift-dns
pod "dns-default-qkpf4" deleted

melvinjoseph@mjoseph-mac Downloads % oc get co                                       
NAME                                       VERSION                                                   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   False       False         False      17s     OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.ci-ln-prh7tft-72292.origin-ci-int-gce.dev.rhcloud.com/healthz": dial tcp: lookup oauth-openshift.apps.ci-ln-prh7tft-72292.origin-ci-int-gce.dev.rhcloud.com on 172.30.0.10:53: read udp 10.130.0.34:59897->172.30.0.10:53: read: connection refused
baremetal                                  4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      66m     
cloud-controller-manager                   4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      69m     
cloud-credential                           4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      73m     
cluster-autoscaler                         4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      66m     
config-operator                            4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      67m     
console                                    4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   False       False         False      16s     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.ci-ln-prh7tft-72292.origin-ci-int-gce.dev.rhcloud.com): Get "https://console-openshift-console.apps.ci-ln-prh7tft-72292.origin-ci-int-gce.dev.rhcloud.com": dial tcp: lookup console-openshift-console.apps.ci-ln-prh7tft-72292.origin-ci-int-gce.dev.rhcloud.com on 172.30.0.10:53: read udp 10.129.0.40:60989->172.30.0.10:53: read: connection refused
control-plane-machine-set                  4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      62m     
csi-snapshot-controller                    4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        False         False      66m     
dns                                        4.16.0-0.ci.test-2024-06-03-064625-ci-ln-prh7tft-latest   True        True          False      65m     DNS "default" reports Progressing=True: "Have 0 up-to-date DNS pods, want 1."

So in the cluster using this PR, the DNS operator does not go through the Degraded state, and it reports a wrong message even though we have a DNS pod available.

@openshift-bot commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label Sep 2, 2024
@openshift-bot commented:

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label Oct 2, 2024
@candita commented Oct 3, 2024

/lifecycle frozen

@openshift-bot commented:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci bot closed this Nov 2, 2024
openshift-ci bot commented Nov 2, 2024

@openshift-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close


@openshift-ci-robot commented:

@candita: This pull request references Jira Issue OCPBUGS-14346. The bug has been updated to no longer refer to the pull request using the external bug tracker. All external bug links have been closed. The bug has been moved to the NEW state.

In response to this:

(the PR description, quoted above)

@candita commented Nov 5, 2024

/reopen
/lifecycle frozen

openshift-ci bot reopened this Nov 5, 2024
openshift-ci bot commented Nov 5, 2024

@candita: Reopened this PR.

In response to this:

/reopen
/lifecycle frozen


openshift-ci bot commented Nov 5, 2024

@candita: The lifecycle/frozen label cannot be applied to Pull Requests.

In response to this:

/reopen
/lifecycle frozen


@openshift-ci-robot commented:

@candita: This pull request references Jira Issue OCPBUGS-14346, which is invalid:

  • expected the bug to target either version "4.18." or "openshift-4.18.", but it targets "4.15.z" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

(the PR description, quoted above)

@candita commented Nov 5, 2024

/jira refresh

@openshift-ci-robot commented:

@candita: This pull request references Jira Issue OCPBUGS-14346, which is invalid:

  • expected the bug to target either version "4.18." or "openshift-4.18.", but it targets "4.15.z" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

/jira refresh


@candita commented Nov 5, 2024

/remove-lifecycle rotten

openshift-ci bot removed the lifecycle/rotten label Nov 5, 2024
openshift-ci bot commented Nov 6, 2024

@candita: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/e2e-aws-ovn-techpreview
Commit: 25abc71
Required: false
Rerun command: /test e2e-aws-ovn-techpreview


@Miciah added the priority/backlog label Nov 13, 2024
Labels: jira/invalid-bug, jira/severity-moderate, jira/valid-reference, priority/backlog

8 participants