
ECONNRESET error from kubernetes watch after some minutes. #1496

Open
jimjaeger opened this issue Jan 2, 2024 · 8 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@jimjaeger

Describe the bug
If I use the Kubernetes watch to listen for resource changes, I get an ECONNRESET error and the watch stops.
Is there any chance that the watch could handle underlying connection errors and restart on its own?

Client Version
0.20.0

To Reproduce
Steps to reproduce the behavior:

  1. Start a watch and wait longer than the setTimeout or setKeepAlive setting in the Watch configuration.

Expected behavior
A watch runs without connection issues.

Example Code

import { KubeConfig, V1Pod, Watch } from '@kubernetes/client-node';

function waitForPodCompletion(log: Context['log'], k8sConfig: KubeConfig, podNamespace: string, resourceVersion?: string, jobName?: string): Promise<V1Pod> {
  let lastResourceVersion = resourceVersion;
  return new Promise<V1Pod>((resolve, reject) => {
    const watch = new Watch(k8sConfig);
    const queryParams: { labelSelector: string, resourceVersion?: string } = { labelSelector: `job-name=${jobName}` };
    if (resourceVersion) {
      queryParams.resourceVersion = resourceVersion;
    }

    watch.watch(`/api/v1/namespaces/${podNamespace}/pods`, queryParams, (eventType, pod: V1Pod) => {
      // Remember the last seen resourceVersion so a restarted watch can resume from it.
      lastResourceVersion = pod.metadata?.resourceVersion;
      if (eventType === 'ADDED' && pod.metadata?.name) {
        log.info(`Job pod ${pod.metadata.name} ${pod.metadata?.resourceVersion} added.`);
      }
      if (eventType === 'MODIFIED' && pod.metadata?.name) {
        log.info(`Job pod ${pod.metadata.name} status: ${pod.status?.phase}, resourceVersion: ${pod.metadata?.resourceVersion}.`);
        if (pod.status?.phase === 'Succeeded') {
          resolve(pod);
        } else if (pod.status?.phase === 'Failed') {
          reject(new Error(`Job failed. Pod ${pod.metadata.name} status: ${pod.status.phase} startTime: ${pod.status.startTime}.`));
        }
      }
    }, (error: { code: string, message: string, stack: string }) => {
      // Strange: this callback is also invoked with null shortly after the ECONNRESET.
      if (error) {
        reject(error);
      }
    });
  }).catch((onrejected) => {
    if (onrejected && onrejected.code === 'ECONNRESET') {
      // Restart the watch from the last resourceVersion we saw.
      log.info(`Restart Watch with ${lastResourceVersion}.`);
      return waitForPodCompletion(log, k8sConfig, podNamespace, lastResourceVersion, jobName);
    } else {
      throw onrejected;
    }
  });
}

Environment (please complete the following information):

  • OS: Windows
  • NodeJS Version: v20.10.0
  • Cloud runtime: Red Hat OpenShift
@brendandburns
Contributor

A watch is tied to a single TCP stream, so when it is broken you need to start a new watch (and you also need to re-list, in case you missed something).

The informer class encapsulates this logic and is probably what you are looking for:
https://github.com/kubernetes-client/javascript/blob/master/src/informer.ts

(fwiw, wrt the "informer" name, I think it's confusing, but it got established as the standard name within the go client library, so we use it here too for consistency.)
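
For what it's worth, a minimal informer-based sketch for the pod watch above might look like the following. This assumes the 0.20.x API, where makeInformer takes a list function and an optional label selector; the namespace and label selector values are placeholders:

import { CoreV1Api, KubeConfig, V1Pod, makeInformer } from '@kubernetes/client-node';

const kc = new KubeConfig();
kc.loadFromDefault();
const coreApi = kc.makeApiClient(CoreV1Api);

const namespace = 'default';              // placeholder namespace
const labelSelector = 'job-name=my-job';  // placeholder selector

// The list function is used for the initial list and for the re-list after every watch restart.
const listFn = () => coreApi.listNamespacedPod(namespace, undefined, undefined, undefined, undefined, labelSelector);

const informer = makeInformer(kc, `/api/v1/namespaces/${namespace}/pods`, listFn, labelSelector);

informer.on('add', (pod: V1Pod) => console.log(`Pod ${pod.metadata?.name} added.`));
informer.on('update', (pod: V1Pod) => console.log(`Pod ${pod.metadata?.name} phase: ${pod.status?.phase}`));

informer.start().catch((err) => console.error('Informer failed to start:', err));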

@jimjaeger
Author

jimjaeger commented Jan 2, 2024

Thanks for the information. But the informer class has the same problem: it also surfaces the underlying connection errors.
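
A common workaround (and roughly what the client's README informer example does) is to register an 'error' handler that restarts the informer after a delay. Continuing the sketch above; the 5-second delay is an arbitrary choice:

informer.on('error', (err: any) => {
  console.error('Informer connection error:', err);
  // Restart the informer after a short delay; 5 seconds is an arbitrary choice.
  setTimeout(() => {
    informer.start();
  }, 5000);
});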

@jobcespedes

Same issue here with the informer. I tried the workaround of periodically restarting the informer as suggested in #596. Nonetheless, a new issue was hit (see #1598). For reference, a sketch of that workaround follows below.
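
The periodic-restart workaround amounts to something like this sketch, assuming the informer exposes stop() and start() as recent client versions do; the 10-minute interval is arbitrary:

// Periodically stop and restart the informer so that a silently broken
// connection never lingers for longer than the restart interval.
const RESTART_INTERVAL_MS = 10 * 60 * 1000; // arbitrary 10-minute interval

setInterval(async () => {
  await informer.stop();
  await informer.start();
}, RESTART_INTERVAL_MS);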

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 4, 2024
@jimjaeger
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 4, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 2, 2024
@jimjaeger
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 6, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 5, 2024