
Dependent resources namespaces are autodiscovered #867

Closed
scrocquesel opened this issue Jan 25, 2022 · 4 comments

Comments

@scrocquesel
Contributor

scrocquesel commented Jan 25, 2022

Following the discussion around one or all namespaces, I have an edge-case scenario where I don't know at deployment time in which namespaces the dependent resources will be, and I can't deploy a cluster-wide operator.

This assumes:

  • A namespaced operator with a controller watching one namespace.
  • A CRD for a fictitious CR that allows cross-namespace references, for example:
    kind: AutodiscoverTargetNamespace
    metadata:
      name: foo
      namespace: banana
    spec:
      target:
        name: bar
        namespace: apple


Basically, I will deploy the operator for a given set of namespaces where end users can create AutodiscoverTargetNamespace resources. End users are free to target namespaces the operator doesn't know of at deployment time. It is the responsibility of the end user to give enough rights to the operator's service account in their namespaces.

The idea is that, when the operator processes banana.foo, it automatically starts an event source on the apple namespace. If another AutodiscoverTargetNamespace is created for the same target namespace, the same event source can be reused, but if it targets another namespace, say orange, then a new event source is instantiated.
When banana.foo is deleted and no more AutodiscoverTargetNamespace resources target the apple namespace, the event source can be shut down and removed.
We should be able to configure a supplier that provides a namespace name from a primary resource instance.

This may also be useful to remove the friction of having to configure namespaces for dependent resources, and to reduce the number of watchers when only one namespace is really targeted while a "thousand" could be expected.
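A minimal sketch of what such per-target-namespace event sources could look like, using the fabric8 client directly rather than the java-operator-sdk event-source API, with Deployment standing in for the actual dependent resource; the class and method names are made up for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.informers.SharedIndexInformer;

// Hypothetical registry keeping one informer per discovered target namespace,
// torn down once no primary resource references that namespace anymore.
public class TargetNamespaceInformers {

    private final KubernetesClient client;
    // target namespace -> informer watching the dependent resources in it
    private final Map<String, SharedIndexInformer<Deployment>> informers = new ConcurrentHashMap<>();

    public TargetNamespaceInformers(KubernetesClient client) {
        this.client = client;
    }

    // Called while reconciling a primary resource: start (or reuse) an informer
    // for the namespace the primary points to, e.g. spec.target.namespace.
    public void ensureInformer(String targetNamespace) {
        informers.computeIfAbsent(targetNamespace,
            ns -> client.apps().deployments().inNamespace(ns).inform());
    }

    // Called when a primary is deleted and no other primary targets this namespace.
    public void releaseInformer(String targetNamespace) {
        SharedIndexInformer<Deployment> informer = informers.remove(targetNamespace);
        if (informer != null) {
            informer.stop();
        }
    }
}
```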

@csviri
Collaborator

csviri commented Jan 26, 2022

Hi @scrocquesel ,
tbh this sounds like quite an exotic use case to me; it's basically configuring the operator with custom resources at runtime, if I understand it correctly.

The question is why not just create the operator that handles AutodiscoverTargetNamespace with informers for the namespaces where the operator already has permissions. If the permissions change, change the operator configuration (this can be done automatically by a meta operator or an admission controller).

@scrocquesel
Contributor Author

The question is why not just create the operator that handles AutodiscoverTargetNamespace with informers for the namespaces where the operator already has permissions. If the permissions change, change the operator configuration (this can be done automatically by a meta operator or an admission controller).

The operator may have access to a dozen namespaces, and only a few of them are actually used. But then, the operator will create a dozen informers just to watch nothing. I don't know if it's costly for k8s, but it seems to me that each informer opens and maintains one connection to the k8s API server.

For the configuration, yes, we may add another operator that watches, in each namespace it knows of, whether a new RoleBinding has been created. But then, if we don't want to maintain the list of namespaces by hand, which operator will manage the configuration of this new operator?

@csviri
Collaborator

csviri commented Jan 26, 2022

The thing is that dynamic informer registration is currently not doable, and to be honest it's probably not a good idea, because of how the syncing of event sources / informers works on startup.

So just thinking / brainstorming how to solve this: I would probably collect the target namespaces on startup (before the operator starts), either by listing the custom resources in the namespaces or by adding labels to those namespaces. Then continuously watch them (using an informer, not an operator, but possibly in the same process) and restart the operator in case of change.
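A rough sketch of that startup collection, assuming a fabric8-style client and the example CRD from the issue description (the group, version and plural used here are made up):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.base.ResourceDefinitionContext;

// Hypothetical startup step: list the primary resources in the namespaces the
// operator is already allowed to watch and collect the namespaces they target,
// before the operator itself is started with that namespace list.
public class StartupNamespaceDiscovery {

    // Group/version/plural are assumptions for the example CRD from the issue.
    private static final ResourceDefinitionContext CRD_CONTEXT = new ResourceDefinitionContext.Builder()
            .withGroup("example.com")
            .withVersion("v1")
            .withKind("AutodiscoverTargetNamespace")
            .withPlural("autodiscovertargetnamespaces")
            .withNamespaced(true)
            .build();

    @SuppressWarnings("unchecked")
    public static Set<String> discover(KubernetesClient client, List<String> watchedNamespaces) {
        return watchedNamespaces.stream()
                .flatMap(ns -> client.genericKubernetesResources(CRD_CONTEXT)
                        .inNamespace(ns).list().getItems().stream())
                .map(cr -> {
                    // navigate spec.target.namespace on the untyped resource
                    Map<String, Object> spec = (Map<String, Object>) cr.getAdditionalProperties().get("spec");
                    Map<String, Object> target = (Map<String, Object>) spec.get("target");
                    return (String) target.get("namespace");
                })
                .collect(Collectors.toSet());
    }
}
```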

Another way that comes to my mind is to use an admission controller with side effects: when such a custom resource / namespace is created, a config map is maintained with the target namespaces. If this config map changes (it might again be watched from the operator), the operator restarts, and on start it reads the list of namespaces from the config map.
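A minimal sketch of that config-map watch, again with the fabric8 client; the config map name and the restart-by-exit strategy are assumptions for illustration:

```java
import java.util.Objects;

import io.fabric8.kubernetes.api.model.ConfigMap;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.informers.ResourceEventHandler;

// Hypothetical side-watcher: an informer on the config map that holds the target
// namespaces; when its data changes, the process exits so that Kubernetes restarts
// the operator pod, which re-reads the namespace list on startup.
public class NamespaceConfigMapWatcher {

    public static void watch(KubernetesClient client, String operatorNamespace, String configMapName) {
        client.configMaps()
                .inNamespace(operatorNamespace)
                .withName(configMapName)
                .inform(new ResourceEventHandler<ConfigMap>() {
                    @Override
                    public void onAdd(ConfigMap cm) {
                        // nothing to do: the list was already read on startup
                    }

                    @Override
                    public void onUpdate(ConfigMap oldCm, ConfigMap newCm) {
                        if (!Objects.equals(oldCm.getData(), newCm.getData())) {
                            // crude but simple: exit and let the Deployment restart the pod
                            System.exit(0);
                        }
                    }

                    @Override
                    public void onDelete(ConfigMap cm, boolean deletedFinalStateUnknown) {
                        System.exit(0);
                    }
                });
    }
}
```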

This is again a hack, but as mentioned, dynamic registration of informers / event sources is a problematic thing. Currently on startup we wait until the informers are synced. Dynamically adding them would mean stopping the "world" until the new informer is added - if we want to handle this correctly. I don't see this as feasible ATM.

Maybe also ask this question on the Kubernetes Slack: https://kubernetes.slack.com/archives/CAW0GV7A5
Maybe others have a more elegant idea.

Hope it helps.

@scrocquesel
Contributor Author

This is again a hack, but as mentioned, dynamic registration of informers / event sources is a problematic thing. Currently on startup we wait until the informers are synced. Dynamically adding them would mean stopping the "world" until the new informer is added - if we want to handle this correctly. I don't see this as feasible ATM.

I see, I was just thinking about a running operator, not the starting phase.

The sidecar approach is a good one: the informer can watch for namespaces and apply configuration changes on demand.
Unfortunately, this will not work in my case. The cluster is using kiosk for multi-tenancy, and it is not possible to watch namespaces.
Nonetheless, we will look further in this direction.

Thanks for your time. It was very instructive.

I'll close this.
