v0.1.3 requires additional rbac permissions #142
Comments
Sorry, I'd submit a pull request, but that would require a lengthy approval process from my employer.
Hi @bentrombley, there should be a default role in the cluster for this, referenced in kube-metrics-adapter/docs/rbac.yaml (line 116 at 97ec13d). Does your cluster not have this role by default? From what I can see it should also be there in v1.15.7.
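A minimal sketch of what that part of docs/rbac.yaml typically contains for custom metrics API servers, assuming the standard pattern of binding the adapter's service account to the built-in extension-apiserver-authentication-reader role (names below are illustrative, not taken from the file):

```yaml
# Sketch only: binds an assumed adapter service account to the built-in
# extension-apiserver-authentication-reader role in kube-system.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-metrics-auth-reader        # illustrative name
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: custom-metrics-apiserver        # assumed service account name
    namespace: kube-system
```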
I experienced the same issue; after I added that line it works perfectly.
Same here, adding this section to the …
Hello, I also came up with this fix, but I'm still getting the following logs. In debug mode I can see the configmap being read, though.
This fix is related to the original repo's issue zalando-incubator#142.
Can you check if you have the role? I have just created a new cluster based on v1.18.6 and it's there by default:
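Assuming the role referred to is the built-in extension-apiserver-authentication-reader in kube-system, a sketch of roughly what it looks like on a stock cluster:

```yaml
# Approximation of the built-in role on a stock cluster; exact verbs can vary
# between Kubernetes versions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: extension-apiserver-authentication-reader
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["extension-apiserver-authentication"]
    verbs: ["get"]
```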
Or could it be that you're not deploying the kube-metrics-adapter to kube-system?
@mikkeloscar …
@prageethw Is installing to kube-system an option for you?
@mikkeloscar Thanks for the reply, yeah installing in kube-system works.
What I want to avoid is that we document more access than is needed. With the change suggested in this thread and in #181 the kube-metrics-adapter would get access to ALL configmaps instead of just a single one as intended.
@mikkeloscar Fair point, though TBH I think that is an extreme measure if someone has access to …
I want to document the best practice, which is the least amount of permissions. If users need something custom or more relaxed, they're free to use a custom role setup. I'm also fine documenting that if we clearly state the reason (e.g. not deploying in kube-system). Does anyone in this thread deploy to kube-system?
I prefer to have all monitoring-related stuff in the monitoring namespace. I have moved the adapter for now though, because it seems the only way to get this to work is to have it in kube-system, since even giving it the right RBAC would mean it needs a cluster-wide privilege instead of the single-configmap privilege as documented here.
We just upgraded from 0.1.0 to 0.1.3 and started seeing errors in our logs like:
I'm not sure what changed, but adding this `apiGroups` section to the rules for `custom-metrics-resource-collector` fixed it for us:
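A sketch of the kind of rule being described, going by the later comment that it grants access to all configmaps rather than a single named one (verbs below are assumed):

```yaml
# Sketch of the reported workaround: an unscoped configmaps rule added to the
# rules of the custom-metrics-resource-collector role (verbs assumed).
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "watch"]
```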
Expected Behavior
There should be no errors/warnings in the logs.
Actual Behavior
See above logs. This caused the HorizontalPodAutoscalers to fail.
Steps to Reproduce the Problem
1. Install kube-metrics-adapter using the manifests in the docs/ folder.
2. Observe the errors in the logs of the kube-metrics-adapter pod.
Specifications