ipfs-cluster: Add additional labels to cluster pod #272


Closed
wants to merge 1 commit into from

Conversation

GuillaumeOuint

For issue #271.

@demostanis

Could you give some usage examples? In particular for "to add prometheus labels to auto scrape the metrics". Thank you.

@GuillaumeOuint
Author

Could you give some usage examples? In particular for "to add prometheus labels to auto scrape the metrics". Thank you.

It's pretty straightforward: this adds the ability to set custom labels on the pod. I should have added custom annotations too, since Prometheus uses annotations for service discovery, not labels, so the original point was incorrect (sorry). BUT, the possibility to add custom labels/annotations should still be supported, as most Helm charts published online support it (it's more or less part of the Helm "standard"). So I don't think this needs usage examples: labels/annotations can serve any purpose (Prometheus, MetalLB, other operators...). This feature SHOULD exist in this chart to let users customize their deployment to their needs. A chart author shouldn't limit which labels/annotations can be put on an object; authors should always let users choose additional labels/annotations.
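As a rough sketch of the pattern (the key names `podLabels`/`podAnnotations`, the metrics port, and the template file name are illustrative assumptions, not this chart's actual schema):

```yaml
# values.yaml -- hypothetical keys, not necessarily this chart's schema
podLabels:
  app.kubernetes.io/part-of: my-stack
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8888"

# templates/cluster-statefulset.yaml -- merged into the pod template
# metadata:
#   labels:
#     {{- with .Values.podLabels }}
#     {{- toYaml . | nindent 4 }}
#     {{- end }}
#   annotations:
#     {{- with .Values.podAnnotations }}
#     {{- toYaml . | nindent 4 }}
#     {{- end }}
```

The `with` blocks make both keys optional, so existing releases that don't set them render unchanged.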

Also, since the issue has been open for more than a year (and the merge request closed), I took the liberty of forking this chart to add those features: ipfs-cluster

@demostanis

Thanks for your answer. After reading ipfs-cluster's documentation a bit, I figured out that metrics/tracing requires modifying ipfs-cluster's service.json. I did not find anything in this repo's chart relating to this. How did you manage to do it? Does your chart provide a way to customize service.json, or maybe through environment variables I missed? I found someone spawning a job that execs into each ipfs-cluster replica to modify service.json, but that seems hacky.
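For context, ipfs-cluster's documentation puts the metrics switch under the observability section of service.json; the fragment looks roughly like this (the endpoint and interval values here are examples, not required settings):

```json
{
  "observability": {
    "metrics": {
      "enabled": true,
      "prometheus_endpoint": "/ip4/0.0.0.0/tcp/8888",
      "reporting_interval": "2s"
    }
  }
}
```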

@GuillaumeOuint
Author

Thanks for your answer. After reading ipfs-cluster's documentation a bit, I figured out that metrics/tracing requires modifying ipfs-cluster's service.json. I did not find anything in this repo's chart relating to this. How did you manage to do it? Does your chart provide a way to customize service.json, or maybe through environment variables I missed? I found someone spawning a job that execs into each ipfs-cluster replica to modify service.json, but that seems hacky.

With my chart (I don't remember if it's the case here) I have an init script that runs on every node before starting the daemon and uses the "ipfs config" command to change settings in the IPFS Kubo config. But for the cluster instances I don't have anything, and the file has to be edited manually (or via an initContainer).
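An initContainer that patches service.json in place could look roughly like this; the container name, image choice, volume name, and mount path are all assumptions about the chart, not its actual values:

```yaml
# Hypothetical initContainer for the cluster pod template.
initContainers:
  - name: configure-cluster
    image: alpine:3.19
    command:
      - sh
      - -c
      - |
        # jq is not in the base image; install it, then flip the
        # metrics flag in service.json (path is an assumption).
        apk add --no-cache jq
        jq '.observability.metrics.enabled = true' \
          /data/ipfs-cluster/service.json > /tmp/service.json &&
          mv /tmp/service.json /data/ipfs-cluster/service.json
    volumeMounts:
      - name: cluster-storage
        mountPath: /data/ipfs-cluster
```

This only works if the initContainer mounts the same volume that holds service.json, and it runs again on every pod restart, so the jq expression should be idempotent (as a plain assignment is).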

@demostanis

In my use case I need to modify the cluster config, not the ipfs config. As far as I can see, there's no way to set initContainers using values.yml, only for the ipfs pods?

@GuillaumeOuint
Author

In my use case I need to modify the cluster config, not the ipfs config. As far as I can see, there's no way to set initContainers using values.yml, only for the ipfs pods?

As you need more configurability, I suggest you fork my chart (which I think is already more advanced than this one) and make your changes, then upgrade the already-installed release with your local chart.

3 participants