example values #1
Hello, for which chart? Normally all values are present in the repo; you can also see them on Artifact Hub.
Sorry, the ipfs-cluster chart. There is no good chart out there, lol. Seeing your comments on the ethpandaops/ethereum-helm-charts#271 stuff as well.
I want to spin up multiple ipfs nodes that can see each other, but not reach out to the public ones. Just playing around with private mode.
The ipfs documentation is horrible for Kubernetes ("Beware we have not updated the following instructions in a while"). Your Artifact Hub page has nothing really helpful for spinning up multiple replicas. I did see that you are doing a loop through the statefulsets, which is different. Currently using ethpandaops's ipfs-cluster chart since the pods actually all come up, but I can't get the clusters to see each other. Like if I port-forward 5001 or connect with the ipfs ctl I would expect it to see other peers.
Just set replicaCount to > 1. Then you'll have to configure kubo via "initScripts" scripts to not speak to the world. The classic usage of the ipfs cluster is to have a replicated set of nodes with a pinning service and an admin API. Then you can configure how kubo will behave (example: restrict gateway access to only local/pinned CIDs, etc.).
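Not from the chart itself, but a minimal sketch of how those two knobs could sit in a values file; "replicaCount", "initScripts", and the 001-peers.sh name come from this thread, everything else is illustrative and should be checked against the chart's own values.yaml:

```yaml
# Illustrative values snippet -- key names may differ in the real chart.
replicaCount: 3          # > 1 so the cluster has several peers

initScripts:
  # Scripts run in filename order; the private-mode and peer-wiring
  # contents are sketched further down in this thread.
  001-peers.sh: |
    #!/bin/sh
    echo "peer wiring goes here"   # placeholder
```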
ok, that is basically what I'm doing already.
Do I need to add the other local clusters to the peers or something under 001-peers.sh?
I'll see tomorrow on the cluster where I have installed it and I'll tell you.
Thank you so much!
I do have nodeport disabled since the pods don't come up with that. And I usually stay away from nodeports.
I don't have an external connection or anything yet, just trying to get everything connecting within a single namespace and checking status with the ctl binary and port-forwarding the webui on 5001.
The /data/ipfs-cluster/peerstore file on the cluster pods doesn't exist.
"new empty peerstore written to /data/ipfs-cluster/peerstore"
I am also setting a shared secret.
This is what I'm using for your chart, which is very similar to ethpandaops.
I do not use NodePort either. The IPFS swarm/cluster swarm is routed through a TCP SNI ingress controller (with public NAT gateways as the announced IP; the NAT gateway is just a TCP SNI proxy VM from a public IP to the private ingress controller). But in your case I doubt you need such a gateway, as your cluster is local only.

For a private ipfs cluster (meaning no kubo connects to the internet to fetch CIDs), you'll maybe need an initscript (before the one where you're adding peers).

My chart is similar to the ethpandaops one because I've forked it.

To see if an ipfs-cluster is seeing the other ones, use the ipfs-cluster-ctl API and do a peers ls. In a local Kubernetes cluster, each cluster peer should see the others directly (be aware that the cluster component uses a headless service IP that points directly to the kubo IP, so the cluster pod has to start after the kubo one; if not, the cluster pod will not see its attached kubo).

PS: The kubo webui is not relevant in this context. You should use the ipfs-cluster API or pinning service to pin/add data to the cluster, then fetch the content through the kubo HTTP gateway. Same for checking cluster peers: use "ipfs-cluster-ctl peers ls" with the cluster API. The kubo webui will only be relevant to check whether each kubo instance sees the others, and even then we can use the ipfs CLI on kubo instead.
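The initscript that comment refers to isn't reproduced above, so as a hedged sketch only: keeping kubo off the public network is usually done with standard kubo commands like the ones below (whether this matches the author's actual script is an assumption):

```yaml
initScripts:
  # Hypothetical script ordered before the peer one (hence the lower number).
  000-private.sh: |
    #!/bin/sh
    ipfs bootstrap rm --all                          # drop the public bootstrap list
    ipfs config --json Discovery.MDNS.Enabled false  # no local multicast discovery
    # For a fully private swarm, a shared swarm.key (with LIBP2P_FORCE_PNET=1) is
    # the usual extra step; how that file gets mounted depends on the chart.
```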
Ok, just to reiterate what you just said: I am to get the ID of the ipfs-cluster peer, not the ipfs-kubo peer. To do that I'm doing: So the init config should contain the pod IP of each of the ipfs-kubo pods. There is only one service for all of the ipfs-kubo pods, so I also attempted using the DNS address like below, where "ipfs-ipfs-cluster" is the service name, "blockchain" the namespace, etc.
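For reference, and hedged since the exact ports depend on the chart's defaults: a peerstore entry for an ipfs-cluster peer is a multiaddr, and with a Kubernetes service DNS name it looks roughly like the line below. 9096 is ipfs-cluster's default cluster swarm port (a kubo swarm address would use 4001 instead), and the peer ID placeholder is each cluster peer's own ID.

```
/dns4/ipfs-ipfs-cluster.blockchain.svc.cluster.local/tcp/9096/p2p/<cluster-peer-id>
```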
I don't have a way other than manually right now to make sure the ipfs-cluster pods come up after. Manual is fine for now, I can fix that somehow later. Great info. |
That still isn't working yet. But once I do get that working I'm definitely gonna automate that with an init job or something. I feel like i'm closer! |
Here is what I'm doing for the peers config.
I was able to do this on a kubo node, but peers still show 0 on the ipfs-cluster pod with ipfs-cluster-ctl.
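For context, the check mentioned here is typically run inside the cluster container; something along these lines, with pod and container names as placeholders:

```sh
# Placeholder pod/container names -- adjust to the actual release.
kubectl -n blockchain exec ipfs-ipfs-cluster-0 -c cluster -- ipfs-cluster-ctl peers ls
```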
Be sure to execute the peer add after clearing the crdt store (either with a higher-cardinality script that will execute after (001, 002, etc.), or in the same script but on the line after the crdt emptying). The ipfs-cluster pod will not see kubo if kubo has been restarted after the start of the cluster pod (because the headless service IP points to the pod IP; if the kubo pod is recreated, the ipfs-cluster pod loses track of kubo because the IP changed).
In your peer addresses, since all your kubo nodes are local, use the service cluster IP instead of the pod IP if you haven't already done so.
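Putting the ordering advice together, a hedged sketch (it assumes the chart lets these scripts run for the cluster container and that the peerstore path matches the one mentioned earlier; the crdt-clearing step is left abstract because its exact command depends on the chart):

```yaml
initScripts:
  001-clear.sh: |
    #!/bin/sh
    # Clear the crdt store here, as described above; the exact command/path
    # depends on the chart, so only a placeholder is shown.
    true
  002-peers.sh: |
    #!/bin/sh
    # Runs after 001, so the peers are added to a clean state. Use service/DNS
    # addresses rather than pod IPs; peer IDs are placeholders.
    cat >> /data/ipfs-cluster/peerstore <<'EOF'
    /dns4/ipfs-ipfs-cluster.blockchain.svc.cluster.local/tcp/9096/p2p/<cluster-peer-id-1>
    /dns4/ipfs-ipfs-cluster.blockchain.svc.cluster.local/tcp/9096/p2p/<cluster-peer-id-2>
    EOF
```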
Success! I was able to use the DNS names for both, sort of! I created additional Services for each ipfs-kubo node, did nslookups to get the IPs, and then loaded them in similarly. The trick for me was to update the service.json also! Might be able to do just DNS for one of them, not sure. Will need to test more of that out later.
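A per-pod Service like the ones described here can select a single StatefulSet pod via the label Kubernetes adds automatically; a rough sketch, with names and the port as guesses based on this thread:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ipfs-kubo-0          # one Service per kubo pod; name is illustrative
  namespace: blockchain
spec:
  selector:
    # Set automatically on every StatefulSet pod.
    statefulset.kubernetes.io/pod-name: ipfs-kubo-0
  ports:
    - name: swarm
      port: 4001             # kubo's default swarm port
      targetPort: 4001
```

If the kubo StatefulSet already sits behind a headless Service, the per-pod DNS names it provides (pod-name.headless-service.namespace.svc.cluster.local) may make the extra Services and nslookups unnecessary.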
Could you share an example values file for multiple replicas in private mode, by chance?