
example values #1

Closed
drduker opened this issue May 29, 2024 · 25 comments

drduker commented May 29, 2024

Could you share an example values file for multiple replicas in private mode, by chance?

GuillaumeOuint (Owner) commented

Hello, for which chart? Normally all values are present in the repo; you can also see them on Artifact Hub.


drduker commented May 29, 2024

Sorry, the ipfs-cluster chart. There is no good chart out there, lol. I saw your comments on ethpandaops/ethereum-helm-charts#271 as well.


drduker commented May 29, 2024

I want to spin up multiple IPFS nodes that can see each other but don't reach out to public peers. Just playing around with private mode.


drduker commented May 29, 2024

The IPFS documentation is horrible for Kubernetes: "Beware we have not updated the following instructions in a while".

Your Artifact Hub page has nothing really helpful for spinning up multiple replicas. I did see that you are looping over the StatefulSets, which is different.

I'm currently using ethpandaops's ipfs-cluster chart since the pods actually all come up, but I can't get the cluster peers to see each other. If I port-forward 5001 or connect with the ctl binary, I would expect it to see other peers.


drduker commented May 29, 2024

2024-05-29T21:57:50.260Z    INFO    pinsvcapi    common/api.go:454    PINSVCAPI (HTTP): /ip4/127.0.0.1/tcp/9097
2024-05-29T21:57:50.260Z    INFO    restapi    common/api.go:454    RESTAPI (HTTP): /ip4/0.0.0.0/tcp/9094
2024-05-29T21:57:50.260Z    INFO    ipfsproxy    ipfsproxy/ipfsproxy.go:331    IPFS Proxy: /ip4/0.0.0.0/tcp/9095 -> /dns4/ipfs-ipfs-cluster-1.ipfs-ipfs-cluster.blockchain.svc.cluster.local/tcp/5001
2024-05-29T21:57:50.262Z    INFO    crdt    [email protected]/set.go:122    Tombstones have bloomed: 0 tombs. Took: 766.585µs
2024-05-29T21:57:50.262Z    INFO    crdt    [email protected]/crdt.go:296    crdt Datastore created. Number of heads: 0. Current max-height: 0. Dirty: false
2024-05-29T21:57:50.262Z    INFO    crdt    crdt/consensus.go:320    'trust all' mode enabled. Any peer in the cluster can modify the pinset.
2024-05-29T21:57:50.262Z    INFO    crdt    [email protected]/crdt.go:505    store is marked clean. No need to repair
2024-05-29T21:57:50.262Z    INFO    cluster    ipfs-cluster/cluster.go:735    Cluster Peers (without including ourselves):
2024-05-29T21:57:50.262Z    INFO    cluster    ipfs-cluster/cluster.go:737        - No other peers
2024-05-29T21:57:50.262Z    INFO    cluster    ipfs-cluster/cluster.go:747    Waiting for IPFS to be ready...
2024-05-29T21:57:50.263Z    INFO    cluster    ipfs-cluster/cluster.go:756    IPFS is ready. Peer ID: 12D3KooWFrNS8N6Tg3zEcN7XgWtKwx4vrJqZ2zWRoxeDrQoghwdb
2024-05-29T21:57:50.263Z    INFO    cluster    ipfs-cluster/cluster.go:764    ** IPFS Cluster is READY **
2024-05-29T22:02:50.262Z    INFO    crdt    [email protected]/crdt.go:562    Number of heads: 0. Current max height: 0. Queued jobs: 0. Dirty: false
2024-05-29T22:07:50.262Z    INFO    crdt    [email protected]/crdt.go:562    Number of heads: 0. Current max height: 0. Queued jobs: 0. Dirty: false

GuillaumeOuint (Owner) commented

Just set replicaCount to > 1. Then you'll have to configure Kubo via "initScripts" so it doesn't speak to the world. The classic usage of IPFS Cluster is to have a replicated set of nodes with a pinning service and an admin API. Then you can configure how Kubo behaves (for example: restrict gateway access to only local/pinned CIDs, etc.).
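As a rough sketch, that could look like the following in a values file (the key names match the snippets later in this thread; the initScript body is only a placeholder, the actual commands are covered in the comments below):

replicaCount: 3

ipfs:
  initScripts:
    001-private.sh: |-
      #!/bin/sh
      set -xe
      # placeholder: the commands that keep Kubo off the public network go here
      echo "configure private mode"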


drduker commented May 29, 2024

OK, that is basically what I'm doing already.

ipfs:
  storage:
    volumeSize: 1Gi
  # peers: ""
  initScripts:
    001-peers.sh: |-
      #!/bin/sh
      set -xe
      echo "hello, i have nothing here yet for public."


drduker commented May 29, 2024

Do I need to add the other local cluster peers or something under 001-peers.sh?

GuillaumeOuint (Owner) commented

I'll check tomorrow on the cluster where I have it installed and let you know.


drduker commented May 29, 2024

Thank you so much!


drduker commented May 29, 2024

I do have the NodePort disabled since the pods don't come up with it, and I usually stay away from NodePorts.

p2pNodePort:
  enabled: false

I don't have an external connection or anything yet, just trying to get everything connecting within a single namespace and checking status with the ctl binary and port-forwarding the webui on 5001.
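For reference, that port-forward can be done with something like the following (pod name and namespace are the ones appearing in the logs above; adjust to your release):

kubectl -n blockchain port-forward pod/ipfs-ipfs-cluster-0 5001:5001
# then open http://localhost:5001/webui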


drduker commented May 29, 2024

The ethpandaops chart installs 3 cluster pods and 3 IPFS pods. The 3 cluster peers are not aware of each other.


drduker commented May 29, 2024

The /data/ipfs-cluster/peerstore file on the cluster pods doesn't exist.

Changing user to ipfs
/usr/local/bin/entrypoint.sh: line 13: su-exec: not found
ipfs-cluster-service version 1.0.6+gitd4484ec3f1fb559675555debf1bb9ac3220ad1af
This container only runs ipfs-cluster-service. ipfs needs to be run separately!
Initializing default configuration...
2024-05-29T23:58:56.643Z    INFO    config    config/config.go:482    Saving configuration
configuration written to /data/ipfs-cluster/service.json.
2024-05-29T23:58:56.646Z    INFO    config    config/identity.go:73    Saving identity
new identity written to /data/ipfs-cluster/identity.json
new empty peerstore written to /data/ipfs-cluster/peerstore.

new empty peerstore written to /data/ipfs-cluster/peerstore.


drduker commented May 29, 2024

I am also setting a shared secret


drduker commented May 30, 2024

replicaCount: 3

sharedSecret: "9086e7718d2f34723ec2c869dc56faredacted"

p2pNodePort:
  enabled: false

This is what I'm using for your chart, which is very similar to the ethpandaops one.


drduker commented May 30, 2024

(screenshot)

GuillaumeOuint (Owner) commented

I do not use NodePort either. The IPFS swarm/cluster swarm is routed through a TCP SNI ingress controller (with public NAT gateways as the announced IPs; a NAT gateway is just a TCP SNI proxy VM from a public IP to the private ingress controller). But in your case I doubt you need such a gateway, as your cluster is local only.
For peers, I have a "002-peers.sh" initScript that has:
ipfs config Peering.Peers "[ { \"ID\": \"Qm...peer_id\", \"Addrs\": [ \"/ip4/ipv4_of_the_individual_ipfs_kubo_node_service/tcp/4001\" ] },{ \"ID\": \"Qm...peer_id\", \"Addrs\": [ \"/ip4/ipv4_of_the_individual_ipfs_kubo_node_service/tcp/4001\" ] } ]" --json
(repeat an entry for each Kubo node you have in the cluster)

For a private IPFS cluster (meaning no Kubo node connects to the internet to fetch CIDs), you may need, in an initScript that runs before the one where you add peers:
ipfs bootstrap rm all
And maybe more commands; I don't have time to dig into the ipfs config, sorry.

My chart is similar to the ethpandaops one because I've forked it.

To see if an ipfs-cluster peer is seeing the other ones, use the ipfs-cluster-ctl API and do a peers ls. In a local Kubernetes cluster, each cluster peer should see the others directly. Be aware that the cluster component uses the headless service IP, which points directly to the Kubo pod IP, so the cluster pod has to start after the Kubo one; if not, the cluster pod will not see its attached Kubo.
Ensure the crdt trusted_peers list includes all cluster peer IDs (not Kubo peer IDs).

PS: the Kubo WebUI is not really relevant in this context; you should use the ipfs-cluster API or pinning service to pin/add data to the cluster, then fetch the content through the Kubo HTTP gateway. Same for checking cluster peers: use "ipfs-cluster-ctl peers ls" against the cluster API. The Kubo WebUI is only relevant for checking whether each Kubo instance sees the others, and even then the ipfs CLI on Kubo can be used instead.
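Putting the two initScripts together, a rough sketch of how this could look in the values file (the peer IDs and addresses are placeholders, one entry per Kubo node; the real IDs come from each node's ipfs id output):

ipfs:
  initScripts:
    001-no-public.sh: |-
      #!/bin/sh
      set -xe
      # private mode: drop the default public bootstrap peers
      ipfs bootstrap rm all
    002-peers.sh: |-
      #!/bin/sh
      set -xe
      # peer the Kubo nodes with each other (Kubo peer IDs, swarm port 4001)
      ipfs config Peering.Peers '[
        { "ID": "12D3KooW...KUBO_PEER_A", "Addrs": [ "/ip4/KUBO_SERVICE_IP_A/tcp/4001" ] },
        { "ID": "12D3KooW...KUBO_PEER_B", "Addrs": [ "/ip4/KUBO_SERVICE_IP_B/tcp/4001" ] }
      ]' --json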


drduker commented May 30, 2024

OK, just to reiterate what you just said.

I am to get the ID of the ipfs-cluster peer, not the ipfs-kubo peer. To do that I'm running ipfs-cluster-ctl id and grabbing the first ID from that, not the ID from under the "IPFS:" section.

So the init config should contain the pod IP of each of the ipfs-kubo pods. There is only one service for all of the ipfs-kubo pods, so I also attempted using the DNS address like below, where "ipfs-ipfs-cluster" is the service name, "blockchain" the namespace, etc.

      # ipfs config Peering.Peers "[ { \"ID\": \"12D3KooWLKVgPPt6XahdRjV7fmUPgPRkWCUVJCH2dx74kXQC3A4m\", \"Addrs\": [ \"/ip4/10.244.0.108/tcp/4001\" ] },{ \"ID\": \"12D3KooWHkRgWC63g7BB4A9dYvLxhD4AVmoaSkHok6oYU69zwyph\", \"Addrs\": [ \"/ip4/10.244.0.111/tcp/4001\" ] },{ \"ID\": \"12D3KooWCri71xyxTeb3BEgqh6s7tvPZbTUeoEK4zhTRmKMkREtD\", \"Addrs\": [ \"/ip4/10.244.0.113/tcp/4001\" ] } ]" --json
      ipfs config Peering.Peers "[ { \"ID\": \"12D3KooWLKVgPPt6XahdRjV7fmUPgPRkWCUVJCH2dx74kXQC3A4m\", \"Addrs\": [ \"/dns4/ipfs-ipfs-cluster-0.ipfs-ipfs-cluster.blockchain.svc.cluster.local/tcp/4001\" ] },{ \"ID\": \"12D3KooWHkRgWC63g7BB4A9dYvLxhD4AVmoaSkHok6oYU69zwyph\", \"Addrs\": [ \"/dns4/ipfs-ipfs-cluster-1.ipfs-ipfs-cluster.blockchain.svc.cluster.local/tcp/4001\" ] },{ \"ID\": \"12D3KooWCri71xyxTeb3BEgqh6s7tvPZbTUeoEK4zhTRmKMkREtD\", \"Addrs\": [ \"/dns4/ipfs-ipfs-cluster-2.ipfs-ipfs-cluster.blockchain.svc.cluster.local/tcp/4001\" ] } ]" --json

I don't have a way, other than doing it manually, to make sure the ipfs-cluster pods come up after the Kubo ones right now. Manual is fine for now; I can fix that somehow later. Great info.


drduker commented May 30, 2024

That still isn't working yet, but once I do get it working I'm definitely going to automate it with an init job or something. I feel like I'm closer!


drduker commented May 30, 2024

Here is what I'm doing for the peers config.
I'm trying all combinations/possibilities now of what goes in the ID and IP address, keeping the 4001 swarm port though.

ipfs:
  initScripts:
    001-peers.sh: |-
      #!/bin/sh
      set -xe
      echo "remove public connections"
      ipfs bootstrap rm $(ipfs bootstrap list)
    002-peers.sh: |-
      #!/bin/sh
      set -xe
      ipfs config Peering.Peers "[ { \"ID\": \"<CLUSTERID>\", \"Addrs\": [ \"/ip4/<IP-OF-IPFS-KUBO-POD>/tcp/4001\" ] },{ \"ID\": \"12D3KooWHkRgWC63g7BB4A9dYvLxhD4AVmoaSkHok6oYU69zwyph\", \"Addrs\": [ \"/ip4/10.244.0.147/tcp/4001\" ] },{ \"ID\": \"12D3KooWCri71xyxTeb3BEgqh6s7tvPZbTUeoEK4zhTRmKMkREtD\", \"Addrs\": [ \"/ip4/10.244.0.148/tcp/4001\" ] } ]" --json


drduker commented May 30, 2024

Every time I restart one of the ipfs-cluster pods it says this:
"crdt Datastore created. Number of heads: 0. Current max-height: 0. Dirty: false"

So I'm pretty sure the crdt DB doesn't have the other peers.


drduker commented May 30, 2024

I was able to do this on a kubo node, but peers still show 0 on the ipfs-cluster pod with ipfs-cluster-ctl.

ipfs swarm connect "/ip4/10.244.0.161/tcp/4001/p2p/12D3KooWGTv12UrtBEYv42dxHpZpt3PrnP6Ld2DXqdKdbPHpRU65"
connect 12D3KooWGTv12UrtBEYv42dxHpZpt3PrnP6Ld2DXqdKdbPHpRU65 success
...
which is:
ipfs swarm connect "/ip4/<ipfs-kubo-ip>/tcp/4001/p2p/<ipfs-peer-id>"

GuillaumeOuint (Owner) commented

Be sure to execute the peer add after clearing the crdt store (either with a higher-numbered script that executes afterwards (001, 002, etc.) or in the same script, on the line after the emptying). The ipfs-cluster pod will not see Kubo if Kubo has been restarted after the start of the cluster pod (because the headless service IP points to the pod IP; if the Kubo pod is recreated, the ipfs-cluster pod loses track of Kubo because the IP changed).
After checking the cluster configuration: yes, you have to configure each cluster peer. You have to do this in each cluster pod, in the file /data/ipfs-cluster/service.json under cluster.peer_addresses, where you input the cluster service address of each ipfs-cluster instance in the format "/dns/cluster-x-ipfs-cluster/tcp/9096/p2p/PEER-IDENTITY".
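As a rough sketch, the relevant fragment of service.json could look like this (the cluster peer IDs are placeholders; the DNS names follow the per-pod names used earlier in this thread):

{
  "cluster": {
    "peer_addresses": [
      "/dns4/ipfs-ipfs-cluster-0.ipfs-ipfs-cluster.blockchain.svc.cluster.local/tcp/9096/p2p/CLUSTER_PEER_ID_0",
      "/dns4/ipfs-ipfs-cluster-1.ipfs-ipfs-cluster.blockchain.svc.cluster.local/tcp/9096/p2p/CLUSTER_PEER_ID_1",
      "/dns4/ipfs-ipfs-cluster-2.ipfs-ipfs-cluster.blockchain.svc.cluster.local/tcp/9096/p2p/CLUSTER_PEER_ID_2"
    ]
  }
}

The cluster peer IDs here are the ones reported by ipfs-cluster-ctl id (the top-level ID, not the one under the "IPFS" section).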

GuillaumeOuint (Owner) commented

> I was able to do this on a kubo node, but peers still show 0 on the ipfs-cluster pod with ipfs-cluster-ctl.
>
> ipfs swarm connect "/ip4/10.244.0.161/tcp/4001/p2p/12D3KooWGTv12UrtBEYv42dxHpZpt3PrnP6Ld2DXqdKdbPHpRU65"
> connect 12D3KooWGTv12UrtBEYv42dxHpZpt3PrnP6Ld2DXqdKdbPHpRU65 success
> ...
> which is:
> ipfs swarm connect "/ip4/<ipfs-kubo-ip>/tcp/4001/p2p/<ipfs-peer-id>"

In your peer addresses, since all your Kubo nodes are local, use the service ClusterIP instead of the pod IP, if you haven't already done so.
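For example, to look up those service ClusterIPs (namespace as used earlier in this thread):

kubectl -n blockchain get svc -o wide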


drduker commented May 30, 2024

Success!

I was able to use the DNS names for both, sort of! I created additional services for each ipfs-kubo node, then did nslookups to get the IPs and loaded them in similarly. The trick for me was to also update the service.json!

Might be able to use just DNS for one of them, not sure. Will need to test that out more later.
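For what it's worth, one way to create the extra per-Kubo services mentioned above is a per-pod Service keyed on the pod-name label that Kubernetes adds to StatefulSet pods; this is only a sketch, and the names/ports should be checked against what the chart actually creates:

apiVersion: v1
kind: Service
metadata:
  name: ipfs-ipfs-cluster-0-swarm
  namespace: blockchain
spec:
  selector:
    # label automatically set by Kubernetes on StatefulSet pods
    statefulset.kubernetes.io/pod-name: ipfs-ipfs-cluster-0
  ports:
    - name: swarm
      port: 4001
      targetPort: 4001
      protocol: TCP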
