
feat: validium NEAR DA via sidecar #129

Open · wants to merge 6 commits into develop

Conversation

dndll commented Apr 17, 2024

This PR adds CDK validium support via NEAR DA. It uses the new Etrog interface to give CDK validiums a way to verify whether their sequence data has been submitted.

Going into the deeper details, we use the DA sidecar so we don't need to bake any FFI libraries into the node. This is much like how Nitro and others prefer to do it, and similar to how the DAC nodes are constructed. You just run the sidecar and it submits all of the blob information, manages your NEAR keys, etc.
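For context on what "just run the sidecar" means from the node's perspective, here is a minimal sketch of a DA backend that delegates to the sidecar over HTTP. The endpoint path, payload shape, and default port are assumptions for illustration; the real API is defined in near/rollup-data-availability.

```go
// Minimal sketch of a DA backend that talks to the NEAR DA sidecar over
// HTTP. The endpoint path ("/blob"), payload shape, and port are
// hypothetical, not the sidecar's actual API.
package dataavailability

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// NearDABackend posts sequence data to a locally running sidecar, which
// handles NEAR key management and blob submission on the node's behalf.
type NearDABackend struct {
	BaseURL string       // e.g. "http://localhost:5888" (assumed default)
	Client  *http.Client // optional; falls back to http.DefaultClient
}

// PostSequence submits the batch data and returns the commitment the
// sidecar reports back.
func (b *NearDABackend) PostSequence(ctx context.Context, batchesData [][]byte) ([]byte, error) {
	payload, err := json.Marshal(map[string][][]byte{"batches": batchesData})
	if err != nil {
		return nil, err
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodPost, b.BaseURL+"/blob", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")

	client := b.Client
	if client == nil {
		client = http.DefaultClient
	}
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("sidecar returned status %d", resp.StatusCode)
	}

	var out struct {
		Commitment []byte `json:"commitment"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Commitment, nil
}
```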

Some small issues:

  • Configuration of the integration with the sidecar: since the validium prefers to have minimal DA-specific configuration, I wasn't quite sure how we should do the config. Previously we set the DA-specific config in the sequence sender; is that still the preferred approach? We only really need to set a base URL; optionally, if you want to configure the sidecar, you can do that too by passing a config. (A hypothetical sketch of what this could look like follows this list.)
  • The NEAR DA Ethereum contract is still not complete: we implemented it specifically for this PR, so it does not join up to our light client proofs yet. It essentially always tells the validium the data has been submitted. However, we plan to finish this pretty soon; more info below.
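On the first point, if the config does stay on the sequence sender side, the minimal shape could be as small as the following sketch. Field names and struct tags are made up for illustration:

```go
// Hypothetical DA config, reflecting that only a base URL is really
// needed; everything NEAR-specific stays inside the sidecar.
type DataAvailabilityConfig struct {
	// BaseURL of the NEAR DA sidecar, e.g. "http://localhost:5888".
	BaseURL string `mapstructure:"BaseURL"`
	// SidecarConfigPath optionally points at a sidecar config file, for
	// operators who want the node to pass a config through to the sidecar.
	SidecarConfigPath string `mapstructure:"SidecarConfigPath"`
}
```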

Complexity around sequence DA availability.
NEAR DA works with SPV light clients: when a blob is submitted to the tracked store, we send it off to a light client, which queues the proof for verification. Once we take a batch of transactions (in batches of 16, 64, or 128), we build a plonky2 proof with the light client protocol and then verify the batch. This gets wrapped in a groth16 SNARK, which we eventually relay to the L1, paying roughly 470k gas depending on calldata. This lets us support various user flows, such as:

  • Rollup wants to prove and verify as cheaply as possible; perhaps they are eco-aware, so they fill up their sequences as much as possible and are happy for the "shared queue" to prove the tx eventually, say within 1h.
  • Rollup wants to verify as quickly as possible and does not care about cost: they run a plain old light client (POLC) and create the Merkle proof (~200k gas), plus sync the light client eagerly (~1M+ gas), relaying to DA. In practice, I think this is the only flow that would work well with the eager syncing the sequence sender uses with default configs. (See the rough numbers after this list.)
  • Rollup wants an isolated queue, queueing when they like: they push to the queue, trigger the proof, and the proof verification is relayed.
  • Rollup runs the ZKLC: they prove, verify, and build the recursive proof eagerly.
  • Rollup sends various blobs to all DA providers and needs to recursively prove them.
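Some rough back-of-the-envelope numbers on this trade-off (my arithmetic, using the figures already quoted above): at roughly 470k gas per relayed batch proof, a full batch of 128 blobs amortizes to about 470,000 / 128 ≈ 3,700 gas per blob, whereas the eager POLC path pays ~200k gas per Merkle proof plus 1M+ gas of light client syncing up front.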

In short, this introduces some latency we need to be aware of for validiums, which is why we want to give them different avenues beyond eager verification, or let them adjust the configuration so the sequence sender can work at a reasonable cadence with the ZKLC. Ideally, we would allow some lazy information in the L2 to be provided eventually. It's possible we could do some form of hybrid approach where we provide the eager, expensive info to the L2 but eventually prove on the L1. This is something we are exploring, but it's very much just an idea.

To test it: everything works with local nodes, so you should be able to run `make run-near` and check it out. The setup includes both the forced NEAR sequencer and the standard sequencer. After a sequence with batches is sent off, check the sequence sender to ensure it was sent; then check the forced sequencer to verify it's syncing with the DA layer.

TODO:

christophercampbell commented

@dndll fantastic, thanks! I'd like to try this out, but get an unauthorized error:

 ✘ near-da-sidecar Error Head "https://ghcr.io/v2/near/rollup-data-availability/http-api/manifests/dev": unauthorized

Can I get access, or a hint on how to build that image?

dndll (Author) commented Apr 25, 2024

> @dndll fantastic, thanks! I'd like to try this out, but get an unauthorized error:
>
>  ✘ near-da-sidecar Error Head "https://ghcr.io/v2/near/rollup-data-availability/http-api/manifests/dev": unauthorized
>
> Can I get access, or a hint on how to build that image?

Whoops! Looks like I forgot to make that public. It should be available now at https://github.com/near/rollup-data-availability/pkgs/container/rollup-data-availability%2Fhttp-api

dndll (Author) commented May 1, 2024

Hey @christophercampbell - chasing up, did you manage to take a look at this?

christophercampbell commented
Yes, great work! You followed the intended interfaces and code structure, and we're definitely interested in the SC developments. It'd be nice to get you in contact with product folks over here; do you have a preferred contact for that?

dndll (Author) commented May 7, 2024

> Yes, great work! You followed the intended interfaces and code structure, and we're definitely interested in the SC developments. It'd be nice to get you in contact with product folks over here; do you have a preferred contact for that?

Thanks Christopher, passing through our comms now! [email protected]

dndll marked this pull request as ready for review May 7, 2024 16:45
dndll requested a review from a team as a code owner May 7, 2024 16:45
arnaubennassar (Collaborator) commented

gm! After a team discussion, we're reconsidering the idea of adding the different DA protocol integrations directly in this repo. Instead, we'd like this code to live in a repo you fully own, and for it to be imported here as an external dependency.

In a nutshell, this PR should only contain the modifications done on:

  • cmd/run.go
  • dataavailability/config.go

Anything else should be handled in a repo of yours. Sorry for the confusion and the last-minute change of mind, but we believe this will be better for everyone.
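For what it's worth, here is a minimal sketch of what the remaining wiring in cmd/run.go could look like once the backend lives in an external module. The import path, constructor, and interface name are all assumptions for illustration, not this repo's actual identifiers:

```go
// Hypothetical sketch: selecting an externally-maintained DA backend in
// cmd/run.go. Import path, constructor, and interface are illustrative.
package main

import (
	"fmt"

	// Hypothetical external module owned by the NEAR team.
	neardal "github.com/near/rollup-data-availability/gopkg/da"
)

// DABackender stands in for whatever interface the node expects a DA
// backend to satisfy (assumed here for illustration).
type DABackender interface {
	PostSequence(batchesData [][]byte) ([]byte, error)
}

// newDABackend selects a DA backend by name. All NEAR-specific logic
// lives in the external repo; the node only passes the sidecar base URL
// through from its config.
func newDABackend(backend, baseURL string) (DABackender, error) {
	switch backend {
	case "near":
		return neardal.New(baseURL), nil
	default:
		return nil, fmt.Errorf("unknown DA backend %q", backend)
	}
}
```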
