
Automate Superchain Registry chain support #8105

Open · wants to merge 35 commits into master

Conversation

emlautarom1 (Contributor)

Solves #8065

Changes

Types of changes

What types of changes does your code introduce?

  • Bugfix (a non-breaking change that fixes an issue)
  • New feature (a non-breaking change that adds functionality)
  • Breaking change (a change that causes existing functionality not to work as expected)
  • Optimization
  • Refactoring
  • Documentation update
  • Build-related changes
  • Other: Description

Testing

Requires testing

  • Yes
  • No

If yes, did you write tests?

  • Yes
  • No

Notes on testing

Added tests to verify that we properly decompress the generated ChainSpec files. Some tests were updated to use a new ChainSpecFileLoader that is aware of the ChainSpec file format and can therefore decompress files when needed.
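
The actual loader lives in Nethermind's C# codebase; purely as an illustration of the format-aware idea, a loader can branch on the zstd frame magic bytes. The following Python sketch uses hypothetical names and assumes the python-zstandard package:

import json
import zstandard as zstd

ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"  # every zstd frame starts with these bytes

def load_chainspec(path, dict_bytes=None):
    # Read the raw bytes, then decompress only if the file is zstd-compressed.
    with open(path, "rb") as f:
        data = f.read()
    if data[:4] == ZSTD_MAGIC:
        dict_data = zstd.ZstdCompressionDict(dict_bytes) if dict_bytes else None
        data = zstd.ZstdDecompressor(dict_data=dict_data).decompress(data)
    return json.loads(data)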

Documentation

Requires documentation update

  • Yes
  • No

Requires explanation in Release Notes

  • Yes
  • No

Once this feature is properly working, we need to document that Nethermind now supports all OP Superchain chains, not just a small subset (currently OP, Base, and Worldchain).

Remarks

The generated ChainSpec files are stored in a compressed format because they are excessively large as plain .json. As in the Superchain Registry repository (https://github.com/ethereum-optimism/superchain-registry/blob/cb9a0ca8eda1608bf9958137a736fd2c18884fbd/superchain/README.md), we use Zstandard as the compression algorithm: the OP team already determined that it is one of the best algorithms for this use case, and reusing it keeps the amount of code down (our scripts already handle decompression, so we might as well compress the resulting artifacts with the same algorithm).
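
As a minimal sketch of the compression side, assuming the python-zstandard package and hypothetical file names:

import zstandard as zstd

# Compress one generated chainspec; a high compression level trades
# speed for ratio, which is fine for a build-time artifact.
with open("op-mainnet.json", "rb") as src:
    raw = src.read()
with open("op-mainnet.json.zst", "wb") as dst:
    dst.write(zstd.ZstdCompressor(level=19).compress(raw))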

Currently, 45 configs are generated: 28 for Mainnet and 17 for Sepolia.

We're still missing a GitHub action that automatically runs this script either periodically or when the Superchain Registry repository gets updated (the latter being preferable).

@kamilchodola (Contributor)

@stdevMac Can you take care of checking this and making sure it works correctly with Sedge? Also add it to our CI, but limit those side chains to trigger once per week.

@emlautarom1 (Contributor Author)

@kamilchodola @stdevMac Do not test this branch yet since it's not working at the moment. I'll change the status to "ready to review" once I figure out what the issue is (currently Nethermind can't find peers).

@emlautarom1 emlautarom1 marked this pull request as ready for review January 28, 2025 13:54
@emlautarom1 emlautarom1 requested review from rubo and a team as code owners January 28, 2025 13:54
@emlautarom1 (Contributor Author)

@kamilchodola @stdevMac @deffrian Ready to review. The issue was in the genesis section and it was fixed by updating the Python script.

return nethermind


def to_nethermind_runner(chain_name, l1, chain):
emlautarom1 (Contributor Author)

The WIP OptimismCL section is not included at the moment.


with open(path.join(tmp_dir, file), "rb") as json_config:
samples.append(json_config.read())
nethermind_dict = zstd.train_dictionary(2**16, samples, threads=-1)
emlautarom1 (Contributor Author)

The dict_size=2**16 parameter was chosen purely by trial and error: this value provided the best result.
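
For context, this is roughly how a trained dictionary is applied on the compression side with python-zstandard; samples is the list built above, while configs and the file names are hypothetical:

import zstandard as zstd

# Train one shared dictionary over all configs, then compress each
# config against it; many similar small JSON files compress far better
# with a shared dictionary than each on its own.
nethermind_dict = zstd.train_dictionary(2**16, samples, threads=-1)
cctx = zstd.ZstdCompressor(level=19, dict_data=nethermind_dict)
for name, raw in configs.items():  # hypothetical {filename: bytes} mapping
    with open(name + ".zst", "wb") as out:
        out.write(cctx.compress(raw))

# Persist the dictionary so consumers can decompress the artifacts.
with open("nethermind.dict", "wb") as out:
    out.write(nethermind_dict.as_bytes())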

}

using var decompressedStream = new DecompressionStream(streamData);
decompressedStream.LoadDictionary(buffer.ToArray());
emlautarom1 (Contributor Author)

Unfortunately, LoadDictionary does not accept any async/Stream overload.
In practice this should not be an issue, since the dictionary is ~3.5 MB and is only loaded once during initialization.
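
For comparison, the load-once pattern looks like this on the Python side with python-zstandard (dictionary path hypothetical); the dictionary is parsed a single time and the decompressor is reused for every file:

import zstandard as zstd

# Parse the dictionary once at startup and reuse the decompressor.
with open("nethermind.dict", "rb") as f:
    CHAINSPEC_DICT = zstd.ZstdCompressionDict(f.read())
DECOMPRESSOR = zstd.ZstdDecompressor(dict_data=CHAINSPEC_DICT)

def decompress_chainspec(path):
    with open(path, "rb") as f:
        return DECOMPRESSOR.decompress(f.read())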

@rubo (Contributor) left a comment

Added missing file headers

@@ -0,0 +1,6 @@
#!/usr/bin/env bash

Add the missing file header:

Suggested change
#!/usr/bin/env bash
#!/bin/bash
# SPDX-FileCopyrightText: 2025 Demerzel Solutions Limited
# SPDX-License-Identifier: LGPL-3.0-only

@@ -0,0 +1,338 @@
import argparse

Add the missing file header:

Suggested change
import argparse
# SPDX-FileCopyrightText: 2025 Demerzel Solutions Limited
# SPDX-License-Identifier: LGPL-3.0-only
import argparse
