
Remove powers-of-2 restriction on cohort size #130

Closed
cygnusv opened this issue Jun 20, 2023 · 5 comments · Fixed by #134

Comments

cygnusv (Member) commented Jun 20, 2023

Given that our cohorts will be likely small (i.e. less than 100), we're not going to take advantage of FFT optimizations that introduce the requirement for the cohort size to be a power of 2.

@piotr-roslaniec

  • Benchmark with and without powers-of-2 restrictions

arjunhassard (Member) commented Jun 23, 2023

Would the optimization of transcript aggregation (KZG etc.), which affords us the possibility (though not necessarily a product need) of viable >100 cohorts, also overshadow the relative savings from FFT optimizations, to the extent that the overall benefit of powers-of-2 drops below its obvious cost (inflexible cohort tuning)?

Regardless, this restriction seems more trouble than it's worth at this stage, so +1 on removing.

@piotr-roslaniec

piotr-roslaniec commented Jun 24, 2023

The "FFT optimization" refers to a set of operations that a validator has to perform in order to produce transcripts. The tradeoff is that with the FFT we impose a constraint on the number of shares (powers of 2 only), and in exchange get lower latency (validators produce transcripts faster). So when we talk about the FFT optimization, we are considering the latency of the transcript-generation step of the DKG.

This trade-off is not terribly interesting for us. In the original Ferveo it was interesting because latency was tied to block time (the DKG was performed by chain validators), and the ability to squeeze more validators (transcripts) into a shorter time window made sense. For us, removing the restriction on the number of shares is the better deal, since we outsourced state management to a data availability layer and don't face such performance constraints.

I just want to highlight that this change will not impact the size of transcripts; it's purely about the amount of computation. However, lifting the restriction on the number of transcripts could help us select DKG configurations that offer "the best bang for our buck" (gas cost of storage vs. the number of validators).
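To illustrate the cost of the restriction being discussed: a radix-2 FFT requires an evaluation domain whose size is a power of 2, so a cohort whose size falls between two powers gets padded up to the next one. A minimal sketch (the function name `fft_domain_size` is illustrative, not from the Ferveo codebase):

```rust
// Sketch of the padding implied by the powers-of-2 FFT restriction:
// a radix-2 evaluation domain must have power-of-2 size, so a cohort
// of n validators is padded up to the next power of two.
fn fft_domain_size(num_shares: usize) -> usize {
    num_shares.next_power_of_two()
}

fn main() {
    // A 13-validator cohort needs a 16-point domain under the restriction.
    assert_eq!(fft_domain_size(13), 16);
    // Just past a power-of-2 boundary the overhead approaches 2x:
    // 65 shares already require a 128-point domain.
    assert_eq!(fft_domain_size(65), 128);
    // Lifting the restriction lets the domain match the cohort exactly,
    // trading FFT speed for flexible cohort tuning.
    println!("ok");
}
```

This is why the restriction conflicts with flexible cohort tuning: near the bottom of a power-of-2 interval, almost half of the evaluation work is spent on padding.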

@piotr-roslaniec

Added benchmarking results to this PR

@piotr-roslaniec

Closed by #134
