
chore(l1, l2, levm): removed lazy_static and use LazyCell/LazyLock #1

Open

lakshya-sky wants to merge 5 commits into main from chore/remove-laze-static

Conversation

@lakshya-sky
Owner

Motivation

Description

Closes #issue_number
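
Although the description template is left empty, the change the title describes is a standard migration: drop the `lazy_static` crate in favor of the standard library's `std::sync::LazyLock` (thread-safe statics) and `std::cell::LazyCell` (single-threaded lazy values), both stable since Rust 1.80. The sketch below is illustrative only; the static name and contents are hypothetical and not taken from the ethrex codebase.

```rust
// Hypothetical example of a lazy_static -> LazyLock/LazyCell migration.
//
// Before (with the lazy_static crate):
// lazy_static! {
//     static ref PRECOMPILE_COSTS: HashMap<&'static str, u64> = ...;
// }
use std::cell::LazyCell;
use std::collections::HashMap;
use std::sync::LazyLock;

// After: a thread-safe lazy static, initialized on first access.
static PRECOMPILE_COSTS: LazyLock<HashMap<&'static str, u64>> =
    LazyLock::new(|| HashMap::from([("ecrecover", 3000), ("sha256", 60)]));

fn main() {
    // LazyCell is the non-Sync counterpart, handy for lazily computed locals.
    let buffer: LazyCell<Vec<u8>> = LazyCell::new(|| vec![0u8; 1024]);

    println!("ecrecover cost: {:?}", PRECOMPILE_COSTS.get("ecrecover"));
    println!("buffer len: {}", buffer.len());
}
```

Unlike `lazy_static!`, the std types need no macro and no extra dependency, which is presumably the motivation for the change.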

@lakshya-sky changed the title from "Chore/remove laze static" to "chore(l1, l2, levm): removed lazy_static and use LazyCell/LazyLock" on Nov 24, 2025
@github-actions

Benchmark for d9810ea

| Test | Base | PR | % |
|---|---|---|---|
| Trie/cita-trie insert 10k | 28.6±0.94ms | 30.1±1.55ms | +5.24% |
| Trie/cita-trie insert 1k | 2.9±0.01ms | 2.9±0.13ms | 0.00% |
| Trie/ethrex-trie insert 10k | 26.1±0.67ms | 26.4±1.09ms | +1.15% |
| Trie/ethrex-trie insert 1k | 2.2±0.03ms | 2.2±0.02ms | 0.00% |

@github-actions

Benchmark Results Comparison

No significant difference was registered for any benchmark run.

Detailed Results

Benchmark Results: BubbleSort

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| main_revm_BubbleSort | 2.993 ± 0.014 | 2.974 | 3.018 | 1.00 |
| main_levm_BubbleSort | 3.097 ± 0.045 | 3.051 | 3.209 | 1.03 ± 0.02 |
| pr_revm_BubbleSort | 2.999 ± 0.016 | 2.979 | 3.028 | 1.00 ± 0.01 |
| pr_levm_BubbleSort | 3.102 ± 0.037 | 3.069 | 3.197 | 1.04 ± 0.01 |

Benchmark Results: ERC20Approval

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| main_revm_ERC20Approval | 981.2 ± 7.2 | 972.0 | 998.4 | 1.00 |
| main_levm_ERC20Approval | 1093.0 ± 5.6 | 1083.0 | 1102.6 | 1.11 ± 0.01 |
| pr_revm_ERC20Approval | 1000.7 ± 11.3 | 990.2 | 1028.7 | 1.02 ± 0.01 |
| pr_levm_ERC20Approval | 1087.5 ± 7.5 | 1081.0 | 1104.1 | 1.11 ± 0.01 |

Benchmark Results: ERC20Mint

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| main_revm_ERC20Mint | 133.9 ± 0.6 | 132.9 | 134.9 | 1.00 |
| main_levm_ERC20Mint | 164.3 ± 2.0 | 162.6 | 169.0 | 1.23 ± 0.02 |
| pr_revm_ERC20Mint | 134.6 ± 0.9 | 133.0 | 135.7 | 1.01 ± 0.01 |
| pr_levm_ERC20Mint | 162.9 ± 1.8 | 161.1 | 167.4 | 1.22 ± 0.01 |

Benchmark Results: ERC20Transfer

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| main_revm_ERC20Transfer | 232.9 ± 1.3 | 231.8 | 236.5 | 1.00 |
| main_levm_ERC20Transfer | 278.6 ± 5.9 | 274.3 | 293.2 | 1.20 ± 0.03 |
| pr_revm_ERC20Transfer | 234.4 ± 1.4 | 231.5 | 236.4 | 1.01 ± 0.01 |
| pr_levm_ERC20Transfer | 277.7 ± 3.8 | 272.1 | 284.2 | 1.19 ± 0.02 |

Benchmark Results: Factorial

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| main_revm_Factorial | 225.4 ± 3.3 | 223.7 | 234.8 | 1.00 ± 0.02 |
| main_levm_Factorial | 266.5 ± 3.6 | 263.9 | 275.8 | 1.18 ± 0.02 |
| pr_revm_Factorial | 225.1 ± 1.6 | 223.6 | 229.5 | 1.00 |
| pr_levm_Factorial | 264.6 ± 0.8 | 263.2 | 265.9 | 1.18 ± 0.01 |

Benchmark Results: FactorialRecursive

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| main_revm_FactorialRecursive | 1.629 ± 0.043 | 1.553 | 1.713 | 1.00 |
| main_levm_FactorialRecursive | 8.341 ± 0.031 | 8.298 | 8.412 | 5.12 ± 0.14 |
| pr_revm_FactorialRecursive | 1.646 ± 0.035 | 1.566 | 1.678 | 1.01 ± 0.03 |
| pr_levm_FactorialRecursive | 8.328 ± 0.042 | 8.260 | 8.404 | 5.11 ± 0.14 |

Benchmark Results: Fibonacci

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| main_revm_Fibonacci | 204.9 ± 3.1 | 203.0 | 213.2 | 1.01 ± 0.02 |
| main_levm_Fibonacci | 256.5 ± 6.3 | 248.3 | 270.2 | 1.27 ± 0.03 |
| pr_revm_Fibonacci | 202.7 ± 0.8 | 201.2 | 203.6 | 1.00 |
| pr_levm_Fibonacci | 254.7 ± 6.3 | 247.7 | 267.6 | 1.26 ± 0.03 |

Benchmark Results: FibonacciRecursive

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| main_revm_FibonacciRecursive | 852.1 ± 9.2 | 835.9 | 865.5 | 1.11 ± 0.02 |
| main_levm_FibonacciRecursive | 767.9 ± 6.4 | 761.7 | 784.4 | 1.00 |
| pr_revm_FibonacciRecursive | 851.2 ± 11.2 | 836.8 | 869.3 | 1.11 ± 0.02 |
| pr_levm_FibonacciRecursive | 772.5 ± 5.2 | 764.9 | 783.1 | 1.01 ± 0.01 |

Benchmark Results: ManyHashes

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| main_revm_ManyHashes | 8.3 ± 0.0 | 8.2 | 8.3 | 1.00 |
| main_levm_ManyHashes | 9.1 ± 0.1 | 9.0 | 9.3 | 1.11 ± 0.01 |
| pr_revm_ManyHashes | 8.3 ± 0.1 | 8.2 | 8.4 | 1.00 ± 0.01 |
| pr_levm_ManyHashes | 9.1 ± 0.1 | 9.0 | 9.4 | 1.11 ± 0.02 |

Benchmark Results: MstoreBench

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| main_revm_MstoreBench | 254.2 ± 5.4 | 251.1 | 268.9 | 1.04 ± 0.02 |
| main_levm_MstoreBench | 243.9 ± 1.7 | 241.9 | 247.6 | 1.00 |
| pr_revm_MstoreBench | 254.6 ± 3.5 | 252.6 | 264.3 | 1.04 ± 0.02 |
| pr_levm_MstoreBench | 246.0 ± 7.0 | 241.3 | 265.6 | 1.01 ± 0.03 |

Benchmark Results: Push

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| main_revm_Push | 286.9 ± 1.7 | 284.8 | 290.9 | 1.00 ± 0.01 |
| main_levm_Push | 301.8 ± 4.2 | 298.0 | 311.7 | 1.05 ± 0.02 |
| pr_revm_Push | 286.5 ± 1.2 | 284.9 | 288.6 | 1.00 |
| pr_levm_Push | 302.9 ± 3.4 | 299.8 | 309.1 | 1.06 ± 0.01 |

Benchmark Results: SstoreBench_no_opt

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| main_revm_SstoreBench_no_opt | 176.4 ± 3.5 | 166.7 | 178.6 | 1.97 ± 0.06 |
| main_levm_SstoreBench_no_opt | 89.3 ± 1.8 | 87.7 | 92.2 | 1.00 |
| pr_revm_SstoreBench_no_opt | 176.0 ± 3.6 | 166.4 | 179.3 | 1.97 ± 0.06 |
| pr_levm_SstoreBench_no_opt | 90.9 ± 1.5 | 89.0 | 92.5 | 1.02 ± 0.03 |

ilitteri pushed a commit that referenced this pull request Feb 20, 2026
…ub auto-links as issue references (lambdaclass#6187)

## Motivation

The Claude AI code reviewer (`.github/workflows/pr_ai_review.yaml`)
enumerates findings using `#1`, `#2`, etc., which GitHub auto-links as
references to issues/PRs. This clutters PR activity feeds and confuses
readers. Example:
lambdaclass#6186 (comment)

## Description

Add formatting rules to the AI review prompt
(`.github/prompts/ai-review.md`) instructing the model to use `1.`, `2.`,
or bullet points instead of `#N`, and to refer back to items as "Item 1"
or "Point 2" rather than "Issue #1".

## Checklist

- [ ] Updated `STORE_SCHEMA_VERSION` (crates/storage/lib.rs) if the PR
includes breaking changes to the `Store` requiring a re-sync.
