This repository contains a benchmarking exercise designed to teach FRAME pallet benchmarking concepts through a simplified Identity pallet implementation. The project demonstrates various benchmarking patterns and complexities in a Substrate runtime environment.
Your task is to:
- implement the missing benchmarks in the Identity pallet's benchmarking module
- run the benchmarks and generate a `weights.rs` file
This exercise will teach you to:
- Analyze complexity patterns - Understand different algorithmic complexities (linear, logarithmic, constant)
- Choose appropriate parameters - Determine which complexity parameters are necessary for accurate benchmarking
- Implement benchmark scenarios - Create proper setup, execution, and verification for benchmarks
- Understand storage patterns - Compare different storage approaches and their performance implications
Navigate to `pallets/identity/src/benchmarking.rs` and find the two incomplete benchmarks:
```rust
#[benchmark]
fn clear_identity_inline_usage(
    b: Linear<1, { T::MaxFieldLength::get() }>, // TODO: determine if necessary
    j: Linear<0, { T::MaxJudgements::get() }>, // TODO: determine if necessary
) {
    // TODO: implement
}
```
Your Tasks:
- Analyze the complexity: Examine the `clear_identity` extrinsic to understand its computational complexity when using inline storage
- Determine parameters: Decide which linear parameters (`b` for bytes, `j` for judgements) are actually necessary
- Implement the benchmark: Create a complete benchmark following the patterns from existing benchmarks
- Add verification: Include proper assertions to verify the benchmark's correctness
```rust
#[benchmark]
fn clear_identity_double_map_usage(
    b: Linear<1, { T::MaxFieldLength::get() }>, // TODO: determine if necessary
    j: Linear<0, { T::MaxJudgements::get() }>, // TODO: determine if necessary
) {
    // TODO: implement
}
```
Your Tasks:
- Analyze the complexity: Examine the `clear_identity` extrinsic to understand its computational complexity when using double map storage
- Determine parameters: Decide which linear parameters are actually necessary for this storage pattern
- Implement the benchmark: Create a complete benchmark demonstrating the difference from inline storage
- Add verification: Include proper assertions to verify the benchmark's correctness
Your Task: After implementing the benchmarks, run them: compile your runtime and use the omni-bencher to obtain timings for the extrinsics.
- Complexity Analysis
- Storage Pattern Comparison
- Benchmarking Best Practices
- Proper setup: Creating realistic pre-conditions for the benchmark
- Worst-case scenarios: Testing the most expensive execution paths
- Comprehensive verification: Ensuring benchmarks measure what they claim to measure
Before implementing, examine the existing benchmarks in the file:
- `set_identity` - Shows linear complexity with the bytes parameter
- `set_identity_update` - Shows linear complexity with bytes and logarithmic complexity with judgements
- `provide_judgement_inline` - Shows logarithmic complexity with judgements
- `provide_judgement_double_map` - Shows linear complexity with the bytes parameter but is independent of judgements
Read `pallets/identity/src/lib.rs` to understand:
- How `clear_identity` works
- What storage operations it performs
Each benchmark should include:
```rust
#[benchmark]
fn benchmark_name(/* parameters */) {
    // 1. Setup: Create test accounts and fund them
    // 2. Pre-conditions: Set up identity and judgements
    // 3. Execution: Call the extrinsic being benchmarked
    // 4. Verification: Assert the expected final state
}
```
Use the existing helper functions:
- `fund_account::<T>()` - Provides sufficient balance for operations
- `create_identity_info::<T>()` - Creates test identity data
- `whitelisted_caller()` or `account()` - Creates test accounts
Run your pallet's tests, including the benchmarks:

```sh
cargo test -p pallet-identity --features runtime-benchmarks
```
Run this to check that your runtime compiles correctly:

```sh
cargo test --features runtime-benchmarks
```
```sh
cargo +nightly fmt
cargo clippy -- -D warnings
```
After completing this exercise, you should understand:
- When parameters matter: Why some benchmarks need `b` and `j` parameters while others don't
- Storage tradeoffs: The performance implications of different storage patterns
- Complexity analysis: How to identify and measure different algorithmic complexities
- Benchmark implementation: How to write comprehensive, correct benchmarks
```
pallets/identity/
├── src/
│   ├── lib.rs          # Pallet implementation with extrinsics
│   ├── benchmarking.rs # 🎯 YOUR ASSIGNMENT - Complete the TODOs
│   ├── weights.rs      # Weight trait and implementations
│   ├── mock.rs         # Test runtime configuration
│   └── tests.rs        # Unit tests
└── Cargo.toml
```
This Identity pallet is a simplified version of Substrate's Identity pallet, designed specifically for benchmarking education. It provides:
- Set/clear identity information with configurable fields
- Economic deposits to prevent spam
- Judgement system for identity validation
- Linear complexity - Operations scaling with data size
- Logarithmic complexity - Binary search operations
- Storage pattern comparison - BoundedVec vs DoubleMap performance
- Economic operations - Currency reservation, unreservation
- Real-world scenarios - Based on production Substrate patterns
```sh
# Build runtime WASM **WITHOUT BENCHMARKS**
cargo build -p bench-runtime --release

# Install required tools
cargo install polkadot-omni-node --locked
cargo install staging-chain-spec-builder --locked

# Create chain spec from WASM
chain-spec-builder create --runtime ./target/release/wbuild/bench-runtime/pba_runtime.wasm --relay-chain westend --para-id 1000 -t development default

# Run omni-node
polkadot-omni-node --chain chain_spec.json --dev-block-time 6000 --tmp
```
Install the omni-bencher:

```sh
cargo install frame-omni-bencher --locked
```
Build your runtime with benchmarking enabled:

```sh
cargo build --release --features runtime-benchmarks
```
Run the omni-bencher and generate a `weights.rs` file with your results:

```sh
frame-omni-bencher v1 benchmark pallet \
    --runtime ./target/release/wbuild/bench-runtime/bench_runtime.compact.compressed.wasm \
    --pallet "pallet_identity" --extrinsic "" \
    --output weights.rs
```
- Start by understanding: Read the `clear_identity` extrinsic implementation first
- Study the patterns: Look at existing benchmarks to understand the structure
- Test frequently: Run tests after each change to catch issues early
- Think about complexity: Consider what actually makes the operation more expensive
- Verify your work: Ensure your benchmarks test what they claim to test
Good luck with your benchmarking implementation! This exercise will give you valuable hands-on experience with FRAME benchmarking concepts that are essential for production Substrate development.