This directory contains ASV (Airspeed Velocity) benchmarks for Dulwich core operations.
First, install airspeed velocity:

```shell
pip install asv
```

Run all benchmarks:

```shell
asv run
```

```shell
# Run only large history benchmarks
asv run -b LargeHistoryBenchmarks

# Run with specific parameters
asv run -b "LargeHistoryBenchmarks.time_walk_full_history"
```

```shell
# Benchmark last 10 commits
asv run HEAD~10..HEAD

# Benchmark specific commit range
asv run v1.0.0..master
```

```shell
# Compare current commit with master
asv continuous master HEAD

# Compare two commits
asv compare v1.0.0 HEAD
```

Generate and view the HTML results:

```shell
asv publish
asv preview  # Opens in browser
```

- Walking full commit history
- Limited history walks
- Path-filtered logs
- Rev-list operations
- Merge base calculations
- Log with patches (`porcelain.log` - equivalent to `git log -p`)
- Show commits (`porcelain.show` - equivalent to `git show`)
- Diff tree (`porcelain.diff_tree` - diff between commits)
Parameters:
- `num_commits`: 100, 1000, 5000
- `files_per_commit`: 10, 50, 100
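ASV runs a parameterized benchmark once per combination of its parameter lists (a Cartesian product), so the values above multiply. A plain-Python illustration of that expansion (not ASV code itself):

```python
import itertools

# Parameter lists as declared on the benchmark class
num_commits = [100, 1000, 5000]
files_per_commit = [10, 50, 100]

# ASV times the benchmark once per (num_commits, files_per_commit) pair
combinations = list(itertools.product(num_commits, files_per_commit))
print(len(combinations))  # 9 timed configurations
```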
- Reading objects from disk (loose, packed, or mixed storage)
- Object existence checks
- Iterating all objects
Parameters:
- `num_objects`: 1000, 10000, 50000
- `storage_type`: 'loose', 'packed', 'mixed'
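The three access patterns being timed can be sketched with a plain dict standing in for the real object store (illustrative only; this is not Dulwich's storage code):

```python
import hashlib

# Stand-in object store: SHA-1 hex -> object data (a plain dict,
# not Dulwich's real loose/packed storage)
store = {}
for i in range(1000):
    data = b"blob %d" % i
    store[hashlib.sha1(data).hexdigest()] = data

sha = hashlib.sha1(b"blob 42").hexdigest()

obj = store[sha]               # reading an object from the store
exists = sha in store          # object existence check
count = sum(1 for _ in store)  # iterating all objects
print(exists, count)  # True 1000
```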
- Random access to packed objects
- Sequential pack reading
- Pack index loading
- Pack verification
Parameters:
- `num_objects`: 1000, 10000, 50000
- `with_deltas`: False, True
- Local protocol clone
- Local protocol fetch
- Local protocol push
Parameters:
- `num_commits`: 100, 1000
- `operation`: 'fetch', 'push', 'clone'
- HTTP smart protocol clone
- HTTP fetch-pack operations
- With/without compression
Parameters:
- `num_commits`: 100, 1000
- `use_compression`: True, False
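Git object data is zlib-compressed inside packs and on the wire; a tiny self-contained illustration of the size difference compression makes on repetitive data (synthetic payload, not Dulwich code):

```python
import zlib

data = b"commit payload " * 200  # synthetic, highly repetitive payload
compressed = zlib.compress(data)

# Compressed form is much smaller for repetitive input
print(len(data), len(compressed), len(compressed) < len(data))
```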
- Negotiation with varying common history
- Pack generation simulation
Parameters:
- `num_commits`: 100, 1000, 5000
- `common_history_percent`: 0, 50, 90
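How a fixed commit count divides into shared and divergent history for a given `common_history_percent` can be sketched as follows (the helper name is illustrative, not Dulwich API):

```python
def split_history(num_commits, common_percent):
    """Return (shared, divergent) commit counts for the given split."""
    shared = num_commits * common_percent // 100
    return shared, num_commits - shared

print(split_history(1000, 90))  # (900, 100)
print(split_history(100, 0))    # (0, 100)
```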
- Adding large files
- Diffing large file changes
Parameters:
- `file_size_mb`: 1, 10, 50, 100
- Listing branches
- Switching branches
- Merging branches
Parameters:
- `num_branches`: 10, 100, 1000
- Repository status with many files
- Index diffing
Parameters:
- `num_files`: 100, 1000, 5000
- `percent_modified`: 10, 50, 100
- Counting objects with `porcelain.count_objects`
- Full GC with `porcelain.gc`
- Repacking with `porcelain.repack`
- Pruning unreachable objects with `porcelain.prune`
- Filesystem check with `porcelain.fsck`
Parameters:
- `num_loose_objects`: 100, 1000, 5000
- `unreachable_percent`: 0, 10, 50
- Full repository repack
- Incremental repacking
- Reference packing
- Pack file optimization/verification
Parameters:
- `num_packs`: 10, 50, 100
- `objects_per_pack`: 100, 1000, 5000
- `with_deltas`: False, True
- Reading symbolic references
- Following symref chains
- Setting new symrefs
- Updating symref targets
- Listing all refs including symrefs
- Reading remote HEAD symrefs
- Resolving remote symrefs
Parameters:
- `num_refs`: 10, 100, 1000
- `symref_depth`: 1, 2, 4
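Here symref depth is the number of `ref:` indirections followed before a SHA is reached. A minimal resolution sketch (a plain dict stands in for the refs store; `resolve` is illustrative, not Dulwich API):

```python
# Refs store with a symref chain of depth 2 (values are illustrative)
refs = {
    "HEAD": "ref: refs/heads/alias",
    "refs/heads/alias": "ref: refs/heads/main",
    "refs/heads/main": "a1b2c3d4",  # terminal SHA
}

def resolve(refs, name):
    """Follow 'ref: ...' indirections until a SHA is reached."""
    target = refs[name]
    while target.startswith("ref: "):
        target = refs[target[len("ref: "):]]
    return target

print(resolve(refs, "HEAD"))  # a1b2c3d4
```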
To add new benchmarks:
- Add a new class inheriting from `BenchmarkBase`
- Implement the `setup()` method for test data preparation
- Add benchmark methods prefixed with `time_` for timing benchmarks
- Use `params` and `param_names` for parameterized benchmarks
Example:

```python
class MyBenchmarks(BenchmarkBase):
    params = [10, 100, 1000]
    param_names = ['size']

    def setup(self, size):
        super().setup()
        # Setup code
        ...

    def time_my_operation(self, size):
        # Code to benchmark
        ...
```

The ASV benchmarks can be integrated with GitHub Actions for continuous performance monitoring.
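A minimal sketch of the commands such a CI job could run (the branch name and the `asv machine --yes` setup step are assumptions about the CI environment, not part of this repository):

```shell
# Hypothetical CI step: compare the checked-out commit against master
pip install asv
asv machine --yes
asv continuous origin/master HEAD --show-stderr
```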
- Use `--quick` for faster test runs during development
- Set `ASV_USE_CONDA=no` to use virtualenv instead of conda
- Use `--show-stderr` to debug benchmark failures
- Results are stored in `.asv/results/` as JSON files