Improvements to DynamicPPLBenchmarks #346

Draft: wants to merge 25 commits into base branch master

Commits (25)
57b5d47
bigboy update to benchmarks
torfjelde Aug 2, 2021
e7c0a76
Merge branch 'master' into tor/benchmark-update
torfjelde Aug 19, 2021
60ec2c8
Merge branch 'master' into tor/benchmark-update
torfjelde Sep 8, 2021
eb1b83c
Merge branch 'master' into tor/benchmark-update
torfjelde Nov 6, 2021
d8afa71
Merge branch 'master' into tor/benchmark-update
torfjelde Nov 6, 2021
5bb48d2
make models return random variables as NamedTuple as it can be useful…
torfjelde Dec 2, 2021
02484cf
add benchmarking of evaluation with SimpleVarInfo with NamedTuple
torfjelde Dec 2, 2021
5c59769
added some information about the execution environment
torfjelde Dec 3, 2021
f1f1381
added judgementtable_single
torfjelde Dec 3, 2021
a48553a
added benchmarking of SimpleVarInfo, if present
torfjelde Dec 3, 2021
f2dc062
Merge branch 'master' into tor/benchmark-update
torfjelde Dec 3, 2021
fa675de
added ComponentArrays benchmarking for SimpleVarInfo
torfjelde Dec 5, 2021
3962da2
Merge branch 'master' into tor/benchmark-update
yebai Aug 29, 2022
53dc571
Merge branch 'master' into tor/benchmark-update
yebai Nov 2, 2022
f5705d5
Merge branch 'master' into tor/benchmark-update
torfjelde Nov 7, 2022
7f569f7
formatting
torfjelde Nov 7, 2022
4a06150
Merge branch 'master' into tor/benchmark-update
yebai Feb 2, 2023
a1cc6bf
Apply suggestions from code review
yebai Feb 2, 2023
3e7e200
Update benchmarks/benchmarks.jmd
yebai Feb 2, 2023
c867ae8
Merge branch 'master' into tor/benchmark-update
yebai Jul 4, 2023
96f120b
merged main into this one
shravanngoswamii Dec 19, 2024
0460b64
Benchmarking CI
shravanngoswamii Dec 19, 2024
a8541b5
Julia script for benchmarking on top of current setup
shravanngoswamii Feb 1, 2025
0291c2f
keep old results for reference
shravanngoswamii Feb 1, 2025
6f255d1
Merge branch 'master' of https://github.com/TuringLang/DynamicPPL.jl …
shravanngoswamii Feb 1, 2025
32 changes: 32 additions & 0 deletions .github/workflows/Benchmarking.yml
@@ -0,0 +1,32 @@
name: Benchmarking

on:
  push:
    branches:
      - master

jobs:
  benchmark:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4

      - name: Set up Julia
        uses: julia-actions/setup-julia@v2
        with:
          version: '1'

      - name: Install Dependencies
        run: julia --project=benchmarks/ -e 'using Pkg; Pkg.instantiate()'

      - name: Run Benchmarks and Generate Reports
        run: julia --project=benchmarks/ -e 'using DynamicPPLBenchmarks; weave_benchmarks()'

      - name: Deploy to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./benchmarks/results
          publish_branch: gh-pages
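
For local testing, the two Julia steps above can be reproduced from the repository root; a minimal sketch assuming the same `benchmarks/` project and the `weave_benchmarks` entry point used by the workflow:

```julia
# Hypothetical local driver mirroring the workflow steps above; run from the
# repository root with `julia --project=benchmarks/ <this file>`.
using Pkg
Pkg.instantiate()           # "Install Dependencies" step
using DynamicPPLBenchmarks
weave_benchmarks()          # "Run Benchmarks and Generate Reports" step
```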
8 changes: 8 additions & 0 deletions benchmarks/Project.toml
@@ -3,11 +3,19 @@
uuid = "d94a1522-c11e-44a7-981a-42bf5dc1a001"
version = "0.1.0"

[deps]
AbstractPPL = "7a57a42e-76ec-4ea3-a279-07e840d6d9cf"
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
ComponentArrays = "b0b7db55-cfe3-40fc-9ded-d10e2dbeff66"
DiffUtils = "8294860b-85a6-42f8-8c35-d911f667b5f6"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
DrWatson = "634d3b9d-ee7a-5ddf-bec9-22491ea816e1"
DynamicPPL = "366bfd00-2699-11ea-058f-f148b4cae6d8"
InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
JSON = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
LibGit2 = "76f85450-5226-5b5a-8eaa-529ad045b433"
Markdown = "d6f4376e-aef5-505a-96c1-9c027394607a"
Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Tables = "bd369af6-aec1-5ad0-b16a-f7cc5008161c"
Weave = "44d3d7a6-8a23-5bf8-98c5-b353f8df5ec9"
45 changes: 34 additions & 11 deletions benchmarks/benchmark_body.jmd
@@ -1,14 +1,14 @@
```julia
@time model_def(data)();
@time model_def(data...)();
```

```julia
m = time_model_def(model_def, data);
m = time_model_def(model_def, data...);
```

```julia
suite = make_suite(m);
results = run(suite);
results = run(suite; seconds=WEAVE_ARGS[:seconds]);
```

```julia
@@ -19,13 +19,37 @@
results["evaluation_untyped"]
results["evaluation_typed"]
```

```julia
let k = "evaluation_simple_varinfo_nt"
haskey(results, k) && results[k]
end
```

```julia
let k = "evaluation_simple_varinfo_componentarray"
haskey(results, k) && results[k]
end
```

```julia
let k = "evaluation_simple_varinfo_dict"
haskey(results, k) && results[k]
end
```

```julia
let k = "evaluation_simple_varinfo_dict_from_nt"
haskey(results, k) && results[k]
end
```

```julia; echo=false; results="hidden";
BenchmarkTools.save(
joinpath("results", WEAVE_ARGS[:name], "$(nameof(m))_benchmarks.json"), results
)
```

```julia; wrap=false
```julia; wrap=false; echo=false
if WEAVE_ARGS[:include_typed_code]
typed = typed_code(m)
end
@@ -34,16 +34,15 @@
end
```julia; echo=false; results="hidden"
if WEAVE_ARGS[:include_typed_code]
# Serialize the output of `typed_code` so we can compare later.
haskey(WEAVE_ARGS, :name) &&
serialize(joinpath("results", WEAVE_ARGS[:name], "$(nameof(m)).jls"), string(typed))
haskey(WEAVE_ARGS, :name) && serialize(joinpath("results", WEAVE_ARGS[:name],"$(nameof(m)).jls"), string(typed));
end
```

```julia; wrap=false; echo=false;
if haskey(WEAVE_ARGS, :name_old)
if WEAVE_ARGS[:include_typed_code] && haskey(WEAVE_ARGS, :name_old)
# We want to compare the generated code to the previous version.
using DiffUtils: DiffUtils
typed_old = deserialize(joinpath("results", WEAVE_ARGS[:name_old], "$(nameof(m)).jls"))
DiffUtils.diff(typed_old, string(typed); width=130)
import DiffUtils
typed_old = deserialize(joinpath("results", WEAVE_ARGS[:name_old], "$(nameof(m)).jls"));
DiffUtils.diff(typed_old, string(typed), width=130)
end
```
```
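
The `haskey` guard introduced in the chunks above keeps the SimpleVarInfo benchmarks optional: a chunk only renders a result when the suite defines that key. A minimal sketch of the pattern outside Weave, with a plain `Dict` standing in for the real benchmark results:

```julia
# Stand-in for the results returned by `run(suite)`; only one key is present.
results = Dict("evaluation_typed" => "1.2 μs")

let k = "evaluation_simple_varinfo_nt"
    # `&&` short-circuits to `false` when the key is missing, so the chunk
    # simply shows nothing instead of throwing a KeyError.
    haskey(results, k) && results[k]
end
```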
94 changes: 94 additions & 0 deletions benchmarks/benchmarks.jl
@@ -0,0 +1,94 @@
using BenchmarkTools
using DynamicPPL
using Distributions
using DynamicPPLBenchmarks: time_model_def, make_suite
using PrettyTables
using Dates
using LibGit2

const RESULTS_DIR = "results"
const BENCHMARK_NAME = let
    repo = try
        LibGit2.GitRepo(joinpath(pkgdir(DynamicPPL), ".."))
    catch
        nothing
    end
    isnothing(repo) ? "benchmarks_$(Dates.format(now(), "yyyy-mm-dd_HH-MM-SS"))" :
        "$(LibGit2.headname(repo))_$(string(LibGit2.GitHash(LibGit2.peel(LibGit2.GitCommit, LibGit2.head(repo))))[1:6])"
end

mkpath(joinpath(RESULTS_DIR, BENCHMARK_NAME))

@model function demo1(x)
    m ~ Normal()
    x ~ Normal(m, 1)
    return (m = m, x = x)
end

@model function demo2(y)
    p ~ Beta(1, 1)
    N = length(y)
    for n in 1:N
        y[n] ~ Bernoulli(p)
    end
    return (; p)
end

models = [
(name = "demo1", model = demo1, data = (1.0,)),
(name = "demo2", model = demo2, data = (rand(0:1, 10),))
Comment on lines +38 to +39
[JuliaFormatter] reported by reviewdog 🐶

Suggested change
(name = "demo1", model = demo1, data = (1.0,)),
(name = "demo2", model = demo2, data = (rand(0:1, 10),))
(name="demo1", model=demo1, data=(1.0,)),
(name="demo2", model=demo2, data=(rand(0:1, 10),)),

]

results = []
for (model_name, model_def, data) in models
    println(">> Running benchmarks for model: $model_name")
    m = time_model_def(model_def, data...)
    println()
    suite = make_suite(m)
    bench_results = run(suite, seconds=10)

Comment on lines +48 to +49
[JuliaFormatter] reported by reviewdog 🐶

Suggested change
bench_results = run(suite, seconds=10)
bench_results = run(suite; seconds=10)

    output_path = joinpath(RESULTS_DIR, BENCHMARK_NAME, "$(model_name)_benchmarks.json")
    BenchmarkTools.save(output_path, bench_results)

    for (eval_type, trial) in bench_results
        push!(results, (
            Model = model_name,
            Evaluation = eval_type,
            Time = minimum(trial).time,
            Memory = trial.memory,
            Allocations = trial.allocs,
            Samples = length(trial.times)
        ))
Comment on lines +54 to +61
[JuliaFormatter] reported by reviewdog 🐶

Suggested change
push!(results, (
Model = model_name,
Evaluation = eval_type,
Time = minimum(trial).time,
Memory = trial.memory,
Allocations = trial.allocs,
Samples = length(trial.times)
))
push!(
results,
(
Model=model_name,
Evaluation=eval_type,
Time=minimum(trial).time,
Memory=trial.memory,
Allocations=trial.allocs,
Samples=length(trial.times),
),
)

    end
end

formatted = map(results) do r
    (Model = r.Model,
     Evaluation = replace(r.Evaluation, "_" => " "),
     Time = BenchmarkTools.prettytime(r.Time),
     Memory = BenchmarkTools.prettymemory(r.Memory),
     Allocations = string(r.Allocations),
     Samples = string(r.Samples))
Comment on lines +66 to +71
[JuliaFormatter] reported by reviewdog 🐶

Suggested change
(Model = r.Model,
Evaluation = replace(r.Evaluation, "_" => " "),
Time = BenchmarkTools.prettytime(r.Time),
Memory = BenchmarkTools.prettymemory(r.Memory),
Allocations = string(r.Allocations),
Samples = string(r.Samples))
(
Model=r.Model,
Evaluation=replace(r.Evaluation, "_" => " "),
Time=BenchmarkTools.prettytime(r.Time),
Memory=BenchmarkTools.prettymemory(r.Memory),
Allocations=string(r.Allocations),
Samples=string(r.Samples),
)

end

md_output = """
## DynamicPPL Benchmark Results ($BENCHMARK_NAME)

### Execution Environment
- Julia version: $(VERSION)
- DynamicPPL version: $(pkgversion(DynamicPPL))
- Benchmark date: $(now())

$(pretty_table(String, formatted,
tf = tf_markdown,
header = ["Model", "Evaluation Type", "Time", "Memory", "Allocs", "Samples"],
alignment = [:l, :l, :r, :r, :r, :r]
))
"""

println(md_output)
open(joinpath(RESULTS_DIR, BENCHMARK_NAME, "REPORT.md"), "w") do io
    write(io, md_output)
end

println("Benchmark results saved to: $RESULTS_DIR/$BENCHMARK_NAME")