
Verbose performance metrics#578

Closed
SylvainJoube wants to merge 5 commits intoacts-project:mainfrom
SylvainJoube:verbose-performance-metrics

Conversation

Contributor

@SylvainJoube SylvainJoube commented May 7, 2024

I've added performance metrics for the following algorithms: finding, fitting, and ambiguity resolution. I think this is a simple way to get a quick evaluation of the algorithms' performance without having to open ROOT. The code compares the reconstructed tracks with the truth particle data and prints basic metrics (valid/duplicate/fake tracks).
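To illustrate the idea, here is a minimal, self-contained sketch of how such a classification could work. This is not the PR's actual implementation; the `track_match` struct, the 50% majority threshold, and the "first match wins" rule are all assumptions made for the example:

```cpp
#include <map>
#include <vector>

// Hypothetical minimal model: each reconstructed track carries the id of the
// truth particle that contributed the majority of its measurements, and the
// fraction of the track's measurements coming from that particle.
struct track_match {
    int particle_id;       // -1 if no majority truth particle was found
    double match_fraction; // fraction of measurements from that particle
};

struct metrics {
    int valid = 0;
    int duplicates = 0;
    int fakes = 0;
};

// Classify tracks: a track is fake if its majority fraction is below the
// threshold; the first well-matched track per truth particle counts as valid,
// and any further track matched to the same particle counts as a duplicate.
metrics classify(const std::vector<track_match>& tracks,
                 double threshold = 0.5) {
    metrics m;
    std::map<int, bool> seen;
    for (const auto& t : tracks) {
        if (t.particle_id < 0 || t.match_fraction < threshold) {
            ++m.fakes;
        } else if (!seen[t.particle_id]) {
            seen[t.particle_id] = true;
            ++m.valid;
        } else {
            ++m.duplicates;
        }
    }
    return m;
}
```

The percentages in the output below would then simply be each counter divided by the total number of tracks.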

My code still needs a bit of refactoring (I will clean it up by the end of the day). I will also write documentation, if you agree to merge this PR and you think it was a good idea. Here is an example of the output I got:

==> Statistics ... 
- read    7830 spacepoints
- read    7830 measurements
- created (cpu)  4794 seeds
- created (cpu)  5296 found tracks
- created (cpu)  5296 fitted tracks
- created (cpu)  1780 ambiguity free tracks

Performance metrics:

===== Performance metrics for finding =====
          Valid: 1786 (34%)
     Duplicates: 2170 (41%)
          Fakes: 1340 (25%)

===== Performance metrics for fitting =====
          Valid: 1786 (34%)
     Duplicates: 2170 (41%)
          Fakes: 1340 (25%)

===== Ambiguity resolution performance metrics =====
--Among the selected tracks:
  Valid quality: 0.000561798 (should be as low as possible)
          Valid: 1766 (99%)
     Duplicates: 0 (0%)
          Fakes: 14 (1%)
--Among the evicted tracks:
          Valid: 20 (1%) (not in selected tracks)
     Duplicates: 2137 (61%)
          Fakes: 1326 (38%)

===== Performance metrics for ambiguity resolution (check v2) =====
          Valid: 1766 (99%)
     Duplicates: 0 (0%)
          Fakes: 14 (1%)

I used the following command, with the newly added --print-performance flag:

./bin/traccc_seeding_example --input-directory=detray_simulation/toy_detector/n_particles_2000/ --detector-file=toy_detector_geometry.json --material-file=toy_detector_homogeneous_material.json --grid-file=toy_detector_surface_grids.json --input-event=1 --track-candidates-range=3:30 --constraint-step-size-mm=1000 --check-performance --print-performance

@SylvainJoube SylvainJoube force-pushed the verbose-performance-metrics branch from 8022b31 to 05dd1cf on May 7, 2024 13:10
@krasznaa
Member

krasznaa commented May 7, 2024

🤔 Over the next days I intend to put together a proposal for how I believe all of this performance measurement code should be organized. As I'm really not super happy with how it is set up at the moment.

At that point, in a couple of days, let's discuss here how your development would fit into that "new landscape". 😉

@SylvainJoube
Contributor Author

Thanks for your feedback, Attila. Okay, I'll wait! I'm very curious about your proposal; seeing your code is always a nice way for me to discover how things should be organized and implemented! 🦊

See you here in a couple of days then :) I'll be watching the open PRs too.

@stephenswat
Member

Closing this as superseded.
