potential speedup in HLT: AXOL1TLCondition::evaluateCondition consider caching the model? #46740
Comments
A new Issue was created by @slava77. @Dr15Jones, @antoniovilela, @makortel, @mandrenguyen, @rappoccio, @sextonkennedy, @smuzaffar can you please review it and eventually sign/assign? Thanks. cms-bot commands are listed here
@aloeliger @artlbv FYI
I can also take a look at this when I get some time; frankly, I've been meaning to for a while.
in a more "realistic" setup it's more like sub-percent (details from #45631 (comment)) |
assign l1
type performance-improvements
New categories assigned: l1 @aloeliger, @epalencia you have been requested to review this Pull request/Issue and eventually sign? Thanks
My 2.5% comes from a more recent test (but on the GRun menu) by @bdanzi.
I'd rather trust the manual measurement more than what comes from the timing server, tbh.
Essentially all of the cost is in the model loading; perhaps just having the .so file loaded in the constructor or at begin job is enough.
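A minimal sketch of what that could look like, assuming the hls4mlEmulator::ModelLoader / hls4mlEmulator::Model interface from the hls4ml/emulator.h header used elsewhere in CMSSW (ModelLoader constructible from the model name, load_model() returning a std::shared_ptr<hls4mlEmulator::Model>); the class and member names below are made up for illustration, not existing CMSSW code:

```cpp
// Sketch: hold the loader and the loaded model as members, so the .so is
// opened once at construction (or begin job) and only predict() runs per call.
// "ExampleConditionHelper" and its members are hypothetical names.
#include <memory>
#include <string>

#include "hls4ml/emulator.h"  // hls4mlEmulator::ModelLoader, hls4mlEmulator::Model

class ExampleConditionHelper {
public:
  explicit ExampleConditionHelper(std::string const& modelVersion)
      : loader_(modelVersion), model_(loader_.load_model()) {}  // load once

  // Per-call evaluation: no library loading or unloading here.
  template <typename Input, typename Result>
  void evaluate(Input* input, Result* result) {
    model_->prepare_input(input);
    model_->predict();
    model_->read_result(result);
  }

private:
  hls4mlEmulator::ModelLoader loader_;            // kept alive alongside the model
  std::shared_ptr<hls4mlEmulator::Model> model_;  // loaded once, reused per call
};
```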
The "problem" is that the model is evaluated (and loaded) for every threshold and BX separately, even though it is the same model. If caching is possible that would help of course and in principle the model(s) are known at the beginning of the job as they are fixed in the L1 menu. IMO it would be good to have some common approach to this loading of HLS4ML models within L1/CMSSW, as e.g. here it seems to be done differently: |
Given the interface in cmssw/L1Trigger/Phase2L1ParticleFlow/interface/JetId.h (lines 50 to 53 in 92333e3), and how it is used in cmssw/L1Trigger/Phase2L1ParticleFlow/plugins/L1BJetProducer.cc (lines 51 to 52 in 92333e3) (now if [...])
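One possible shape for such a common approach, purely as a hedged sketch: ModelCache and everything in it are hypothetical names, not an existing CMSSW or hls4ml interface. It assumes hls4mlEmulator::ModelLoader can be constructed from the model name string, that load_model() returns a std::shared_ptr<hls4mlEmulator::Model>, and that the model names are known from the L1 menu at the start of the job.

```cpp
// Sketch of a job-level cache keyed by model name/version, so that every
// condition, threshold and BX shares a single loaded instance of the same model.
#include <map>
#include <memory>
#include <mutex>
#include <string>

#include "hls4ml/emulator.h"

class ModelCache {
public:
  // Returns the cached model for `name`, loading it on first use.
  std::shared_ptr<hls4mlEmulator::Model> get(std::string const& name) {
    std::lock_guard<std::mutex> lock(mutex_);
    auto& slot = entries_[name];
    if (!slot.model) {
      slot.loader = std::make_unique<hls4mlEmulator::ModelLoader>(name);
      slot.model = slot.loader->load_model();  // library load happens only once per model
    }
    return slot.model;
  }

private:
  struct Entry {
    std::unique_ptr<hls4mlEmulator::ModelLoader> loader;  // kept alive with the model
    std::shared_ptr<hls4mlEmulator::Model> model;
  };
  std::mutex mutex_;  // in case several streams/conditions ask for the same model concurrently
  std::map<std::string, Entry> entries_;
};
```

Whether such a cache would live in the condition, in the producer's begin job, or in some shared global cache is an open design question; the sketch only illustrates loading each model once.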
assign hlt
New categories assigned: hlt @Martin-Grunewald, @mmusich you have been requested to review this Pull request/Issue and eventually sign? Thanks
Looking at a profile of HLT in 14_1_X with callgrind, on MC running only MC_ReducedIterativeTracking_v22, I see that 78% of L1TGlobalProducer::produce is spent in l1t::AXOL1TLCondition::evaluateCondition
https://github.com/cms-sw/cmssw/blob/CMSSW_14_1_0_pre5/L1Trigger/L1TGlobal/src/AXOL1TLCondition.cc#L100

In my test L1TGlobalProducer::produce takes 9.7% of the time; in the full menu it's apparently around 0.9% (updated from 2.5%, see notes below).

Of all the time spent in l1t::AXOL1TLCondition::evaluateCondition:
- hls4mlEmulator::ModelLoader::load_model() is 54%
- hls4mlEmulator::ModelLoader::~ModelLoader() is 30%
- GTADModel_emulator_v4::predict() is 15%

IIUC, the load and destruction of the model happens 10 times per event; I don't see any dependence on the current event variables.

Some kind of caching may be useful to get HLT to run a bit faster (seems like 0.6% or so, updated from the initial 1.5% estimate).
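As a rough consistency check of those numbers (assuming the fractions simply compose): load_model() plus ~ModelLoader() account for about 84% of evaluateCondition, which is 78% of L1TGlobalProducer::produce, so in the single-path test that is roughly 0.84 × 0.78 × 9.7% ≈ 6% of the job. With produce at about 0.9% of the full menu, the avoidable cost is roughly 0.84 × 0.78 × 0.9% ≈ 0.6%, consistent with the updated estimate above (and about 1.5% for the earlier 2.5% figure).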