Potential fix for negative heating from Compton scattering with atomic relaxation #3865
jtramm wants to merge 1 commit into openmc-dev:develop
Conversation
When Doppler broadening causes the Compton-scattered photon to transfer less energy than the shell binding energy, the recoil electron has negative kinetic energy and is not created. However, atomic relaxation still runs and produces secondaries totaling ~`e_b`. The heating score `(E_last - E' - bank_second_E)` went negative because `bank_second_E` contained only relaxation products (~`e_b`), which exceeded the photon's energy transfer.

Fix: when `E_electron < 0`, add it (a negative value) to `bank_second_E` before relaxation runs. This offsets the relaxation energy so that `bank_second_E = E_electron + sum(relaxation) = photon_transfer`, giving a heating score of zero. This is equivalent to Geant4 PENELOPE's approach of setting `localEnergyDeposit = diffEnergy` when `eKineticEnergy < 0` (`G4PenelopeComptonModel.cc`).

Also use the relaxation binding energy (ENDF/B) instead of the Compton profile binding energy (Biggs et al.) for the electron energy calculation when atomic relaxation is enabled, matching the approach already used by the photoelectric effect. This ensures the electron energy and relaxation cascade use a self-consistent binding energy source.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
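A minimal numeric sketch of the pre-fix bookkeeping described above (the energies are hypothetical, not taken from the PR):

```python
# Hypothetical energies chosen so that the Doppler-broadened energy
# transfer (1 keV) is smaller than the shell binding energy (2 keV).
E_last = 100.0e3   # incident photon energy [eV]
E_prime = 99.0e3   # scattered photon energy [eV]
e_b = 2.0e3        # shell binding energy [eV]

# The recoil electron would have negative kinetic energy, so it is
# not created as a secondary particle.
E_electron = E_last - E_prime - e_b
assert E_electron < 0.0

# Atomic relaxation still runs and banks secondaries totaling ~e_b.
bank_second_E = e_b

# Heating score as defined above: negative, because bank_second_E
# exceeds the photon's actual energy transfer.
heating = E_last - E_prime - bank_second_E
assert heating < 0.0
```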
So in debugging the new CI failure for this PR, Claude Code is now saying that the negative heating tallies are just sort of inevitable given our physics model and not really a bug.
Anyway, my worry is still that we have an enforcement of positivity on the heating tallies in the test.
I think we should set the Compton scattering cross section to zero when the photon does not have enough energy to emit the most loosely bound electron.
Thanks for digging into this @jtramm! I think your notes, in particular the latest comment, summarize it well. The test from #3863 was meant to check for a systematic negative tally due to a bug involving weight windows, and I agree it would be good to determine a better test metric given that it's possible to get negative tally values due to the way we sample photons with our electron treatment. Tricky, given that the symptom of that bug is also negative tally values, but I'll put some thought into that.
I think the possible negative heating may have more to do with thick-target bremsstrahlung, because the sampling of photons from the TTB data doesn't necessarily preserve the original energy of the electron/positron. That could be easily tested by turning it off (`settings.electron_treatment = 'led'`).
Would love to hear @amandalund's thoughts on this one.
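For reference, the suggested check is a one-line settings change in the OpenMC Python API (a fragment; the rest of the model definition is omitted):

```python
import openmc

settings = openmc.Settings()
settings.photon_transport = True
# 'led' = local energy deposition: electron energy is deposited locally
# instead of sampling thick-target bremsstrahlung photons.
settings.electron_treatment = 'led'
```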
I've opened PR #3874 to reduce how often negative electron energies occur.
In fixing some CI issues with my shared secondary PR #3863 I stumbled across what I think may be a small existing issue in the Compton scattering kernel.
I was finding that `test_photon_heating` in `tests/unit_tests/weightwindows/test.py` was failing, as it uses weight windows, which in my PR means it automatically turns on the shared secondary mode. Getting different results would be expected in that case, as the shared secondary feature makes alterations to the PRNG stream. At first I just assumed that I needed to add a with/without shared secondary case, but then noticed that this test asserts `np.all(tally_mean >= 0)` on a photon heating tally with a mesh. So -- it seemed like perhaps there was a deeper issue at play with the shared secondaries that was causing some breakage here. However, I went back and tested out the `develop` branch and found that if I increased the particle count from 100 to 1000, this test would also fail by producing two negative tallies. I haven't looked too closely at this area of the code before, but at first glance it doesn't seem right that you'd get negative tally values (though perhaps the physics allows for this with this tally?).
The `test_photon_heating` unit test was originally written for PR #3227 to catch a different bug (weight-window splits contaminating `bank_second_E`). It passed on develop with the original particle count because with that specific PRNG stream and 100 particles, the negative Compton events happened to cancel out across mesh bins. When increasing to 1000 particles, or when the shared secondary bank changed the PRNG stream, the negative events concentrated in a few bins and the test failed.

I had Claude Code see if it could come up with a solution and it seems to have, but I'm not up to date on my Compton scattering physics, so this will take some research on my end to verify this fix is correct. Before I do that, I'd like to determine:
1. Whether this is actually a bug or just intended behavior.
2. Whether the fix Claude Code proposes is physically correct or not. At first glance the logic seems reasonable and it gives a few references, but maybe they are hallucinated, who knows.
For now I'm leaving this as a draft, as I haven't yet done the research to vouch for the correctness of the fix. @pshriwise -- I know you wrote the `test_photon_heating` unit test, so perhaps you can confirm if the intention is that photon heating tallies should always be > 0? Others might also be more up to speed on the Compton scattering physics than I am, so please chime in.

Potential Fix Summary (below was all written by Claude Code, so take with a grain of salt)
When Doppler broadening produces a scattered photon energy where the energy transfer `E - E'` is less than the shell binding energy `E_b`, the Compton electron has negative kinetic energy and is not created as a secondary particle. However, atomic relaxation still runs and produces secondaries (Auger electrons, fluorescent photons) totaling ~`E_b`. Since the electron is missing from `bank_second_E`, the heating score `E - E' - bank_second_E` goes negative.

Two changes in `src/physics.cpp`:

1. Use the relaxation binding energy for the electron energy calculation when atomic relaxation data is available, matching the approach already used by the photoelectric effect (lines 401-403). This ensures the electron energy and the relaxation cascade use a self-consistent binding energy source.
2. Correct the energy bookkeeping when `E_electron < 0` by adding `E_electron` (a negative value) to `bank_second_E` before relaxation runs. This offsets the relaxation products so that `bank_second_E = E_electron + sum(relaxation) = photon_transfer`, and the heating score is zero. Relaxation still runs; no particles are added or removed, and the PRNG stream is unchanged.
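The proposed bookkeeping can be sketched as a toy model in Python (hypothetical energies; the actual change lives in `src/physics.cpp`):

```python
def heating_score(E_last, E_prime, relaxation_E, apply_fix):
    """Toy model of the heating score E_last - E' - bank_second_E.

    relaxation_E stands in for the shell binding energy E_b, which the
    relaxation cascade re-emits as banked secondaries.
    """
    E_electron = E_last - E_prime - relaxation_E
    bank_second_E = 0.0
    if E_electron >= 0.0:
        # Normal case: the recoil electron is created and banked.
        bank_second_E += E_electron
    elif apply_fix:
        # Proposed fix: bank the negative electron energy so it offsets
        # the relaxation products banked below.
        bank_second_E += E_electron
    bank_second_E += relaxation_E  # relaxation runs unconditionally
    return E_last - E_prime - bank_second_E

# A 1 keV energy transfer against a 2 keV binding energy:
assert heating_score(100e3, 99e3, 2e3, apply_fix=False) < 0.0   # bug
assert heating_score(100e3, 99e3, 2e3, apply_fix=True) == 0.0   # fixed
```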
Root Cause
The Compton electron energy is given by (OpenMC docs, Eq. compton-electron-energy)

E_electron = E - E' - E_b

where `E` is the incident photon energy, `E'` is the scattered photon energy, and `E_b` is the shell binding energy.

In `compton_doppler()` (`src/photon.cpp:454-569`), there are three fallback paths (lines 476, 483, 541) where an invalid sampling configuration causes the code to fall back to the free-electron Klein-Nishina formula. This formula does not account for binding energy. The normal acceptance criterion at line 564 (`if (*E_out < E - E_b)`) ensures `E - E' > E_b`, but the fallback paths bypass this check. The result is that `E - E' < E_b` and `E_electron < 0`.

In the calling code (`sample_photon_reaction`), the electron is correctly not created when `E_electron < 0` (it falls below the energy cutoff). But relaxation still runs unconditionally and adds ~`E_b` to `bank_second_E`. Since the (missing) electron's contribution would have been `E_electron = photon_transfer - E_b` (a negative value that would have partially offset the relaxation energy), `bank_second_E` ends up larger than the photon's energy transfer.
Approach: Consistent with Geant4 PENELOPE
Geant4's PENELOPE Compton model (`G4PenelopeComptonModel.cc`) handles this case by keeping relaxation running and adjusting the energy bookkeeping: when `eKineticEnergy < 0`, Geant4 sets `localEnergyDeposit = diffEnergy` (the photon's energy transfer) instead of `ionEnergy` (the binding energy). Relaxation products are then subtracted from this value. The key principle: relaxation still occurs (the vacancy is real), but the local energy deposit is clamped to reflect that the photon only contributed `diffEnergy`.

Our fix is the OpenMC equivalent. OpenMC derives the local deposit from the formula `E_last - E' - bank_second_E` rather than tracking it explicitly. By adding the negative `E_electron` to `bank_second_E`, we ensure the heating score works out to zero.

Other Code Approaches (for context)
Geant4 sets `localEnergyDeposit = diffEnergy` when `eKineticEnergy < 0`; relaxation still runs (`G4PenelopeComptonModel.cc`, lines 494-531; `G4LowEPComptonModel.cc`, lines 316-366).

Energy Conservation
Before fix:

- Photon transfers `E - E'` to the atom.
- Relaxation deposits `E_b` in `bank_second_E`, so `bank_second_E = E_b`.
- `heating = (E - E') - E_b < 0` (energy deficit).

After fix:

- Photon transfers `E - E'` to the atom.
- `bank_second_E` is offset by `E_electron` (negative); relaxation adds `E_b`.
- `bank_second_E = E_electron + E_b = (E - E') - E_b + E_b = E - E'`.
- `heating = (E - E') - (E - E') = 0`.

Impact
No particles are added or removed and no additional random numbers are drawn, so the PRNG stream is identical. Non-heating tallies (flux, current, reaction rates, etc.) produce bit-identical results.
Test results involving `heating` are re-recorded: `electron_heating`, `photon_production`, `photon_production_fission`. `test_compton_relaxation_heating` verifies no negative heating in a lead (Z=82) geometry with a 200 keV photon source. The negative heating events per 1000 source particles (all MT 504 Compton on oxygen) are eliminated.
References
- `docs/source/methods/photon_physics.rst`
- `G4PenelopeComptonModel.cc` source (lines 494-531)