NonLocal ECP NaN with Batched Code #4941
Comments
This looks like a bug in the J3. It seems unlikely that any numerical or statistical issue would be to blame since the system is so small. Usefully, this is a pure MPI CPU run, so we can rule out anything exotic on the computational side. Puzzling that it hasn't shown up for anyone else. It is interesting that some of the electrons have wandered a long way in terms of the primitive cell dimensions. This shouldn't matter, but perhaps it does...
Please can you put the wavefunction file somewhere accessible or give a pointer to your Perlmutter directories and set the permissions appropriately.
The directory has been shared on Perlmutter here: /global/cfs/cdirs/m2113/al_J3 |
I am running the code on Polaris (QMCPACK 3.17.9 under /soft/applications/qmcpack/develop-20240118/) with legacy drivers and a CPU-only complex build, and I also encounter a NaN error during J3 optimization with a similar workflow. The code seems to run without any error when I reduce minwalkers in the first few cycles to 0.01, but this results in large jumps in energy and variance. Please let me know if more information is needed.
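For concreteness, the reduced-minwalkers workaround described above can be expressed along the following lines in a Nexus-style script like the attached one. This is only a hedged sketch: the keyword spellings follow common Nexus usage and the values are illustrative placeholders, not the actual Polaris settings.

```python
# Hypothetical sketch: loosen minwalkers for the first cycles only, then
# restore a conservative value (all values are illustrative placeholders).
from nexus import loop, linear

opt_calcs = [
    # early cycles: permissive minwalkers, as in the workaround above
    loop(max=2,
         qmc=linear(
             minmethod   = 'OneShiftOnly',
             minwalkers  = 0.01,      # reduced from a typical ~0.3-0.5
             samples     = 25600,
             warmupsteps = 50,
             blocks      = 200,
             timestep    = 0.3,
         )),
    # later cycles: back to a conservative setting
    loop(max=6,
         qmc=linear(
             minmethod   = 'OneShiftOnly',
             minwalkers  = 0.5,
             samples     = 51200,
             warmupsteps = 50,
             blocks      = 200,
             timestep    = 0.3,
         )),
]
```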
Thanks for the report. We have also heard that turning down the meshfactor can trigger the problem in J2 for the original case. Could also be #4917 or something like it.
This has been sitting around for a month so I wanted to update the status. I have been experimenting with Ilkka's single atom version. While this appears to have the same problem, it could have its own issues due to being so small:
- The problem with bulk Al is straightforward to reproduce within minutes on a CPU system.
- The problem is not related to the plane wave cutoff/spline grids, since these are well converged.
- A seemingly good wavefunction is produced with D+J1+J2 optimization.
- However, the OneShift optimizer immediately takes a crazy step (a very large change in coefficients, ~10^9) when J3 is added. This subsequently results in a NaN during pseudopotential evaluation. The abort is therefore correct and not a bug; the problem is related to the optimizer or wavefunction.
- This applies even when large numbers of samples are used for optimization: the optimizer still tries to make a bad step.

It is worth noting that J3 is not expected to do very much here, but it still shouldn't go wrong like this. Conservative settings (e.g. increasing minwalkers) seem to only delay the problem. It has been reported that using different optimizers can avoid the problem, but since they aren't necessarily optimizing the same objective function, they may be bypassing the problem rather than being immune to it.

My suspicions are that:
- J3 may somehow have a bug for this case. How other people have been able to use J3 successfully is a puzzle that would presumably be answered by identifying the bug.
- OneShift needs a better default or more conservative handling for this case, for reasons that have yet to be determined.

Will try some larger cells now.
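To make the failing step concrete, the J3 term being added on top of D+J1+J2 at this stage is an electron-electron-ion Jastrow. In the Nexus convention used by workflows like the attached one it can be specified roughly as below; this is a hedged sketch with illustrative sizes and cutoff, not the exact settings from the reproducer.

```python
# Hypothetical Nexus-style Jastrow specification for the D+J1+J2 -> +J3 stage.
# Sizes and the cutoff radius are illustrative placeholders; in the actual
# workflow the optimized J1/J2 coefficients would be carried over from the
# previous optimization stage via a 'jastrow' dependency, not refit from scratch.
jastrows = [
    ('J1', 'bspline', 8),             # one-body electron-ion term
    ('J2', 'bspline', 8),             # two-body electron-electron term
    ('J3', 'polynomial', 3, 3, 5.0),  # three-body eeI polynomial term, rcut ~5 bohr
]
```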
Thank you for the update!
@prckent
Ilkka's reproducer is a modified version of Annette's. You'll need a working python ase. It is worth considering whether the 2 up / 1 down electron case is properly handled in J3.
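For anyone trying to set up something similar, a single-atom Al cell of this kind can be generated with ase roughly as follows. This is a hypothetical sketch, not the actual reproducer: the cell size, output format, and file name are placeholders.

```python
# Hypothetical single Al atom in a small periodic box (not the actual reproducer).
from ase import Atoms
from ase.io import write

atoms = Atoms('Al',
              positions=[(0.0, 0.0, 0.0)],
              cell=[4.05, 4.05, 4.05],   # illustrative cubic cell, in Angstrom
              pbc=True)

# With a 3-valence-electron Al pseudopotential this cell carries the
# 2 up / 1 down electron case mentioned above.
write('Al_atom.xsf', atoms)
```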
Have you looked at the eigenvalue chosen by the mapping step after the eigenvalue solve? I don't think it gets printed out currently, but it probably should be.
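For context on this question: in the standard linear-method formulation (a textbook-level sketch, not necessarily the exact form in QMCPACK's OneShift implementation), the parameter update comes from a generalized eigenvalue problem,

$$\bar{H}\,\mathbf{c} = \lambda\,\bar{S}\,\mathbf{c}, \qquad \Delta p_i = \frac{c_i}{c_0},$$

where $\bar{H}$ and $\bar{S}$ are the Hamiltonian and overlap matrices sampled in the space spanned by the current wavefunction and its parameter derivatives. Which eigenpair the mapping step selects, and how $\mathbf{c}$ is rescaled, directly sets the step size, so printing the chosen $\lambda$ would indeed make runaway steps like the ~10^9 coefficient change easier to diagnose.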
@markdewing could this be #4917?
Yes, it could be. The extremely large step is one of the symptoms.
Update: I still see the issue on NM Al with the latest QMCPACK; however, at Gani's suggestion I switched to the quartic optimizer and no longer see the issue.
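For reference, switching optimizers in this kind of workflow amounts to changing MinMethod in the linear optimization blocks. A hedged Nexus-style sketch with illustrative values, same caveats as the sketch further up:

```python
# Hypothetical sketch: select the quartic optimizer instead of OneShiftOnly
# (values are illustrative placeholders).
from nexus import loop, linear

opt_quartic = loop(max=8,
                   qmc=linear(
                       minmethod  = 'quartic',
                       minwalkers = 0.5,
                       samples    = 51200,
                       blocks     = 200,
                       timestep   = 0.3,
                   ))
```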
Added a new label for ongoing issues with the batched code.
Tagged this for v4. I think we need at least an understanding of what causes this, if not a fix. I.e., it is OK to postpone only if there is a workaround and we have sufficiently shown that the problem is not an underlying bug but rather an algorithmic limitation.
Describe the bug
I am running a standard workflow for the spin density of neutral bulk aluminum (SCF > NSCF > Convert > J2 opt > J3 opt > DMC). The J2 optimization returns a rather high variance/energy ratio of ~0.3, but completes without issue. Upon a subsequent J3 optimization the following error results:
optJ3.zip
nexus_cpu.py.zip