Add unit tests for peak fitting with real data #310
Conversation
Well-organized code. Please explain the choice of tolerances. The tests would be more useful if the tolerances were smaller.
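To make the reviewer's point concrete, here is a hedged sketch of asserting fit results against reference values with `numpy.testing.assert_allclose`; the fitted and expected centre values below are illustrative placeholders, not taken from the HB2B_1060 data:

```python
import numpy as np

# Hypothetical fitted peak centres vs. expected reference values.
# (Values are illustrative stand-ins, not the real fit output.)
fitted_centers = np.array([90.001, 95.002, 100.003])
expected_centers = np.array([90.0, 95.0, 100.0])

# A loose tolerance (e.g. rtol=1e-1) would pass even for badly wrong fits;
# a tighter one such as rtol=1e-4 documents the actual fit quality.
np.testing.assert_allclose(fitted_centers, expected_centers, rtol=1e-4)
```

The tightest tolerance that the data actually supports is the most informative choice, since it turns the test into a regression guard on fit quality.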
    Data are from the real HB2B data previously reported problematic

    Returns
Delete the empty "Returns" docstring section.
param_values_rp, param_errors_rp = fit_result.peakcollections[1].get_native_params()

assert param_values_lp.size == 117, '117 subruns'
assert len(param_values_lp.dtype.names) == 6, '6 effective parameters'
The message should be "6 native parameters"
assert len(param_values_lp.dtype.names) == 6, '6 effective parameters'

assert param_values_rp.size == 117, '117 subruns'
assert len(param_values_rp.dtype.names) == 6, '6 effective parameters'
same here
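For context on what these assertions check: `get_native_params` returns a NumPy structured array with one row per subrun and one named field per native peak parameter, so `size` counts subruns and `dtype.names` counts parameters. A minimal sketch of that shape (the field names below are illustrative assumptions, not pyrs's actual parameter names):

```python
import numpy as np

# A structured array standing in for get_native_params() output:
# 117 subruns, 6 named native parameters (names are illustrative).
native_dtype = np.dtype([(name, np.float64) for name in
                         ('Intensity', 'PeakCentre', 'FWHM', 'Mixing', 'A0', 'A1')])
param_values = np.zeros(117, dtype=native_dtype)

assert param_values.size == 117, '117 subruns'
assert len(param_values.dtype.names) == 6, '6 native parameters'
```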
assert fit_result.difference

# peak 'Left'
param_values_lp, param_errors_lp = fit_result.peakcollections[0].get_native_params()
Since you aren't checking the uncertainties
Suggested change:
- param_values_lp, param_errors_lp = fit_result.peakcollections[0].get_native_params()
+ param_values_lp, _ = fit_result.peakcollections[0].get_native_params()
param_values_lp, param_errors_lp = fit_result.peakcollections[0].get_native_params()

# peak 'Right'
param_values_rp, param_errors_rp = fit_result.peakcollections[1].get_native_params()
Same here
Suggested change:
- param_values_rp, param_errors_rp = fit_result.peakcollections[1].get_native_params()
+ param_values_rp, _ = fit_result.peakcollections[1].get_native_params()
The original issue asked for fitting the first three subruns only. You don't need a whole project file for that. Please create a file with x, y, e for these three spectra (ASCII or HDF5) to include in the repository and load that instead. This would be better located in
@peterfpeterson Actually I found that this file is already located in
The data mounts aren't available on the build server, and neither is the one in

We already have too many tests that don't get run on the build server and end up being frequently broken. Use of the data mounts in the pyrs tests will be going away once more urgent work is done.
@peterfpeterson I reduced the HB2B_1060.nxs.h5 using
Select 3 subruns that you prefer. The choice of the first 3 was arbitrary.
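The extraction the reviewer asks for could look like the following hedged sketch: write each subrun's spectrum as x, y, e columns to a small ASCII file and round-trip it with NumPy. The file names and the synthetic Gaussian spectra below are assumptions for illustration, not the real reduced HB2B_1060 data:

```python
import tempfile
from pathlib import Path

import numpy as np

# Synthetic stand-ins for three subrun spectra: a Gaussian peak near
# 80 degrees two-theta on a flat background that varies per subrun.
x = np.linspace(78.0, 82.0, 50)

with tempfile.TemporaryDirectory() as tmpdir:
    for subrun in (1, 2, 3):
        y = 100.0 * np.exp(-0.5 * ((x - 80.0) / 0.2) ** 2) + 10.0 * subrun
        e = np.sqrt(y)  # Poisson-style counting uncertainties
        path = Path(tmpdir) / f'HB2B_1060_subrun{subrun}.dat'
        # three columns: x, y, e -- small enough to commit to the repo
        np.savetxt(path, np.column_stack([x, y, e]), header='x y e')
        # round-trip check: the test would load this instead of the project file
        x2, y2, e2 = np.loadtxt(path, unpack=True)
        np.testing.assert_allclose(x2, x)
```

A file like this keeps the test self-contained and runnable on the build server, which is the reviewer's underlying concern.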
Force-pushed from 692ae3b to a74ae40
Why is the new file tests/unit/test_peak_fit_HB2B_1060.py still in this? It looks like the new version has been put in tests/unit/test_peak_fit_engine.py with improvements. Did you forget to delete the old one?
Force-pushed from b03fa70 to b5fab7e
Peak fitting unit test hb2b 1060
Fixes #266