Warn instead of error when I(Q) fails assumption check (#996)
* [pre-commit.ci] pre-commit autoupdate

updates:
- [github.com/pre-commit/pre-commit-hooks: v4.4.0 → v5.0.0](pre-commit/pre-commit-hooks@v4.4.0...v5.0.0)
- https://github.com/charliermarsh/ruff-pre-commit → https://github.com/astral-sh/ruff-pre-commit
- [github.com/astral-sh/ruff-pre-commit: v0.0.275 → v0.8.3](astral-sh/ruff-pre-commit@v0.0.275...v0.8.3)
- [github.com/psf/black: 23.3.0 → 24.10.0](psf/black@23.3.0...24.10.0)
- [github.com/kynan/nbstripout: 0.6.1 → 0.8.1](kynan/nbstripout@0.6.1...0.8.1)

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add testing and packaging files

* pre-commit - use ruff instead of black

* Add notebooks to excludes in pyproject.toml

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix codecov step

* New test prep stage

* change data submodule url

* specify cov files

* add some noqa

* address lint errors

* forgot .items() in test spice xml parser

* try pip install in test preparation step

* restore from next and exclude notebooks & test/examples

* fix geometry.py

* Update .pre-commit-config.yaml

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* minor formatting in test_transmission

* minor formatting in test_simulated_events

* update urls to point to github

* one more update

* add type hinting to iq.py

* Warn instead of raising an exception when I(Q) fails validation

* update more leftover urls

* add test, but it fails

* test doesn't assert anything yet

* remove zero Q's after binning

* tests are failing

* Remove zero-indexes from q_bin centers, update tests

* update gitignore and make clean, working on tests

* fix test_i_of_q_binning_all.py

* fix a couple more tests

* Revert iq.py changes and related tests

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Kevin A. Tactac <[email protected]>
3 people authored Feb 26, 2025
1 parent df03fa4 commit df0cde3
Showing 26 changed files with 160 additions and 196 deletions.
5 changes: 3 additions & 2 deletions .gitignore
@@ -1,10 +1,11 @@
# project specific things to ignore
src/drtsans/_version.py
/debug/
/test_output/
BIOSANS_Definition.xml
CG3_Definition.xml
BIOSANS_*_flood.nxs
/debug/
/test_output/
/*.png

# Byte-compiled / optimized / DLL files
__pycache__/
1 change: 1 addition & 0 deletions .pre-commit-config.yaml
@@ -28,6 +28,7 @@ repos:
notebooks/
tests/examples/
- id: ruff-format
args: [--no-cache]
exclude: |
notebooks/
tests/examples/
5 changes: 3 additions & 2 deletions Makefile
@@ -22,7 +22,8 @@ clean: ## Delete some cruft from builds/testing/etc.
rm -rf `find . -name __pycache__ -o -name "*.egg-info"` \
`find . -type f -name '*.py[co]'` \
`find . -maxdepth 1 -type f -name 'BIOSANS_*.nxs' -o -name '*_Definition.xml'` \
.pytest_cache .ruff_cache \
.coverage build dist \
docs/_build \
debug/ test_output/ \
.coverage build dist \
.pytest_cache .ruff_cache
*.png
2 changes: 1 addition & 1 deletion OnboardingChecklist.rst
@@ -108,7 +108,7 @@ Test Driven Development (TDD)
* Unit test

All methods and modules shall have unit tests implemented.
Unit tests are located in `repo/tests/unit/new <https://code.ornl.gov/sns-hfir-scse/sans/sans-backend/-/tree/next/tests/unit/new>`_.
Unit tests are located in `repo/tests/unit/ <https://github.com/neutrons/drtsans/blob/next/tests/unit>`_.
A unit test shall be created in the corresponding directory to the method or module that it tests against.

Examples:
1 change: 1 addition & 0 deletions docs/release_notes.rst
@@ -40,6 +40,7 @@ and configuration options.

- PR #997: Fix bug causing symmetric auto wedge finding to fail. Add mirrored wedge to auto wedge fit function plot.
- PR #998: remove the TOF offset that is done by the data acquisition system
- PR #996: Iq.dat files are now written even if they fail the assumption check
- PR #994: Remove unused module `drtsans/tof/eqsans/reduce.py`
- PR #993: Skip slices with too high transmission error when using time sliced sample transmission run
- PR #325: Migrates repository from GitLab to GitHub
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -31,8 +31,8 @@ license = { file = "LICENSE" }

[project.urls]
Documentation = "https://drtsans.readthedocs.io/"
Source = "https://code.ornl.gov/sns-hfir-scse/sans/sans-backend/"
BugTracker = "https://code.ornl.gov/sns-hfir-scse/sans/sans-backend/-/issues"
Source = "https://github.com/neutrons/drtsans/"
BugTracker = "https://github.com/neutrons/drtsans/issues"

[build-system]
requires = ["setuptools >= 42", "toml", "wheel", "versioningit"]
5 changes: 3 additions & 2 deletions src/drtsans/determine_bins.py
@@ -51,7 +51,7 @@ def determine_1d_linear_bins(x_min, x_max, bins):
return linear_bins


def determine_1d_log_bins(x_min, x_max, decade_on_center, n_bins_per_decade=None, n_bins=None):
def determine_1d_log_bins(x_min, x_max, decade_on_center, n_bins_per_decade=None, n_bins=None) -> Bins:
"""
Parameters
@@ -69,7 +69,8 @@ def determine_1d_log_bins(x_min, x_max, decade_on_center, n_bins_per_decade=None
Returns
-------
~drtsans.iq.Bins
Bins including bin centers and bin edges
"""
# Check inputs
if n_bins_per_decade is None and n_bins is None:
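For orientation, here is a minimal, self-contained sketch (not the drtsans implementation; the function name log_bins and its rounding choices are assumptions) of log-spaced bin edges and centers with a fixed number of bins per decade, the kind of Bins result that determine_1d_log_bins now advertises via its return annotation:

import numpy as np

def log_bins(x_min, x_max, n_bins_per_decade):
    # Number of decades covered, rounded up to a whole number of bins
    n_decades = np.log10(x_max / x_min)
    n_bins = int(np.ceil(n_decades * n_bins_per_decade))
    # Edges are log-spaced; centers are geometric means of adjacent edges
    edges = np.logspace(np.log10(x_min), np.log10(x_max), n_bins + 1)
    centers = np.sqrt(edges[:-1] * edges[1:])
    return edges, centers

edges, centers = log_bins(1e-3, 1.0, n_bins_per_decade=10)  # 30 bins over 3 decades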
101 changes: 61 additions & 40 deletions src/drtsans/iq.py
@@ -1,9 +1,9 @@
# https://github.com/neutrons/drtsans/blob/next/src/drtsans/dataobjects.py
from enum import Enum
from typing import Any, List, Tuple, Union

import numpy
import numpy as np
from mantid.simpleapi import logger

from drtsans.dataobjects import (
DataType,
@@ -13,10 +13,9 @@
getDataType,
q_azimuthal_to_q_modulo,
)

# https://github.com/neutrons/drtsans/blob/next/src/drtsans/determine_bins.py
from drtsans.determine_bins import (
BinningParams,
Bins,
determine_1d_linear_bins,
determine_1d_log_bins,
)
@@ -47,20 +46,23 @@ class BinningMethod(Enum):
WEIGHTED = 2 # weighted binning


def check_iq_for_binning(i_of_q: Union[IQmod, IQazimuthal]):
def check_iq_for_binning(i_of_q: Union[IQmod, IQazimuthal]) -> bool:
"""Check I(Q) for binning.
Binning I(Q) assumes that
1. there is no NaN or Infinity in intensities
2. there is no NaN, Infinity or Zero in intensity errors
:exception : RuntimeError
raise exception if input I(Q) does not meet assumption
:param i_of_q: ~drtsans.dataobjects.IQmod or IQazimuthal
I(Q)
Parameters
----------
i_of_q: Union[IQmod, IQazimuthal]
Returns
-------
bool
True if the input I(Q) for binning meets the assumptions
"""

error_message = ""

# Check intensity
@@ -77,17 +79,19 @@ def check_iq_for_binning(i_of_q: Union[IQmod, IQazimuthal]):

# Check error
if np.where(np.isnan(i_of_q.error))[0].size > 0:
error_message += "Intensity error has {} NaNs: {}\n".format(
len(np.where(np.isnan(i_of_q.error))[0]),
np.where(np.isnan(i_of_q.error))[0],
)
nan_error = np.where(np.isnan(i_of_q.error))[0]
error_message += f"Intensity error has {len(nan_error)} NaNs: {nan_error}\n"
if np.where(np.isinf(i_of_q.error))[0].size > 0:
error_message += "Intensity error has Inf: {}\n".format(np.where(np.isnan(i_of_q.error))[0])
inf_error = np.where(np.isinf(i_of_q.error))[0]
error_message += f"Intensity error has {len(inf_error)} Infinities: {inf_error}\n"
if np.where(np.abs(i_of_q.error) < 1e-20)[0].size > 0:
error_message += "Intensity error has zero {}\n".format(np.where(np.abs(i_of_q.error) < 1e-20)[0])
zero_error = np.where(np.abs(i_of_q.error) < 1e-20)[0]
error_message += f"Intensity error has {len(zero_error)} zeros: {zero_error}\n"

if len(error_message) > 0:
raise RuntimeError("Input I(Q) for binning does not meet assumption:\n{}".format(error_message))
logger.warning(f"Input I(Q) for binning does not meet assumption: {error_message}")

return len(error_message) == 0
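The behavioral change above, reduced to a standalone sketch (an illustration of the pattern, not the drtsans code; warnings.warn stands in for Mantid's logger.warning): the check now reports violations and returns a bool instead of raising, so callers can proceed with binning.

import warnings
import numpy as np

def check_for_binning(intensity, error) -> bool:
    """Return True when intensity/error satisfy the binning assumptions."""
    problems = []
    if not np.isfinite(intensity).all():
        problems.append("intensity contains NaN or Inf")
    if not np.isfinite(error).all():
        problems.append("intensity error contains NaN or Inf")
    if (np.abs(error) < 1e-20).any():
        problems.append("intensity error contains zeros")
    if problems:
        # Previously this condition raised RuntimeError; now it only warns
        warnings.warn("I(Q) does not meet binning assumptions: " + "; ".join(problems))
        return False
    return True

ok = check_for_binning(np.array([1.0, np.nan]), np.array([0.1, 0.2]))  # warns, returns False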


def valid_wedge(min_angle, max_angle) -> List[Tuple[float, float]]:
@@ -457,7 +461,9 @@ def validate_wedges_groups(wedges, symmetric_wedges) -> List[List[Tuple[float, f
return validated_wedge_angles_groups


def bin_intensity_into_q1d(i_of_q, q_bins, bin_method=BinningMethod.NOWEIGHT, wavelength_bins=1) -> IQmod:
def bin_intensity_into_q1d(
i_of_q: IQmod, q_bins: Bins, bin_method: BinningMethod = BinningMethod.NOWEIGHT, wavelength_bins: int = 1
) -> IQmod:
"""Binning I(Q) from scalar Q (1D) with linear binning on Q
Replace intensity, intensity_error, scalar_q, scalar_dq by IQmod
@@ -483,7 +489,7 @@ def bin_intensity_into_q1d(i_of_q, q_bins, bin_method=BinningMethod.NOWEIGHT, wa
drtsans.dataobjects.IQmod
the one dimensional data as a named tuple
"""
# Check input I(Q) whether it meets assumptions
# Check whether input I(Q) meets assumptions
check_iq_for_binning(i_of_q)

# bin I(Q)
@@ -643,7 +649,7 @@ def bin_annular_into_q1d(i_of_q, theta_bin_params, q_min=0.001, q_max=0.4, metho
)
raise ValueError(msg)

# Check input I(Q) whether it meets assumptions
# Check whether input I(Q) meets assumptions
check_iq_for_binning(i_of_q)

# convert the data to q and azimuthal angle
@@ -716,13 +722,13 @@ def _do_1d_no_weight_binning(q_array, dq_array, iq_array, sigmaq_array, q_bins,
"""

def _bin_iq1d(bin_edges, q_vec, dq_vec, i_vec, error_vec):
def _bin_iq1d(q_bins, q_vec, dq_vec, i_vec, error_vec):
"""Bin I(Q1D), dI(Q1D) and dQ(Q1D) by no weight binning algorithm
Parameters
----------
bin_edges: ~numpy.ndarray
bin edges
q_bins: ~drtsans.determine_bins.Bins
bin edges and centers
q_vec: ~numpy.ndarray
vector of Q1D
dq_vec: ~numpy.ndarray, None
@@ -735,29 +741,40 @@ def _bin_iq1d(bin_edges, q_vec, dq_vec, i_vec, error_vec):
Returns
-------
~tuple
binned intensity vector, binned intensity error vector, binned q resolution vector
binned intensity vector,
binned intensity error vector,
binned q resolution vector,
binned q center vector
"""

# Count number of Q in 'q_array' in each Q-bin when they are binned (histogram) to 'bin_edges'
num_pt_vec, _ = np.histogram(q_vec, bins=bin_edges)
num_pt_vec, _ = np.histogram(q_vec, bins=q_bins.edges)

# Counts per bin: I_{k, raw} = \sum I(i, j) for each bin
i_raw_vec, _ = np.histogram(q_vec, bins=bin_edges, weights=i_vec)
i_raw_vec, _ = np.histogram(q_vec, bins=q_bins.edges, weights=i_vec)

# Square of summed uncertainties for each bin
sigma_sqr_vec, _ = np.histogram(q_vec, bins=bin_edges, weights=error_vec**2)
sigma_sqr_vec, _ = np.histogram(q_vec, bins=q_bins.edges, weights=error_vec**2)

# Final I(Q): I_k = \frac{I_{k, raw}}{N_k}
all_lists = iter([num_pt_vec, i_raw_vec, sigma_sqr_vec])
assert all(len(lst) == len(num_pt_vec) for lst in all_lists)

# Final I(Q): I_k = \frac{I_{k, raw}}{N_k}
i_final_vec = i_raw_vec / num_pt_vec
# Final sigma(Q): sigmaI_k = \frac{sigmaI_{k, raw}}{N_k}

# Final sigma(Q): sigmaI_k = \frac{sigmaI_{k, raw}}{N_k}
sigma_final_vec = np.sqrt(sigma_sqr_vec) / num_pt_vec
# Replace 0 error with 1, as uncertainty is not zero in the region of interest
sigma_final_vec[sigma_final_vec == 0] = 1.0

# Calculate Q resolution of binned
if dq_vec is None:
bin_dq_vec = None
else:
binned_vec, _ = np.histogram(q_vec, bins=bin_edges, weights=dq_vec)
binned_vec, _ = np.histogram(q_vec, bins=q_bins.edges, weights=dq_vec)
bin_dq_vec = binned_vec / num_pt_vec
assert len(bin_dq_vec) == len(i_final_vec)

return i_final_vec, sigma_final_vec, bin_dq_vec
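A compact numeric illustration of the no-weight binning performed above (toy data assumed; the real function also bins dQ and replaces zero errors): per bin, the intensity is the plain mean and the uncertainty is the quadrature sum of the errors divided by the number of contributing points.

import numpy as np

q = np.array([0.01, 0.015, 0.02, 0.03])
intensity = np.array([10.0, 12.0, 8.0, 9.0])
error = np.array([1.0, 1.1, 0.9, 1.0])
edges = np.array([0.0, 0.02, 0.04])

counts, _ = np.histogram(q, bins=edges)                      # N_k per bin
i_sum, _ = np.histogram(q, bins=edges, weights=intensity)    # sum of I per bin
sig2_sum, _ = np.histogram(q, bins=edges, weights=error**2)  # sum of sigma^2 per bin

i_binned = i_sum / counts                  # I_k = I_{k,raw} / N_k
sigma_binned = np.sqrt(sig2_sum) / counts  # sigma_k = sqrt(sum sigma^2) / N_k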

@@ -767,7 +784,7 @@ def _bin_iq1d(bin_edges, q_vec, dq_vec, i_vec, error_vec):
if wavelength_bins == 1 or wl_array is None:
# bin I(Q, wl) regardless of wl value
i_final_array, sigma_final_array, bin_q_resolution = _bin_iq1d(
q_bins.edges, q_array, dq_array, iq_array, sigmaq_array
q_bins, q_array, dq_array, iq_array, sigmaq_array
)

# construct output without wavelength vector
@@ -807,7 +824,7 @@ def _bin_iq1d(bin_edges, q_vec, dq_vec, i_vec, error_vec):

# bin by Q1D
i_final_array, sigma_final_array, bin_q_resolution = _bin_iq1d(
q_bins.edges,
q_bins,
filtered_matrix[:, 1],
dq_array_i,
filtered_matrix[:, 2],
@@ -835,6 +852,7 @@ def _bin_iq1d(bin_edges, q_vec, dq_vec, i_vec, error_vec):
delta_mod_q=binned_dq_vec,
wavelength=binned_wl_vec,
)

else:
raise RuntimeError(f"Number of wavlength bins = {wavelength_bins} is not supported")

@@ -885,21 +903,24 @@ def _do_1d_weighted_binning(q_array, dq_array, iq_array, sigma_iq_array, q_bins,
"""

def _bin_q1d_weighted(mod_q_array, delta_q_array, intensity_array, error_array, bin_edges):
def _bin_q1d_weighted(q_bins, mod_q_array, delta_q_array, intensity_array, error_array):
"""Do 1D weighed binning"""
# bin I(Q, wl) regardless of wl value
# Calculate 1/sigma^2 for multiple uses
invert_sigma2_array = 1.0 / (error_array**2)

# Histogram on 1/sigma^2, i.e., nominator part in Equation 11.22, 11.23 and 11.24
# sum_{Q, lambda}^{K} (1 / sigma(Q, lambda)^2)
w_array, _ = np.histogram(mod_q_array, bins=bin_edges, weights=invert_sigma2_array)
w_array, _ = np.histogram(mod_q_array, bins=q_bins.edges, weights=invert_sigma2_array)

# Calculate Equation 11.26: I(Q)
# I(Q') = sum_{Q, lambda}^{K} (I(Q, lambda) / sigma(Q, lambda)^2) /
# sum_{Q, lambda}^{K} (1 / sigma(Q, lambda)^2)
# denominator in Equation 11.22: sum_{Q, lambda}^{K} (I(Q, lambda) / sigma(Q, lambda)^2)
i_raw_array, _ = np.histogram(mod_q_array, bins=bin_edges, weights=intensity_array * invert_sigma2_array)
i_raw_array, _ = np.histogram(mod_q_array, bins=q_bins.edges, weights=intensity_array * invert_sigma2_array)

assert len(i_raw_array) == len(w_array)

# numerator divided by denominator (11.26)
binned_intensity_array = i_raw_array / w_array

@@ -919,7 +940,8 @@ def _bin_q1d_weighted(mod_q_array, delta_q_array, intensity_array, error_array,
if delta_q_array is None:
binned_dq_array = None
else:
binned_dq, _ = np.histogram(mod_q_array, bins=bin_edges, weights=delta_q_array * invert_sigma2_array)
binned_dq, _ = np.histogram(mod_q_array, bins=q_bins.edges, weights=delta_q_array * invert_sigma2_array)
assert len(binned_dq) == len(binned_intensity_array)
# numerator divided by denominator (11.28)
binned_dq_array = binned_dq / w_array
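The weighted (inverse-variance) counterpart as a toy example with assumed data; the binned uncertainty written as 1/sqrt(sum 1/sigma^2) is the standard propagation for a weighted mean and is an assumption here, since that line of the function lies outside the visible hunks.

import numpy as np

q = np.array([0.01, 0.015, 0.02, 0.03])
intensity = np.array([10.0, 12.0, 8.0, 9.0])
sigma = np.array([1.0, 2.0, 1.0, 0.5])
edges = np.array([0.0, 0.02, 0.04])

inv_var = 1.0 / sigma**2
w, _ = np.histogram(q, bins=edges, weights=inv_var)                # sum 1/sigma^2 per bin (Eq. 11.22 denominator)
i_w, _ = np.histogram(q, bins=edges, weights=intensity * inv_var)  # sum I/sigma^2 per bin (Eq. 11.22 numerator)

i_binned = i_w / w               # Eq. 11.26: inverse-variance weighted mean
sigma_binned = 1.0 / np.sqrt(w)  # assumed standard uncertainty of the weighted mean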

@@ -931,7 +953,7 @@ def _bin_q1d_weighted(mod_q_array, delta_q_array, intensity_array, error_array,
if wl_array is None or wavelength_bins == 1:
# bin I(Q, wl) regardless of wl value
i_final_array, sigma_final_array, bin_q_resolution = _bin_q1d_weighted(
q_array, dq_array, iq_array, sigma_iq_array, q_bins.edges
q_bins, q_array, dq_array, iq_array, sigma_iq_array
)

binned_i_of_q = IQmod(
@@ -969,14 +991,13 @@ def _bin_q1d_weighted(mod_q_array, delta_q_array, intensity_array, error_array,
dq_array_i = filtered_matrix[:, 4]

# bin by Q1D
binned = _bin_q1d_weighted(
i_final_array, sigma_final_array, bin_q_resolution = _bin_q1d_weighted(
q_bins=q_bins,
mod_q_array=filtered_matrix[:, 1],
delta_q_array=dq_array_i,
intensity_array=filtered_matrix[:, 2],
error_array=filtered_matrix[:, 3],
bin_edges=q_bins.edges,
)
i_final_array, sigma_final_array, bin_q_resolution = binned

# build up the final output
binned_q_vec = np.concatenate((binned_q_vec, q_bins.centers))
Expand Down Expand Up @@ -1039,7 +1060,7 @@ class IQazimuthal(namedtuple('IQazimuthal', 'intensity error qx qy delta_qx delt
~drtsans.dataobjects.IQazimuthal
binned IQazimuthal (important: must read Note 2)
"""
# Check input I(Q) whether it meets assumptions
# Check whether input I(Q) meets assumptions
check_iq_for_binning(i_of_q)

# Check whether it needs to bin wavelength
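Taken together, the practical effect of this commit on the public binning entry points can be sketched as follows (hedged: the IQmod keyword fields intensity, error, mod_q are assumed from drtsans.dataobjects; the other names and signatures appear in the diff above).

import numpy as np
from drtsans.dataobjects import IQmod
from drtsans.determine_bins import determine_1d_linear_bins
from drtsans.iq import BinningMethod, bin_intensity_into_q1d

# One intensity error is exactly zero, which violates the binning assumptions
i_of_q = IQmod(
    intensity=np.array([1.0, 2.0, 3.0, 4.0]),
    error=np.array([0.1, 0.0, 0.2, 0.1]),
    mod_q=np.array([0.01, 0.02, 0.03, 0.04]),
)
q_bins = determine_1d_linear_bins(x_min=0.0, x_max=0.05, bins=5)

# Before this change the assumption check raised RuntimeError; now it logs a
# warning and the data are still binned (so Iq.dat can still be written).
binned = bin_intensity_into_q1d(i_of_q, q_bins, bin_method=BinningMethod.NOWEIGHT)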
2 changes: 1 addition & 1 deletion src/drtsans/prepare_sensivities_correction.py
@@ -40,7 +40,7 @@
PATCHING_DETECTORS = "Patching Detectors"

# As this script is a wrapper to handle script prepare_sensitivity
# (e.g. https://code.ornl.gov/sns-hfir-scse/sans/sans-backend/blob/next/scripts%2Fprepare_sensitivities_biosans.py),
# (e.g. https://github.com/neutrons/drtsans/blob/next/scripts%2Fprepare_sensitivities_biosans.py),
# by which user just needs to set up instrument name but not is required to import the right modules
# (for example drtsans.mono.gpsans or drtsans.tof.eqsans),
# therefore it has to import the correct ones according instrument name in string.