Coverage at 51% #69

Open
dbrakenhoff opened this issue Sep 27, 2024 · 3 comments

@dbrakenhoff (Collaborator)

besselnumba.py and invlapnumba.py seem to be the cause of the low coverage at the moment:

---------- coverage: platform linux, python 3.11.10-final-0 ----------
Name                         Stmts   Miss  Cover
------------------------------------------------
ttim/__init__.py                10      0   100%
ttim/aquifer.py                146     26    82%
ttim/aquifer_parameters.py      99      6    94%
ttim/besselnumba.py           1035    898    13%
ttim/circareasink.py            98      7    93%
ttim/element.py                153     64    58%
ttim/equation.py               189     71    62%
ttim/fit.py                    139     16    88%
ttim/invlapnumba.py            143    132     8%
ttim/linedoublet.py            153      9    94%
ttim/linesink.py               322    127    61%
ttim/model.py                  339    103    70%
ttim/trace.py                  180     83    54%
ttim/util.py                    71      8    89%
ttim/version.py                  1      0   100%
ttim/well.py                   150     21    86%
------------------------------------------------
TOTAL                         3228   1571    51%

There is a test_bessel.py file in ttim/src/, but it compares the Fortran-compiled bessel functions to their numba counterparts, so it requires the Fortran build. Adapting this file to test the numba functions against stored reference results would probably improve coverage significantly, and would actually test the besselnumba code.
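One way that adaptation could look, sketched generically (the helper name, the reference-file path, and the tolerance here are illustrative assumptions — the real besselnumba signatures and a committed reference file would take the place of `func` and `reference_file`):

```python
import numpy as np


def assert_matches_reference(func, args, reference_file, rtol=1e-10):
    """Regression check: compare func(*args) against stored reference values.

    The reference file would be generated once from the trusted Fortran
    implementation and committed to the repository, so the test no longer
    needs the compiled Fortran code at runtime.
    """
    expected = np.load(reference_file)
    result = np.asarray(func(*args))
    np.testing.assert_allclose(result, expected, rtol=rtol)
```

Each pytest case would then call this helper with one besselnumba routine and its stored reference values, exercising the numba code without requiring the Fortran build.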

@mbakker7 (Owner)

But all notebooks (all TTim models) call both the besselnumba and invlapnumba routines.
Wouldn't that be part of testing?

@vcantarella (Contributor)

If these contain numba functions, it could be that coverage is not recognizing them because they are compiled outside of the Python interpreter. I think a possible solution would be to set the NUMBA_DISABLE_JIT=1 environment variable, but it might slow things down a bit.

@Huite (Contributor) commented Oct 1, 2024

> If these contain numba functions, it could be that coverage is not recognizing them because they are compiled outside of the Python interpreter. I think a possible solution would be to set the NUMBA_DISABLE_JIT=1 environment variable, but it might slow things down a bit.

Indeed.

For a variety of packages I test twice: once for coverage with NUMBA_DISABLE_JIT=1, then another time to check that everything works correctly with Numba enabled.

E.g. this project is 90% numba: https://github.com/Deltares/numba_celltree/blob/6092c456587c8f4ade1d3b19e87bc7278b3d379d/pyproject.toml#L85
But has 99% coverage.

Presumably the test cases are not too large, such that the slowdown from dynamic Python isn't too bad.

And otherwise, it could be worthwhile to split into (small) unit tests for coverage and bigger slow integration tests without coverage.
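As a sketch of that split (the `slow` marker name is an assumption — it would need to be registered in ttim's pytest configuration and applied to the big integration tests):

```shell
# Coverage pass: JIT disabled so coverage.py can trace the Python source
# of the numba functions; run only the small, fast unit tests.
NUMBA_DISABLE_JIT=1 pytest -m "not slow" --cov=ttim --cov-report=term-missing

# Integration pass: full suite with Numba enabled and no coverage overhead,
# verifying that the compiled code paths behave correctly.
pytest
```

Note that NUMBA_DISABLE_JIT must be set before numba is first imported for it to take effect.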
