Create tests to catch performance regressions in specific functions #50
Comments
@gsarma @travs Happy that there's work towards a uniform approach to testing across repos. However, this can't really happen here (or in the muscle_model repo) without a close look at OMV and how its workflow relates to what you propose here. As mentioned to @travs, the PyOpenWorm and ChannelWorm repos are different in that they're natively Python, whereas these two repos have XML model code that needs to be validated, converted to simulator-specific code, and run. OMV is designed specifically for that. I'm not saying it's not possible to use pytest for this, but integrating it with OMV is a non-trivial problem.
@pgleeson I've been meaning to ask about OMV: that is the testing framework developed by the OSB team to validate the models used in their code, correct? I've been looking at https://github.com/cheelee/osb-model-validation to figure out what it does and how it ties in with the relevant parts of our project. Is it correct to say that OMV's model validation testing is somewhat different in nature from the functional unit testing of Python code that we conduct over at PyOpenWorm, and is more akin to the tests required for muscle_model and CElegansNeuroML?
@pgleeson Thank you for the elaboration. We thought it would make sense simply to open these issues so that discussions like this one could happen and help us figure out how testing would work in different contexts. Are there some very basic tests that could be written to get a test suite started? Even completely simple things are useful, because new contributors will see that something is already there, and it can be a valuable entry point.
@gsarma I've modified the .travis.yml here and in the muscle_model repo to run all the OMV tests separately and then run a number of other Python scripts in the repos. These non-OMV tests could potentially be modified to use your framework. This happens in the Travis CI job with OMV_ENGINE=NON_OMV_TESTS; see https://travis-ci.org/openworm/muscle_model/builds/76992094
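As a rough sketch of how those non-OMV scripts might be gated on that Travis job if pytest were adopted (the script path below is a hypothetical placeholder, not a file from this repo):

```python
# test_non_omv.py -- hypothetical pytest wrapper for the non-OMV scripts.
# Only runs in the Travis job that sets OMV_ENGINE=NON_OMV_TESTS.
import os
import subprocess
import sys

import pytest

# Placeholder script paths; replace with the real scripts in the repo.
SCRIPTS = ["scripts/example_analysis.py"]

# Skip the whole module unless we are in the NON_OMV_TESTS job.
pytestmark = pytest.mark.skipif(
    os.environ.get("OMV_ENGINE") != "NON_OMV_TESTS",
    reason="only run in the NON_OMV_TESTS Travis job",
)

@pytest.mark.parametrize("script", SCRIPTS)
def test_script_exits_cleanly(script):
    """Each non-OMV script should run to completion with exit code 0."""
    subprocess.check_call([sys.executable, script])
```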
We have been exploring writing tests for catching performance regressions in PyOpenWorm and ChannelWorm.
We first need to decide which functions should be tested for performance and then define reference behavior. The simplest approach is to check performance relative to some simple reference computation; later we can build a more sophisticated system for tracking performance and catching regressions.
The bare minimum to close this issue is a file called, e.g., PerformanceTests.py, with a single test and docstrings.
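A minimal sketch of what that file could look like, using the relative-timing idea above; the timed bodies and the threshold factor are placeholders to be swapped for real functions and tuned numbers:

```python
# PerformanceTests.py -- one performance-regression test as a starting point.
import timeit

def reference_time():
    """Time a simple, stable computation to normalize out machine speed."""
    return timeit.timeit("sum(range(10000))", number=100)

def candidate_time():
    """Time the code under test (placeholder body; swap in a real call)."""
    return timeit.timeit("sorted(range(10000), reverse=True)", number=100)

def test_relative_performance():
    """The code under test should stay within a fixed multiple of the
    reference computation's runtime. The factor of 5 is an arbitrary
    starting threshold, to be tuned once real measurements exist."""
    assert candidate_time() < 5 * reference_time()
```

Expressing the threshold as a ratio rather than an absolute time keeps the test meaningful across CI machines of different speeds.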
@travs