Better way to compare against analytic/expected results #1

Open
jczech opened this issue Jan 15, 2016 · 1 comment

jczech commented Jan 15, 2016

There is no efficient way of comparing the MCell simulation results against analytic/expected results in nutmeg. Currently this is done in a rather hackish way in the expected_value_uni test, using a number of COUNT_MINMAX checks at regular intervals throughout a simulation and making sure the counts fall within several standard deviations of the expected value.

There are a couple of possible ways to solve this problem: either the user could feed in a mathematical function, or simply provide a file containing XY data (time vs. counts).
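For instance, such a reference file of time vs. expected counts could be produced from an analytic expression with a short script. The sketch below is purely illustrative; the exponential decay model, parameter values, and file name are all made up:

```python
# Illustrative sketch only: write a reference file of time vs. expected counts
# from an analytic expression (exponential decay is just a placeholder model,
# and all parameter values here are made up).
import math

N0 = 100       # hypothetical initial molecule count
k = 1.0e4      # hypothetical decay rate constant (1/s)
dt = 1.0e-6    # hypothetical output interval (s)

with open("reference_counts.txt", "w") as f:
    for step in range(1, 101):
        t = step * dt
        expected = N0 * math.exp(-k * t)
        f.write(f"{t:.6e} {expected:.1f}\n")
```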

haskelladdict commented

I've extended the COMPARE_COUNTS test facility, which will hopefully enable (most of) what we need here. Basically, you provide a reference file with a time column and any number of data columns containing the expected counts. The reference file could be generated from a mathematical function via R, Octave, etc., or come from a different simulation tool (e.g. an ODE simulator). In addition, you can specify either an absolute or a relative acceptable deviation for each column. The test then makes sure that the actual value is within expected_value +/- deviation. If no deviation is provided, the comparison is exact. A bit more syntax is described in the message of commit cc11ee1. Please give this a try and let me know whether it is general enough for your needs.
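A minimal Python sketch of the comparison logic just described (the actual check is implemented inside nutmeg; the helper name and signature below are purely illustrative):

```python
# Minimal sketch of the comparison described above. The actual check lives in
# nutmeg itself; this function and its argument names are purely illustrative.
def row_matches(reference_row, observed_row, abs_dev=None, rel_dev=None):
    """Return True if every observed column is within the allowed deviation
    of the corresponding reference column (exact match if no deviation)."""
    for i, (ref, obs) in enumerate(zip(reference_row, observed_row)):
        if abs_dev is not None:
            tol = abs_dev[i]
        elif rel_dev is not None:
            tol = abs(ref) * rel_dev[i]
        else:
            tol = 0.0   # no deviation given: comparison must be exact
        if abs(obs - ref) > tol:
            return False
    return True
```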

haskelladdict added a commit that referenced this issue Jan 31, 2016
This commit extends the ability of the COMPARE_COUNTS test facility.
This addresses issue #1 on the tracker.

Previously, COMPARE_COUNTS would check for an exact, column-by-column match between the observed counts and the values in a reference file. This commit adds the ability to compare test data against reference data while allowing for a certain tolerance, specified via

"absDeviation" = [A1, A2, A3, ...]

or

"relDeviation" = [R1, R2, R3, ....]

Only one of absDeviation or relDeviation can be specified per
check.

Here the Ai are integers specifying the allowed absolute deviation of
column i with respect to the reference data set. Similarly, the
Ri are floats specifying the relative deviation with regard
to the reference data set. E.g. if the reference data set looks
like

1e-6 100 50
2e-6 110 55
3e-6 120 60
...

and

"absDeviation" = [10, 5]

Then the following observed output would pass the test

1e-6 108 53
2e-6 112 56
3e-6 118 58

Whereas the following observed output would fail in row 2

1e-6 108 53
2e-6 113 61
3e-6 124 64

since 61 is outside 55 +/- 5.

Similarly for

"relDeviation" = [0.1, 0.1]

the first test above would pass while the second would not since
61 is outside 55 +/- (55 * 0.1) = 55 +/- 5.5.
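For completeness, the arithmetic of the failing row can be checked with a couple of lines (illustration only, not nutmeg code):

```python
# Quick check of the worked example above (illustration only, not nutmeg code).
ref_row2, obs_row2 = [110, 55], [113, 61]

# absDeviation = [10, 5]: 61 lies outside 55 +/- 5, so the row fails
fails_abs = any(abs(o - r) > a for o, r, a in zip(obs_row2, ref_row2, [10, 5]))

# relDeviation = [0.1, 0.1]: 61 lies outside 55 +/- 5.5, so the row fails
fails_rel = any(abs(o - r) > r * d for o, r, d in zip(obs_row2, ref_row2, [0.1, 0.1]))

print(fails_abs, fails_rel)   # True True
```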