There is no efficient way to compare MCell simulation results against analytic/expected results in nutmeg. Currently this is done in a rather ad-hoc fashion in the expected_value_uni test, using a number of COUNT_MINMAX tests at regular intervals throughout a simulation and checking that the counts fall within several standard deviations of the expected values.
There are a couple of potential ways to solve this problem: either the user could feed in a mathematical function, or simply provide a file containing XY data (time vs. counts).
I've extended the COMPARE_COUNTS test facility, which will hopefully enable (most of) what we need here. Basically, you provide a reference file with a time column and any number of data columns containing the expected counts. The reference file could be generated from a mathematical function via R, octave, etc., or come from a different simulation tool (ODE simulator, etc.). In addition, you can specify either an absolute or a relative acceptable deviation for each column. The test then makes sure that each actual value is within expected_value +/- deviation; if no deviation is provided the comparison is exact. The syntax is described in more detail in the commit message for cc11ee1. Please give this a try and let me know whether this is general enough for your needs.
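To make the semantics concrete, here is a minimal Python sketch of the comparison logic described above. This is illustrative only and not the actual nutmeg code; the function name and the whitespace-separated file layout (time column first) are assumptions based on the example in the commit message below.

```python
# Sketch of the COMPARE_COUNTS tolerance semantics (illustrative only).
def compare_counts(observed_file, reference_file, abs_dev=None, rel_dev=None):
    """Return True if every data column of observed_file matches the
    reference within the given per-column tolerance (exact if none)."""
    if abs_dev is not None and rel_dev is not None:
        raise ValueError("specify only one of abs_dev or rel_dev per check")

    with open(observed_file) as obs, open(reference_file) as ref:
        for row, (o_line, r_line) in enumerate(zip(obs, ref), start=1):
            o_vals = [float(x) for x in o_line.split()][1:]  # skip time column
            r_vals = [float(x) for x in r_line.split()][1:]
            for col, (o, r) in enumerate(zip(o_vals, r_vals)):
                if abs_dev is not None:
                    tol = abs_dev[col]
                elif rel_dev is not None:
                    tol = rel_dev[col] * r
                else:
                    tol = 0.0  # no deviation given: exact comparison
                if abs(o - r) > tol:
                    print(f"row {row}, column {col + 1}: "
                          f"{o} outside {r} +/- {tol}")
                    return False
    return True
```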
This commit extends the COMPARE_COUNTS test facility.
This addresses issue #1 on the tracker.
Previously, COMPARE_COUNTS checked for an exact, column-by-column match
between the observed counts and the values in a reference file. This
commit adds the ability to compare test data against reference data
within a certain tolerance, specified via
"absDeviation" = [A1, A2, A3, ...]
or
"relDeviation" = [R1, R2, R3, ....]
Only one of absDeviation or relDeviation can be specified per
check.
Here the Ai are integers specifying the allowed absolute deviation of
column i with respect to the reference data set. Similarly, the Ri are
floats specifying the relative deviation with respect to the reference
data set. For example, if the reference data set looks like
1e-6 100 50
2e-6 110 55
3e-6 120 60
...
and
"absDeviation" = [10, 5]
then the following observed output would pass the test:
1e-6 108 53
2e-6 112 56
3e-6 118 58
whereas the following observed output would fail in row 2:
1e-6 108 53
2e-6 113 61
3e-6 124 64
since 61 is outside 55 +/- 5.
Similarly, for
"relDeviation" = [0.1, 0.1]
the first observed output above would pass while the second would not, since
61 is outside 55 +/- (55 * 0.1) = 55 +/- 5.5.
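As mentioned in the comment above, the reference file can also be produced from an analytic expression. Purely as an illustration (the decay function, parameters, and file name below are hypothetical, and R or octave would work just as well), such a file could be generated like this:

```python
import math

# Hypothetical helper: writes a reference file in the layout shown above
# (time column followed by one expected-count column), here using a simple
# exponential decay chosen purely for illustration.
def write_reference(filename, n_steps=100, dt=1e-6, n0=1000.0, rate=1e4):
    with open(filename, "w") as out:
        for i in range(1, n_steps + 1):
            t = i * dt
            out.write(f"{t:.6e} {n0 * math.exp(-rate * t):.1f}\n")

write_reference("reference_counts.txt")
```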