The current Graphalytics implementation assumes that the input graph and the derived (cache) graph are correct. Strictly speaking this is safe, because a corrupted graph will cause the corresponding benchmark runs to fail validation. However, it is unclear to users why the validation failed, since they assume both the input graph and the (cache) graph are intact. These datasets can be corrupted accidentally, for example when the caching process is interrupted.
A checksum (e.g. SHA-1) should be computed and verified on these files for full validation.
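A minimal sketch of what such a check could look like, using only the JDK's `java.security.MessageDigest` (the class name `ChecksumSketch` and the method are illustrative, not part of Graphalytics):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Illustrative sketch: compute a SHA-1 digest of a graph file so it can be
 *  compared against a stored checksum before the file is trusted. */
public class ChecksumSketch {
    public static String sha1(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-1");
        // Stream the file in chunks so large graph files are not loaded into memory.
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read);
            }
        }
        // Render the digest as a lowercase hex string, the usual on-disk format.
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

The digest would be written next to the cache file when generation completes, and recomputed on the next run; a mismatch (or a missing digest file) would signal an interrupted or corrupted cache.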
I just ran into this issue 5.5 years later :). If the program is interrupted while generating the cache file, it leaves a partial file behind, and the next execution assumes the cache is correct even though the two files have different numbers of rows.