Describe the bug
The simple metrics ItemCoverage, UserCoverage, and NumRetrieved produce different values for the same recs/ folder depending on whether the model is trained and evaluated in one run, or the stored recommendations are loaded and only evaluated.
To Reproduce
Steps to reproduce the behavior:
- Run a model, e.g. ItemKNN, which produces its recommendations in the results/recs/ dir
- Keep the evaluation output of that run
- Load the results/recs/ dir as a RecommendationFolder
- Run the evaluation only
- Compare the two evaluations
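As an independent sanity check for the steps above, the three metrics can be recomputed directly from the recs .tsv file (this is a minimal sketch based on my reading of these metrics as user count, distinct-item count, and total retrieved pairs; Elliot's exact definitions and recs format, assumed here to be user/item/score tab-separated lines, may differ):

```python
import csv
from collections import defaultdict

def simple_metrics(recs_path):
    """Recompute NumRetrieved, UserCoverage, ItemCoverage from a recs .tsv
    file with one `user \t item \t score` triple per line (assumed format)."""
    per_user = defaultdict(set)  # items recommended to each user
    items = set()                # all distinct recommended items
    with open(recs_path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) < 2:
                continue
            user, item = row[0], row[1]
            per_user[user].add(item)
            items.add(item)
    return {
        "NumRetrieved": sum(len(s) for s in per_user.values()),
        "UserCoverage": len(per_user),
        "ItemCoverage": len(items),
    }
```

Running this once on results/recs/ shows which of the two evaluation procedures matches the raw file contents.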
In both cases, the input dataset uses strategy: fixed. The train.tsv and test.tsv files were previously produced by a random 0.2 split and are used as-is in both cases.
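For reference, the load-and-evaluate run was configured along these lines (an illustrative sketch assuming Elliot's standard YAML layout; the paths and the `folder` parameter name for RecommendationFolder are my best recollection and may need checking against the docs):

```yaml
experiment:
  data_config:
    strategy: fixed           # same fixed split in both runs
    train_path: ../data/train.tsv
    test_path: ../data/test.tsv
  models:
    RecommendationFolder:     # load previously produced recs
      folder: ../results/recs/
```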
System details (please complete the following information):
- OS: Debian
- Python Version 3.8
- Version of the libraries: installed with the conda elliot_env as described in the docs