Simple metrics are producing different values for the same recommendations #19

@nikosT

Description

Describe the bug
Simple metrics ItemCoverage, UserCoverage, NumRetrieved are producing different values for the same recs/ folder between the procedure of calculating the model and evaluating it and the procedure of loading the model and evaluating it.

To Reproduce
Steps to reproduce the behavior:

  1. Run a model, e.g. ItemKNN, which produces the recommendations in the results/recs/ dir
  2. Keep the evaluation output of this run
  3. Load the results/recs/ dir as a RecommendationFolder
  4. Run the evaluation only
  5. Compare the two evaluations

In both cases, the input dataset uses strategy: fixed. The train.tsv and test.tsv files were previously produced by a random 0.2 split and are used unchanged in both procedures.
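The two procedures above can be sketched as two Elliot configurations sharing the same fixed split (paths, model parameters, and the RecommendationFolder option name are from memory and may not match the installed version; please check against the docs):

```yaml
# Run 1: train ItemKNN, evaluate, and save recommendations to results/recs/
experiment:
  data_config:
    strategy: fixed
    train_path: ../data/train.tsv
    test_path: ../data/test.tsv
  models:
    ItemKNN:
      meta:
        save_recs: True

# Run 2 (separate config): load the saved recommendations and evaluate only
experiment:
  data_config:
    strategy: fixed
    train_path: ../data/train.tsv
    test_path: ../data/test.tsv
  models:
    RecommendationFolder:
      folder: ../results/recs/
```

Both runs read the same train.tsv/test.tsv and the same recs, yet the simple metrics differ.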

System details (please complete the following information):

  • OS: Debian
  • Python version: 3.8
  • Libraries: installed via the conda elliot_env environment as described in the docs
