
Confusion Matrix - BWT, FWT metrics #1667

Open

loukasilias opened this issue Oct 20, 2024 · 1 comment

loukasilias commented Oct 20, 2024
Hello,

How can I have access to the confusion matrix per experience? I have two experiences.

I have integrated the code below, but I don't know how to access the confusion matrix.

```python
eval_plugin = EvaluationPlugin(
    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    timing_metrics(epoch=True),
    bwt_metrics(experience=True, stream=True),
    forward_transfer_metrics(experience=True, stream=True),
    forgetting_metrics(experience=True, stream=True),
    cpu_usage_metrics(experience=True),
    confusion_matrix_metrics(
        num_classes=len(benchmark.train_stream[0].classes_in_this_experience),
        save_image=True,
        stream=True,
    ),
    disk_usage_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loggers=[InteractiveLogger()],
    strict_checks=False,
)
```

In the end, I see the following:
ConfusionMatrix_Stream/eval_phase/test_stream = <avalanche.evaluation.metric_results.AlternativeValues object at 0x7effe45ac550>
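The `AlternativeValues` object is a wrapper holding more than one representation of the same metric value (for example, an image for loggers plus the raw matrix). A minimal sketch of pulling the confusion-matrix entries out of the results dictionary returned by `strategy.eval(...)` — the helper below is illustrative, not part of Avalanche, and the mocked dictionary stands in for a real eval result; the exact API for unwrapping `AlternativeValues` may differ, and setting `save_image=False` may make the metric emit the raw matrix directly:

```python
# Sketch: filter the confusion-matrix entries out of an eval results dict.
# In Avalanche, strategy.eval(...) returns a dict mapping metric names to
# values; when save_image=True the confusion-matrix value may be wrapped
# in an AlternativeValues object (assumption based on the output shown above).

def confusion_matrix_entries(results):
    """Return the subset of eval results whose keys mention ConfusionMatrix."""
    return {k: v for k, v in results.items() if "ConfusionMatrix" in k}

# Usage with a mocked results dict (stand-in for strategy.eval(test_stream)):
mock_results = {
    "Top1_Acc_Stream/eval_phase/test_stream": 0.91,
    "ConfusionMatrix_Stream/eval_phase/test_stream": [[5, 1], [0, 6]],
}
cm = confusion_matrix_entries(mock_results)
```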

Also, when I use the Forward Transfer and Backward Transfer metrics, I get this error:
[screenshot of the error attached]

Thank you.

@wang-xulong

Hi, I have the same issue, and I have solved it, though I don't know why it works:

When I trained and tested in a loop, my code was as follows:

```python
for train_task in train_stream:
    strategy.train(train_task)
    strategy.eval(test_stream)
```

The working one is:
```python
results = []
for i, experience in enumerate(train_stream):
    print("Start of experience: ", experience.current_experience)
    print("Current Classes: ", experience.classes_in_this_experience)

    # train returns a dictionary containing the last recorded value
    # for each metric.
    res = strategy.train(experience, eval_streams=[test_stream])
    print("Training completed")

    print("Computing accuracy on the whole test set")
    # eval returns a dictionary with the last metric collected during
    # evaluation on that stream.
    results.append(strategy.eval(test_stream))
```

hope it will help you too ;)
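A plausible reason this works: transfer metrics need per-experience accuracies recorded both right after each experience is learned and again later, and passing `eval_streams=[test_stream]` to `train(...)` gives the plugin those intermediate measurements. As a self-contained illustration (not the Avalanche implementation), backward transfer can be computed from a matrix `R` where `R[i][j]` is the accuracy on experience `j` after training on experience `i`:

```python
# Illustrative sketch of the backward-transfer formula:
# BWT = mean over i < T-1 of (R[T-1][i] - R[i][i]),
# i.e. how much accuracy on earlier experiences changed by the end of training.

def backward_transfer(R):
    T = len(R)
    return sum(R[T - 1][i] - R[i][i] for i in range(T - 1)) / (T - 1)

# Two experiences: accuracy on experience 0 drops from 0.90 to 0.80
# after training on experience 1, so BWT is about -0.10 (forgetting).
R = [[0.90, 0.10],
     [0.80, 0.85]]
bwt = backward_transfer(R)
```

Without intermediate evaluations, the diagonal entries `R[i][i]` are never recorded, which would explain the error when only `eval(test_stream)` is called after each `train(...)`.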
