Graph Classification Accuracy Calculation? #63

Open
gdreiman-insitro opened this issue Apr 5, 2024 · 0 comments
Comments

@gdreiman-insitro

Hi, thanks for the very interesting work.

I am wondering how the metrics for graph classification were calculated. In the paper, Table 2 is labeled as "Accuracy (%)". In both the DGL and pyg versions of this repo, however, it seems like F1 is calculated but reported as accuracy?

DGL

- `evaluate_graph_embeddings_using_svm` calculates F1 here
- `graph_classification_evaluation` returns the result of `evaluate_graph_embeddings_using_svm` (`test_f1`) here
- `test_f1` is appended to `acc_list` here
- `final_acc` is the mean of `acc_list` here

pyg

- `evaluate_graph_embeddings_using_svm` calculates F1 here
- `graph_classification_evaluation` returns the result of `evaluate_graph_embeddings_using_svm` (`test_f1`) here
- `test_f1` is appended to `acc_list` here
- `final_acc` is the mean of `acc_list` here
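One detail that may matter for the comparison: for single-label multiclass prediction, micro-averaged F1 is numerically identical to accuracy, while macro-averaged F1 generally is not. So whether "F1 reported as accuracy" actually changes the numbers depends on the `average` argument passed to `f1_score`. A minimal sketch below illustrates this; it is not the repo's code, and the random data and plain `SVC` probe are placeholders standing in for the graph embeddings and the SVM evaluation described above.

```python
# Minimal sketch (not the repo's code): random "embeddings" and labels stand in
# for graph-level embeddings; a plain SVC stands in for the SVM probe.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 32))    # placeholder graph-level embeddings
y = rng.integers(0, 3, size=300)  # placeholder graph labels (3 classes)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
pred = SVC(C=1.0).fit(X_tr, y_tr).predict(X_te)

print("accuracy :", accuracy_score(y_te, pred))
print("micro-F1 :", f1_score(y_te, pred, average="micro"))  # equals accuracy for single-label multiclass
print("macro-F1 :", f1_score(y_te, pred, average="macro"))  # can differ, especially with class imbalance
```

If `evaluate_graph_embeddings_using_svm` computes micro-F1, the reported `test_f1` would coincide with accuracy anyway; if it uses macro-F1, the two can diverge.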

Is the code in the repo what was used to generate the results in the paper? I am having trouble replicating some results, and I am wondering whether the numbers in the paper are accuracy or F1.

Thank you
