
[38]module runs incorrectly #4

Open
Alchemistqqqq opened this issue May 29, 2024 · 6 comments
@Alchemistqqqq

I'm sorry to bother you about the duplicated code. It initially ran fine, but when I re-ran the code in cell [38] over the last two days, the following error occurred:
[screenshot of the error message]
This appears to be a GPU error: when I run it on the CPU, nothing goes wrong, but it takes a long time.

@Alchemistqqqq
Author

To set up a better comparison, how did you run the comparison experiment against the work of Chen et al. (2022)?

@Happy2Git
Owner

> I'm sorry to bother you about the duplicated code. It initially ran fine, but when I re-ran the code in cell [38] over the last two days, the following error occurred: [screenshot] This appears to be a GPU error: when I run it on the CPU, nothing goes wrong, but it takes a long time.

  • Could you please provide more details? I haven’t seen this error when running it on the GPU. Please let me know if you’ve made any changes in this Jupyter notebook.
  • What does ‘[38]’ mean? If you mean ‘[38] Parameterized Explainer for Graph Neural Networks,’ you might need to double-check that implementation, as it is not related to this repo.

@Happy2Git
Owner

> To set up a better comparison, how did you run the comparison experiment against the work of Chen et al. (2022)?

You can refer to our paper for experimental details on how we made the comparison. If any part is confusing, please describe it here in detail. I really appreciate it. :)

@Alchemistqqqq
Author


I have run your GUIDE model on the EllipticBTC dataset. In your paper's figure there is a comparison between GraphEraser and GUIDE; what I want to know is whether you implemented the GraphEraser experiment yourselves?

@Happy2Git
Owner


> I have run your GUIDE model on the EllipticBTC dataset. In your paper's figure there is a comparison between GraphEraser and GUIDE; what I want to know is whether you implemented the GraphEraser experiment yourselves?

For those baselines, I use their implementations of the core algorithms and adapt the pipeline to the inductive graph learning setting. Specifically, I replace the graph partition algorithm with their released algorithms, train the GNN model without the subgraph repair part, and use their aggregation algorithm to get the results in the inductive setting. It's easy to modify a few function APIs to make it runnable in this new setting.
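For illustration only, the three steps above (partition, per-shard training without subgraph repair, aggregation) can be sketched as follows. Everything here is a hypothetical stand-in, not the repo's actual code: a hash partition stands in for the baseline's released partition algorithm, a majority-label "model" stands in for the per-shard GNN, and majority voting stands in for the baseline's aggregation algorithm.

```python
# Hypothetical sketch of the adapted baseline pipeline (NOT the repo's code):
# 1) partition nodes into shards, 2) train one model per shard with no
# subgraph-repair step, 3) aggregate the shard models' predictions.
from collections import Counter

def partition(nodes, num_shards):
    """Assign each node to a shard (placeholder for the real partitioner)."""
    shards = [[] for _ in range(num_shards)]
    for n in nodes:
        shards[hash(n) % num_shards].append(n)
    return shards

def train_shard_model(labels, shard):
    """Stand-in for training a GNN on one shard: memorize the majority label."""
    counts = Counter(labels[n] for n in shard)
    majority = counts.most_common(1)[0][0] if counts else None
    return lambda node: majority

def aggregate(models, node):
    """Majority vote over the per-shard models (the aggregation step)."""
    votes = Counter(m(node) for m in models)
    return votes.most_common(1)[0][0]
```

In the real adaptation, only the partition, training, and aggregation functions would be swapped for the baseline's released implementations; the surrounding pipeline and its function APIs stay the same, which is why the change is small.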

@Alchemistqqqq
Author

Alchemistqqqq commented Jun 18, 2024


Thank you for your answer. It was very helpful.
