
Easier reproducibility #7

Open
Lemour-sudo opened this issue Jul 22, 2021 · 4 comments

@Lemour-sudo

Lemour-sudo commented Jul 22, 2021

Kindly assist with these two problems:

  • I tried to reproduce the paper's results by implementing the model myself, and I obtained similar results on all the datasets except one: PPI.
    The results for PPI barely reach 0.5 for both AP and AUC in the transductive and inductive settings.
    Could you kindly share the hyperparameter settings used in the code, to ease the reproducibility process?
  • May I confirm whether the structure encoder proposed in the paper is graph-agnostic (unaware of the graph's structure), since it does not seem to account for the network topology:
    [screenshot: struct-encoder]

Thank you.
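For context on the second question, here is a minimal standalone sketch (hypothetical code, not the authors') of what "graph-agnostic" means: an encoder that maps each node's input independently, with the adjacency matrix never entering the forward pass.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a "graph-agnostic" encoder transforms each node's
# input vector independently; no adjacency, no message passing.
class NodewiseEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim),
            nn.ReLU(),
            nn.Linear(hid_dim, out_dim),
        )

    def forward(self, x):      # x: (num_nodes, in_dim)
        return self.net(x)     # each row encoded in isolation

enc = NodewiseEncoder(16, 32, 8)
x = torch.randn(5, 16)
z = enc(x)
print(z.shape)  # torch.Size([5, 8])

# Permuting the nodes just permutes the outputs -- evidence the encoder
# has no notion of which nodes are connected to which.
perm = torch.randperm(5)
assert torch.allclose(enc(x[perm]), z[perm])
```

Any topology signal would then have to come from elsewhere (e.g. the loss), which is what the discussion below suggests.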

@mohitagarwal0212

Looks like it. I think the graph's structure comes into play only in the loss function.
Also, since you were able to reproduce the results, did you try the attribute encoder as an MLP, as mentioned in the paper? In the code it looks like only a single linear layer is used.

@Lemour-sudo
Author

Yes, I do believe the graph's structure is only accounted for in the loss function in the authors' original code. I had to refactor the code a bit to allow switching between attribute encoders and structure encoders, so I managed to try an MLP as the attribute encoder.

@mohitagarwal0212

Thanks, I was able to figure out the MLP.
One more thing: while reproducing, did you notice that in the ind_eval() function here https://github.com/working-yuhao/DEAL/blob/e58b2601b6102e2ebc80f20e7a92343c9e08daec/utils.py#L673, node_emb is a clone of anode_emb? How do attr_layer and inter_layer behave any differently in that case?

@Lemour-sudo
Author

The way I see it, attr_layer represents the attribute-model part, and inter_layer may be meant to represent the layer that connects the attribute and structure parts.
