Hello! I am really interested in your work. I wanted to ask why linear layers are used (for both the encoder and the decoder) when training the datasets covered in 'main.py' (all except HSE, MMP and p53), rather than graph convolution layers. It seems a bit odd that these layers are named gc1 and gc4 even though they are linear. Moreover, as far as I can tell, the original paper does not mention the decision to use linear layers for these datasets.
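For clarity, here is a minimal sketch of the distinction I mean (assuming a PyTorch setup and a Kipf & Welling-style propagation rule; this is not the repository's actual code). A graph convolution mixes each node's features with those of its neighbours via the normalised adjacency, whereas a plain nn.Linear transforms each node independently and ignores the graph structure:

```python
import torch
import torch.nn as nn

class GraphConvolution(nn.Module):
    """Kipf & Welling-style graph convolution: H' = A_hat @ H @ W."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x, adj):
        support = x @ self.weight   # node-wise linear transform, (N, out_features)
        return adj @ support        # neighbourhood aggregation via the adjacency

# A plain linear layer applies only the node-wise transform:
linear = nn.Linear(16, 32)

# Toy example: 5 nodes with 16-dim features and a placeholder normalised adjacency
x = torch.randn(5, 16)
adj = torch.eye(5)                  # stand-in for A_hat
gc = GraphConvolution(16, 32)

out_gc = gc(x, adj)                 # uses adj: graph convolution
out_linear = linear(x)              # no adj: purely linear, no message passing
```

If the layers named gc1 and gc4 are instances of nn.Linear rather than something like the GraphConvolution above, the adjacency never enters the computation for those datasets, which is what prompted my question.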