Pretrained embedders #41
https://drive.google.com/drive/folders/1_mumfTU3GJRtjfcJK_M0fWm048sYYFqi Also, we are not the only ones who have had success with self-supervised learning on Camelyon16: https://arxiv.org/pdf/2012.03583.pdf shows that very high results can be obtained.
Thank you very much. I found that the features you extracted only reach 0.86 when training directly with the CLAM method.
Hi, are those weights trained using TCGA data?
Camelyon16 weights: https://drive.google.com/drive/folders/1_mumfTU3GJRtjfcJK_M0fWm048sYYFqi
TCGA-lung weights: https://drive.google.com/drive/folders/1Rn_VpgM82VEfnjiVjDbObbBFHvs0V1OE
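When loading the shared checkpoints, a common stumbling block is that weights saved from a model wrapped in `torch.nn.DataParallel` carry a `module.` prefix on every state-dict key, which makes `load_state_dict` fail on an unwrapped model. This is a minimal, framework-agnostic sketch of the usual fix; the key names shown are hypothetical, and whether these particular checkpoints need it is an assumption, not something confirmed in this thread:

```python
def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that torch.nn.DataParallel adds
    to every parameter name when a model is saved while wrapped."""
    prefix = "module."
    return {
        (key[len(prefix):] if key.startswith(prefix) else key): value
        for key, value in state_dict.items()
    }

# Hypothetical key names, for illustration only.
raw = {"module.conv1.weight": 1, "fc.bias": 2}
clean = strip_module_prefix(raw)
print(clean)  # {'conv1.weight': 1, 'fc.bias': 2}
```

The cleaned dictionary can then be passed to the embedder's `load_state_dict` as usual.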
Hi @GeorgeBatch, I have seen the previous discussion on the magnification change for TCGA-lung patches. Could I please verify that, when the above pre-trained model is specified, it applies only to the 20x patches of the whole dataset? (That is, the pre-trained model is trained on 20x/5x patches for 40x images and 10x/2.5x patches for 20x images.) Many thanks in advance.
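If the reading above is correct, the magnification pairing reduces to fixed downsample factors of 2x and 8x relative to each slide's native magnification. A small sketch under that assumption (inferred from the thread, not confirmed by the authors):

```python
def extraction_magnifications(native_mag):
    """Return the (high, low) magnification pair used for patch
    extraction, assuming fixed downsamples of 2x and 8x from the
    slide's native magnification. This factor-of-(2, 8) rule is an
    assumption inferred from the 20x/5x and 10x/2.5x pairs above."""
    return (native_mag / 2, native_mag / 8)

print(extraction_magnifications(40))  # (20.0, 5.0)
print(extraction_magnifications(20))  # (10.0, 2.5)
```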
I have a question: does your SimCLR pre-training include all of the Camelyon16 data (both the training set and the test set)? I believe your feature extractor is flawed because it leaks information from the test set. When I pre-trained only on the training set, I could not obtain such high results. I think you should check this problem carefully, since it would make your reported results too high.
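One way to rule out the leakage concern raised here is to verify that the patch list fed to SimCLR pre-training contains no test-split slides. Camelyon16 test slides follow the official `test_###` naming convention, while training slides are named `normal_###` or `tumor_###`; a minimal filtering sketch based on that convention (the patch-path layout shown is an assumption):

```python
def training_only(patch_paths):
    """Keep only patches from Camelyon16 training slides.

    Assumes each path contains the slide ID as a directory name,
    e.g. 'patches/tumor_001/0_3.jpeg' (hypothetical layout), and
    that test slides follow the official 'test_###' naming scheme.
    """
    return [p for p in patch_paths if "/test_" not in p]

patches = [
    "patches/tumor_001/0_3.jpeg",
    "patches/normal_017/5_2.jpeg",
    "patches/test_042/1_1.jpeg",  # must be excluded from pre-training
]
print(training_only(patches))  # drops the test_042 patch
```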