
# Emerging Properties in Self-Supervised Vision Transformers (DINO) — ICCV 2021 #44

Open
XFeiF opened this issue Aug 30, 2021 · 1 comment
Labels
area/SSL self-supervised learning Code Code available. Summary/Brief A brief summary about the paper. trend/Transformer Every paper uses transformer...

Comments

@XFeiF
Owner

XFeiF commented Aug 30, 2021

Paper
Code

Authors:
Mathilde Caron, Hugo Touvron, et al.
Facebook AI Research (FAIR).

Highlights:

  • A newly proposed self-supervised learning method, DINO: a form of knowledge distillation with no labels. Notably, it avoids the collapsed solution in a different way, by distilling from a momentum (EMA) teacher encoder whose outputs are centered and sharpened.
  • It encourages "local-to-global" correspondences by feeding views of different sizes to the student and teacher encoders: the teacher sees only large global crops, while the student also sees small local crops.
  • SSL ViT features explicitly contain the scene layout and, in particular, object boundaries, as shown in the paper's attention-map figures.
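The distillation step above can be sketched in a few lines. This is a minimal NumPy sketch (not the authors' code); the function names `dino_step` and `ema_update` are my own, and the temperatures/momenta are illustrative defaults in the spirit of the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dino_step(student_out, teacher_out, center,
              tau_s=0.1, tau_t=0.04, m_c=0.9):
    """One DINO-style loss computation: the teacher output is centered and
    sharpened (low temperature), then used as a soft target for the student.
    Centering + sharpening is what discourages the collapsed solution."""
    t = softmax((teacher_out - center) / tau_t, axis=-1)   # center + sharpen
    s = softmax(student_out / tau_s, axis=-1)
    loss = -(t * np.log(s + 1e-12)).sum(axis=-1).mean()    # cross-entropy H(t, s)
    # running EMA of the teacher's batch-mean output, used as the center
    new_center = m_c * center + (1 - m_c) * teacher_out.mean(axis=0)
    return loss, new_center

def ema_update(teacher_w, student_w, momentum=0.996):
    """Momentum teacher: an exponential moving average of student weights;
    no gradients flow through the teacher."""
    return momentum * teacher_w + (1 - momentum) * student_w
```

Only the student is trained by backprop on this loss; the teacher is updated purely via `ema_update` after each step.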

@XFeiF XFeiF added area/SSL self-supervised learning Summary/Brief A brief summary about the paper. Code Code available. trend/Transformer Every paper uses transformer... labels Aug 30, 2021
@XFeiF
Owner Author

XFeiF commented Aug 30, 2021

They visualize the self-attention of the [CLS] token across the heads of the last layer. This token is not attached to any label or supervision. The resulting maps show that the model automatically learns class-specific features, leading to unsupervised object segmentation.

SSL segmentation magic -> SSL + Transformer's [CLS] token.
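Extracting such a map boils down to taking the softmax of the [CLS] query against all patch keys for one head of the last layer. A minimal NumPy sketch (the function name and the 14×14 grid for 196 patches are my assumptions, matching a ViT-S/16 on 224×224 images):

```python
import numpy as np

def cls_attention_map(q_cls, K, grid=(14, 14)):
    """Self-attention of the [CLS] query over patch keys, one head.
    q_cls: (d,) query vector of the [CLS] token.
    K:     (num_patches, d) key vectors of the image patches.
    Returns a (H, W) attention map over the patch grid."""
    d = q_cls.shape[-1]
    logits = K @ q_cls / np.sqrt(d)        # scaled dot-product scores
    logits -= logits.max()                  # numerical stability
    w = np.exp(logits)
    w /= w.sum()                            # softmax over patches
    return w.reshape(grid)
```

Upsampling this (H, W) map to the input resolution and thresholding it gives the unsupervised segmentation masks shown in the paper.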
