Here are some questions about soft MoE #176

Open
t5862755 opened this issue Jun 27, 2024 · 1 comment

Comments

@t5862755

  1. According to the theory, the image is first transformed into tokens (patches), and the tokens then become slots through learned weights. On the image side, where in the code is the original image segmented so that it becomes tokens? For example, take a 32x32 image with the sequence length set to 16 (meaning 16 slots) and 16 experts as well. The image seems to be transformed into slots directly; we don't see the image-to-token transformation anywhere in the code. (A sketch of the token-to-slot step in question follows this list.)
    In short, the tokens appear to depend on every pixel of the original image, not on patches cut from it.

  2. What loss function and optimizer do you typically use with Soft MoE? We want to train on a dataset of about 50,000 images with Soft MoE on two RTX 4090s.
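
For reference, here is a minimal sketch of the token-to-slot mechanics the question describes (names like `soft_moe`, `phi`, and `w_experts` are illustrative, and the experts are toy linear layers; this is not the repository's implementation):

```python
import jax
import jax.numpy as jnp

def soft_moe(x, phi, w_experts):
    """Toy Soft MoE layer. x: [m, d] tokens, phi: [d, n*p] slot parameters,
    w_experts: [n, d, d] one weight matrix per (linear) expert, p slots each."""
    n, d, _ = w_experts.shape
    logits = x @ phi                           # [m, n*p] token-slot affinities
    dispatch = jax.nn.softmax(logits, axis=0)  # per slot: weights over ALL tokens
    combine = jax.nn.softmax(logits, axis=1)   # per token: weights over all slots
    slots = dispatch.T @ x                     # [n*p, d] each slot mixes every token,
                                               # which is why no hard patch choice
                                               # appears at this stage
    slots = slots.reshape(n, -1, d)            # [n, p, d] group slots by expert
    outs = jnp.einsum('npd,nde->npe', slots, w_experts).reshape(-1, d)
    return combine @ outs                      # [m, d] back to token space

# Toy usage matching the question: 16 tokens, 16 experts with 1 slot each.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (16, 8))
phi = jax.random.normal(key, (8, 16))
w = jax.random.normal(key, (16, 8, 8))
print(soft_moe(x, phi, w).shape)  # (16, 8)
```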

@jpuigcerver
Collaborator

  1. I'm not sure if I fully understand the question, but the 2D image is transformed into tokens in the main ViT architecture (the `x = nn.Conv(...)` patch-embedding call, and line 348 in the same file). It has nothing to do with Soft MoEs. (A sketch of that patchification step follows this list.)

  2. We typically use cross-entropy and Adam (a sketch of that setup also follows below). I've never trained an MoE with so few images; MoEs are especially useful when you are parameter-bound, not data-bound.
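
Concretely, the patchification step from point 1 looks roughly like this Flax sketch: a Conv whose stride equals its kernel size cuts the image into non-overlapping patches and projects each one to a token (`PatchEmbed` is a hypothetical name, not the repository's module):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class PatchEmbed(nn.Module):
    """Illustrative ViT patchification, not the repository's exact module."""
    hidden_size: int = 64
    patch: int = 8

    @nn.compact
    def __call__(self, x):                       # x: [B, H, W, C]
        x = nn.Conv(features=self.hidden_size,
                    kernel_size=(self.patch, self.patch),
                    strides=(self.patch, self.patch),
                    padding='VALID')(x)          # [B, H/p, W/p, hidden]
        b, h, w, c = x.shape
        return x.reshape(b, h * w, c)            # [B, num_tokens, hidden]

# A 32x32 image with 8x8 patches yields the 16 tokens from the question.
imgs = jnp.ones((1, 32, 32, 3))
tokens, _ = PatchEmbed().init_with_output(jax.random.PRNGKey(0), imgs)
print(tokens.shape)  # (1, 16, 64)
```

And a minimal optax sketch of the training setup from point 2 (`apply_fn`, the learning rate, and the loop structure are placeholders; data parallelism across the two GPUs is omitted):

```python
from functools import partial
import jax
import optax

optimizer = optax.adam(1e-3)  # Adam, as in the reply; learning rate is a placeholder

def loss_fn(params, apply_fn, images, labels):
    # Softmax cross-entropy on integer class labels, as in the reply.
    logits = apply_fn(params, images)
    return optax.softmax_cross_entropy_with_integer_labels(logits, labels).mean()

@partial(jax.jit, static_argnums=1)  # apply_fn is a function, so mark it static
def train_step(params, apply_fn, opt_state, images, labels):
    loss, grads = jax.value_and_grad(loss_fn)(params, apply_fn, images, labels)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    return optax.apply_updates(params, updates), opt_state, loss

# Initialize once with opt_state = optimizer.init(params); `apply_fn` stands in
# for whatever Soft MoE model you instantiate.
```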
