According to the theory, an image is first transformed into tokens (patches), which are then combined into slots by the dispatch weights. On the image side, where in the code is the original image segmented into patches, i.e. turned into tokens? For example, with a 32x32 image, a sequence length of 16 (meaning 16 slots), and 16 experts, the image seems to be transformed into slots directly; we don't see the image-to-token step anywhere in the code.

In short, it looks like each slot depends on every pixel of the original image, not on patches segmented from it.
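For context, the image-to-token step being asked about is the standard ViT-style patchify. A minimal sketch, assuming a single-channel 32x32 image and an 8x8 patch size (these numbers are illustrative, not taken from the repo):

```python
import numpy as np

# Hypothetical patchify sketch: a 32x32 single-channel image split into
# (32 // 8) ** 2 = 16 non-overlapping 8x8 patches, each flattened into
# a token vector. Patch size is an assumption for this example.
image = np.arange(32 * 32, dtype=np.float32).reshape(32, 32)

p = 8  # patch side length
patches = image.reshape(32 // p, p, 32 // p, p).transpose(0, 2, 1, 3)
tokens = patches.reshape(-1, p * p)  # shape: (num_tokens, token_dim) = (16, 64)
```

Each row of `tokens` corresponds to exactly one spatial patch, which is what distinguishes tokens from slots: the slot computation that follows mixes all tokens together.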
Also, which loss function and optimizer do you usually use with Soft MoE? We want to train on about 50,000 images with Soft MoE on 2x RTX 4090 GPUs.
and line 348 in the same file). It has nothing to do with Soft MoEs.
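For reference, the tokens-to-slots step of Soft MoE can be sketched as a softmax-weighted combination of all tokens (the parameter name `Phi` and the toy sizes are illustrative assumptions, not names from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 16, 64, 16           # tokens, token dim, slots (toy sizes from the question)
X = rng.normal(size=(n, d))    # token matrix (output of the patchify step)
Phi = rng.normal(size=(d, m))  # learnable per-slot parameters (assumed name)

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

logits = X @ Phi             # (n, m): token-slot affinities
D = softmax(logits, axis=0)  # dispatch weights: normalized over tokens, per slot
slots = D.T @ X              # (m, d): each slot mixes ALL tokens
```

This also illustrates the point above: every slot is a weighted average over all tokens, so there is no per-slot patch segmentation to find in the Soft MoE code itself.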
We typically use cross-entropy loss and the Adam optimizer. I've never trained an MoE with so few images; MoEs are especially useful when you are parameter-bounded, not data-bounded.
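A rough NumPy sketch of the two named pieces, softmax cross-entropy and a single Adam update step (hyperparameters are common defaults, not recommendations for this setup):

```python
import numpy as np

def cross_entropy(logits, labels):
    # logits: (batch, classes); labels: (batch,) integer class ids
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update: biased first/second moment estimates, bias
    # correction, then a scaled gradient step.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

loss = cross_entropy(np.array([[2.0, 0.5, -1.0]]), np.array([0]))
```

In practice you would use a framework's built-in versions (e.g. `torch.nn.CrossEntropyLoss` and `torch.optim.Adam`) rather than hand-rolled ones; this is only to make the answer concrete.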