The goal is to compare the different [architecture options](https://github.com/MedARC-AI/fmri-fm/blob/b3d97ca3b5bd5ce003628236811136d56c7f4ece/src/flat_mae/models_mae.py#L437) for the decoder:

- `attn`: standard MAE self-attention decoding
- `cross`: cross-attention decoding following [CrossMAE](https://crossmae.github.io/)
- `crossreg`: cross-*register* decoding, inspired by [MAETok](https://arxiv.org/abs/2502.03444)
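The key difference between the three options is which tokens the decoder's queries attend to. A minimal numpy sketch of the three attention patterns (illustrative only; token counts, dimensions, and the single-head scaled dot-product attention here are assumptions, not the repo's actual implementation):

```python
import numpy as np

def attention(q, k, v):
    # Single-head scaled dot-product attention: softmax(QK^T / sqrt(d)) V.
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 16                                    # hypothetical decoder dim
visible = rng.standard_normal((49, d))    # encoder outputs for unmasked patches
masked = rng.standard_normal((147, d))    # learned mask tokens at masked positions
regs = rng.standard_normal((4, d))        # small set of register tokens (MAETok-style)

# `attn`: self-attention over the full sequence (visible + mask tokens);
# every token is both a query and a key/value.
full = np.concatenate([visible, masked])
out_attn = attention(full, full, full)          # -> (196, d)

# `cross`: only mask tokens are queries; keys/values come from the
# visible encoder tokens (CrossMAE).
out_cross = attention(masked, visible, visible)  # -> (147, d)

# `crossreg`: mask-token queries cross-attend to a handful of
# register tokens instead of the visible patches.
out_crossreg = attention(masked, regs, regs)     # -> (147, d)
```

Note the cost asymmetry this sketch makes visible: `attn` scores a full 196x196 attention map, while `cross` and `crossreg` shrink the key set to 49 and 4 tokens respectively.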