# calotron

Transformer-based models to flash-simulate the LHCb ECAL detector


## Transformers

| Models | Implementation | Generative ability* | Test | Design inspired by |
|:-------|:--------------:|:-------------------:|:----:|:------------------:|
| Transformer | | | | 1, 4 |
| OptionalTransformer | | | | 1, 4 |
| MaskedTransformer | 🛠️ | | | |
| GigaGenerator | | | | 5, 6 |

*TBA
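
The models above share an encoder-decoder scheme: the encoder self-attends over the reconstructed tracker particles, and the decoder cross-attends to the encoder output while generating the corresponding calorimeter clusters. The snippet below is a minimal, generic Keras sketch of that pattern, not the calotron API; all layer sizes and argument names are illustrative assumptions, and positional/sequence-order encodings are omitted for brevity.

```python
from tensorflow import keras

def build_transformer(source_depth=4,   # features per tracker particle (illustrative)
                      target_depth=3,   # features per calorimeter cluster (illustrative)
                      latent_dim=64,
                      num_heads=4):
    source = keras.Input(shape=(None, source_depth))  # padded sequence of tracker particles
    target = keras.Input(shape=(None, target_depth))  # clusters generated so far

    # Encoder: embed the particles and let them self-attend
    enc = keras.layers.Dense(latent_dim)(source)
    enc = keras.layers.LayerNormalization()(
        enc + keras.layers.MultiHeadAttention(num_heads, key_dim=latent_dim)(enc, enc)
    )

    # Decoder: causal self-attention on the clusters, then cross-attention to the encoder output
    dec = keras.layers.Dense(latent_dim)(target)
    dec = keras.layers.LayerNormalization()(
        dec + keras.layers.MultiHeadAttention(num_heads, key_dim=latent_dim)(
            dec, dec, use_causal_mask=True
        )
    )
    dec = keras.layers.LayerNormalization()(
        dec + keras.layers.MultiHeadAttention(num_heads, key_dim=latent_dim)(dec, enc)
    )

    # Project back to the cluster feature space
    output = keras.layers.Dense(target_depth)(dec)
    return keras.Model(inputs=[source, target], outputs=output)

model = build_transformer()
model.summary()
```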

## Discriminators

| Models | Algorithm | Implementation | Test | Design inspired by |
|:-------|:---------:|:--------------:|:----:|:------------------:|
| Discriminator | DeepSets | | | 2, 3 |
| PairwiseDiscriminator | DeepSets | | | 2, 3 |
| GNNDiscriminator | GNN | 🛠️ | | |
| GigaDiscriminator | Transformer | | | 5, 6, 7 |

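The DeepSets-based discriminators follow the recipe of references 2 and 3: each object in the event is embedded by a shared network, the embeddings are aggregated with a permutation-invariant sum, and a second network classifies the pooled representation as real or flash-simulated. The sketch below illustrates that structure in plain Keras; the class name and dimensions are illustrative assumptions and do not reproduce the calotron API.

```python
import tensorflow as tf
from tensorflow import keras

class DeepSetsDiscriminator(keras.Model):
    """Permutation-invariant classifier over a variable-length set of objects."""

    def __init__(self, latent_dim=64):
        super().__init__()
        # phi: shared embedding network applied to every object independently
        self.phi = keras.Sequential([
            keras.layers.Dense(latent_dim, activation="relu"),
            keras.layers.Dense(latent_dim, activation="relu"),
        ])
        # rho: event-level classifier acting on the aggregated representation
        self.rho = keras.Sequential([
            keras.layers.Dense(latent_dim, activation="relu"),
            keras.layers.Dense(1, activation="sigmoid"),
        ])

    def call(self, inputs):
        # inputs: (batch, n_objects, n_features)
        embedded = self.phi(inputs)
        # Sum pooling over the object axis makes the output order-independent
        pooled = tf.reduce_sum(embedded, axis=1)
        return self.rho(pooled)

disc = DeepSetsDiscriminator()
scores = disc(tf.random.normal((8, 20, 7)))  # 8 events, 20 objects with 7 features each
print(scores.shape)  # (8, 1)
```
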
## References

  1. A. Vaswani et al., "Attention Is All You Need", arXiv:1706.03762
  2. N.M. Hartman, M. Kagan and R. Teixeira De Lima, "Deep Sets for Flavor Tagging on the ATLAS Experiment", ATL-PHYS-PROC-2020-043
  3. M. Zaheer et al., "Deep Sets", arXiv:1703.06114
  4. L. Liu et al., "Understanding the Difficulty of Training Transformers", arXiv:2004.08249
  5. M. Kang et al., "Scaling up GANs for Text-to-Image Synthesis", arXiv:2303.05511
  6. K. Lee et al., "ViTGAN: Training GANs with Vision Transformers", arXiv:2107.04589
  7. H. Kim, G. Papamakarios and A. Mnih, "The Lipschitz Constant of Self-Attention", arXiv:2006.04710

## Credits

The Transformer implementation is loosely inspired by the TensorFlow tutorial *Neural machine translation with a Transformer and Keras* and by the Keras tutorial *Image classification with Vision Transformer*.