
Progressive-Multi-head-Self-attentive-Encoder-Decoder

Integration of progressive learning for incremental learning with a multi-headed self-attentive LSTM in an encoder-decoder architecture. The LSTM can be replaced with a GRU, or the architecture can be used with multi-head attention and progressive learning alone, without a recurrent layer. The implementation is built on Keras, the open-source deep learning framework, and uses its built-in LSTM layer.
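The combination described above can be sketched in Keras roughly as follows. This is a minimal, illustrative reconstruction, not the repository's actual code: the function name `build_model`, the vocabulary size, model width, and head count are all assumptions chosen for the example.

```python
# Hedged sketch of a multi-head self-attentive LSTM encoder-decoder in Keras.
# All names and hyperparameters here are illustrative assumptions, not the
# repository's actual configuration.
import numpy as np
from tensorflow.keras import layers, Model

def build_model(vocab_size=100, d_model=64, num_heads=4, seq_len=10):
    # Encoder: embedding -> LSTM (full sequence) -> multi-head self-attention.
    enc_in = layers.Input(shape=(seq_len,), name="encoder_input")
    x = layers.Embedding(vocab_size, d_model)(enc_in)
    enc_seq, state_h, state_c = layers.LSTM(
        d_model, return_sequences=True, return_state=True)(x)
    # Self-attention: the encoder sequence attends over itself.
    enc_seq = layers.MultiHeadAttention(
        num_heads=num_heads, key_dim=d_model // num_heads)(enc_seq, enc_seq)

    # Decoder: embedding -> LSTM seeded with encoder state -> cross-attention.
    dec_in = layers.Input(shape=(seq_len,), name="decoder_input")
    y = layers.Embedding(vocab_size, d_model)(dec_in)
    dec_seq = layers.LSTM(d_model, return_sequences=True)(
        y, initial_state=[state_h, state_c])
    # Cross-attention: decoder positions attend over the encoder sequence.
    dec_seq = layers.MultiHeadAttention(
        num_heads=num_heads, key_dim=d_model // num_heads)(dec_seq, enc_seq)
    out = layers.Dense(vocab_size, activation="softmax")(dec_seq)
    return Model([enc_in, dec_in], out)

model = build_model()
pred = model.predict([np.zeros((2, 10)), np.zeros((2, 10))], verbose=0)
print(pred.shape)  # one softmax distribution per decoder position
```

Swapping the recurrent layer is then a one-line change: replace `layers.LSTM` with `layers.GRU` (a GRU returns a single state, so only `state_h` would be passed to the decoder).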

References

  1. Saran Karthikeyan, Andreas Wendemuth, Ingo Siegert, Marcus Petersen, "Building of conversational agents using goal oriented dialog managers," Otto-von-Guericke-Universität Magdeburg and regiocom SE, Magdeburg, Germany, March 2020 (https://github.com/Naras-KS/Konversa)
  2. Ben Saunders, Necati Cihan Camgoz, Richard Bowden, "Progressive Transformers for End-to-End Sign Language Production," arXiv preprint arXiv:2004.14874, July 2020
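The progressive-learning idea behind the second reference, applied to incremental learning, amounts to freezing previously trained weights while adding new trainable capacity for each new task. A hedged Keras sketch of that mechanism, with layer names and sizes that are illustrative assumptions rather than the repository's actual code:

```python
# Hedged sketch of progressive (incremental) learning in Keras: freeze a
# previously trained model's layers, then attach a new trainable head so a
# new task can be learned without overwriting old weights. All names and
# sizes are illustrative assumptions.
from tensorflow.keras import layers, Model

# A small "previously trained" base model for task A.
base_in = layers.Input(shape=(16,))
base_hidden = layers.Dense(32, activation="relu", name="shared_hidden")(base_in)
base_out = layers.Dense(4, activation="softmax", name="task_a")(base_hidden)
base = Model(base_in, base_out)

# Progressive step: freeze every existing layer, then branch off a new
# trainable head for task B that reuses the frozen hidden features.
for layer in base.layers:
    layer.trainable = False
new_hidden = layers.Dense(32, activation="relu", name="task_b_hidden")(base_hidden)
new_out = layers.Dense(3, activation="softmax", name="task_b")(new_hidden)
progressive = Model(base_in, new_out)

# Only the new task-B layers remain trainable; task A's weights are intact.
trainable = [w.name for w in progressive.trainable_weights]
print(all("task_b" in n for n in trainable))  # True
```

Training `progressive` on task B therefore cannot degrade task A's performance, which is the core of the incremental-learning setup this repository combines with the attentive encoder-decoder.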
