BioWordVec & BioSentVec: pre-trained embeddings for biomedical words and sentences
Updated Aug 15, 2023 · Jupyter Notebook
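As a quick illustration of how a pre-trained sentence model like BioSentVec can be used, here is a minimal sketch with the sent2vec Python wrapper. The model filename and the example sentence are placeholders, not taken from the repository; input is assumed to be lowercased and tokenized as the BioSentVec tutorial recommends.

```python
import sent2vec

# Load the pre-trained sentence model (filename is a placeholder for the
# downloaded BioSentVec binary, which is several GB in size).
model = sent2vec.Sent2vecModel()
model.load_model('BioSentVec_model.bin')

# Embed a pre-tokenized, lowercased biomedical sentence into a dense vector.
vector = model.embed_sentence('breast cancer is the most common cancer in women .')
print(vector.shape)  # (1, dim); dim is 700 for the released BioSentVec model
```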
ICLR 2018 Quick-Thought vectors
How to encode sentences in a high-dimensional vector space, a.k.a. sentence embedding.
NLSC: unrestricted natural-language-based service composition middleware that uses sentence embeddings, named-entity recognition, and other NLP models.
Spanish Sentence Embeddings computed from large corpora using sent2vec.
Finding look-alike sentences by leveraging the semantic similarities that transformer models learn during pre-training. I've used cosine similarity as an angular distance measure applied over sent2vec embeddings.
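A minimal sketch of that idea, assuming the sent2vec Python wrapper with a placeholder model file, query, and candidate sentences (none of which come from the repository): embed the sentences, then rank candidates by cosine similarity to the query.

```python
import numpy as np
import sent2vec

# Load a pre-trained sent2vec model (path is a placeholder).
model = sent2vec.Sent2vecModel()
model.load_model('sent2vec_model.bin')

query = 'how do i reset my password ?'
candidates = [
    'steps to recover a forgotten password .',
    'the weather is nice today .',
    'resetting your account password .',
]

# Embed the query and the candidate pool; each row is a sentence vector.
q_vec = model.embed_sentence(query)         # shape (1, dim)
c_vecs = model.embed_sentences(candidates)  # shape (n, dim)

# Cosine similarity = dot product of L2-normalized vectors.
def normalize(x):
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-9)

sims = normalize(c_vecs) @ normalize(q_vec).T  # shape (n, 1)

# Print candidates ranked by similarity to the query, most similar first.
for i in np.argsort(-sims[:, 0]):
    print(f'{sims[i, 0]:.3f}  {candidates[i]}')
```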
An approach to improving word sense induction (WSI) systems for web search result clustering, exploring the boundaries of vector space models for the WSI task. The CHERTOY system, by Tatjana Chernenko and Utaemon Toyota, Institute for Computational Linguistics, Heidelberg University, 2017/2018.