GluonNLP is a toolkit that makes it easy to preprocess text, load datasets, and build neural models, helping you speed up your Natural Language Processing (NLP) research.
- Tutorial proposal for GluonNLP is accepted at EMNLP 2019, Hong Kong.
- GluonNLP was featured in:
- KDD 2019 Alaska! Check out our tutorial: From Shallow to Deep Language Representations: Pre-training, Fine-tuning, and Beyond.
- JSALT 2019 in Montreal, 2019-06-14! Check out https://jsalt19.mxnet.io.
- AWS re:Invent 2018 in Las Vegas, 2018-11-28! Check out the details.
- PyData 2018 NYC, 2018-10-18! Check out the awesome talk by Sneha Jha.
- KDD 2018 London, 2018-08-21, Apache MXNet Gluon tutorial! Check out https://kdd18.mxnet.io.
Make sure you have Python 3.5 or newer and a recent version of MXNet (our CI server runs the test suite with Python 3.5).

You can install MXNet and GluonNLP using pip. GluonNLP is based on the most recent version of MXNet.

In particular, if you want to install the most recent MXNet release:

pip install --upgrade "mxnet>=1.6.0"

Otherwise, if you want to install the most recent MXNet nightly build:

pip install --pre --upgrade mxnet

Then, you can install GluonNLP:

pip install gluonnlp
For more installation details, please check the documentation.
GluonNLP documentation is available at our website.
GluonNLP is a community that believes in sharing.
For questions, comments, and bug reports, GitHub issues are the best way to reach us.
We now have a new Slack channel here (register).
GluonNLP community welcomes contributions from anyone!
There are lots of opportunities for you to become a contributor:
- Ask or answer questions on GitHub issues.
- Propose ideas, or review proposed design ideas on GitHub issues.
- Improve the documentation.
- Contribute bug reports via GitHub issues.
- Write new scripts to reproduce state-of-the-art results.
- Write new examples to explain key ideas in NLP methods and models.
- Contribute new public datasets (license permitting).
- Most importantly, if you have an idea of how to contribute, then do it!
For a list of open starter tasks, check out the good first issues.
Also see our contributing guide for simple how-tos, contribution guidelines, and more.
Check out how to use GluonNLP for your own research or projects.
If you are new to Gluon, please check out our 60-minute crash course.
To get started quickly, refer to the runnable notebook examples in Examples.
For advanced examples, check out our Scripts.
For experienced users, check out our API Notes.
For example, load the WikiText-2 dataset:
>>> import gluonnlp as nlp
>>> train = nlp.data.WikiText2(segment='train')
>>> train[0:5]
['=', 'Valkyria', 'Chronicles', 'III', '=']
For example, build a vocabulary from the above dataset:
>>> vocab = nlp.Vocab(counter=nlp.data.Counter(train))
>>> vocab
Vocab(size=33280, unk="<unk>", reserved="['<pad>', '<bos>', '<eos>']")
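Conceptually, a vocabulary like the one printed above is a frequency-ranked mapping from tokens to integer indices, with reserved entries (such as `<unk>` and `<pad>`) placed first. Here is a minimal pure-Python sketch of that idea; this is an illustration only, not GluonNLP's actual implementation, and the toy corpus and helper name are made up:

```python
from collections import Counter

# Toy corpus: a flat list of tokens, like the one WikiText-2 yields above.
tokens = ['=', 'Valkyria', 'Chronicles', 'III', '=', 'the', 'the', 'the']

# Reserved tokens come first, mirroring the unk/pad/bos/eos entries above.
reserved = ['<unk>', '<pad>', '<bos>', '<eos>']

# Rank the remaining tokens by frequency, most common first.
counts = Counter(tokens)
itos = reserved + [tok for tok, _ in counts.most_common()]
stoi = {tok: idx for idx, tok in enumerate(itos)}

def lookup(token):
    """Map a token to its index, falling back to <unk> for unseen tokens."""
    return stoi.get(token, stoi['<unk>'])

print(lookup('the'))     # the most frequent corpus token, index 4
print(lookup('unseen'))  # falls back to the <unk> index, 0
```

Real GluonNLP vocabularies add frequency cutoffs, size limits, and embedding attachment on top of this basic token-to-index mapping.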
For example, apply a standard RNN language model from the model package to the above dataset:
>>> model = nlp.model.language_model.StandardRNN('lstm', len(vocab),
... 200, 200, 2, 0.5, True)
>>> model
StandardRNN(
(embedding): HybridSequential(
(0): Embedding(33280 -> 200, float32)
(1): Dropout(p = 0.5, axes=())
)
(encoder): LSTM(200 -> 200, TNC, num_layers=2, dropout=0.5)
(decoder): HybridSequential(
(0): Dense(200 -> 33280, linear)
)
)
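The positional arguments in the constructor call above appear to be embedding size, hidden size, number of layers, dropout, and weight tying; with tying enabled, the decoder reuses the 33280 x 200 embedding matrix and only adds an output bias. Assuming that argument order and MXNet's LSTM convention of separate input and recurrent bias vectors, a rough back-of-the-envelope parameter count looks like this (illustrative only):

```python
vocab_size, embed_size, hidden_size, num_layers = 33280, 200, 200, 2

# Embedding matrix: one embed_size vector per vocabulary entry.
embedding_params = vocab_size * embed_size

def lstm_layer_params(input_size, hidden_size):
    """Each LSTM layer has 4 gates; each gate has an input weight matrix,
    a recurrent weight matrix, and two bias vectors (MXNet keeps separate
    input-to-hidden and hidden-to-hidden biases)."""
    return 4 * (input_size * hidden_size + hidden_size * hidden_size + 2 * hidden_size)

encoder_params = sum(
    lstm_layer_params(embed_size if layer == 0 else hidden_size, hidden_size)
    for layer in range(num_layers)
)

# With tied weights the decoder shares the embedding matrix,
# contributing only an output bias of vocab_size.
decoder_params = vocab_size

total = embedding_params + encoder_params + decoder_params
print(total)  # roughly 7.3M parameters under these assumptions
```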
For example, load a GloVe word embedding, one of the state-of-the-art English word embeddings:
>>> glove = nlp.embedding.create('glove', source='glove.6B.50d')
# Obtain vectors for 'baby' in the GloVe word embedding
>>> type(glove['baby'])
<class 'mxnet.ndarray.ndarray.NDArray'>
>>> glove['baby'].shape
(50,)
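Word vectors like these are most often compared with cosine similarity: semantically related words tend to have vectors pointing in similar directions. A small self-contained sketch on made-up toy vectors (plain Python, no MXNet; real GloVe vectors would be the 50-dimensional NDArrays shown above):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings", invented for illustration.
baby = [0.2, 0.9, 0.1]
infant = [0.25, 0.85, 0.05]
car = [0.9, 0.1, 0.4]

print(cosine_similarity(baby, infant))  # close to 1: related words
print(cosine_similarity(baby, car))     # much smaller: unrelated words
```

With GluonNLP embeddings, the same computation can be done on the NDArrays returned by indexing the embedding, e.g. via `mxnet.nd.dot` and `mxnet.nd.norm`.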
The BibTeX entry for the reference paper of GluonNLP is:
@article{gluoncvnlp2019,
  title={GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing},
  author={Guo, Jian and He, He and He, Tong and Lausen, Leonard and Li, Mu and Lin, Haibin and Shi, Xingjian and Wang, Chenguang and Xie, Junyuan and Zha, Sheng and Zhang, Aston and Zhang, Hang and Zhang, Zhi and Zhang, Zhongyue and Zheng, Shuai},
  journal={arXiv preprint arXiv:1907.04433},
  year={2019}
}
For background knowledge of deep learning or NLP, please refer to the open source book Dive into Deep Learning.