Lexical simplification (LS) aims to replace complex words in a given sentence with simpler alternatives of equivalent meaning. Recent unsupervised lexical simplification approaches rely only on the complex word itself, regardless of the given sentence, to generate candidate substitutions, which inevitably produces a large number of spurious candidates. We present a simple LS approach that makes use of BERT's pre-trained unsupervised deep bidirectional representations. We feed the given sentence, with the complex word masked, into the masked language model of BERT to generate candidate substitutions. Because the whole sentence is taken into account, the generated alternatives are more likely to preserve the cohesion and coherence of the sentence. Experimental results show that our approach obtains clear improvements on standard LS benchmarks.
- FastText (pre-trained FastText word embeddings)
- BERT based on PyTorch (pytorch-transformers)
We recommend Python 3.5 or higher. The model is implemented with PyTorch 1.0.1 using pytorch-transformers v1.0.0.
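The following is a minimal sketch (not part of the project's scripts) for checking that the versions mentioned above are installed and importable:

```python
# Quick environment check: import the two libraries this project depends on
# and print their versions.
import torch
import pytorch_transformers

print(torch.__version__)                 # expected: 1.0.1
print(pytorch_transformers.__version__)  # expected: 1.0.0
```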
(1) Download the code of BERT based on PyTorch (pytorch-transformers). In our experiments, we adopted the pre-trained BERT-Large, Uncased (Whole Word Masking) model.
(2) Copy the files provided by this project into the main directory of BERT.
(3) Download the pre-trained FastText word embeddings.
(4) Run "./run_LS_BERT.sh".
Suppose we have the sentence "the cat perched on the mat" and the complex word "perched". Let S be the original sentence and S' be a copy of S in which the complex word is replaced by the [MASK] token. We concatenate S and S' as a sentence pair and feed {S, S'} into BERT to obtain the probability distribution over the vocabulary at the masked position. Finally, we select the top words from this distribution as simplification candidates, excluding morphological derivations of the complex word. For this example, the top three simplification candidates are "sat", "seated", and "hopped".
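The sketch below illustrates this candidate-generation step with pytorch-transformers. It is a minimal illustration, not the authors' exact implementation: it assumes for simplicity that the complex word is tokenized as a single WordPiece, and it omits the filtering of morphological derivations.

```python
# Minimal sketch: generate substitution candidates for a complex word
# by feeding the sentence pair {S, S'} into BERT's masked language model.
import torch
from pytorch_transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
model = BertForMaskedLM.from_pretrained('bert-large-uncased-whole-word-masking')
model.eval()

sentence = "the cat perched on the mat"
complex_word = "perched"

# S is the original sentence; S' replaces the complex word with [MASK].
# (Assumes the complex word maps to a single WordPiece token.)
tokens_a = tokenizer.tokenize(sentence)
tokens_b = ['[MASK]' if t == complex_word else t for t in tokens_a]
tokens = ['[CLS]'] + tokens_a + ['[SEP]'] + tokens_b + ['[SEP]']
segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
mask_index = tokens.index('[MASK]')

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
segment_tensor = torch.tensor([segment_ids])

with torch.no_grad():
    # First element of the output tuple holds the prediction scores.
    predictions = model(input_ids, token_type_ids=segment_tensor)[0]

# Top candidates at the masked position; a full implementation would also
# exclude morphological derivations of the complex word.
_, top_ids = torch.topk(predictions[0, mask_index], 10)
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```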
Comparison of the simplification candidates produced by three methods. Given the sentence "John composed these verses." and the complex words 'composed' and 'verses', the top three simplification candidates for each complex word are generated by our method BERT-LS and by two state-of-the-art baselines based on word embeddings (Glavas and Paetzold-NE). The top three candidates generated by BERT-LS are not only related to the complex words but also fit the original sentence well. Then, by considering the frequency or rank of each candidate, we can easily choose 'wrote' as the replacement for 'composed' and 'poems' as the replacement for 'verses'. The simplified sentence 'John wrote these poems.' is easier to understand than the original sentence.
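As a hedged illustration of the frequency-based choice described above, the sketch below re-ranks candidates by their counts in a word-frequency table. The `word_freq` values and the candidate lists are made-up placeholders, not the project's actual candidates or ranking features.

```python
# Illustrative only: pick the substitution with the highest corpus frequency.
# The counts below are fabricated placeholders; a real system would use
# frequencies from a large corpus.
word_freq = {'wrote': 1200000, 'translated': 90000, 'dictated': 15000,
             'poems': 450000, 'lyrics': 300000, 'rhymes': 60000}

def pick_simplest(candidates, freq):
    """Return the candidate with the highest corpus frequency."""
    return max(candidates, key=lambda w: freq.get(w, 0))

print(pick_simplest(['wrote', 'translated', 'dictated'], word_freq))  # wrote
print(pick_simplest(['poems', 'lyrics', 'rhymes'], word_freq))        # poems
```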
@article{qiang2019BERTLS,
title = {A Simple BERT-Based Approach for Lexical Simplification},
author = {Qiang, Jipeng and
Li, Yun and
Zhu, Yi and
Yuan, Yunhao and
Wu, Xindong},
journal = {arXiv preprint arXiv:1907.06226},
year = {2019}
}