The MoRe module was proposed in the Findings of EMNLP 2022 paper: Named Entity and Relation Extraction with Multi-Modal Retrieval. It aims to improve the performance of multi-modal NER and RE by retrieving text and images related to the input and using them as additional context.
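As a rough illustration of the retrieval-augmentation idea, the sketch below concatenates retrieved knowledge after the input sentence before tagging. The helper name, the plain [SEP] concatenation, and the example contexts are assumptions for illustration only, not the repository's actual API:

```python
from typing import List

def build_augmented_input(sentence: str, contexts: List[str], sep: str = " [SEP] ") -> str:
    """Concatenate retrieved knowledge after the original sentence.

    In a retrieval-augmented tagger, the encoder reads the whole sequence,
    while labels are predicted only for the original sentence tokens.
    """
    if not contexts:
        return sentence
    return sentence + sep + sep.join(contexts)

# Contexts would come from a text retriever over a knowledge corpus
# (MoRe-Text) or from captions of retrieved images (MoRe-Image);
# both retrievers are hypothetical here.
contexts = ["Michael Jeffrey Jordan is an American former professional basketball player."]
model_input = build_augmented_input("Michael Jordan wins again !", contexts)
print(model_input)
```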
To make the code easier to run, you can find our pre-processed datasets at modelscope/datasets/MoRE-processed-data.
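For example, the data could be pulled with the ModelScope SDK. The dataset id below is inferred from the path above and may need adjusting to the dataset's actual id and namespace on modelscope.cn:

```python
# Sketch: download the pre-processed data via the ModelScope SDK.
from modelscope.msdatasets import MsDataset

# 'MoRE-processed-data' is taken from the path above; verify the exact
# dataset id and namespace on modelscope.cn before running.
ds = MsDataset.load('MoRE-processed-data')
print(ds)
```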
```bash
# train the baseline model
python -m scripts.train -c examples/MoRe/configs/twitter-17.yaml

# train model with image retrieval
python -m scripts.train -c examples/MoRe/configs/twitter-17-img.yaml

# train model with text retrieval
python -m scripts.train -c examples/MoRe/configs/twitter-17-txt.yaml
```
The related config files are listed in examples/MoRe/configs.
Coming soon.
F1 scores on four multi-modal NER datasets:

| Model | Twitter-15 | Twitter-17 | SNAP | WikiDiverse |
|---|---|---|---|---|
| Wu et al., 2020 | 72.92 | - | - | - |
| Yu et al., 2020 | 73.41 | 85.31 | - | - |
| Sun et al., 2020 | 73.80 | - | 86.80 | - |
| Sun et al., 2021 | 74.90 | - | 87.80 | - |
| Zhang et al., 2021 | 74.85 | 85.51 | - | - |
| Wang et al., 2022 | 78.03 | 89.75 | 90.15 | 76.87 |
| Ours: Baseline | 77.04 | 89.11 | 89.65 | 76.58 |
| MoRe-Text | 77.79 | 89.49 | 90.06 | 78.29 |
| MoRe-Image | 77.57 | 90.28 | 90.46 | 77.81 |
If you find the code helpful, please cite:
```bibtex
@article{Wang2022NamedEA,
  title={Named Entity and Relation Extraction with Multi-Modal Retrieval},
  author={Xinyu Wang and Jiong Cai and Yong Jiang and Pengjun Xie and Kewei Tu and Wei Lu},
  journal={ArXiv},
  year={2022},
  volume={abs/2212.01612}
}
```