
Bert baseline #5

Open
sdzhangbo opened this issue Jul 22, 2021 · 2 comments

Comments

@sdzhangbo

Hi, I ran an experiment using only the BERT (semantic) information (model_type = bert, with_res = no) and found that correction on the 2015 test set can reach 76, almost on par with the ReaLiSe model I ran (~77). Is my BERT experiment configuration correct? If so, your code and data seem to give much higher performance than other released implementations. What optimizations did you make? (So far I've noticed you do some handling of UNK cases in the data.) Thanks.

@piglaker

His preprocessing removes the UNK cases, bugs like "牠" (traditional characters that should be converted to simplified), and some punctuation. Another important trick is evaluating every 1000 steps and picking the best checkpoint, which honestly feels a bit unsporting. After applying these two tricks I consistently get 77, but I still can't reproduce his 78. If anyone has managed to reproduce it, please let me know how. Thanks.
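The two tricks described above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: the function names, the variant-character map, and the eval interval are all assumptions made for the example.

```python
# Hedged sketch of the two tricks discussed in this thread (hypothetical names).

# Assumed mapping of traditional/variant characters to simplified forms;
# the real preprocessing may cover many more characters.
CHAR_MAP = {"牠": "它"}

def normalize(text: str) -> str:
    """Drop the [UNK] placeholder and map variant characters to simplified forms."""
    text = text.replace("[UNK]", "")
    for variant, simplified in CHAR_MAP.items():
        text = text.replace(variant, simplified)
    return text

def best_checkpoint(eval_scores):
    """Given (step, dev_score) pairs collected every N training steps
    (e.g. every 1000), return the step with the highest dev score."""
    return max(eval_scores, key=lambda pair: pair[1])[0]

print(normalize("牠是[UNK]好"))                                   # 它是好
print(best_checkpoint([(1000, 0.75), (2000, 0.77), (3000, 0.76)]))  # 2000
```

Selecting the checkpoint with the best dev-set score, rather than simply taking the final one, is the "eval every 1000 steps" trick the comment refers to.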

@Neutralzz

Neutralzz commented Apr 28, 2022

@piglaker
Please refer to this issue.
