Hi, I ran an experiment using only the BERT (semantic) information (model type = bert, with_res=no) and found that correction performance on the SIGHAN15 test set reaches 76, almost the same as the ReaLiSe model I ran (~77). Is my BERT experiment configuration correct? If it is, your code and data seem to give much higher performance than other released implementations. What optimizations did you make? (So far I can see that you do some special handling of unk cases in the data.) Thanks!
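(For reference, a BERT-only CSC baseline is usually implemented as per-position character prediction with a masked-LM head over the unmasked input. Below is a minimal sketch of that setup using the standard HuggingFace API; the `bert-base-chinese` checkpoint and the decoding step are illustrative assumptions, not this repo's exact code.)

```python
# Minimal sketch of a BERT-only CSC baseline (illustrative, not this repo's
# exact code): run the unmasked sentence through a masked-LM head and take
# the argmax character at every position.
import torch
from transformers import BertTokenizer, BertForMaskedLM

# "bert-base-chinese" is an assumed checkpoint; the repo may use another.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")
model.eval()

def correct(sentence: str) -> str:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits          # (1, seq_len, vocab_size)
    pred_ids = logits.argmax(dim=-1)[0]
    # Drop [CLS]/[SEP] and join the predicted characters.
    return "".join(tokenizer.convert_ids_to_tokens(pred_ids[1:-1].tolist()))
```

After fine-tuning on CSC training data, the per-position argmax yields the corrected sentence; without fine-tuning the model mostly just copies its input.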
His preprocessing strips out unk cases and fixes the traditional-to-simplified bug with characters like "牠", plus some punctuation issues. Another important trick is evaluating every 1000 steps and keeping the checkpoint with the best score, which honestly feels a bit unsporting. With these two tricks I get a stable 77, but I still haven't reproduced his 78. If anyone has managed to reproduce it, please tell me how. Thanks!
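To make the two tricks concrete, here is a rough sketch of what they amount to; everything below is an assumption for illustration, not the repo's actual code. Only the 牠→它 fix is stated in the thread; the punctuation mapping, vocabulary filtering, and metric key are guesses.

```python
# Sketch of the preprocessing described above (assumed details).
TRAD_TO_SIMP = {"牠": "它"}                      # the specific bug mentioned above
PUNCT_MAP = {",": ",", "!": "!", "?": "?"}     # half-width -> full-width; direction assumed

def clean(text: str, vocab: set) -> str:
    out = []
    for ch in text:
        ch = TRAD_TO_SIMP.get(ch, ch)
        ch = PUNCT_MAP.get(ch, ch)
        if ch in vocab:   # skip (or otherwise handle) chars the tokenizer would map to [UNK]
            out.append(ch)
    return "".join(out)
```

The evaluate-every-1000-steps, keep-the-best-checkpoint trick corresponds to standard HuggingFace `Trainer` settings, assuming the training loop is compatible with that API:

```python
from transformers import TrainingArguments

# Evaluate every 1000 steps and reload the best checkpoint at the end.
# "eval_f1" is an assumed metric key; substitute whatever the eval loop reports.
args = TrainingArguments(
    output_dir="outputs",
    evaluation_strategy="steps",
    eval_steps=1000,
    save_strategy="steps",
    save_steps=1000,
    load_best_model_at_end=True,
    metric_for_best_model="eval_f1",
    greater_is_better=True,
)
```

Note that if the "best" checkpoint is selected on the test set rather than a dev set, this is effectively test-set tuning, which is presumably why it feels unsporting; reporting the final checkpoint or selecting on a held-out dev set is the stricter protocol.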
@piglaker Please refer to this issue.