We are using the MeCab tokenizer to split Japanese sentences into individual words.

Issue: the word 食べてしまいます gets split into separate morphemes (食べ / て / しまい / ます), which is rather difficult for readers to understand.

Ideally we should use a better parser that understands Japanese conjugation at a higher level.
kuromoji can likely fix this issue; it should be a more advanced tokenizer:
https://github.com/atilika/kuromoji (Apache License)
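
A minimal sketch of trying kuromoji on the problem word, assuming the `kuromoji-ipadic` artifact from the repo above (the printed fields are just for illustration):

```java
import com.atilika.kuromoji.ipadic.Token;
import com.atilika.kuromoji.ipadic.Tokenizer;

import java.util.List;

public class TokenizeExample {
    public static void main(String[] args) {
        // The default constructor loads the bundled IPADIC dictionary.
        Tokenizer tokenizer = new Tokenizer();

        // The problem word from this issue.
        List<Token> tokens = tokenizer.tokenize("食べてしまいます");

        for (Token token : tokens) {
            // Surface form, base (dictionary) form, and conjugation form.
            // Features that don't apply to a token are reported as "*".
            System.out.println(token.getSurface()
                    + "\t" + token.getBaseForm()
                    + "\t" + token.getConjugationForm());
        }
    }
}
```

One caveat: kuromoji also ships with IPADIC by default, so out of the box it may segment 食べてしまいます into the same morphemes MeCab does. The base-form and conjugation features it exposes per token are what we would need in order to merge auxiliary morphemes like て / しまい / ます back onto the main verb for readers; whether that actually produces better splits here would need to be verified.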