Transcripts are Insufficient for Japanese #49
Comments
FYI: Chromium/Chrome implementation of Web Speech API. Have you filed a Chromium/Chrome bug for the issue?
@guest271314 But the problem is that the specification as it stands is inadequate for Japanese. There is no bug in the Chrome or Chromium implementation. I'm not sure what recording the voice and sending it to a service has to do with my stated problem here.
@mikob Chrome/Chromium source code does not contain any code which produces a transcript. The recorded voice (when a user utters into their microphone) is sent to an external web service, which returns the transcripts that you are referring to. The external service is responsible for the resulting transcript. Can you point to a particular portion of the Web Speech API which specifically addresses handling Japanese? I would suggest filing a Chromium bug, then evaluating the response for yourself. You might be interested in:
Good luck!
@guest271314 Sorry for not being clear. What I'm trying to get at here is that a simple transcript is insufficient for Japanese. No, there's nothing I can point to in the spec about Japanese, but I think Japanese (and, I suspect, other languages) needs additional consideration in a Web Speech API specification that aims to be fully international. Since a Japanese transcript can have multiple readings depending on context, I think the spec should address this special case by defining how Japanese is handled. One possible solution is what I mentioned in the OP. Hope this makes more sense.
@mikob This is a link to the mailing list: https://lists.w3.org/Archives/Public/public-speech-api/. From the perspective here, filing an issue at GitHub to change the Web Speech API specification, and actually getting any prospective changes implemented in browsers, has proven far more time-consuming than simply rolling your own (RYO). For example, see
Compare writing the code oneself
et al., where the author does not need to wait on a response that may or may not come, or an implementation that may never come. Your proposal makes sense. However, from the perspective here, TTS/STT development is presently in a proprietary-first phase. If you have decided to proceed down the route of asking for changes to the specification, I would suggest again, as above, posting bugs at each of the browser bug trackers and posting to the mailing lists - but not waiting on a response - since you can roll your own in the meantime, to your own exacting specification, given that the technology exists to do so.
Currently, the only implementation of the spec that I'm aware of, in Google Chrome, returns a mixture of surface kanji and kana when using Japanese speech-to-text.
Having kanji is great for semantics, but in Japanese, phonology is important in many cases. Without the phonology, ambiguity is left in the intended transcription.
A whole class of use cases becomes impossible. For example:
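To make the ambiguity concrete, here is a small illustrative sketch (the `readings` table is mine, not part of any API): some kanji surface forms that a recognizer might return have more than one common reading, so the transcript alone does not determine what was actually said.

```typescript
// Hypothetical illustration, not part of the Web Speech API: a few kanji
// surface forms mapped to their multiple common readings.
const readings: Record<string, string[]> = {
  "辛い": ["からい", "つらい"], // karai "spicy" vs. tsurai "painful"
  "人気": ["にんき", "ひとけ"], // ninki "popularity" vs. hitoke "signs of people"
};

// A bare transcript, such as result[0][0].transcript, gives only the surface form:
const transcript = "辛い";

// Without phonetic information, every reading remains a valid candidate:
const candidates = readings[transcript];
console.log(candidates); // more than one possible pronunciation survives
```

A reading-practice app, for example, cannot check a learner's pronunciation against such a transcript, because both candidates are "correct" spellings of different utterances.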
Ideally we would always get the furigana (the kana that represent the sounds that were made for the kanji) along with the kanji. Kana has, in essence, a one-to-one phonetic mapping.
Better yet, a more general solution that would work for other languages with similar issues could be returning an IPA pronunciation along with the transcript, like so:
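As a sketch of what that might look like: the interface below extends the shape of a recognition alternative with a phonetic field. Note that `ExtendedSpeechRecognitionAlternative` and the `pronunciation` field are hypothetical names invented for this proposal; only `transcript` and `confidence` exist in the Web Speech API today.

```typescript
// Hypothetical sketch only: neither this interface nor the "pronunciation"
// field exists in the Web Speech API. It illustrates pairing the transcript
// with a phonetic rendering (IPA, or kana for Japanese).
interface ExtendedSpeechRecognitionAlternative {
  transcript: string;      // surface form, mixed kanji and kana (as today)
  confidence: number;      // as in the current spec
  pronunciation?: string;  // e.g. IPA, or furigana/kana for Japanese
}

// What a recognizer might return for an utterance of "hashi" ("bridge"):
const alt: ExtendedSpeechRecognitionAlternative = {
  transcript: "橋",
  confidence: 0.92,
  pronunciation: "haɕi", // IPA; a kana variant could be "はし"
};

console.log(alt.transcript, alt.pronunciation);
```

An IPA field would generalize beyond Japanese to any language where the orthography underdetermines the pronunciation.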