Transcripts are Insufficient for Japanese #49

Open
mikob opened this issue Dec 12, 2018 · 5 comments

Comments

@mikob

mikob commented Dec 12, 2018

Currently, the only implementation of the spec that I'm aware of, in Google Chrome, returns a mixture of surface kanji and kana when using Japanese speech-to-text.

Having kanji is great for semantics, but in Japanese, phonology is important in many cases. Without it, the intended transcription remains ambiguous.

A whole class of use cases becomes impossible. For example:

  • Japanese names are written in kanji, but those kanji can have multiple readings, only one of which maps to a given individual.
  • If I utter てい, the Google Web Speech API returns 体. This kanji has four readings, and てい could also be the reading of many other kanji. (I'm building a voice-enabled Japanese flashcard app, and this shortcoming means I can only guess whether a user is uttering a kanji/vocab item correctly; I cannot be sure. See the sketch after this list.)
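
A minimal sketch of that flashcard check, using the prefixed webkitSpeechRecognition that Chrome ships today. The expected reading てい and the comparison logic are illustrative assumptions, not part of any real app:

// Checking whether a learner said the expected reading てい.
// With today's API the transcript comes back as surface kanji (e.g. 体),
// so a plain string comparison against the expected kana fails even
// when the utterance was correct.
const recognition = new webkitSpeechRecognition();
recognition.lang = 'ja-JP';

const expectedReading = 'てい'; // kana the flashcard expects (illustrative)

recognition.onresult = (event) => {
  const transcript = event.results[0][0].transcript; // e.g. "体"
  // The result carries no phonetic information, so this cannot distinguish
  // a correct utterance from an incorrect one that maps to the same kanji.
  console.log(transcript, transcript === expectedReading);
};

recognition.start();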

Ideally we would always get the furigana (the kana that represent the sounds used for the kanji) along with the kanji. Kana has, in essence, a 1-to-1 phonetic mapping.

Better yet, a more general solution that would also work for other languages with similar issues could be to return an IPA pronunciation along with the transcript, like so:

interface SpeechRecognitionAlternative {
    readonly attribute DOMString transcript;
    readonly attribute DOMString pronunciation;
    readonly attribute float confidence;
};
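
If such a field existed, the flashcard check above would become a direct comparison against kana (or IPA). This is a hypothetical usage sketch of the proposed pronunciation attribute; neither the attribute nor its format exists in the current spec or in any browser:

// Hypothetical: consuming the proposed `pronunciation` attribute.
// Shown only to illustrate the intended usage of the proposal above.
const recognition = new webkitSpeechRecognition();
recognition.lang = 'ja-JP';

const expectedReading = 'てい'; // kana the flashcard expects (illustrative)

recognition.onresult = (event) => {
  const alternative = event.results[0][0];
  console.log(alternative.transcript);    // e.g. "体"
  console.log(alternative.pronunciation); // e.g. "てい" (proposed field, not implemented)
  console.log(alternative.pronunciation === expectedReading);
};

recognition.start();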
@guest271314

FYI, the Chromium/Chrome implementation of the Web Speech API (webkitSpeechRecognition) currently records the user's voice and sends that recording to an external service. See #41 (comment); https://bugs.chromium.org/p/chromium/issues/detail?id=816095.

Have you filed a Chromium/Chrome bug for the issue?

@mikob
Author

mikob commented Jan 6, 2019

@guest271314 But the problem is that the specification as it stands is inadequate for Japanese. There is no bug in the Chrome or Chromium implementation. I'm not sure what recording the voice and sending it to a service has to do with the problem I stated here.

@guest271314

@mikob The Chrome/Chromium source code does not contain any code that produces a transcript. The recorded voice (when a user speaks into their microphone) is sent to an external web service, which returns the transcripts you are referring to. That external service, not the specification, is responsible for the resulting transcript when using webkitSpeechRecognition in Chromium/Chrome.

Can you point to a particular portion of the Web Speech API which specifically addresses handling Japanese?

I would suggest filing a Chromium bug and then evaluating the response for yourself.

You might be interested in

Good luck!

@mikob
Author

mikob commented Jan 6, 2019

@guest271314 Sorry for not being clear. What I'm trying to get at is that a simple transcript is insufficient for Japanese. No, there's nothing I can point to in the spec about Japanese, but I think Japanese (and, I suspect, other languages) needs additional consideration in a Web Speech API specification that aims to be fully international.

Since a Japanese transcript can have multiple readings depending on context, I think the spec should address this special case by defining how Japanese is handled. One possible solution is what I mentioned in the OP.

Hope this makes more sense.

@guest271314

@mikob This is a link to the mailing list: https://lists.w3.org/Archives/Public/public-speech-api/. From my perspective, filing an issue here at GitHub to change the Web Speech API specification and actually getting any prospective changes implemented in browsers has proven far more time-consuming than simply rolling your own (RYO).

For example, compare writing the code oneself, where the author does not need to wait on a response that may or may not come, or an implementation that may never come.

Your proposal makes sense. However, from my perspective, TTS/STT development is presently in a proprietary-first phase.

If you have decided to proceed down the route of asking for changes to the specification, I would suggest again, as above, posting bugs on each of the browser bug trackers and posting to the mailing lists, but not waiting on a response, since you can roll your own in the meantime to your own exact specification, given that the technology exists to do so.
