
all possible alignments #28

Open
vinaychetnani opened this issue Apr 30, 2018 · 0 comments

Comments

@vinaychetnani

As described in Alex Graves's CTC paper, the probabilities of all possible alignments are summed using dynamic programming, which yields the probability of a given transcription. Neither the code nor the TensorFlow documentation states explicitly where this part happens. Could you help me and tell me in which of the following functions this dynamic-programming step takes place:

  1. tf.nn.ctc_loss
  2. tf.nn.ctc_beam_search_decoder
  3. tf.nn.ctc_greedy_decoder
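
For context, the summation over alignments the paper describes is the CTC forward (alpha) recursion. Below is a minimal NumPy sketch of that recursion, not TensorFlow's actual implementation; note also that the blank is assumed to be index 0 here for simplicity, whereas tf.nn.ctc_loss reserves the last class index for the blank by default:

```python
import numpy as np

def ctc_forward_prob(probs, labels, blank=0):
    """Sum the probabilities of all alignments of `labels` using the
    CTC forward (alpha) dynamic program from Graves et al.

    probs:  (T, C) array of per-frame class probabilities.
    labels: target label sequence, without blanks.
    """
    T = probs.shape[0]
    # Extend the label sequence with blanks: [b, l1, b, l2, ..., b].
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S = len(ext)

    alpha = np.zeros((T, S))
    # A valid alignment starts with either a blank or the first label.
    alpha[0, 0] = probs[0, ext[0]]
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]

    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]                      # stay on the same symbol
            if s >= 1:
                a += alpha[t - 1, s - 1]             # advance by one
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]             # skip a blank between distinct labels
            alpha[t, s] = a * probs[t, ext[s]]

    # A valid alignment ends on the last label or the trailing blank.
    return alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)
```

This computes, in O(T·S) time, the same quantity as brute-force enumeration of every length-T path that collapses to the target, which is what makes the loss tractable.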