Hey guys, right now I'm splitting my audio into channels using ffmpeg and numpy; after that I send each channel to `BatchedInferencePipeline.transcribe` for transcription.
But I was looking at the `transcribe.py` module and found a method named `audio_split`. Does it do the same process of separating audio into channels? I can't find any documentation or usage of it. Also, I don't get why segments should be passed as a parameter, since segments are generated after the transcription process.
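For context, the channel-splitting step described above can be sketched roughly like this (the helper names, ffmpeg flags, and the 16 kHz / stereo defaults are my assumptions, not anything from this thread):

```python
import subprocess
import numpy as np

def deinterleave(pcm: np.ndarray, num_channels: int) -> list:
    """Turn interleaved int16 PCM (L R L R ...) into per-channel float32 arrays."""
    channels = pcm.reshape(-1, num_channels).T.astype(np.float32) / 32768.0
    return [channels[i] for i in range(num_channels)]

def split_channels(path: str, sample_rate: int = 16000, num_channels: int = 2) -> list:
    """Decode a file with ffmpeg to raw PCM, then split it into channel arrays."""
    cmd = [
        "ffmpeg", "-i", path,
        "-f", "s16le", "-acodec", "pcm_s16le",   # raw signed 16-bit little-endian
        "-ar", str(sample_rate),                 # resample to the target rate
        "-ac", str(num_channels),                # keep the channel count explicit
        "-loglevel", "quiet", "-",               # write the raw stream to stdout
    ]
    raw = subprocess.run(cmd, capture_output=True, check=True).stdout
    return deinterleave(np.frombuffer(raw, dtype=np.int16), num_channels)
```

Each resulting channel array can then be passed to `BatchedInferencePipeline.transcribe` separately.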
For the information of anyone who lands here with the same question: I went deeper and found that I was misinterpreting the code.
What this function really does is receive the audio and the transcribed segments and, using each segment's start and end time, split the audio into the corresponding chunks.
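In other words, the behavior is roughly the following sketch (this is not the library's actual implementation; the segment fields and the seconds-to-samples conversion are my assumptions):

```python
import numpy as np

def audio_split(audio: np.ndarray, segments: list, sampling_rate: int):
    """Yield the slice of `audio` covered by each transcribed segment."""
    for seg in segments:
        # Segment times are in seconds; convert them to sample indices.
        start = int(seg["start"] * sampling_rate)
        end = int(seg["end"] * sampling_rate)
        yield audio[start:end]
```

So the segments parameter is not an input to transcription; it is the output of an earlier transcription pass, used here only to know where to cut the audio.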