Real Time Diarization for Streaming Audio Chunks in Custom ASR Pipeline #275
Comments
Hi @sprath9! You can always use the standalone `SpeakerDiarization` pipeline. The class is stateful and you can send audio chunks directly. Here's a quick example:

```python
import numpy as np
from pyannote.core import SlidingWindow, SlidingWindowFeature
from diart import SpeakerDiarization

pipeline = SpeakerDiarization()  # pass a SpeakerDiarizationConfig to customize

# Obtain and format audio
sample_rate = pipeline.config.sample_rate
num_samples = int(sample_rate * pipeline.config.duration)
audio = np.random.randn(num_samples, 1)
audio_start_time = 0.0  # start time of this chunk in the stream
sliding_window = SlidingWindow(
    start=audio_start_time,
    duration=1.0 / sample_rate,
    step=1.0 / sample_rate,
)
audio = SlidingWindowFeature(audio, sliding_window)

# `audio_chunk` is the part of `audio` corresponding to `annotation`
annotation, audio_chunk = pipeline([audio])[0]
```

Notice that for this to work you have to send chunks of duration `pipeline.config.duration`.
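If you want to plug this into your own streaming loop, a rough (untested) sketch is to keep a single pipeline instance and advance the chunk start time as audio arrives. `get_next_chunk` below is a placeholder for however your client delivers audio, not part of diart:

```python
from pyannote.core import SlidingWindow, SlidingWindowFeature
from diart import SpeakerDiarization

pipeline = SpeakerDiarization()
sample_rate = pipeline.config.sample_rate
chunk_duration = pipeline.config.duration  # seconds of audio expected per call

start_time = 0.0
while True:
    # get_next_chunk() is a placeholder: it should return a (num_samples, 1)
    # float array at `sample_rate` covering `chunk_duration` seconds,
    # or None when the stream ends
    waveform = get_next_chunk()
    if waveform is None:
        break
    window = SlidingWindow(
        start=start_time,
        duration=1.0 / sample_rate,
        step=1.0 / sample_rate,
    )
    annotation, audio_chunk = pipeline([SlidingWindowFeature(waveform, window)])[0]
    print(annotation)
    start_time += chunk_duration  # keep chunk timestamps contiguous across calls
```

Since the same pipeline instance is reused, its internal state carries over from one chunk to the next.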
Hi @juanmc2005, thank you for your valuable input! I tried the approach you suggested, but it seems the system is unable to maintain speaker consistency. I receive 30-second audio chunks at a time, which I need to process through diarization. Each chunk starts from 0 seconds, and I also need to maintain state across chunks. To simulate this, I tested with two 5-second audio chunks from different speakers, passing them through the same pipeline (a simplified version of what I ran is below). Ideally, when processing audio1 the system should assign it to Speaker 1, and when processing audio2 it should be assigned to Speaker 2. However, after running both, I still get Speaker 1 for both audio chunks, which is incorrect. I would appreciate any insights you might have on resolving this!
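For reference, this is roughly what I ran (simplified); `speaker1.wav` and `speaker2.wav` are placeholder names for my two 5-second test clips:

```python
import soundfile as sf
from pyannote.core import SlidingWindow, SlidingWindowFeature
from diart import SpeakerDiarization

pipeline = SpeakerDiarization()
sample_rate = pipeline.config.sample_rate

def to_feature(waveform, start_time):
    # waveform: (num_samples, 1) array already at the pipeline's sample rate
    window = SlidingWindow(
        start=start_time,
        duration=1.0 / sample_rate,
        step=1.0 / sample_rate,
    )
    return SlidingWindowFeature(waveform, window)

# placeholder file names; both clips are mono and already at `sample_rate`
audio1, _ = sf.read("speaker1.wav")
audio2, _ = sf.read("speaker2.wav")

# both chunks start at t=0, mirroring how my real 30-second chunks arrive
ann1, chunk1 = pipeline([to_feature(audio1.reshape(-1, 1), 0.0)])[0]
ann2, chunk2 = pipeline([to_feature(audio2.reshape(-1, 1), 0.0)])[0]
print(ann1.labels())  # expecting one speaker here
print(ann2.labels())  # expecting a different speaker here, but I get the same label
```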
Hey @juanmc2005, thanks for your time.
I have a custom streaming pipeline with a VAD setup that triggers ASR processing only when speech is detected in a small chunk. The pipeline processes audio chunks sequentially as they arrive from the client.
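For context, here is a simplified sketch of my loop; `receive_chunk`, `is_speech`, and `run_asr` are placeholders for my own client interface, VAD, and ASR components:

```python
def streaming_loop():
    while True:
        chunk = receive_chunk()    # next small audio chunk from the client
        if chunk is None:          # client closed the stream
            break
        if is_speech(chunk):       # VAD gate
            text = run_asr(chunk)  # ASR runs only on detected speech
            # This is where I'd also like to obtain speaker labels for `chunk`,
            # with identities kept consistent across chunks
            print(text)
```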
In Diart, it seems we need to provide a file path, microphone input, or websocket for audio input. Is there a way to integrate Diart directly into my pipeline, allowing me to pass audio chunks to the diarization module and receive results in real-time? Maintaining speaker consistency across chunks is crucial, as each new chunk shouldn't be treated as a separate audio session.
I attempted to modify the AudioSource class in source.py, experimenting with custom inputs and code adjustments, but I couldn't achieve the desired results.
Could you kindly guide me on how to implement this? If possible, I would greatly appreciate a code snippet to help clarify the approach. From what I understand, the solution likely involves customizing the AudioSource class.
Thanks