M.E.AI.Abstractions - Speech to Text Abstraction #5838
Conversation
@dotnet-policy-service agree company="Microsoft"
#endif
{
    yield return (T)Activator.CreateInstance(typeof(T), [(ReadOnlyMemory<byte>)buffer, mediaType])!;
}
This is handing out the same buffer multiple times. It's not going to be obvious to a caller that if they grab a buffer and MoveNext, that MoveNext will have overwritten their buffer.
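For context, a minimal sketch of the per-chunk copy that addresses this kind of buffer reuse; the method shape and the DataContent constructor arguments used here are assumptions for illustration, not the PR's actual fix:

```csharp
using System.Runtime.CompilerServices;
using Microsoft.Extensions.AI;

static async IAsyncEnumerable<DataContent> ToDataContentsAsync(
    Stream stream, string mediaType,
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    byte[] buffer = new byte[81920];
    int bytesRead;
    while ((bytesRead = await stream.ReadAsync(buffer, cancellationToken).ConfigureAwait(false)) > 0)
    {
        // Copy only the bytes read so each yielded item owns its memory;
        // the next MoveNextAsync can then safely reuse 'buffer'.
        yield return new DataContent(buffer.AsMemory(0, bytesRead).ToArray(), mediaType);
    }
}
```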
Nice observation, fixed.
The issue still exists in the .NET 8+ path.
I think this method should not be public. We can ensure that we're consuming it appropriately in our own uses, but as a public method, we have to accommodate the possibility of misuse.
cc: @Swimburger for visibility. Feedback is appreciated. Thanks!
/// <param name="cancellationToken">The <see cref="CancellationToken"/> to monitor for cancellation requests. The default is <see cref="CancellationToken.None"/>.</param>
/// <returns>The text generated by the client.</returns>
Task<SpeechToTextResponse> GetResponseAsync(
    IList<IAsyncEnumerable<DataContent>> speechContents,
Are there any scenarios where an implementation is expected to mutate this? With chat, this is expected to be a history, but with speech-to-text, presumably it's generally more of a one-and-done kind of thing? Maybe this should be an IEnumerable instead of an IList?
Wait, I just noticed, this is an IList<IAsyncEnumerable<DataContent>> rather than an IAsyncEnumerable<DataContent>? The intent here is that this handles multiple inputs, each of which is an asynchronously produced sequence of content?
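For readers skimming the thread, the two shapes being discussed differ roughly as follows; this is a sketch of the signatures only, and the single-input overload is the alternative being suggested, not something in the PR:

```csharp
// Shape currently in the PR: several inputs, each an asynchronously produced sequence of audio chunks.
Task<SpeechToTextResponse> GetResponseAsync(
    IList<IAsyncEnumerable<DataContent>> speechContents,
    SpeechToTextOptions? options = null,
    CancellationToken cancellationToken = default);

// Alternative raised here: a single asynchronously produced input.
Task<SpeechToTextResponse> GetResponseAsync(
    IAsyncEnumerable<DataContent> speechContents,
    SpeechToTextOptions? options = null,
    CancellationToken cancellationToken = default);
```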
/// </param>
/// <param name="providerUri">The URL for accessing the speech to text provider, if applicable.</param>
/// <param name="modelId">The ID of the speech to text model used, if applicable.</param>
public SpeechToTextClientMetadata(string? providerName = null, Uri? providerUri = null, string? modelId = null)
Is there any other common metadata that all known providers support?
public class SpeechToTextOptions
{
    private CultureInfo? _speechLanguage;
    private CultureInfo? _textLanguage;
Why are these CultureInfos? Where are these culture info objects used?
/// <summary>Initializes a new instance of the <see cref="SpeechToTextResponse"/> class.</summary>
/// <param name="choices">The list of choices in the response, one message per choice.</param>
[JsonConstructor]
public SpeechToTextResponse(IList<SpeechToTextMessage> choices)
What does choices map to here? Does that map to the multiple inputs provided to the GetResponseAsync method? Choices is the wrong name for that, I think.
public static SpeechToTextResponseUpdateKind TextUpdated { get; } = new("textupdated");

/// <summary>Gets when the generated text session is closed.</summary>
public static SpeechToTextResponseUpdateKind SessionClose { get; } = new("sessionclose");
Is the expectation that in an update sequence you always get a session open, then zero or more pairs of textupdating/textupdated, and then a session close, with zero or more errors sprinkled throughout?
private static ISpeechToTextClient CreateSpeechToTextClient(HttpClient httpClient, string modelId) =>
    new OpenAIClient(new ApiKeyCredential("apikey"), new OpenAIClientOptions { Transport = new HttpClientPipelineTransport(httpClient) })
        .AsSpeechToTextClient(modelId);
}
Can we avoid adding these large wav files? We especially don't want to add them multiple times.
Once merged, they'll be in the repo's history forever.
@RogerBarreto, anything I can do to help move this along? Thanks!
    IList<IAsyncEnumerable<DataContent>> speechContents, SpeechToTextOptions? options = null, CancellationToken cancellationToken = default)
{
    return InnerClient.GetStreamingResponseAsync(speechContents, options, cancellationToken);
    return InnerClient.TranscribeStreamingAudioAsync(speechContents, options, cancellationToken);
Quick observation: as you bring the Transcribe signature back, will the ISpeechToTextClient interface also have Translate___Async signatures? Is that the rationale?
My original thinking with the Response naming was to accommodate those two functionalities in the same method, given the change from IAudioTranscriptionClient to ISpeechToTextClient.
Given audio in and text out, what's the difference between transcribe and translate? Isn't the latter still transcription, doing speech recognition to go from audio to text, "just" with a possibly different target language than the audio content?
By definition those are different, and I opted to avoid any confusion where possible.
"just" with a possibly different target language than the audio content
Yes, currently this is the main difference, although translation may also have multiple outputs (multiple different language translations from the same input).
Having a dedicated interface for each has its benefits, but the underlying functionality is much the same.
This is the answer I got from AI:
Is there any difference between speech transcription and translation? Can one be a subset of the other?
Yes, there is a difference between speech transcription and translation, and while they are related concepts, one is not necessarily a subset of the other. Let me break it down:
Speech Transcription
Speech transcription involves converting spoken language (audio) into written text in the same language. For example, if someone speaks in English, transcription would produce a written English version of what was said. The focus is on accurately capturing the words, and sometimes additional details like tone, pauses, or speaker identification (e.g., in a multi-speaker setting like a podcast or interview). It’s about representing the spoken content in a textual form without changing the language.
Translation
Translation, on the other hand, involves converting text or speech from one language to another. For example, translating spoken English into written Spanish or spoken French into written English. The goal is to preserve the meaning and intent of the original content while adapting it to a different language, which often requires cultural and linguistic adjustments beyond just word-for-word conversion.
Key Differences
Language Change: Transcription stays within the same language; translation shifts between languages.
Process: Transcription is about capturing what’s said as text, while translation involves interpreting and rephrasing meaning in another language.
Purpose: Transcription is often used for documentation (e.g., court records, subtitles), while translation is used to make content accessible to speakers of other languages.
Can One Be a Subset of the Other?
Not exactly, but they can overlap or be part of a broader process:
Transcription as a Step in Translation: In some workflows, speech is first transcribed into text in the original language, and then that text is translated into another language. For example, a Spanish speech might be transcribed into Spanish text and then translated into English. Here, transcription is a precursor to translation, but it’s not a subset—it’s a distinct step.
Real-Time Speech Translation: Modern technology (like AI-powered interpreters) can combine transcription and translation into a seamless process, where spoken words in one language are directly converted to text or speech in another. In this case, transcription might happen internally as part of the translation pipeline, but they remain separate functions conceptually.
Conclusion
Transcription and translation serve different purposes and operate at different levels of language processing. While they can work together (e.g., transcribe then translate), neither is inherently a subset of the other—they’re distinct tools in the language toolkit.
ADR - Introducing Speech To Text Abstraction

Problem Statement

The project requires the ability to transcribe and translate speech audio to text. This proof of concept validates the ISpeechToTextClient abstraction against different transcription and translation APIs, providing a consistent interface for the project to use.

Note: The names used for the proposed abstractions below are open and can be changed at any time given a bigger consensus.
Considered Options

Option 1: Generic Multi Modality Abstraction IModelClient<TInput, TOutput> (Discarded)

This option would have provided a generic abstraction for all models, including audio transcription. However, it would have made the abstraction too generic and brought up several concerns during the meeting:

- Usability Concerns: The generic interface could make the API less intuitive and harder to use, as users would not be guided towards the specific options they need.
- Naming and Clarity: Generic names like "complete streaming" do not convey the specific functionality, making it difficult for users to understand what the method does. Specific names like "transcribe" or "generate song" would be clearer.
- Implementation Complexity: Implementing a generic interface would still require concrete implementations for each permutation of input and output types, which could be complex and cumbersome.
- Specific Use Cases: Different services have specific requirements and optimizations for their modalities, which may not be effectively captured by a generic interface.
- Future Proofing vs. Practicality: While a generic interface aims to be future-proof, it may not be practical for current needs and could lead to an explosion of permutations that are not all relevant.
- Separation of Streaming and Non-Streaming: There was a concern about separating streaming and non-streaming interfaces, as it could complicate the API further.
Option 2: Speech to Text Abstraction ISpeechToTextClient (Preferred)

This option provides a specific abstraction for audio transcription and audio translation, which is more intuitive and easier to use. The specific interface allows for better optimization and customization for each service.

Initially the thought was to have different interfaces, one for the streaming API and another for the non-streaming API, but after some discussion it was decided to have a single interface, similar to what we have in IChatClient.

Note: Further modality abstractions will mostly follow this as a standard moving forward.
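A hedged sketch of the proposed surface, pieced together from the signatures visible in this PR; member names were still under discussion at the time, so treat this as illustrative rather than final:

```csharp
public interface ISpeechToTextClient : IDisposable // IDisposable assumed, mirroring IChatClient
{
    /// <summary>Generates text from the given speech audio contents (non-streaming).</summary>
    Task<SpeechToTextResponse> GetResponseAsync(
        IList<IAsyncEnumerable<DataContent>> speechContents,
        SpeechToTextOptions? options = null,
        CancellationToken cancellationToken = default);

    /// <summary>Generates text from the given speech audio contents as a stream of updates.</summary>
    IAsyncEnumerable<SpeechToTextResponseUpdate> GetStreamingResponseAsync(
        IList<IAsyncEnumerable<DataContent>> speechContents,
        SpeechToTextOptions? options = null,
        CancellationToken cancellationToken = default);
}
```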
Inputs:

IAsyncEnumerable<DataContent>: as a simpler and more recent interface, it allows uploading streaming audio data contents to the service.

This API also enables usage of large audio files or real-time transcription (without having to load the full file into memory) and can easily be extended to support different audio input types, like a single DataContent or a Stream instance, supporting scenarios like the following (see the usage sketch after this list):

- Single DataContent type input extension
- Stream type input extension
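A hedged usage sketch of those two input extensions; the overload names are assumptions based on the scenarios above, and client is any ISpeechToTextClient implementation:

```csharp
// Hypothetical extension-based calls; names follow the scenarios above, not a shipped API.
static async Task TranscribeAsync(ISpeechToTextClient client, SpeechToTextOptions? options = null)
{
    // Stream input extension: the file is uploaded as it is read, never fully buffered in memory.
    using Stream audio = File.OpenRead("meeting.wav");
    SpeechToTextResponse streamed = await client.GetResponseAsync(audio, options);

    // Single DataContent input extension: convenient when the audio is already in memory.
    byte[] bytes = await File.ReadAllBytesAsync("short-note.wav");
    SpeechToTextResponse inMemory = await client.GetResponseAsync(new DataContent(bytes, "audio/wav"), options);

    Console.WriteLine(streamed.Message.Text); // Message/Text convenience members assumed (see Outputs below)
}
```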
SpeechToTextOptions: analogous to the existing ChatOptions, it allows providing additional options on both the streaming and non-streaming APIs for the service, such as language, model, or other parameters (a sketch of its shape follows below).

- ResponseId is a unique identifier for the completion of the transcription. This can be useful when using the non-streaming API to track the completion status of a specific long-running transcription process (batch). Note: usage of ResponseId follows the convention for Chat.
- ModelId is a unique identifier for the model to use for transcription.
- SpeechLanguage is the language of the audio content (Azure Cognitive Speech - Supported languages).
- SpeechSampleRate is the sample rate of the audio content. Real-time speech to text generation requires a specific sample rate.
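A hedged sketch of the options shape those bullets describe; the property names follow the text above, while the exact types (for example CultureInfo for the language, as seen in the review thread earlier) are assumptions:

```csharp
using System.Globalization;

public class SpeechToTextOptions
{
    public string? ResponseId { get; set; }           // correlates long-running (batch) generations
    public string? ModelId { get; set; }              // model to use for the generation
    public CultureInfo? SpeechLanguage { get; set; }  // language spoken in the audio
    public int? SpeechSampleRate { get; set; }        // required by some real-time services
}
```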
Outputs:

SpeechToTextResponse: for the non-streaming API, analogous to the existing ChatResponse, it provides the generated text result and additional information about the speech response (a sketch follows below).

- ResponseId is a unique identifier for the response. This can be useful when using the non-streaming API to track the completion status of a specific long-running speech to text generation process (batch). Note: usage of the Response prefix initially follows the convention of the ChatResponse type, for consistency.
- ModelId is a unique identifier for the model used for transcription.
- Choices is a list of generated text SpeechToTextMessages, each referring to the generated text for the given speech DataContent index. In the majority of cases this will be a single message, which can also be accessed through the Message property, similar to ChatResponse.
- StartTime and EndTime represent the timestamps where the text started and ended relative to the speech audio length. For example, if the audio starts with 30 seconds of instrumental music before any speech, the transcription should start from 30 seconds onward; the same applies to the end time. Note: TimeSpan is used to represent the timestamps as it is more intuitive and easier to work with; some services give the time in milliseconds, ticks, or other formats.
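A hedged sketch of that non-streaming result shape; the constructor mirrors the one visible in this PR, the remaining members follow the bullets above and are illustrative only:

```csharp
public class SpeechToTextResponse
{
    public SpeechToTextResponse(IList<SpeechToTextMessage> choices) => Choices = choices;

    public IList<SpeechToTextMessage> Choices { get; }  // usually a single generated message
    public SpeechToTextMessage Message => Choices[0];   // convenience accessor for the common case
    public string? ResponseId { get; set; }
    public string? ModelId { get; set; }
    public TimeSpan? StartTime { get; set; }             // where the recognized speech begins in the audio
    public TimeSpan? EndTime { get; set; }                // where the recognized speech ends in the audio
}
```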
SpeechToTextResponseUpdate: for the streaming API, analogous to the existing ChatResponseUpdate, it provides the speech to text result as multiple chunks of updates that represent the generated content as well as any important information about the processing (a sketch follows below).

- ResponseId is a unique identifier for the speech to text response.
- StartTime and EndTime for the given transcribed chunk represent the timestamps where it starts and ends relative to the audio length. For example, if the audio starts with 30 seconds of instrumental music before any speech, the transcription chunk will flush with a StartTime of 30 seconds onward, up to the last word of the chunk, which will represent the end time. Note: TimeSpan is used to represent the timestamps as it is more intuitive and easier to work with; some services give the time in milliseconds, ticks, or other formats.
- Contents is a list of AIContent objects that represent the transcription result. In 99% of use cases this will be one TextContent object, which can be retrieved from the Text property, similarly to Text in ChatMessage.
- Kind is a struct, similar to ChatRole.

The decision to use a struct similar to ChatRole allows more flexibility and customization for different API updates: providers can supply extra update definitions that are very specific and won't fall neatly into the general categories described below. This lets implementers avoid skipping such updates by providing a more specific Kind update.
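A hedged sketch of the streaming update and its Kind struct; the "textupdated" and "sessionclose" values appear in this PR, while the remaining members and values are assumptions that follow the description above:

```csharp
using System.Linq;
using Microsoft.Extensions.AI;

public readonly struct SpeechToTextResponseUpdateKind
{
    public static SpeechToTextResponseUpdateKind SessionOpen { get; } = new("sessionopen");   // assumed value
    public static SpeechToTextResponseUpdateKind TextUpdating { get; } = new("textupdating"); // assumed value
    public static SpeechToTextResponseUpdateKind TextUpdated { get; } = new("textupdated");
    public static SpeechToTextResponseUpdateKind SessionClose { get; } = new("sessionclose");
    public static SpeechToTextResponseUpdateKind Error { get; } = new("error");               // assumed value

    public SpeechToTextResponseUpdateKind(string value) => Value = value;
    public string Value { get; }
}

public class SpeechToTextResponseUpdate
{
    public SpeechToTextResponseUpdateKind Kind { get; set; }
    public string? ResponseId { get; set; }
    public TimeSpan? StartTime { get; set; }
    public TimeSpan? EndTime { get; set; }
    public IList<AIContent> Contents { get; set; } = new List<AIContent>();
    public string? Text => Contents.OfType<TextContent>().FirstOrDefault()?.Text; // convenience accessor
}
```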
General Update Kinds:

- Session Open: when the transcription session is open.
- Text Updating: when the speech to text is in progress, without waiting for silence (preferable for UI updates). Different APIs use different names for this, e.g. PartialTranscriptReceived, SegmentData, RecognizingSpeech.
- Text Updated: when a speech to text block is complete after a small period of silence. Different APIs use names such as FinalTranscriptReceived or RecognizedSpeech.
- Session Close: when the transcription session is closed.
- Error: when an error occurs during the speech to text process.

Errors during streaming can happen and normally won't block the ongoing process, but they can carry more detailed information about the problem. For this reason, instead of throwing an exception, the error can be provided as part of the ongoing stream using a dedicated content type, here called ErrorContent.

The idea of providing an ErrorContent is mainly to avoid using TextContent with the error title, code, and details combined in a single string, which is harder to parse, gives a poorer user experience, and sets a bad precedent for error handling / error content.

Similarly to UsageContent in Chat, if an update wants to provide more detailed error information as part of the ongoing stream, adding an ErrorContent that represents the error message, code, and details may work best for providing specific error details that are part of an ongoing process.

Specific API categories:

Additional Extensions:

Stream -> ToAsyncEnumerable<T> : where T : DataContent

This extension method allows converting a Stream to an IAsyncEnumerable<T> where T is a DataContent type. It allows a Stream to be used as an input for the ISpeechToTextClient without loading the entire stream into memory, simplifying usage of the API for the majority of mainstream scenarios where the Stream type is used.

As we already have extensions for Stream, this one could eventually be dropped, but it proved useful when callers wanted to easily consume a Stream as an IAsyncEnumerable<T>.

IAsyncEnumerable<T> -> ToStream<T> : where T : DataContent

Allows converting an IAsyncEnumerable<T> to a Stream where T is a DataContent type.

This extension will be very useful for implementers of ISpeechToTextClient, providing a simple way to convert the IAsyncEnumerable<T> to a Stream for the underlying service to consume, which the majority of the services' SDKs currently support (Azure AI Speech SDK - Example).

SK Abstractions and Adapters

Similarly to how we have ChatClient and ChatService abstractions, we will have SpeechToTextClient and AudioToTextService abstractions, where the SpeechToTextClient will be the main entry point for the project to consume the audio transcription services, and the AudioToTextService will be the main entry point for the services to implement the audio transcription services.
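Finally, a hedged end-to-end sketch showing how the streaming pieces above could fit together from the caller's side; the ToAsyncEnumerable extension, the ErrorContent members, and the convenience accessors are the assumed names discussed in this ADR, not a shipped API:

```csharp
using System.Linq;
using Microsoft.Extensions.AI;

static async Task PrintTranscriptionAsync(ISpeechToTextClient client, Stream audio, SpeechToTextOptions? options = null)
{
    await foreach (SpeechToTextResponseUpdate update in
        client.GetStreamingResponseAsync([audio.ToAsyncEnumerable<DataContent>()], options))
    {
        if (update.Kind.Equals(SpeechToTextResponseUpdateKind.Error))
        {
            // Errors arrive as content on the stream rather than as exceptions, so processing continues.
            ErrorContent? error = update.Contents.OfType<ErrorContent>().FirstOrDefault();
            Console.WriteLine($"error: {error?.Message}");
        }
        else if (update.Kind.Equals(SpeechToTextResponseUpdateKind.TextUpdated))
        {
            Console.WriteLine($"[{update.StartTime}-{update.EndTime}] {update.Text}");
        }
    }
}
```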