
M.E.AI.Abstractions - Speech to Text Abstraction #5838

Open · wants to merge 2 commits into base: main from audio-transcription-abstraction

Conversation


@RogerBarreto commented on Feb 3, 2025

ADR - Introducing Speech To Text Abstraction

Problem Statement

The project requires the ability to transcribe and translate speech audio to text. This proposal is a proof of concept that validates the ISpeechToTextClient abstraction against different transcription and translation APIs, providing a consistent interface for the project to use.

Note

The names used for the proposed abstractions below are not final and may change if a broader consensus is reached.

Considered Options

Option 1: Generic Multi Modality Abstraction IModelClient<TInput, TOutput> (Discarded)

This option would have provided a generic abstraction for all models, including audio transcription. However, it would have made the abstraction too generic, and several concerns were raised during the review meeting:

  • Usability Concerns:

    The generic interface could make the API less intuitive and harder to use, as users would not be guided towards the specific options they need. 1

  • Naming and Clarity:

    Generic names like "complete streaming" do not convey the specific functionality, making it difficult for users to understand what the method does. Specific names like "transcribe" or "generate song" would be clearer. 2

  • Implementation Complexity:

    Implementing a generic interface would still require concrete implementations for each permutation of input and output types, which could be complex and cumbersome. 3

  • Specific Use Cases:

    Different services have specific requirements and optimizations for their modalities, which may not be effectively captured by a generic interface. 4

  • Future Proofing vs. Practicality:

    While a generic interface aims to be future-proof, it may not be practical for current needs and could lead to an explosion of permutations that are not all relevant. 5

  • Separation of Streaming and Non-Streaming:

    There was a concern about separating streaming and non-streaming interfaces, as it could complicate the API further. 6

Option 2: Speech to Text Abstraction ISpeechToTextClient (Preferred)

This option would provide a specific abstraction for audio transcription and audio translations, which would be more intuitive and easier to use. The specific interface would allow for better optimization and customization for each service.

Initially, separate interfaces were considered for the streaming and non-streaming APIs, but after some discussion it was decided to use a single interface, similar to what we have in IChatClient.

Note

Further modality abstractions are expected to mostly follow this pattern as a standard moving forward.

public interface ISpeechToTextClient : IDisposable
{
    Task<SpeechToTextResponse> GetResponseAsync(
        IList<IAsyncEnumerable<DataContent>> speechContents, 
        SpeechToTextOptions? options = null, 
        CancellationToken cancellationToken = default);

    IAsyncEnumerable<SpeechToTextResponseUpdate> GetStreamingResponseAsync(
        IList<IAsyncEnumerable<DataContent>> speechContents,
        SpeechToTextOptions? options = null,
        CancellationToken cancellationToken = default);
}
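
For illustration only, a minimal consumption sketch of this interface follows. The CreateClient factory, the file name, the DataContent constructor shape, and the assumption that SpeechToTextMessage exposes a Text property analogous to ChatMessage.Text are placeholders, not part of the proposal:

// Hedged usage sketch; CreateClient() and "audio.wav" are placeholders.
using ISpeechToTextClient client = CreateClient();

byte[] audioBytes = File.ReadAllBytes("audio.wav");

// Wrap a single in-memory audio payload as a one-chunk async sequence.
static async IAsyncEnumerable<DataContent> AsSingleChunk(byte[] bytes)
{
    await Task.CompletedTask; // stands in for real asynchronous production
    yield return new DataContent(bytes, "audio/wav"); // constructor shape assumed
}

// Non-streaming: one audio input, one response.
SpeechToTextResponse response = await client.GetResponseAsync([AsSingleChunk(audioBytes)]);
Console.WriteLine(response.Message.Text); // Text assumed analogous to ChatMessage.Text

// Streaming: consume updates as they are produced.
await foreach (SpeechToTextResponseUpdate update in
    client.GetStreamingResponseAsync([AsSingleChunk(audioBytes)]))
{
    Console.WriteLine($"{update.Kind}: {update.Text}");
}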

Inputs:

  • IAsyncEnumerable<DataContent>: a simple, modern interface that allows streaming audio data contents up to the service.

    This API also enables large audio files and real-time transcription (without having to load the full file into memory) and can easily be extended to support other audio input types, such as a single DataContent or a Stream instance (a usage sketch of these overloads follows the extension signatures below).

    Supported scenarios include:

    • A single in-memory audio payload (no upload streaming)
    • One audio streamed as multiple audio content chunks - real-time transcription
    • One or more referenced (URI) audio contents - batch transcription

    Single DataContent type input extension

    // Non-Streaming API
    public static Task<SpeechToTextResponse> GetResponseAsync(
        this ISpeechToTextClient client,
        DataContent speechContent, 
        SpeechToTextOptions? options = null, 
        CancellationToken cancellationToken = default);
    
    // Streaming API
    public static IAsyncEnumerable<SpeechToTextResponseUpdate> GetStreamingResponseAsync(
        this ISpeechToTextClient client,
        DataContent speechContent, 
        SpeechToTextOptions? options = null, 
        CancellationToken cancellationToken = default);

    Stream type input extension

    // Non-Streaming API
    public static Task<SpeechToTextResponse> GetResponseAsync(
        this ISpeechToTextClient client,
        Stream audioStream,
        SpeechToTextOptions? options = null,
        CancellationToken cancellationToken = default);
    
    // Streaming API
    public static IAsyncEnumerable<SpeechToTextResponseUpdate> GetStreamingResponseAsync(
        this ISpeechToTextClient client,
        Stream audioStream,
        SpeechToTextOptions? options = null,
        CancellationToken cancellationToken = default);
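
    A hedged sketch of calling these extension overloads (the client variable and the audio file are assumed to exist; this is illustrative, not part of the proposal):

    // Single in-memory DataContent input.
    var content = new DataContent(File.ReadAllBytes("audio.wav"), "audio/wav");
    SpeechToTextResponse fromContent = await client.GetResponseAsync(content);

    // Stream input: the file is streamed to the service without loading it fully into memory.
    using FileStream audioFile = File.OpenRead("audio.wav");
    await foreach (SpeechToTextResponseUpdate update in client.GetStreamingResponseAsync(audioFile))
    {
        Console.WriteLine(update.Text);
    }
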
  • SpeechToTextOptions: analogous to the existing ChatOptions, it allows providing additional options to the service on both the streaming and non-streaming APIs, such as language, model, or other parameters. (A usage sketch follows the member descriptions below.)

    public class SpeechToTextOptions
    {
        public string? ResponseId { get; set; }
    
        public string? ModelId { get; set; }
    
        // Source speech language
        public string? SpeechLanguage { get; set; }
    
        // Generated text language
        public string? TextLanguage { get; set; }
    
        public int? SpeechSampleRate { get; set; }
    
        public AdditionalPropertiesDictionary? AdditionalProperties { get; set; }
    
        public virtual SpeechToTextOptions Clone();
    }
    • ResponseId is a unique identifier for the completion of the transcription. This can be useful with the non-streaming API to track the completion status of a specific long-running transcription process (batch).

Note

Usage of ResponseId follows the convention for Chat.

    • ModelId is a unique identifier for the model to use for transcription.

    • SpeechLanguage is the language of the audio content.

    • SpeechSampleRate is the sample rate of the audio content. Real-time speech to text generation requires a specific sample rate.
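
    A small illustrative sketch of configuring these options (the client, the audio stream, the model identifier, and the language codes are placeholders):

    var options = new SpeechToTextOptions
    {
        ModelId = "my-speech-model",   // placeholder model identifier
        SpeechLanguage = "en-US",      // language spoken in the audio
        TextLanguage = "pt-BR",        // language of the generated text (translation scenario)
        SpeechSampleRate = 16_000      // some real-time services require a specific rate
    };

    SpeechToTextResponse response = await client.GetResponseAsync(audioStream, options);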

Outputs:

  • SpeechToTextResponse: for the non-streaming API, analogous to the existing ChatResponse, it provides the generated text result and additional information about the speech response.

    public class SpeechToTextResponse
    {
        [JsonConstructor]
        public SpeechToTextResponse();
    
        public SpeechToTextResponse(IList<AIContent> contents);
    
        public SpeechToTextResponse(string? content);
    
        public string? ResponseId { get; set; }
    
        public string? ModelId { get; set; }
    
        [AllowNull]
        public IList<SpeechToTextMessage> Choices { get; set; }
    
        [JsonIgnore]
        public SpeechToTextMessage Message => Choices[0];
    
        public TimeSpan? StartTime { get; set; }
    
        public TimeSpan? EndTime { get; set; }
    
        [JsonIgnore]
        public object? RawRepresentation { get; set; }
    
        /// <summary>Gets or sets any additional properties associated with the speech to text response.</summary>
        public AdditionalPropertiesDictionary? AdditionalProperties { get; set; }
    }
    • ResponseId is a unique identifier for the response. This can be useful with the non-streaming API to track the completion status of a specific long-running speech to text generation process (batch).

Note

Usage of the Response prefix initially follows the convention of the ChatResponse type, for consistency.

    • ModelId is a unique identifier for the model used for transcription.

    • Choices is a list of generated SpeechToTextMessage instances, each containing the text generated for the speech DataContent at the corresponding input index. In the majority of cases this will be a single message, which can also be accessed through the Message property, similar to ChatResponse.

    • StartTime and EndTime represent the timestamps where the transcribed text starts and ends, relative to the length of the speech audio (see the sketch after the note below).

      e.g.: if the audio starts with 30 seconds of instrumental music before any speech, the transcription's StartTime should be 30 seconds forward; the same applies to the end time.

Note

TimeSpan is used to represent the timestamps because it is more intuitive and easier to work with; services report time in milliseconds, ticks, or other formats.
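
As an illustrative sketch (client and audioStream are assumed to exist, and SpeechToTextMessage.Text is assumed to behave like ChatMessage.Text), a non-streaming response might be inspected as follows:

SpeechToTextResponse response = await client.GetResponseAsync(audioStream);

Console.WriteLine($"Response {response.ResponseId} from model {response.ModelId}");
Console.WriteLine($"Speech detected between {response.StartTime} and {response.EndTime}");

// One message per speech input; most scenarios produce a single choice.
foreach (SpeechToTextMessage message in response.Choices)
{
    Console.WriteLine(message.Text);
}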

  • SpeechToTextResponseUpdate: for the streaming API, analogous to the existing ChatResponseUpdate, it provides the speech to text result as multiple update chunks that represent the generated content as well as any important information about the processing.

        public class SpeechToTextResponseUpdate
        {
            [JsonConstructor]
            public SpeechToTextResponseUpdate();

            public SpeechToTextResponseUpdate(IList<AIContent> contents);

            public SpeechToTextResponseUpdate(string? content);

            public string? ResponseId { get; set; }

            public int InputIndex { get; set; }

            public int ChoiceIndex { get; set; }

            public TimeSpan? StartTime { get; set; }

            public TimeSpan? EndTime { get; set; }

            public required SpeechToTextResponseUpdateKind Kind { get; set; }

            public object? RawRepresentation { get; set; }

            public AdditionalPropertiesDictionary? AdditionalProperties { get; set; }

            // Convenience accessor for the first text content of the update.
            [JsonIgnore]
            public string? Text => Contents.Count > 0 ? (Contents[0] as TextContent)?.Text : null;

            public IList<AIContent> Contents { get; set; }
        }
    • ResponseId is a unique identifier for the speech to text response.

    • StartTime and EndTime represent where the given transcribed chunk starts and ends, relative to the audio length.

      e.g.: if the audio starts with 30 seconds of instrumental music before any speech, the first transcription chunk will be flushed with a StartTime of 30 seconds, and its EndTime will correspond to the last word of the chunk.

Note

TimeSpan is used to represent the timestamps because it is more intuitive and easier to work with; services report time in milliseconds, ticks, or other formats.

    • Contents is a list of AIContent objects that represent the transcription result. In the vast majority of cases this will be a single TextContent object, whose value can be retrieved from the Text property, similar to Text on ChatMessage.

    • Kind is a struct, similar to ChatRole.

      Using a struct rather than an enum allows more flexibility and customization: different APIs can surface additional, very specific update kinds that do not fall neatly into the general categories described below. This lets implementers provide a more specific Kind for such updates instead of skipping them.

      [JsonConverter(typeof(Converter))]
      public readonly struct SpeechToTextResponseUpdateKind : IEquatable<SpeechToTextResponseUpdateKind>
      {
          public static SpeechToTextResponseUpdateKind SessionOpen { get; } = new("sessionopen");
          public static SpeechToTextResponseUpdateKind Error { get; } = new("error");
          public static SpeechToTextResponseUpdateKind TextUpdating { get; } = new("textupdating");
          public static SpeechToTextResponseUpdateKind TextUpdated { get; } = new("textupdated");
          public static SpeechToTextResponseUpdateKind SessionClose { get; } = new("sessionclose");
      
          // Similar implementation to ChatRole
      }

      General Update Kinds:

      • Session Open - When the transcription session is open.

      • Text Updating - When speech to text generation is in progress, without waiting for silence (preferable for UI updates).

        Different APIs use different names for this, e.g.:

        • AssemblyAI: PartialTranscriptReceived
        • Whisper.net: SegmentData
        • Azure AI Speech: RecognizingSpeech
      • Text Updated - When a speech to text block is complete, after a short period of silence.

        Different APIs use different names for this, e.g.:

        • AssemblyAI: FinalTranscriptReceived
        • Whisper.net: N/A (Not supported by the internal API)
        • Azure AI Speech: RecognizedSpeech
      • Session Close - When the transcription session is closed.

      • Error - When an error occurs during the speech to text process.

        Errors can happen during streaming and normally won't block the ongoing process, but they can carry more detailed information. For this reason, instead of throwing an exception, the error can be provided as part of the ongoing stream using a dedicated content type, referred to here as ErrorContent.

        The idea behind ErrorContent is mainly to avoid using TextContent with the error title, code, and details combined into a single string, which is harder to parse, leads to a poorer user experience, and sets a bad precedent for error handling.

        Similar to UsageContent in chat, when an update wants to provide more detailed error information as part of the ongoing stream, adding an ErrorContent that carries the error message, code, and details works best (see the consumption sketch after the ErrorContent definition below).

        public class ErrorContent : AIContent
        {
            public required string Message { get; set; } // An error must have a message
            public string? Code { get; set; } // Can be non-numerical
            public string? Details { get; set; }
        }
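
        A hedged sketch of how a consumer might react to update kinds, including errors surfaced as ErrorContent (client and audioStream are assumed to exist, and equality operators on the kind struct are assumed to mirror ChatRole):

        await foreach (SpeechToTextResponseUpdate update in client.GetStreamingResponseAsync(audioStream))
        {
            if (update.Kind == SpeechToTextResponseUpdateKind.TextUpdated)
            {
                Console.WriteLine($"[{update.StartTime} - {update.EndTime}] {update.Text}");
            }
            else if (update.Kind == SpeechToTextResponseUpdateKind.Error)
            {
                // The stream keeps flowing; the error is carried as content instead of an exception.
                var error = update.Contents.OfType<ErrorContent>().FirstOrDefault();
                Console.Error.WriteLine($"Error {error?.Code}: {error?.Message}");
            }
        }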

      Specific API categories:

Additional Extensions:

Stream -> ToAsyncEnumerable<T> : where T : DataContent

This extension method converts a Stream into an IAsyncEnumerable<T>, where T is a DataContent type. It allows a Stream to be used as input for the ISpeechToTextClient without loading the entire stream into memory, simplifying the API for the majority of mainstream scenarios where a Stream is used.

As Stream extension overloads already exist, this could eventually be dropped, but it has proved useful when callers want to easily consume a Stream as an IAsyncEnumerable<T>.

public static class StreamExtensions
{
    public static async IAsyncEnumerable<T> ToAsyncEnumerable<T>(this Stream audioStream, string? mediaType = null)
        where T : DataContent
    {
        var buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = await audioStream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // Copy only the bytes read into a fresh array so each yielded content
            // owns its data and the shared buffer can safely be reused.
            ReadOnlyMemory<byte> chunk = buffer.AsMemory(0, bytesRead).ToArray();
            yield return (T)Activator.CreateInstance(typeof(T), [chunk, mediaType])!;
        }
    }
}
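
For example, a file stream could be adapted into chunked DataContent instances and fed to the client (the client instance and file name are assumed, for illustration only):

using FileStream fileStream = File.OpenRead("audio.wav");
IAsyncEnumerable<DataContent> chunks = fileStream.ToAsyncEnumerable<DataContent>("audio/wav");

SpeechToTextResponse response = await client.GetResponseAsync([chunks]);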

IAsyncEnumerable<T> -> ToStream<T> : where T : DataContent

Allows converting an IAsyncEnumerable<T> to a Stream, where T is a DataContent type.

This extension is very useful for implementers of ISpeechToTextClient, providing a simple way to convert an IAsyncEnumerable<T> into a Stream for the underlying service to consume, which the majority of service SDKs currently support.

public static class IAsyncEnumerableExtensions
{
    public static Stream ToStream<T>(this IAsyncEnumerable<T> stream, T? firstChunk = null, CancellationToken cancellationToken = default) 
        where T : DataContent
        => new DataContentAsyncEnumerableStream<T>(stream, firstChunk, cancellationToken);
}

// Internal class to handle an IAsyncEnumerable<T> as Stream
internal class DataContentAsyncEnumerableStream<T> : Stream 
    where T : DataContent
{
    internal DataContentAsyncEnumerableStream(IAsyncEnumerable<T> asyncEnumerable, T? firstChunk = null, CancellationToken cancellationToken = default);

    public override async Task<int> ReadAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken = default)
    {
        // Implementation
    }
}

Azure AI Speech SDK - Example

public class MySpeechToTextClient : ISpeechToTextClient
{
    public async Task<SpeechToTextResponse> GetResponseAsync(
        IList<IAsyncEnumerable<DataContent>> speechContents,
        SpeechToTextOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        // Adapt the first audio input sequence into a Stream for the Azure Speech SDK.
        using var audioStream = speechContents[0].ToStream();
        using var audioConfig = AudioConfig.FromStreamInput(audioStream);

        // ...
    }
}

SK Abstractions and Adapters

Similar to how we have the ChatClient and ChatService abstractions, we will have SpeechToTextClient and AudioToTextService abstractions: SpeechToTextClient will be the main entry point for the project to consume speech to text services, and AudioToTextService will be the main entry point for service implementations. Conversion extensions between the two are shown below, followed by a short usage sketch.

public static class AudioToTextServiceExtensions
{
    public static ISpeechToTextClient ToSpeechToTextClient(this IAudioToTextService service)
    {
        ArgumentNullException.ThrowIfNull(service);

        return service is ISpeechToTextClient client ?
            client :
            new SpeechToTextClientAudioToTextService(service);
    }

    public static IAudioToTextService ToAudioToTextService(this ISpeechToTextClient client)
    {
        ArgumentNullException.ThrowIfNull(client);
        return client is IAudioToTextService service ?
            service :
            new AudioToTextServiceSpeechToTextClient(client);
    }
}
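
Illustrative usage of these adapters, assuming existing instances of each abstraction (the variable names below are placeholders):

// Bridge an SK-style service to the M.E.AI client shape, and vice versa.
ISpeechToTextClient client = existingAudioToTextService.ToSpeechToTextClient();
IAudioToTextService service = existingSpeechToTextClient.ToAudioToTextService();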

@RogerBarreto changed the title from "M.E.AI - Audio transcription abstraction (WIP) - Missing UT + IT" to "M.E.AI.Abstractions - Audio transcription abstraction (WIP) - Missing UT + IT" on Feb 3, 2025
@RogerBarreto (Author)

@dotnet-policy-service agree company="Microsoft"

@dotnet-comment-bot (Collaborator)

‼️ Found issues ‼️

| Project | Coverage Type | Expected | Actual |
| --- | --- | --- | --- |
| Microsoft.Extensions.AI.Abstractions | Line | 83 | 69.66 🔻 |
| Microsoft.Extensions.AI.Abstractions | Branch | 83 | 66.2 🔻 |
| Microsoft.Gen.MetadataExtractor | Line | 98 | 57.35 🔻 |
| Microsoft.Gen.MetadataExtractor | Branch | 98 | 62.5 🔻 |
| Microsoft.Extensions.AI.Ollama | Line | 80 | 78.25 🔻 |

🎉 Good job! The coverage increased 🎉
Update MinCodeCoverage in the project files.

| Project | Expected | Actual |
| --- | --- | --- |
| Microsoft.Extensions.AI | 88 | 89 |
| Microsoft.Extensions.AI.OpenAI | 77 | 78 |

Full code coverage report: https://dev.azure.com/dnceng-public/public/_build/results?buildId=938431&view=codecoverage-tab

@RussKie added the area-ai (Microsoft.Extensions.AI libraries) label on Feb 4, 2025
@dotnet-comment-bot (Collaborator)

‼️ Found issues ‼️

| Project | Coverage Type | Expected | Actual |
| --- | --- | --- | --- |
| Microsoft.Extensions.Caching.Hybrid | Line | 86 | 82.77 🔻 |
| Microsoft.Extensions.AI.Abstractions | Line | 83 | 81.95 🔻 |
| Microsoft.Extensions.AI.Abstractions | Branch | 83 | 73.8 🔻 |
| Microsoft.Gen.MetadataExtractor | Line | 98 | 57.35 🔻 |
| Microsoft.Gen.MetadataExtractor | Branch | 98 | 62.5 🔻 |
| Microsoft.Extensions.AI.Ollama | Line | 80 | 78.25 🔻 |

🎉 Good job! The coverage increased 🎉
Update MinCodeCoverage in the project files.

| Project | Expected | Actual |
| --- | --- | --- |
| Microsoft.Extensions.AI.OpenAI | 77 | 78 |
| Microsoft.Extensions.AI | 88 | 89 |

Full code coverage report: https://dev.azure.com/dnceng-public/public/_build/results?buildId=942860&view=codecoverage-tab

@dotnet-comment-bot (Collaborator)

‼️ Found issues ‼️

| Project | Coverage Type | Expected | Actual |
| --- | --- | --- | --- |
| Microsoft.Extensions.AI.Ollama | Line | 80 | 78.11 🔻 |
| Microsoft.Extensions.Caching.Hybrid | Line | 86 | 82.92 🔻 |
| Microsoft.Extensions.AI.OpenAI | Line | 77 | 74.23 🔻 |
| Microsoft.Extensions.AI.OpenAI | Branch | 77 | 63.08 🔻 |
| Microsoft.Extensions.AI.Abstractions | Line | 83 | 81.36 🔻 |
| Microsoft.Extensions.AI.Abstractions | Branch | 83 | 74.51 🔻 |
| Microsoft.Gen.MetadataExtractor | Line | 98 | 57.35 🔻 |
| Microsoft.Gen.MetadataExtractor | Branch | 98 | 62.5 🔻 |

🎉 Good job! The coverage increased 🎉
Update MinCodeCoverage in the project files.

| Project | Expected | Actual |
| --- | --- | --- |
| Microsoft.Extensions.AI.AzureAIInference | 91 | 92 |
| Microsoft.Extensions.AI | 88 | 89 |

Full code coverage report: https://dev.azure.com/dnceng-public/public/_build/results?buildId=945523&view=codecoverage-tab

@dotnet-comment-bot (Collaborator)

‼️ Found issues ‼️

| Project | Coverage Type | Expected | Actual |
| --- | --- | --- | --- |
| Microsoft.Extensions.AI.Abstractions | Branch | 83 | 81.05 🔻 |
| Microsoft.Extensions.Caching.Hybrid | Line | 86 | 82.77 🔻 |
| Microsoft.Extensions.AI.Ollama | Line | 80 | 78.11 🔻 |
| Microsoft.Extensions.AI.OpenAI | Branch | 77 | 70.56 🔻 |
| Microsoft.Extensions.AI | Line | 88 | 80.31 🔻 |
| Microsoft.Extensions.AI | Branch | 88 | 87.64 🔻 |
| Microsoft.Gen.MetadataExtractor | Line | 98 | 57.35 🔻 |
| Microsoft.Gen.MetadataExtractor | Branch | 98 | 62.5 🔻 |

🎉 Good job! The coverage increased 🎉
Update MinCodeCoverage in the project files.

| Project | Expected | Actual |
| --- | --- | --- |
| Microsoft.Extensions.AI.AzureAIInference | 91 | 92 |

Full code coverage report: https://dev.azure.com/dnceng-public/public/_build/results?buildId=945918&view=codecoverage-tab

#endif
{
yield return (T)Activator.CreateInstance(typeof(T), [(ReadOnlyMemory<byte>)buffer, mediaType])!;
}
Member:

This is handing out the same buffer multiple times. It's not going to be obvious to a caller that if they grab a buffer and MoveNext, that MoveNext will have overwritten their buffer.

Author (@RogerBarreto):

Nice observation, fixed.

Member:

The issue still exists in the NET8+ path.

Member:

I think this method should not be public. We can ensure that we're consuming it appropriately in our own uses, but as a public method, we have to accommodate the possibility of misuse.

@luisquintanilla (Contributor)

cc: @Swimburger for visibility. Feedback is appreciated. Thanks!

@RogerBarreto changed the title from "M.E.AI.Abstractions - Audio transcription abstraction (WIP) - Missing UT + IT" to "M.E.AI.Abstractions - Speech to Text Abstraction (WIP) - Missing UT + IT" on Feb 23, 2025
@dotnet-comment-bot (Collaborator)

‼️ Found issues ‼️

| Project | Coverage Type | Expected | Actual |
| --- | --- | --- | --- |
| Microsoft.Extensions.AI.Abstractions | Branch | 82 | 78.82 🔻 |
| Microsoft.Extensions.AI | Line | 89 | 79.8 🔻 |
| Microsoft.Extensions.AI | Branch | 89 | 86.67 🔻 |

🎉 Good job! The coverage increased 🎉
Update MinCodeCoverage in the project files.

| Project | Expected | Actual |
| --- | --- | --- |
| Microsoft.Gen.MetadataExtractor | 57 | 70 |

Full code coverage report: https://dev.azure.com/dnceng-public/public/_build/results?buildId=960384&view=codecoverage-tab

@RogerBarreto marked this pull request as ready for review on February 26, 2025 09:48
@RogerBarreto requested review from a team as code owners on February 26, 2025 09:48
@RogerBarreto changed the title from "M.E.AI.Abstractions - Speech to Text Abstraction (WIP) - Missing UT + IT" to "M.E.AI.Abstractions - Speech to Text Abstraction" on Feb 26, 2025
/// <param name="cancellationToken">The <see cref="CancellationToken"/> to monitor for cancellation requests. The default is <see cref="CancellationToken.None"/>.</param>
/// <returns>The text generated by the client.</returns>
Task<SpeechToTextResponse> GetResponseAsync(
IList<IAsyncEnumerable<DataContent>> speechContents,
Member:

Are there any scenarios where an implementation is expected to mutate this? With chat, this is expected to be a history, but with speech-to-text, presumably it's generally more of a one-and-done kind of thing? Maybe this should be an IEnumerable instead of an IList?

Member:

Wait, I just noticed, this is an IList<IAsyncEnumerable<DataContent>> rather than an IAsyncEnumerable<DataContent>? The intent here is this handles multiple inputs, each of which is an asynchronously produced sequence of content?

/// </param>
/// <param name="providerUri">The URL for accessing the speech to text provider, if applicable.</param>
/// <param name="modelId">The ID of the speech to text model used, if applicable.</param>
public SpeechToTextClientMetadata(string? providerName = null, Uri? providerUri = null, string? modelId = null)
Member:

Is there any other common metadata that all known providers support?

public class SpeechToTextOptions
{
private CultureInfo? _speechLanguage;
private CultureInfo? _textLanguage;
Member:

Why are these CultureInfos? Where are these culture info objects used?

/// <summary>Initializes a new instance of the <see cref="SpeechToTextResponse"/> class.</summary>
/// <param name="choices">The list of choices in the response, one message per choice.</param>
[JsonConstructor]
public SpeechToTextResponse(IList<SpeechToTextMessage> choices)
Member:

What does choices map to here? Does that map to the multiple inputs provided to the GetResponseAsync method? Choices is the wrong name for that, I think.

public static SpeechToTextResponseUpdateKind TextUpdated { get; } = new("textupdated");

/// <summary>Gets when the generated text session is closed.</summary>
public static SpeechToTextResponseUpdateKind SessionClose { get; } = new("sessionclose");
Member:

Is the expectation that in an update sequence you always get a session open, then zero or more pairs of textupdating/textupdated, and then a session close, with zero or more errors sprinkled throughout?

private static ISpeechToTextClient CreateSpeechToTextClient(HttpClient httpClient, string modelId) =>
new OpenAIClient(new ApiKeyCredential("apikey"), new OpenAIClientOptions { Transport = new HttpClientPipelineTransport(httpClient) })
.AsSpeechToTextClient(modelId);
}
Member:

Can we avoid adding these large wav files? We especially don't want to add them multiple times.

Once merged, they'll be in the repo's history forever.

@stephentoub (Member)

@RogerBarreto, anything I can do to help move this along? Thanks!

@stephentoub force-pushed the audio-transcription-abstraction branch from d82f394 to d5ca910 on March 18, 2025 22:25
@stephentoub force-pushed the audio-transcription-abstraction branch from d5ca910 to 4bdb7b9 on March 18, 2025 22:26
IList<IAsyncEnumerable<DataContent>> speechContents, SpeechToTextOptions? options = null, CancellationToken cancellationToken = default)
{
return InnerClient.GetStreamingResponseAsync(speechContents, options, cancellationToken);
return InnerClient.TranscribeStreamingAudioAsync(speechContents, options, cancellationToken);
@RogerBarreto (Author) commented on Mar 19, 2025:

Quick observation: as you bring the Transcribe signature back, will the ISpeechToTextClient interface also have Translate___Async signatures? Is that the rationale?

My original thinking in using the Response naming was to accommodate those two functionalities in the same method, given the change from IAudioTranscriptionClient to ISpeechToTextClient.

@stephentoub (Member) commented on Mar 20, 2025:

Given audio in and text out, what's the difference between transcribe and translate? Isn't the latter still transcription, doing speech recognition to go from audio to text, "just" with a possibly different target language than the audio content?

@RogerBarreto (Author):

By definition those are different, and I opted for avoiding any confusion if possible.

"just" with a possibly different target language than the audio content

Yes, currently this is the main difference, while translation may also have multiple outputs (for multiple different language translations from the same input).

Having a dedicated interface for each has its benefits but the underlying functionality is quite the same.

For reference, this is the answer I got from an AI:

Is there any difference between speech transcription and translation ? can one be a subset of another?

Yes, there is a difference between speech transcription and translation, and while they are related concepts, one is not necessarily a subset of the other. Let me break it down:

Speech Transcription
Speech transcription involves converting spoken language (audio) into written text in the same language. For example, if someone speaks in English, transcription would produce a written English version of what was said. The focus is on accurately capturing the words, and sometimes additional details like tone, pauses, or speaker identification (e.g., in a multi-speaker setting like a podcast or interview). It’s about representing the spoken content in a textual form without changing the language.

Translation
Translation, on the other hand, involves converting text or speech from one language to another. For example, translating spoken English into written Spanish or spoken French into written English. The goal is to preserve the meaning and intent of the original content while adapting it to a different language, which often requires cultural and linguistic adjustments beyond just word-for-word conversion.

Key Differences
• Language Change: Transcription stays within the same language; translation shifts between languages.
• Process: Transcription is about capturing what’s said as text, while translation involves interpreting and rephrasing meaning in another language.
• Purpose: Transcription is often used for documentation (e.g., court records, subtitles), while translation is used to make content accessible to speakers of other languages.
Can One Be a Subset of the Other?
Not exactly, but they can overlap or be part of a broader process:

Transcription as a Step in Translation: In some workflows, speech is first transcribed into text in the original language, and then that text is translated into another language. For example, a Spanish speech might be transcribed into Spanish text and then translated into English. Here, transcription is a precursor to translation, but it’s not a subset—it’s a distinct step.
Real-Time Speech Translation: Modern technology (like AI-powered interpreters) can combine transcription and translation into a seamless process, where spoken words in one language are directly converted to text or speech in another. In this case, transcription might happen internally as part of the translation pipeline, but they remain separate functions conceptually.
Conclusion
Transcription and translation serve different purposes and operate at different levels of language processing. While they can work together (e.g., transcribe then translate), neither is inherently a subset of the other—they’re distinct tools in the language toolkit.
