Add live audio transcription streaming support to Foundry Local C# SDK #485
Open
Conversation
added 2 commits on March 10, 2026 18:09
Contributor
Pull request overview
Adds a new C# SDK API for live/streaming audio transcription sessions (push PCM chunks, receive incremental/final text results) and includes a Windows microphone demo sample.
Changes:
- Introduces `LiveAudioTranscriptionSession` plus result/error types for streaming ASR over Core interop.
- Extends Core interop to support audio stream start/push/stop (including binary payload routing).
- Adds a `samples/cs/LiveAudioTranscription` demo project and updates the audio client factory API.
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| sdk_v2/cs/test/FoundryLocal.Tests/Utils.cs | Replaced prior test utilities with ad-hoc top-level streaming harness code (currently breaks test build). |
| sdk_v2/cs/test/FoundryLocal.Tests/ModelTests.cs | Adds trailing blank lines (formatting noise). |
| sdk_v2/cs/src/OpenAI/LiveAudioTranscriptionTypes.cs | Adds LiveAudioTranscriptionResult and a structured Core error type. |
| sdk_v2/cs/src/OpenAI/LiveAudioTranscriptionClient.cs | Adds LiveAudioTranscriptionSession implementation (channels, retry, stop semantics). |
| sdk_v2/cs/src/OpenAI/AudioClient.cs | Adds CreateLiveTranscriptionSession() and removes the public file streaming transcription API. |
| sdk_v2/cs/src/Detail/JsonSerializationContext.cs | Registers new audio streaming types for source-gen JSON. |
| sdk_v2/cs/src/Detail/ICoreInterop.cs | Adds interop structs + methods for audio stream start/push/stop. |
| sdk_v2/cs/src/Detail/CoreInterop.cs | Implements binary command routing via execute_command_with_binary and start/stop routing via execute_command. |
| sdk_v2/cs/src/AssemblyInfo.cs | Adds InternalsVisibleTo("AudioStreamTest"). |
| samples/cs/LiveAudioTranscription/README.md | Documentation for the live transcription demo sample. |
| samples/cs/LiveAudioTranscription/Program.cs | Windows microphone demo using NAudio + new session API. |
| samples/cs/LiveAudioTranscription/LiveAudioTranscription.csproj | Adds sample project dependencies and references the SDK project (path currently incorrect). |
samples/cs/LiveAudioTranscription/LiveAudioTranscription.csproj (4 resolved review comments on an outdated revision)
…g-support-sdk
# Conflicts:
#	sdk/js/test/openai/chatClient.test.ts
nenad1002 reviewed Mar 27, 2026
samples/cs/GettingStarted/src/LiveAudioTranscriptionExample/Program.cs (resolved review comment on an outdated revision)
…ionItem pattern (#561)

### Description

Redesigns `LiveAudioTranscriptionResponse` to follow the OpenAI Realtime API's `ConversationItem` shape, enabling forward compatibility with a future WebSocket-based architecture.

**Motivation:**
- Customers using OpenAI's Realtime API access transcription via `result.content[0].transcript`
- By adopting this pattern now, customers who write `result.Content[0].Text` won't need to change their code when we migrate to WebSocket transport
- Aligns with the team's plan to move toward OpenAI Realtime API compatibility

**Before:**
```csharp
// Extended AudioCreateTranscriptionResponse from Betalgo
await foreach (var result in session.GetTranscriptionStream())
{
    Console.Write(result.Text);      // inherited from base
    bool final = result.IsFinal;     // custom field
    var segments = result.Segments;  // inherited from base
}
```

**After:**
```csharp
// Own type shaped like OpenAI Realtime ConversationItem
await foreach (var result in session.GetTranscriptionStream())
{
    Console.Write(result.Content[0].Text);        // ConversationItem pattern
    Console.Write(result.Content[0].Transcript);  // alias for Text (Realtime compat)
    bool final = result.IsFinal;
    double? start = result.StartTime;
}
```

**Changes:**

| File | Change |
|------|--------|
| LiveAudioTranscriptionTypes.cs | Removed `AudioCreateTranscriptionResponse` inheritance. New standalone `LiveAudioTranscriptionResponse` with `Content` list plus new `TranscriptionContentPart` type |
| LiveAudioTranscriptionClient.cs | Updated text checks: `.Text` → `.Content?[0]?.Text` |
| JsonSerializationContext.cs | Registered `TranscriptionContentPart`, removed `AudioCreateTranscriptionResponse.Segment` |
| LiveAudioTranscriptionTests.cs | Updated assertions to match new type shape |
| Program.cs (sample) | Updated result reading to `result.Content?[0]?.Text` |
| README.md | Updated docs and output type table |

**Key design decisions:**
- `TranscriptionContentPart` has both `Text` and `Transcript` (set to the same value) for maximum compatibility with both Whisper and Realtime API patterns
- `StartTime`/`EndTime` are top-level on the response (not nested in Segments): simpler access, maps to Realtime's `audio_start_ms`/`audio_end_ms`
- No dependency on Betalgo's `ConversationItem`: we own the type to avoid carrying unused chat/tool-calling fields
- `LiveAudioTranscriptionRaw` (Core JSON deserialization) is unchanged: this is purely an SDK presentation change, no Core/neutron-server impact

**No breaking changes to:** Core API, native interop, audio pipeline, session lifecycle

---------

Co-authored-by: ruiren_microsoft <ruiren@microsoft.com>
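For reference, the ConversationItem-shaped response described in this PR can be sketched as plain C# types. Property names (`Content`, `Text`, `Transcript`, `IsFinal`, `StartTime`, `EndTime`) are taken from the PR text; attributes, nullability annotations, and serialization details are simplifications, not the SDK's actual source.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the new standalone response shape (names from the PR text;
// the real SDK types may differ in attributes and serialization setup).
public sealed class TranscriptionContentPart
{
    public string? Text { get; init; }
    // Alias of Text, kept for OpenAI Realtime API compatibility.
    public string? Transcript { get; init; }
}

public sealed class LiveAudioTranscriptionResponse
{
    public IReadOnlyList<TranscriptionContentPart>? Content { get; init; }
    public bool IsFinal { get; init; }
    // Top-level timing, mapping to Realtime's audio_start_ms / audio_end_ms.
    public double? StartTime { get; init; }
    public double? EndTime { get; init; }
}
```

Keeping `Text` and `Transcript` as sibling properties (rather than one computed from the other) matches the PR's stated goal of serving both Whisper-style and Realtime-style consumers from a single payload.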
added 4 commits on March 30, 2026 12:53
…g-support-sdk
# Conflicts:
#	.github/workflows/build-js-steps.yml
#	sdk/js/script/install.cjs
kunal-vaishnavi approved these changes Mar 30, 2026
Description:
Adds real-time audio streaming support to the Foundry Local C# SDK, enabling live microphone-to-text transcription via ONNX Runtime GenAI's StreamingProcessor API (Nemotron ASR).
The existing OpenAI `AudioClient` only supports file-based transcription. This PR introduces `LiveAudioTranscriptionSession`, which accepts continuous PCM audio chunks (e.g., from a microphone) and returns partial/final transcription results as an async stream.

What's included

New files
- `src/OpenAI/LiveAudioTranscriptionClient.cs`: Streaming session with `StartAsync()`, `AppendAsync()`, `GetTranscriptionStream()`, `StopAsync()`
- `src/OpenAI/LiveAudioTranscriptionTypes.cs`: `LiveAudioTranscriptionResponse` (extends `AudioCreateTranscriptionResponse`) and `CoreErrorResponse` types
- `test/FoundryLocal.Tests/LiveAudioTranscriptionTests.cs`: Unit tests for deserialization, settings, state guards

Modified files
- `src/OpenAI/AudioClient.cs`: Added `CreateLiveTranscriptionSession()` factory method
- `src/Detail/ICoreInterop.cs`: Added `StreamingRequestBuffer` struct and `StartAudioStream`, `PushAudioData`, `StopAudioStream` interface methods
- `src/Detail/CoreInterop.cs`: Routes audio commands through existing `execute_command` / `execute_command_with_binary` native entry points
- `src/Detail/JsonSerializationContext.cs`: Registered `LiveAudioTranscriptionResponse` for AOT compatibility

API surface
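A hedged end-to-end usage sketch of this surface, using only the method names listed above. How `client` is obtained and the `GetNextMicrophoneBuffer` helper are assumptions for illustration; they are not part of this PR excerpt.

```csharp
// Hypothetical usage sketch of the streaming session API described above.
// `client` stands in for an initialized audio client; construction not shown here.
var session = client.CreateLiveTranscriptionSession();

await session.StartAsync();

// Drain results concurrently while audio is being pushed.
var consume = Task.Run(async () =>
{
    await foreach (var result in session.GetTranscriptionStream())
    {
        // In this PR's shape, Text is inherited from AudioCreateTranscriptionResponse;
        // after the #561 redesign this becomes result.Content?[0]?.Text.
        Console.Write(result.Text);
        if (result.IsFinal) Console.WriteLine();
    }
});

// Push 16-bit PCM chunks from any thread (e.g., an NAudio DataAvailable callback).
byte[] pcmChunk = GetNextMicrophoneBuffer(); // assumed helper, not in the SDK
await session.AppendAsync(pcmChunk);

// StopAsync flushes pending audio and always releases the native session.
await session.StopAsync();
await consume;
```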
Design highlights
- `LiveAudioTranscriptionResponse` extends `AudioCreateTranscriptionResponse` for a consistent output format with file-based transcription
- `Channel<T>` serializes audio pushes from any thread (safe for mic callbacks) with backpressure
- Settings are validated in `StartAsync()` and immutable during the session
- `StopAsync` always calls native stop even if cancelled, preventing native session leaks
- The session uses an internal `CancellationTokenSource`, decoupled from the caller's token
- `StartAudioStream` and `StopAudioStream` route through `execute_command`; `PushAudioData` routes through `execute_command_with_binary`; no new native entry points required

Core integration (neutron-server)
The Core side (`AudioStreamingSession.cs`) uses `StreamingProcessor` + `Generator` + `Tokenizer` + `TokenizerStream` from onnxruntime-genai to perform real-time RNNT decoding. The native commands (`audio_stream_start`/`push`/`stop`) are handled as cases in `NativeInterop.ExecuteCommandManaged` / `ExecuteCommandWithBinaryManaged`.

Verified working
- `StreamingProcessor` pipeline verified with a WAV file (correct transcript)
- `TranscribeChunk` byte[] PCM path matches the reference float[] path exactly
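The `Channel<T>` backpressure design called out in the design highlights can be illustrated with a small, SDK-independent sketch using `System.Threading.Channels` from the .NET BCL. The channel options and chunk sizes here are illustrative choices, not the SDK's actual internals.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// A bounded Channel<byte[]> lets any thread (e.g., a mic callback) push audio
// while a single consumer drains it; Wait mode blocks writers when the
// channel is full, which is what provides backpressure.
var channel = Channel.CreateBounded<byte[]>(new BoundedChannelOptions(capacity: 8)
{
    FullMode = BoundedChannelFullMode.Wait,
    SingleReader = true,   // one transcription loop
    SingleWriter = false,  // pushes may come from any thread
});

var consumer = Task.Run(async () =>
{
    long total = 0;
    await foreach (var chunk in channel.Reader.ReadAllAsync())
        total += chunk.Length;
    return total;
});

// Simulate mic callbacks pushing 10 chunks of 320 bytes
// (10 ms of 16 kHz / 16-bit mono PCM per chunk).
for (int i = 0; i < 10; i++)
    await channel.Writer.WriteAsync(new byte[320]);

channel.Writer.Complete();          // analogous to StopAsync's flush
Console.WriteLine(await consumer);  // prints 3200
```

Completing the writer lets the consumer's `ReadAllAsync` loop end cleanly, which mirrors the session's stop semantics: pending chunks are drained before the stream terminates.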