Summary
The `@google/genai` SDK provides a GA Live API (`client.live.connect()`) for low-latency, real-time bidirectional audio, video, and text interaction with Gemini models. This repo has zero instrumentation for any Live API surface — no channels, no plugin handlers, no wrapper proxies, and no auto-instrumentation configs. Users building real-time voice or multimodal AI applications with the Google GenAI Live API get no Braintrust spans.
What instrumentation is missing
No coverage in any layer:
- Wrapper (`js/src/wrappers/google-genai.ts`): only intercepts `instance.models` (lines 74–76). No proxy for the `live` resource.
- Auto-instrumentation config (`js/src/auto-instrumentations/configs/google-genai.ts`): only defines configs for `generateContentInternal` and `generateContentStreamInternal`. No config for Live API methods.
- Channels (`js/src/instrumentation/plugins/google-genai-channels.ts`): only two channels defined (`generateContent`, `generateContentStream`). No Live API channels.
- Plugin (`js/src/instrumentation/plugins/google-genai-plugin.ts`): no handler for Live API calls.
- Vendor types (`js/src/vendor-sdk-types/google-genai.ts`): `GoogleGenAIClient` only declares a `models` property. No `live` property.
A grep for `live.connect`, `sendRealtimeInput`, and `sendClientContent` across `js/src/` returns zero matches.
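To make the wrapper gap concrete, here is a hypothetical sketch of how a `live` proxy could mirror the existing `models` interception. The `LiveResource`/`LiveSession` shapes, the `wrapLive` helper, and the span bookkeeping are illustrative stand-ins, not the repo's actual implementation or the SDK's real types:

```typescript
// Hypothetical: minimal shapes standing in for @google/genai's live surface.
type LiveSession = {
  sendClientContent: (input: { turns: string }) => void;
};

type LiveResource = {
  connect: (opts: { model: string }) => Promise<LiveSession>;
};

// Stand-in for a span sink; a real wrapper would start Braintrust spans.
const spans: string[] = [];

function wrapLive(live: LiveResource): LiveResource {
  return new Proxy(live, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (prop === "connect" && typeof value === "function") {
        // Record a span when the session opens. A real implementation would
        // also proxy the returned session's methods (sendClientContent,
        // sendRealtimeInput, sendToolResponse) to log each turn.
        return async (opts: { model: string }) => {
          spans.push(`live.connect model=${opts.model}`);
          return value.call(target, opts);
        };
      }
      return value;
    },
  });
}

// Mock client standing in for client.live:
const mockLive: LiveResource = {
  connect: async () => ({ sendClientContent: () => {} }),
};

const wrapped = wrapLive(mockLive);
void wrapped.connect({ model: "gemini-2.0-flash-live-001" });
console.log(spans[0]); // → live.connect model=gemini-2.0-flash-live-001
```

The key design point is that, unlike `models.generateContent()`, a Live session is long-lived, so the wrapper has to trace both the `connect()` call and the session object it returns.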
Key upstream API surfaces with no Braintrust tracing:
| SDK method | Description |
| --- | --- |
| `client.live.connect({ model, config })` | Creates a WebSocket session for real-time interaction. Config includes system instructions, tools, voice, response modalities, and speech config. |
| `session.sendClientContent()` | Sends text content in the session. |
| `session.sendRealtimeInput()` | Sends real-time audio/video data. |
| `session.sendToolResponse()` | Sends tool call results back to the model. |
The Live API session emits server events including model responses with usage metadata and tool call requests. These contain the same kind of data (content, token counts, tool calls) that is already captured for `models.generateContent()` in this repo.
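As an illustration, extracting the span-relevant fields from a Live API server event could look like the sketch below. The message shape (`serverContent`, `usageMetadata`, `toolCall`) follows the public Live API docs; the exact TypeScript types in `@google/genai` may differ, so treat the field names as assumptions:

```typescript
// Assumed shape of a Live API server message, per the public docs.
type LiveServerMessage = {
  serverContent?: { turnComplete?: boolean };
  usageMetadata?: { promptTokenCount?: number; responseTokenCount?: number };
  toolCall?: { functionCalls?: { name: string }[] };
};

// Collect the same kind of data the repo already records for
// models.generateContent(): token counts and tool call names.
function extractSpanFields(msg: LiveServerMessage) {
  return {
    promptTokens: msg.usageMetadata?.promptTokenCount ?? 0,
    responseTokens: msg.usageMetadata?.responseTokenCount ?? 0,
    toolCalls: msg.toolCall?.functionCalls?.map((c) => c.name) ?? [],
  };
}

const sample: LiveServerMessage = {
  usageMetadata: { promptTokenCount: 12, responseTokenCount: 34 },
  toolCall: { functionCalls: [{ name: "get_weather" }] },
};

console.log(extractSpanFields(sample));
```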
Braintrust docs status
`not_found` — The Braintrust Gemini integration page at https://www.braintrust.dev/docs/integrations/ai-providers/gemini documents `generateContent` and `generateContentStream` as supported methods. The Live API is not mentioned.
Upstream reference
- Google GenAI Live API docs: https://ai.google.dev/gemini-api/docs/live
- SDK method: `client.live.connect({ model, config })` — returns a WebSocket session
- Capabilities: real-time audio/video/text interaction, tool calling, 70+ language support, interruption handling
- Session config: `systemInstruction`, `tools`, `responseModalities`, `speechConfig`, `realtimeInputConfig`
- This is a production API documented on the official Google AI for Developers site.
- The `@google/genai` SDK (v1.48.0+) provides full TypeScript support for the Live API.
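For reference, a session config combining the fields listed above might look like the sketch below. The field names come from the public Live API docs; the concrete values (voice name, tool entry, activity-detection object) are illustrative, not defaults, and the SDK's real types may be stricter:

```typescript
// Illustrative Live session config; each top-level field is a surface an
// instrumentation layer could record as span metadata when
// client.live.connect() is called.
const liveConfig = {
  responseModalities: ["AUDIO"],
  systemInstruction: "You are a concise voice assistant.",
  speechConfig: {
    voiceConfig: { prebuiltVoiceConfig: { voiceName: "Puck" } },
  },
  realtimeInputConfig: { automaticActivityDetection: {} },
  tools: [{ googleSearch: {} }],
};

console.log(Object.keys(liveConfig).sort().join(","));
// → realtimeInputConfig,responseModalities,speechConfig,systemInstruction,tools
```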
Precedent
- This is the Google equivalent of the OpenAI Realtime API gap.
- Both represent a new category of AI API (WebSocket-based real-time sessions) that the current instrumentation architecture does not cover.
Local files inspected
- `js/src/wrappers/google-genai.ts` — only wraps the `models` property
- `js/src/auto-instrumentations/configs/google-genai.ts` — no Live configs
- `js/src/instrumentation/plugins/google-genai-channels.ts` — no Live channels
- `js/src/instrumentation/plugins/google-genai-plugin.ts` — no Live handlers
- `js/src/vendor-sdk-types/google-genai.ts` — no `live` property on client type
- `e2e/scenarios/google-genai-instrumentation/` — no Live test scenarios