Summary
The Google GenAI video generation API (models.generate_videos() / async_models.generate_videos()) is not instrumented. Calls that generate videos with Veo models produce no Braintrust tracing at all. This is the video counterpart to models.generate_images(), which IS instrumented in this repo.
What is missing
| Google GenAI Method | Instrumented? |
| --- | --- |
| models.generate_content() | Yes |
| models.generate_content_stream() | Yes |
| models.embed_content() | Yes |
| models.generate_images() | Yes |
| models.generate_videos() | No |
| async_models.generate_videos() | No |
The generate_videos() method is the SDK entry point for text-to-video and image-to-video generation using Google's Veo models (Veo 2 is stable; Veo 3.1 is preview). It returns an asynchronous long-running Operation that must be polled until completion.
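The long-running contract can be sketched with stubs in place of the SDK. FakeOperation, FakeOperations, and poll_until_done below are illustrative stand-ins, not real google-genai classes; the real client returns an operation object that the caller refreshes via the operations service until its done flag flips:

```python
import time
from dataclasses import dataclass
from typing import Optional

# Stand-ins for the SDK objects (hypothetical; the real google-genai client
# returns a long-running operation that is refreshed by polling the service).
@dataclass
class FakeOperation:
    done: bool = False
    polls_remaining: int = 2
    result: Optional[str] = None

class FakeOperations:
    def get(self, op: FakeOperation) -> FakeOperation:
        # Each poll moves the fake operation closer to completion.
        op.polls_remaining -= 1
        if op.polls_remaining <= 0:
            op.done = True
            op.result = "generated-video"
        return op

def poll_until_done(operations: FakeOperations, op: FakeOperation,
                    interval_s: float = 0.0) -> FakeOperation:
    # Mirrors the documented Veo usage pattern: sleep, refresh, re-check done.
    while not op.done:
        time.sleep(interval_s)
        op = operations.get(op)
    return op

finished = poll_until_done(FakeOperations(), FakeOperation())
print(finished.done, finished.result)  # True generated-video
```

For instrumentation, this polling loop is why a single request/response span is not enough: the span has to stay open (or be finalized separately) across the submission and the final poll.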
At minimum, instrumentation should create a span capturing:
- Input: prompt text, model name, image reference (for image-to-video), generation config
- Output: video metadata (duration, format, number of videos generated)
- Metrics: latency (total operation time from submission to completion)
- Metadata: model, aspect ratio, resolution, person generation setting
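A minimal sketch of the span payload such instrumentation might assemble. generate_videos_traced and _FakeResult are hypothetical names, not part of this repo; the field names mirror the list above:

```python
import time
from typing import Any, Callable, Dict

def generate_videos_traced(fn: Callable[..., Any], **kwargs: Any) -> Dict[str, Any]:
    # Hypothetical wrapper: time the call and collect the span fields listed above.
    start = time.time()
    result = fn(**kwargs)
    return {
        "input": {
            "prompt": kwargs.get("prompt"),
            "model": kwargs.get("model"),
            "image": kwargs.get("image"),   # image-to-video reference, if any
            "config": kwargs.get("config"),
        },
        "output": {
            # Count videos rather than logging raw bytes.
            "n_videos": len(getattr(result, "generated_videos", []) or []),
        },
        "metrics": {"duration_s": time.time() - start},
        "metadata": {"model": kwargs.get("model")},
    }

class _FakeResult:  # stub standing in for the SDK response object
    generated_videos = ["video-0"]

span = generate_videos_traced(lambda **kw: _FakeResult(),
                              model="veo-2.0-generate-001",
                              prompt="a sunrise timelapse")
print(span["output"]["n_videos"], span["input"]["model"])  # 1 veo-2.0-generate-001
```

Logging a count and metadata instead of raw video bytes keeps span payloads small, matching how the image patchers avoid shipping binary content into traces.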
The pattern would mirror the existing ModelsGenerateImagesPatcher / AsyncModelsGenerateImagesPatcher, adapted for the video response format and long-running operation semantics.
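The patching mechanics could follow the same shape as the image patchers. A toy sketch, under the assumption that patching means swapping the bound method for a wrapper; FakeModels and patch_generate_videos are illustrative, not the repo's actual classes:

```python
import functools
from typing import Any, Callable, List

class FakeModels:
    # Stand-in for the google-genai models namespace.
    def generate_videos(self, *, model: str, prompt: str) -> dict:
        return {"model": model, "prompt": prompt}

def patch_generate_videos(models: FakeModels,
                          on_event: Callable[[str, Any], None]) -> None:
    # Core of a hypothetical ModelsGenerateVideosPatcher: replace the bound
    # method with a wrapper that reports start/end events around the call.
    original = models.generate_videos

    @functools.wraps(original)
    def wrapped(*args: Any, **kwargs: Any) -> Any:
        on_event("start", kwargs)
        result = original(*args, **kwargs)
        on_event("end", result)
        return result

    models.generate_videos = wrapped

events: List[str] = []
m = FakeModels()
patch_generate_videos(m, lambda kind, payload: events.append(kind))
out = m.generate_videos(model="veo-2.0-generate-001", prompt="waves")
print(events, out["prompt"])  # ['start', 'end'] waves
```

The real patcher would additionally need to keep the span open across the long-running operation's polling phase rather than closing it at the "end" event.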
Braintrust docs status
not_found — The Gemini integration page documents generate_content, streaming, function calling, structured outputs, thinking tokens, and context caching. Image generation is supported via the existing generate_images patcher. Video generation is not mentioned.
Upstream sources
- client.models.generate_videos(model="veo-2.0-generate-001", ...) — stable Veo 2 model
Local files inspected
- py/src/braintrust/integrations/google_genai/patchers.py — defines ModelsGenerateImagesPatcher and AsyncModelsGenerateImagesPatcher but no video generation patchers; zero references to generate_videos
- py/src/braintrust/integrations/google_genai/tracing.py — contains _generate_images_wrapper and _async_generate_images_wrapper but no video generation wrappers
- py/src/braintrust/integrations/google_genai/integration.py — integration class registers image generation patchers but no video patchers
- py/src/braintrust/integrations/google_genai/test_google_genai.py — no video generation test cases
- py/noxfile.py — test_google_genai session exists but no video-specific coverage
Relationship to existing issues
This is the same class of gap as #124 (OpenAI Images API not instrumented) — a stable generative media API that the wrapper silently skips. The difference is that Google GenAI generate_images() IS already instrumented, making generate_videos() an adjacent gap in the same integration.