
[BOT ISSUE] Mistral: Audio APIs (client.audio.speech, client.audio.transcriptions) not instrumented #223

@braintrust-bot

Description


Summary

The Mistral Audio APIs are not instrumented. Calls to client.audio.speech.create() (text-to-speech) and client.audio.transcriptions.create() (speech-to-text) produce no Braintrust spans at all. These are documented, production APIs on the Mistral platform.

The Braintrust Mistral integration instruments chat completions, embeddings, FIM, and agents, but has no patchers for the audio resource.

What is missing

| Mistral resource | Methods | Instrumented? |
| --- | --- | --- |
| `client.chat` | `complete()`, `stream()` | Yes |
| `client.embeddings` | `create()` | Yes |
| `client.fim` | `complete()`, `stream()` | Yes |
| `client.agents` | `complete()`, `stream()` | Yes |
| `client.audio.speech` | `create()`, `create_async()` | No |
| `client.audio.transcriptions` | `create()`, `create_async()` | No |

Text-to-Speech (audio.speech)

Generates spoken audio from text input. Instrumentation should capture:

  • Input: text content, voice selection, audio format
  • Output: audio metadata (duration, format, size)
  • Metrics: latency, model
  • Metadata: voice ID, output format
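The fields above could be captured with a wrapper shaped roughly like the following sketch. All names here are illustrative: `traced_speech_create` and the dict-based span are stand-ins for a real patcher and a real Braintrust span, and `create_fn` stands in for `client.audio.speech.create`.

```python
import time
from typing import Any, Callable, Dict


def traced_speech_create(create_fn: Callable[..., bytes], **kwargs: Any) -> Dict[str, Any]:
    """Wrap a text-to-speech call and record the fields listed above.

    The returned dict is a stand-in for a Braintrust span; a real patcher
    would log these fields via the SDK instead of returning them.
    """
    span: Dict[str, Any] = {
        "input": {"text": kwargs.get("input"), "voice": kwargs.get("voice")},
        "metadata": {"model": kwargs.get("model"), "format": kwargs.get("format")},
    }
    start = time.time()
    audio_bytes = create_fn(**kwargs)
    span["metrics"] = {"latency_s": time.time() - start}
    # The output is binary audio, so log metadata (size/format), not content.
    span["output"] = {"size_bytes": len(audio_bytes), "format": kwargs.get("format")}
    return span


# Usage with a stub TTS backend (no network, hypothetical argument names):
fake_tts = lambda **kw: b"\x00" * 1024
span = traced_speech_create(
    fake_tts, input="hello", voice="example-voice", model="example-model", format="mp3"
)
```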

Speech-to-Text (audio.transcriptions)

Converts audio files into text transcripts. Also supports SSE streaming transcription (POST /v1/audio/transcriptions#stream). Instrumentation should capture:

  • Input: audio file reference, language, response format
  • Output: transcribed text
  • Metrics: latency, model, audio duration
  • Metadata: language, response format
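A transcription wrapper would follow the same pattern, with the transcript as the span output and audio duration in the metrics. Again, every name here is a hypothetical stub: `create_fn` stands in for `client.audio.transcriptions.create`, and the result dict's `text`/`duration` keys are assumptions about the response shape.

```python
import time
from typing import Any, Callable, Dict


def traced_transcription_create(
    create_fn: Callable[..., Dict[str, Any]], **kwargs: Any
) -> Dict[str, Any]:
    """Wrap a speech-to-text call and record the fields listed above."""
    span: Dict[str, Any] = {
        "input": {"file": kwargs.get("file"), "language": kwargs.get("language")},
        "metadata": {
            "model": kwargs.get("model"),
            "response_format": kwargs.get("response_format"),
        },
    }
    start = time.time()
    result = create_fn(**kwargs)
    span["metrics"] = {
        "latency_s": time.time() - start,
        # Assumes the response reports the source audio's duration.
        "audio_duration_s": result.get("duration"),
    }
    span["output"] = {"text": result.get("text")}
    return span


# Usage with a stub STT backend (no network involved):
fake_stt = lambda **kw: {"text": "hello world", "duration": 2.5}
span = traced_transcription_create(
    fake_stt, file="clip.wav", language="en", model="example-model"
)
```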

Relationship to existing issues

This is the Mistral equivalent of:

All three are the same class of gap across different provider SDKs: stable audio generation and transcription APIs that produce zero tracing.

Braintrust docs status

not_found — The Mistral integration page documents chat completions only. No mention of audio API support.

Upstream sources

  • Mistral API reference — Audio endpoints: POST /v1/audio/speech, POST /v1/audio/transcriptions, POST /v1/audio/transcriptions#stream (documented at https://docs.mistral.ai/api/)
  • Mistral Python SDK (mistralai v2.3.1 on PyPI): https://pypi.org/project/mistralai/ — includes audio speech and transcription resources
  • The Audio APIs support both synchronous (create()) and asynchronous (create_async()) operations
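Because both sync and async variants exist, the instrumentation also needs an awaitable wrapper. A minimal sketch, assuming a hypothetical async stand-in for `client.audio.transcriptions.create_async` and an assumed `text` key on the response:

```python
import asyncio
import time
from typing import Any, Awaitable, Callable, Dict


async def traced_create_async(
    create_fn: Callable[..., Awaitable[Dict[str, Any]]], **kwargs: Any
) -> Dict[str, Any]:
    """Async counterpart of a traced create(): awaits the call and records
    latency around it (create_fn is an illustrative stub)."""
    start = time.time()
    result = await create_fn(**kwargs)
    return {
        "output": result.get("text"),
        "metrics": {"latency_s": time.time() - start},
    }


async def fake_stt_async(**kwargs: Any) -> Dict[str, Any]:
    await asyncio.sleep(0)  # simulate I/O
    return {"text": "async transcript"}


span = asyncio.run(traced_create_async(fake_stt_async, model="example-model"))
```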

Local files inspected

  • py/src/braintrust/integrations/mistral/patchers.py — defines patchers for Chat, Embeddings, Fim, Agents; zero references to audio, speech, or transcription
  • py/src/braintrust/integrations/mistral/tracing.py — wrapper functions for chat, embeddings, FIM, agents only; no audio wrappers (though _normalize_special_payloads handles input_audio type for chat message inputs, the dedicated Audio API endpoints are not wrapped)
  • py/src/braintrust/integrations/mistral/integration.py — integration class registers 4 composite patchers; no AudioPatcher
  • py/src/braintrust/integrations/mistral/test_mistral.py — no audio test cases
  • py/noxfile.py — the pytest_mistral session tests against LATEST and 1.12.4; no audio coverage
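The missing AudioPatcher would presumably follow the same monkey-patching pattern as the existing Chat/Embeddings/Fim/Agents patchers. A self-contained sketch of that pattern, using a fake SDK class and a list as a stand-in for Braintrust's span sink (the real patcher API in patchers.py may differ):

```python
import functools
from typing import Any, Dict, List

captured: List[Dict[str, Any]] = []  # stand-in for Braintrust's span sink


class FakeSpeech:
    """Stand-in for the SDK's audio.speech resource (illustrative only)."""

    def create(self, **kwargs: Any) -> bytes:
        return b"\x00" * 16


def patch_create(cls: type) -> None:
    """Replace cls.create with a traced version, mirroring how the existing
    patchers wrap SDK methods in place."""
    original = cls.create

    @functools.wraps(original)
    def traced(self: Any, **kwargs: Any) -> bytes:
        out = original(self, **kwargs)
        # Binary output: record size, not content.
        captured.append({"input": kwargs, "output_bytes": len(out)})
        return out

    cls.create = traced


patch_create(FakeSpeech)
audio = FakeSpeech().create(input="hi", voice="example-voice")
```

The key property, shared with the existing patchers, is that callers keep using the SDK's own method name and get the original return value back unchanged.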
