Summary
The Mistral Audio APIs are not instrumented. Calls to `client.audio.speech.create()` (text-to-speech) and `client.audio.transcriptions.create()` (speech-to-text) produce zero Braintrust tracing. These are documented, production APIs in the Mistral platform.
The Braintrust Mistral integration instruments chat completions, embeddings, FIM, and agents, but has no patchers for the audio resource.
What is missing
| Mistral Resource | Method | Instrumented? |
| --- | --- | --- |
| `client.chat` | `complete()`, `stream()` | Yes |
| `client.embeddings` | `create()` | Yes |
| `client.fim` | `complete()`, `stream()` | Yes |
| `client.agents` | `complete()`, `stream()` | Yes |
| `client.audio.speech` | `create()`, `create_async()` | No |
| `client.audio.transcriptions` | `create()`, `create_async()` | No |
Text-to-Speech (audio.speech)
Generates spoken audio from text input. Instrumentation should capture:
- Input: text content, voice selection, audio format
- Output: audio metadata (duration, format, size)
- Metrics: latency, model
- Metadata: voice ID, output format
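The capture list above could be satisfied by a wrapper in the same style as the existing chat/embeddings wrappers. A minimal stdlib-only sketch of the shape (all names here are hypothetical — the keyword arguments `input`, `voice`, `output_format`, and the `log_span` callback are illustrative stand-ins, not the real `mistralai` or Braintrust APIs):

```python
import time
from functools import wraps


def trace_speech_create(create_fn, log_span):
    """Hypothetical tracing wrapper for a TTS create() call.

    Records input text/voice/format, output audio metadata, and latency,
    then hands one record to log_span (a stand-in for a Braintrust span).
    """
    @wraps(create_fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        audio = create_fn(*args, **kwargs)  # assumed to return raw audio bytes
        log_span({
            "input": {
                "text": kwargs.get("input"),
                "voice": kwargs.get("voice"),
                "format": kwargs.get("output_format"),
            },
            "output": {"size_bytes": len(audio), "format": kwargs.get("output_format")},
            "metrics": {"latency_s": time.time() - start},
            "metadata": {"model": kwargs.get("model")},
        })
        return audio
    return wrapper


# Usage with a dummy TTS function standing in for the real SDK call:
def fake_create(**kwargs):
    return b"\x00" * 1024  # pretend this is encoded audio


spans = []
traced = trace_speech_create(fake_create, spans.append)
audio = traced(model="tts-model", input="Hello", voice="nova", output_format="mp3")
```

The wrapper logs after the call returns, so the span carries both request parameters and response metadata in a single record, matching how the existing chat wrappers report one span per call.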
Speech-to-Text (audio.transcriptions)
Converts audio files into text transcripts. Also supports SSE streaming transcription (POST /v1/audio/transcriptions#stream). Instrumentation should capture:
- Input: audio file reference, language, response format
- Output: transcribed text
- Metrics: latency, model, audio duration
- Metadata: language, response format
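For the streaming transcription case, the wrapper has to re-yield chunks to the caller while accumulating the final transcript, logging once when the stream is exhausted. A stdlib-only sketch under the same caveats (the chunk shape and keyword names are assumptions, not the real SSE payloads):

```python
import time


def trace_transcription_stream(stream_fn, log_span):
    """Hypothetical tracing wrapper for a streaming transcription call.

    Passes each text chunk through to the caller unchanged, accumulates
    the full transcript, and emits one record when the stream ends.
    """
    def wrapper(*args, **kwargs):
        start = time.time()
        chunks = []
        for chunk in stream_fn(*args, **kwargs):
            chunks.append(chunk)
            yield chunk
        log_span({
            "input": {"file": kwargs.get("file"), "language": kwargs.get("language")},
            "output": {"text": "".join(chunks)},
            "metrics": {"latency_s": time.time() - start},
            "metadata": {"model": kwargs.get("model")},
        })
    return wrapper


# Usage with a dummy generator standing in for the SSE stream:
def fake_stream(**kwargs):
    yield from ["Hel", "lo ", "world"]


spans = []
traced = trace_transcription_stream(fake_stream, spans.append)
text = "".join(traced(model="stt-model", file="audio.mp3", language="en"))
```

Because the wrapper is itself a generator, the span is only logged once the consumer drains the stream, which is the same deferred-logging problem the existing `stream()` wrappers for chat and FIM already solve.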
Relationship to existing issues
This is the Mistral equivalent of:
- #174 — OpenAI Audio API (`client.audio.speech`, `transcriptions`, `translations`) not instrumented
- #165 — LiteLLM `image_generation()`, `transcription()`, `speech()`, and `rerank()` not instrumented
All three are the same class of gap across different provider SDKs: stable audio generation and transcription APIs that produce zero tracing.
Braintrust docs status
not_found — The Mistral integration page documents chat completions only. No mention of audio API support.
Upstream sources
- Mistral API reference — Audio endpoints: `POST /v1/audio/speech`, `POST /v1/audio/transcriptions`, `POST /v1/audio/transcriptions#stream` (documented at https://docs.mistral.ai/api/)
- Mistral Python SDK (`mistralai` v2.3.1 on PyPI): https://pypi.org/project/mistralai/ — includes audio speech and transcription resources
- The Audio APIs support both synchronous and asynchronous operations
Local files inspected
- `py/src/braintrust/integrations/mistral/patchers.py` — defines patchers for Chat, Embeddings, Fim, Agents; zero references to audio, speech, or transcription
- `py/src/braintrust/integrations/mistral/tracing.py` — wrapper functions for chat, embeddings, FIM, agents only; no audio wrappers (though `_normalize_special_payloads` handles the `input_audio` type for chat message inputs, the dedicated Audio API endpoints are not wrapped)
- `py/src/braintrust/integrations/mistral/integration.py` — integration class registers 4 composite patchers; no AudioPatcher
- `py/src/braintrust/integrations/mistral/test_mistral.py` — no audio test cases
- `py/noxfile.py` — `test_mistral` session tests against `LATEST` and `1.12.4`; no audio coverage
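By analogy with the four existing composite patchers, an AudioPatcher would swap the speech and transcription methods for traced wrappers at setup time and restore them on teardown. A stdlib-only sketch of that patch/unpatch shape (the class name and structure are hypothetical; the real patchers in `patchers.py` may be organized differently):

```python
class AudioPatcher:
    """Hypothetical patcher: replaces a method on a resource object with a
    traced wrapper and can restore the original afterwards."""

    def __init__(self, resource, method_name, wrap):
        self._resource = resource
        self._method_name = method_name
        self._wrap = wrap
        self._original = None

    def patch(self):
        self._original = getattr(self._resource, self._method_name)
        setattr(self._resource, self._method_name, self._wrap(self._original))

    def unpatch(self):
        if self._original is not None:
            setattr(self._resource, self._method_name, self._original)


# Usage on a dummy resource standing in for client.audio.speech:
class Speech:
    def create(self, **kwargs):
        return b"audio"


calls = []


def record_wrap(fn):
    def traced(**kwargs):
        calls.append(kwargs)  # a real wrapper would open a span here
        return fn(**kwargs)
    return traced


speech = Speech()
patcher = AudioPatcher(speech, "create", record_wrap)
patcher.patch()
out = speech.create(model="m", input="hi")
patcher.unpatch()
```

The unpatch path matters because the existing integration registers patchers it can also tear down; any audio patcher added alongside the four current ones would need the same reversibility.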