LiteLLM: speech() / aspeech() not instrumented #165

@braintrust-bot

Description

Summary

Most of the original LiteLLM gaps from this issue are now covered on main. The Braintrust LiteLLM integration currently instruments:

  • completion() / acompletion()
  • responses() / aresponses()
  • image_generation() / aimage_generation()
  • embedding() / aembedding()
  • moderation()
  • transcription() / atranscription()

The remaining gap is LiteLLM text-to-speech:

  • litellm.speech()
  • litellm.aspeech()

Calls to these APIs still produce no Braintrust tracing on main.

What is missing

No tracing spans are created when users call litellm.speech() or litellm.aspeech() through either wrap_litellm() or patch_litellm().

Current LiteLLM integration coverage on main includes image generation, transcription, and async embeddings. What is still missing in py/src/braintrust/integrations/litellm/ is:

  • speech patchers in patchers.py
  • speech wrappers in tracing.py
  • speech coverage in test_litellm.py
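As a rough illustration of the missing piece, the sketch below shows the general shape a speech patcher could take, mirroring how the existing patchers wrap a LiteLLM call and record a span. This is a self-contained, hypothetical sketch: `wrap_speech`, the span dictionary shape, and `fake_speech` are assumptions for illustration, not the actual Braintrust internals in tracing.py or patchers.py.

```python
# Hypothetical sketch of a speech() tracing wrapper. The span fields
# ("name", "metadata", "metrics") and the helper names are assumptions,
# not the real Braintrust integration API.
import functools
import time
from typing import Any, Callable

def wrap_speech(speech_fn: Callable[..., Any], spans: list) -> Callable[..., Any]:
    """Wrap a speech()-like callable so each call records a span."""
    @functools.wraps(speech_fn)
    def traced_speech(*args, **kwargs):
        start = time.time()
        result = speech_fn(*args, **kwargs)
        spans.append({
            "name": "litellm.speech",
            "metadata": {
                "model": kwargs.get("model"),
                "voice": kwargs.get("voice"),
            },
            # The audio bytes themselves are not logged; only call
            # metadata and timing are captured.
            "metrics": {"duration": time.time() - start},
        })
        return result
    return traced_speech

# Usage against a stand-in for litellm.speech():
spans: list = []

def fake_speech(model: str, input: str, voice: str) -> bytes:
    return b"audio-bytes"

traced = wrap_speech(fake_speech, spans)
audio = traced(model="openai/tts-1", input="hello", voice="alloy")
```

An async variant for `aspeech()` would follow the same pattern with an `async def` wrapper that awaits the underlying call.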

Braintrust docs status

not_found — The Braintrust LiteLLM integration docs at https://www.braintrust.dev/docs do not mention text-to-speech support.

Upstream sources

Local repo files inspected

  • py/src/braintrust/integrations/litellm/__init__.py
  • py/src/braintrust/integrations/litellm/patchers.py
  • py/src/braintrust/integrations/litellm/tracing.py
  • py/src/braintrust/integrations/litellm/test_litellm.py
  • py/noxfile.py

Relationship to existing issues
