Docs page URL
https://openrouter.ai/docs/api-reference/streaming
API endpoint
/api/v1/chat/completions
Description
I observed a trailing [DONE] SSE sentinel on both OpenRouter streaming APIs:
- OpenAI-compatible:
POST /api/v1/chat/completions
- Anthropic-compatible:
POST /api/v1/messages?beta=true
Observed behavior
OpenAI-compatible path
Response tail:
data: {...final chunk...}
data: [DONE]
Anthropic-compatible path
Response tail:
event: message_stop
data: {"type":"message_stop"}
event: data
data: [DONE]
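Because the sentinel is not JSON, a client consuming either stream has to special-case it before parsing. A minimal sketch of that check (the helper name is mine, not part of any OpenRouter SDK):

```python
def is_stream_end(line: str) -> bool:
    """Return True when an SSE data line carries the terminal [DONE] sentinel.

    The payload after 'data:' is compared after stripping whitespace,
    so both 'data: [DONE]' and 'data:[DONE]' are recognized.
    """
    if not line.startswith("data:"):
        return False
    return line[len("data:"):].strip() == "[DONE]"
```

With this guard, the same consumer loop can terminate on either the OpenAI-compatible tail or the Anthropic-compatible tail shown above.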
Repro models
Observed on:
- minimax/minimax-m2.5
- anthropic/claude-haiku-4.5
Docs question
I could not find explicit documentation that the Anthropic-compatible endpoint also emits a terminal [DONE] sentinel.
Is this intentional behavior for OpenRouter's streaming layer across both APIs?
If yes, could this be documented explicitly for /api/v1/messages as well?
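To make the observed behavior concrete, the tails above can be replayed through a small SSE data-line consumer. This is a sketch under stated assumptions: the final OpenAI-style chunk JSON is a hypothetical placeholder (the report elides the real payload), and `consume` is my illustrative helper, not an OpenRouter API:

```python
import json

# Observed response tails from the report; the first chunk body is a
# hypothetical placeholder standing in for the elided final chunk.
openai_tail = [
    'data: {"choices":[{"delta":{},"finish_reason":"stop"}]}',
    "data: [DONE]",
]
anthropic_tail = [
    "event: message_stop",
    'data: {"type":"message_stop"}',
    "event: data",
    "data: [DONE]",
]

def consume(lines):
    """Parse SSE data lines until the [DONE] sentinel.

    Returns (parsed_payloads, saw_sentinel). 'event:' lines only name
    the following data payload, so they are skipped here.
    """
    events = []
    for line in lines:
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return events, True
        events.append(json.loads(payload))
    return events, False
```

Running `consume` over both tails shows each stream terminating on the same `[DONE]` sentinel, which is the behavior the docs question asks to have documented for /api/v1/messages.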
Steps to reproduce
No response