Lightweight Rust proxy built for OpenClaw — wraps Claude Code CLI as a subprocess and exposes it via Anthropic Messages API and OpenAI Chat Completions API.
Existing Claude Code proxy implementations have compatibility issues with OpenClaw's Anthropic SDK integration. This proxy was built from scratch to handle those edge cases reliably.
Use your Claude subscription (Max, Team, Enterprise) as an OpenClaw provider without a direct API key.
```
Client (OpenClaw, curl, SDK)
        │
        ▼
┌───────────────┐  stdin/stdout   ┌──────────────┐
│  Proxy :3456  │ ◄── NDJSON ───► │  claude CLI  │
│    (axum)     │                 │  subprocess  │
└───────────────┘                 └──────────────┘
        │
    SSE / JSON
        │
        ▼
     Client
```
Each request spawns a `claude -p --output-format stream-json` subprocess. The proxy filters SSE events — only text content blocks are forwarded (`thinking`, `tool_use`, and signature blocks are stripped for SDK compatibility). Stateless: no session persistence.
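The filtering step can be sketched roughly as follows. This is a minimal Python sketch, not the proxy's actual Rust code, and the event field names (`type`, `index`, `content_block`) are assumptions modeled on Anthropic's streaming event shape:

```python
import json

# Only plain text blocks are forwarded; thinking/tool_use/signature
# blocks are dropped for SDK compatibility (assumed block types).
FORWARDED_TYPES = {"text"}

def filter_stream(ndjson_lines):
    """Yield only events belonging to text content blocks."""
    active = set()  # indices of content blocks we are forwarding
    for line in ndjson_lines:
        event = json.loads(line)
        etype = event.get("type")
        if etype == "content_block_start":
            if event["content_block"]["type"] in FORWARDED_TYPES:
                active.add(event["index"])
                yield event
        elif etype in ("content_block_delta", "content_block_stop"):
            if event["index"] in active:
                yield event
        else:
            # message_start / message_delta / message_stop pass through
            yield event

# Example: the thinking block (index 0) is stripped, the text block survives
sample = [
    '{"type":"message_start"}',
    '{"type":"content_block_start","index":0,"content_block":{"type":"thinking"}}',
    '{"type":"content_block_delta","index":0,"delta":{"thinking":"..."}}',
    '{"type":"content_block_stop","index":0}',
    '{"type":"content_block_start","index":1,"content_block":{"type":"text"}}',
    '{"type":"content_block_delta","index":1,"delta":{"type":"text_delta","text":"hi"}}',
    '{"type":"content_block_stop","index":1}',
    '{"type":"message_stop"}',
]
kept = [e["type"] for e in filter_stream(sample)]
```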
- Dual API — `/v1/messages` (Anthropic) + `/v1/chat/completions` (OpenAI)
- Streaming & non-streaming for both APIs
- System prompt modes — embed in `<system>` tags (preserves the CLI's 43K default prompt) or replace entirely
- Rate limit handling — warnings and rejection forwarding
- Graceful shutdown — SIGTERM / Ctrl+C
- Inactivity timeout — kills stuck subprocesses after 30 min
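To illustrate the dual-API surface, here is a hedged sketch of how an OpenAI-style chat body could map onto the Anthropic Messages shape (the field names on each side are the standard API fields; the helper itself is hypothetical, not the proxy's actual code):

```python
def openai_to_anthropic(body: dict) -> dict:
    """Map an OpenAI chat/completions request body to an Anthropic
    /v1/messages body: system messages move to the top-level `system`
    field, remaining messages keep their role/content pairs."""
    system_parts = [m["content"] for m in body["messages"] if m["role"] == "system"]
    messages = [m for m in body["messages"] if m["role"] != "system"]
    out = {
        "model": body["model"],
        # Anthropic requires max_tokens; pick a fallback if absent
        "max_tokens": body.get("max_tokens", 1024),
        "messages": messages,
    }
    if system_parts:
        out["system"] = "\n".join(system_parts)
    if body.get("stream"):
        out["stream"] = True
    return out

req = openai_to_anthropic({
    "model": "claude-sonnet-4-6",
    "messages": [
        {"role": "system", "content": "Be terse."},
        {"role": "user", "content": "Say hi"},
    ],
})
```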
Download the latest binary from Releases and place it in your $PATH.
```bash
# Prerequisites: Claude Code CLI installed and authenticated
npm i -g @anthropic-ai/claude-code && claude auth login

# Build & install
cargo build --release
cp target/release/claude-code-proxy ~/.local/bin/

# Run
claude-code-proxy --port 3456 --cwd ~ --embed-system-prompt
```

> [!IMPORTANT]
> Full setup guide for connecting this proxy as an OpenClaw LLM provider — openclaw.json config, systemd service, model IDs, and troubleshooting:
| Flag | Default | Description |
|---|---|---|
| `--port` | `3456` | Listen port |
| `--cwd` | `.` | Working directory for CLI subprocess |
| `--embed-system-prompt` | `false` | Embed system prompt in `<system>` tags, keep CLI default prompt |
| `--replace-system-prompt` | `false` | Replace CLI system prompt entirely via `--system-prompt` |
| `--effort` | — | Thinking effort: `low`, `medium`, `high`, `max` |
| `--max-turns` | `100` | Max agentic turns per request |
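The two system-prompt flags above differ only in where the client's system prompt ends up. A minimal sketch of that decision (flag and tag names from the table; the helper and its return shape are hypothetical):

```python
def build_cli_invocation(user_system, prompt, embed=False, replace=False):
    """Return (extra CLI args, final prompt text) for one request.

    - embed: wrap the client's system prompt in <system> tags inside
      the user prompt, leaving the CLI's default system prompt intact.
    - replace: pass the client's system prompt verbatim via
      --system-prompt, discarding the CLI default.
    """
    args = []
    if user_system:
        if replace:
            args += ["--system-prompt", user_system]
        elif embed:
            prompt = f"<system>{user_system}</system>\n{prompt}"
    return args, prompt

embed_args, embed_text = build_cli_invocation("Be terse.", "Say hi", embed=True)
repl_args, repl_text = build_cli_invocation("Be terse.", "Say hi", replace=True)
```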
```bash
curl -sN http://localhost:3456/v1/messages \
  -H 'content-type: application/json' \
  -d '{"model":"claude-sonnet-4-6","max_tokens":50,"messages":[{"role":"user","content":"Say hi"}],"stream":true}'
```

Expected: SSE stream with `message_start` → `content_block_delta` → `message_stop`.
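To check that ordering programmatically, the `event:` names can be pulled out of the raw SSE text. A small sketch (the sample payload below is illustrative, not captured proxy output):

```python
def sse_event_types(raw: str) -> list[str]:
    """Extract the `event:` field from each SSE frame (frames are
    separated by a blank line per the text/event-stream format)."""
    types = []
    for frame in raw.split("\n\n"):
        for line in frame.splitlines():
            if line.startswith("event:"):
                types.append(line[len("event:"):].strip())
    return types

sample = (
    "event: message_start\ndata: {}\n\n"
    "event: content_block_start\ndata: {}\n\n"
    "event: content_block_delta\ndata: {}\n\n"
    "event: content_block_stop\ndata: {}\n\n"
    "event: message_stop\ndata: {}\n\n"
)
order = sse_event_types(sample)
```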
| Method | Path | Description |
|---|---|---|
| `GET` | `/health` | Health check |
| `GET` | `/v1/models` | Available models list |
| `POST` | `/v1/messages` | Anthropic Messages API |
| `POST` | `/v1/chat/completions` | OpenAI Chat Completions API |
MIT