/\_/\ Paw
( o.o ) Scratch your code into shape~
> ^ <
Multi-provider AI coding agent for the terminal. Smart routing, solo or team mode, MCP support, session sync, skills, hooks, and automatic fallback.
Disclaimer: Paw is an independent, third-party project. It is not affiliated with, endorsed by, or sponsored by Anthropic, OpenAI or any AI provider. Claude, Codex, GPT, and related names are trademarks of their respective owners.
```
                       paw (CLI)
                           │
            ┌──────────────┼──────────────┐
            │              │              │
        paw mcp       paw --help     paw [prompt]
        (manage)        (info)       (main flow)
                                          │
                                   ┌──────┴──────┐
                                   │ Auto-Detect │
                                   │  Anthropic  │
                                   │  Codex CLI  │
                                   │  Ollama     │
                                   └──────┬──────┘
                                          │
                               ┌──────────┴──────────┐
                               │   Init (parallel)   │
                               │  MCP + Team detect  │
                               │  + Session restore  │
                               │  + Hooks load       │
                               └──────────┬──────────┘
                                          │
                                   ┌──────┴──────┐
                                   │    REPL     │
                                   │  (Ink UI)   │
                                   └──────┬──────┘
                                          │
                      ┌───────────────────┼───────────────────┐
                      │                   │                   │
                 /commands            Solo Mode           Team Mode
                      │                   │                   │
                      │             ┌─────┴────┐     ┌────────┴───────┐
                      │             │ Provider │     │ Plan → Code →  │
                      │             │   Call   │     │ [Review+Test]  │
                      │             └─────┬────┘     │ → Optimize     │
                      │                   │          └────────┬───────┘
                      │             ┌─────┴────┐              │
                      │             │ 8 Tools  │          Fallback
                      │             │  + MCP   │          on error
                      │             └─────┬────┘              │
                      │                   └─────────┬─────────┘
                      │                             │
                      │                    ┌────────┴───────┐
                      └───────────────────▶│   Response     │
                                           │   + Session    │
                                           │   + Hooks      │
                                           │   + Sync       │
                                           └────────────────┘
```
```
Provider Call → Success → Response
      │
      └─ Error (429/401/quota) → Next Provider → ... → Ollama (last resort)
```
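The fallback chain above can be sketched as a simple ordered walk over the documented provider order (an illustrative sketch, not Paw's actual code):

```shell
# Illustrative sketch of the documented fallback order; not Paw's implementation.
# On a retryable error (429/401/quota), move to the next provider in the chain.
next_provider() {
  case "$1" in
    anthropic) echo "codex"  ;;   # first fallback
    codex)     echo "ollama" ;;   # local last resort
    *)         echo "none"   ;;   # chain exhausted
  esac
}
```

So `next_provider anthropic` prints `codex`, and from `ollama` there is nowhere left to go.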
```
Plan (sequential) → Code (sequential) → [Review + Test] (parallel) → Optimize (sequential)
```

```
Example: anthropic → planner, reviewer, optimizer
         codex     → coder (score: 9)
         ollama    → tester (unique spread)
```
- 3 Providers → Anthropic (API), Codex (CLI), Ollama (local)
- Auto-detect → No login prompt; finds Codex CLI and Ollama automatically
- Solo/Team mode → Single provider or a 5-agent pipeline in one terminal
- Session sync → Conversations persist and sync across terminals in real time (fs.watch)
- Resume → `--continue` or `--session <id>` to pick up where you left off
- Arrow-key UI → All panels: ↑↓ navigate, Enter select, Esc back
- Effort levels → Codex: low/medium/high/extra_high (configurable per model and per team role)
- MCP support → External tools via the Model Context Protocol (stdio/HTTP/SSE)
- Skills → 7 built-in slash commands plus user/project custom skills
- Hooks → Event-driven automation: pre-turn, post-turn, pre-tool, post-tool, on-error, session-start, session-end
- Auto-fallback → Rate limit? Instantly tries the next provider
- Live Ollama detection → Shows actually pulled models with sizes
- Usage tracking → Per-provider token counts with estimated cost
- Korean IME → Native stdin handling for smooth CJK input
- Autocomplete → `/` triggers the command list; Enter executes, Tab fills
- Security hardened → Injection protection, SSRF blocking, symlink guards
- `/auto` mode → Autonomous agent: plan → execute → verify → fix loop until done
- `/pipe` mode → Feed shell output to the AI: analyze, auto-fix errors, or watch commands
- Smart Router → Auto-detects the best mode from your message (EN/KO/JA/ZH)
- Node.js 22+
- npm
- At least one: Anthropic API key, Codex CLI, or Ollama
```
git clone https://github.com/jhcdev/paw.git
cd paw
npm install
npm link   # Installs the 'paw' command globally
```

```
paw                             # Auto-detect and start REPL
paw --provider codex            # Force Codex
paw --provider ollama           # Force Ollama
paw "explain this project"      # Direct prompt, no REPL
paw "/team implement JWT auth"  # Team mode prompt
paw --continue                  # Resume last session
paw -c "what did I say before?" # Resume + prompt
paw --session abc123            # Join a specific session
```

| Provider | Auth | How it works |
|---|---|---|
| Anthropic | API key (`ANTHROPIC_API_KEY`) | Claude models, best reasoning |
| Codex | `codex login` | Runs `codex exec` with a ChatGPT subscription |
| Ollama | (none) | Connects to a local Ollama server |
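Auto-detection can be pictured as an ordered probe over the three providers. This is a hedged sketch of the idea — the real detection logic and its order may differ:

```shell
# Hypothetical sketch of provider auto-detection; Paw's actual checks may differ.
detect_provider() {
  # 1. Anthropic: an API key present in the environment
  [ -n "$ANTHROPIC_API_KEY" ] && { echo "anthropic"; return; }
  # 2. Codex: the CLI binary available on PATH
  command -v codex >/dev/null 2>&1 && { echo "codex"; return; }
  # 3. Ollama: a server answering on the default port (11434)
  curl -sf http://localhost:11434/api/tags >/dev/null 2>&1 && { echo "ollama"; return; }
  echo "none"
}
```

With a key exported, `detect_provider` prints `anthropic` without prompting for a login, matching the "no login prompt" behavior above.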
API key from console.anthropic.com. Best for reasoning and planning.
```
# Set in .env
ANTHROPIC_API_KEY=sk-ant-api03-...

# Or configure in the REPL
/settings → Anthropic → enter API key
```

Models: Haiku 4.5 (fast), Sonnet 4/4.6 (balanced), Opus 4/4.6 (powerful). Pricing is per-token (e.g. Sonnet: $3/1M input, $15/1M output).
Auto-detected if Codex CLI is installed. Uses ChatGPT subscription โ no API key needed.
```
npm install -g @openai/codex
codex login
paw --provider codex
```

Effort: low, medium (default), high, extra_high
Models: GPT-5.4, GPT-5.4 Mini, GPT-5.3 Codex, GPT-5.3 Codex Spark, GPT-5.2 Codex, GPT-5.2, GPT-5.1 Codex Max/Mini, o4 Mini, o3
Free, no account. Runs models on your machine.
```
ollama pull qwen3
paw --provider ollama
```

Hardware: 16GB RAM minimum, GPU recommended.
- Gemini → Google Gemini API
- Groq → Fast inference
- OpenRouter → Multi-model hub
Conversations auto-save and sync across terminals.
```
paw                         # New session (auto-generated ID)
paw --continue              # Resume last session
paw -c "continue working"   # Resume + prompt
paw --session abc123        # Join a specific session
```

Two terminals with the same session ID see each other's messages instantly (fs.watch, 50ms debounce).

```
Terminal A: paw --session abc123
Terminal B: paw --session abc123
→ Both see the same conversation, synced in real time
```

Sessions are stored in `~/.paw/sessions/{id}.json` (mode 0600).
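A session file might look roughly like this — the exact schema is an assumption for illustration; only the path and the 0600 mode are documented above:

```json
{
  "id": "abc123",
  "provider": "codex",
  "model": "gpt-5.4",
  "messages": [
    { "role": "user", "content": "explain this project" },
    { "role": "assistant", "content": "This project is..." }
  ]
}
```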
Skills are slash commands that prepend a focused prompt to your message.
| Skill | Description |
|---|---|
| `/review` | Review code for bugs, security, and best practices |
| `/refactor` | Suggest refactoring improvements |
| `/test` | Generate test cases |
| `/explain` | Explain code in detail |
| `/optimize` | Optimize code for performance |
| `/document` | Generate documentation |
| `/commit` | Generate a conventional commit message from git diff |
```
you  /review src/auth.ts
you  /commit
you  /explain this function
```
Create user-wide or project-scoped skills as JSON files:
User skill โ ~/.paw/skills/deploy.json:
```json
{
  "name": "deploy",
  "description": "Check deployment readiness",
  "prompt": "Review this code for production deployment: check env vars, error handling, logging, and security."
}
```

Project skill → `.paw/skills/style.json`:

```json
{
  "name": "style",
  "description": "Enforce project style guide",
  "prompt": "Review this code against our style guide: 2-space indent, no var, prefer const, JSDoc on exports."
}
```

Skills load automatically on startup. Use `/skills` to list all available skills.
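Conceptually, invoking a skill just prepends its `prompt` field to whatever follows the slash command. A sketch of that assumed expansion (not Paw's code):

```shell
# Illustrative: expand a skill invocation into the final prompt.
# $1 is the skill's "prompt" field; the rest is the user's text after the command.
expand_skill() {
  skill_prompt=$1
  shift
  echo "$skill_prompt $*"
}
```

For example, `expand_skill "Review this code:" "src/auth.ts"` yields the combined prompt `Review this code: src/auth.ts`.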
Hooks let you run shell commands at specific points in the REPL lifecycle.
| Event | When |
|---|---|
| `pre-turn` | Before sending a message to the model |
| `post-turn` | After the model responds |
| `pre-tool` | Before a tool is executed |
| `post-tool` | After a tool finishes |
| `on-error` | When any error occurs |
| `session-start` | When the REPL session starts |
| `session-end` | When the REPL session ends |
Create .paw/hooks.json in your project (or ~/.paw/hooks.json for user-wide hooks):
```json
{
  "hooks": [
    {
      "name": "log-turns",
      "event": "post-turn",
      "command": "echo 'Turn complete' >> ~/.paw/activity.log"
    },
    {
      "name": "lint-on-tool",
      "event": "post-tool",
      "command": "npm run lint --silent 2>/dev/null || true",
      "timeout": 15000
    },
    {
      "name": "notify-start",
      "event": "session-start",
      "command": "notify-send \"Cat's Claw\" 'Session started'"
    },
    {
      "name": "git-status",
      "event": "pre-turn",
      "command": "git status --short"
    }
  ]
}
```

Hooks receive environment variables:
| Variable | Value |
|---|---|
| `CATS_CLAW_EVENT` | The event name |
| `CATS_CLAW_CWD` | Current working directory |
Use `{{key}}` placeholders in commands to interpolate context values. Hooks time out after 10s by default (configurable per hook via `timeout`, in ms).
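For example, a hook combining a placeholder with a custom timeout might look like this — the `event` placeholder key is an assumption for illustration; only the `{{key}}` syntax and the per-hook `timeout` field are documented above:

```json
{
  "hooks": [
    {
      "name": "log-event",
      "event": "post-turn",
      "command": "echo 'fired: {{event}}' >> ~/.paw/activity.log",
      "timeout": 5000
    }
  ]
}
```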
Manage providers via arrow-key panel:
```
╭─ Provider Settings ───────────────────╮
│ > ✓ Anthropic (active)                │
│     Codex                             │
│     Ollama (local)                    │
│ ↑↓ navigate  Enter select  Esc back   │
╰───────────────────────────────────────╯
```

Select → choose login or API key → configured.
Arrow-key panel showing plan-filtered models. Ollama shows actually pulled models:
```
╭─ Model Selection ─────────────────────╮
│ Active: codex/gpt-5.4                 │
│ Select provider:                      │
│ > anthropic                           │
│   codex                               │
│   ollama                              │
│ ↑↓ navigate  Enter select  Esc back   │
╰───────────────────────────────────────╯
          ↓ Enter
╭─ Select model ────────────────────────╮
│ > gpt-5.4       → GPT-5.4             │
│   gpt-5.4-mini  → GPT-5.4 Mini        │
│   o4-mini       → o4 Mini             │
│ ↑↓ navigate  Enter select  Esc back   │
╰───────────────────────────────────────╯
          ↓ Enter (Anthropic)
╭─ Select model ────────────────────────╮
│ > claude-haiku-4-5   → Haiku 4.5      │
│   claude-sonnet-4    → Sonnet 4       │
│   claude-sonnet-4-6  → Sonnet 4.6     │
│   claude-opus-4      → Opus 4         │
│   claude-opus-4-6    → Opus 4.6       │
│ ↑↓ navigate  Enter select  Esc back   │
╰───────────────────────────────────────╯
          ↓ Enter (Codex)
╭─ Select effort ───────────────────────╮
│   Low        → Fast, lighter reasoning│
│ > Medium     → Balanced (default)     │
│   High       → Complex problems       │
│   Extra High → Maximum depth          │
│ ↑↓ navigate  Enter select  Esc back   │
╰───────────────────────────────────────╯
```
Direct commands also work: `/model codex 3` or `/model ollama qwen3`
One terminal, two modes. Switch anytime.
/mode solo
Single provider handles all messages.
/mode team
5 agents collaborate on every message:
| Role | Job | Runs |
|---|---|---|
| Planner | Architecture & plan | Sequential |
| Coder | Implementation | Sequential |
| Reviewer | Bugs, security | Parallel |
| Tester | Test cases | Parallel |
| Optimizer | Performance | Sequential |
```
╭─ Team Dashboard ──────────────────────╮
│ planner    codex/gpt-5.4              │
│ coder      codex/gpt-5.4              │
│ reviewer   codex/gpt-5.4              │
│ tester     ollama/qwen3               │
│ optimizer  codex/gpt-5.4              │
│                                       │
│ > Edit role assignment                │
│   Toggle mode (→ team)                │
│ ↑↓ navigate  Enter select  Esc back   │
╰───────────────────────────────────────╯
```
Full arrow-key flow: pick role → pick provider → pick model → pick effort.
After each role change, you return to role selection for more edits; Esc exits.

```
Select role     → coder
Select provider → codex
Select model    → gpt-5.4
Select effort   → high
~ coder → codex/gpt-5.4 (effort: high)
→ Back to role selection
```
Roles assigned by efficiency scores (greedy unique-first). Adapts from real usage after 3+ runs per role. Scores stored in ~/.paw/team-scores.json.
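The score file might look roughly like this — the schema is an assumption for illustration; only the path and per-role scoring are documented:

```json
{
  "codex/gpt-5.4": { "coder": { "score": 9, "runs": 12 } },
  "ollama/qwen3":  { "tester": { "score": 7, "runs": 5 } }
}
```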
Provider fails → instantly tries the next. Ollama = local fallback (free, no rate limits).
Runs a self-driving agent that works until the task is done โ no manual intervention.
```
/auto add input validation to all API endpoints
/auto refactor the auth module to use JWT
/auto fix all TypeScript errors in the project
```
Flow:
```
→ Analyzing project...    (reads files, package.json)
→ Creating plan...        (step-by-step actions)
→ Executing step 1/10...  (reads/writes/runs commands)
→ Executing step 2/10...
→ Verifying...            (runs build + tests)
✗ Build error found
→ Fixing errors...        (auto-patches code)
→ Verifying...
✓ All checks passed
✓ COMPLETED (32.4s)
```
- Plans work, executes with tools, verifies with build/test
- Auto-fixes errors and retries (max 10 iterations)
- Multi-provider: fallback if one provider fails mid-task
Feeds real terminal output directly to the AI for analysis or automatic fixing.
```
/pipe npm test           → AI analyzes test failures
/pipe fix npm run build  → AI fixes build errors, re-runs until clean
/pipe fix tsc --noEmit   → AI fixes type errors automatically
/pipe watch npm start    → AI monitors startup output
```
Three modes:
| Mode | Command | What happens |
|---|---|---|
| Analyze | `/pipe <cmd>` | Run → AI explains output |
| Fix | `/pipe fix <cmd>` | Run → AI fixes errors → re-run (loop, max 5) |
| Watch | `/pipe watch <cmd>` | Run with timeout → AI analyzes |
Example fix loop:
```
Running (1/5): npm run build
Errors found → fixing (1/5)...
Running (2/5): npm run build
Errors found → fixing (2/5)...
Running (3/5): npm run build
✓ Pass → no errors
FIXED after 3 iteration(s) (18.2s)
```
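The loop above can be sketched like so (illustrative only — in the real fix mode, the model patches files between runs):

```shell
# Illustrative /pipe fix loop: run a command, let the model patch, retry (max 5).
run_with_fixes() {
  max=5
  i=1
  while [ "$i" -le "$max" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "FIXED after $i iteration(s)"
      return 0
    fi
    # (here Paw would feed the error output to the model and apply its patch)
    i=$((i + 1))
  done
  echo "still failing after $max iterations"
  return 1
}
```

With a command that starts succeeding on its third run, this prints `FIXED after 3 iteration(s)`, mirroring the transcript above.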
No need to remember commands. Just type naturally and Paw picks the best execution mode automatically.
| You type | Paw routes to | Why |
|---|---|---|
| `npm test` | `/pipe` | Shell command detected |
| `implement JWT auth` | `/auto` | Complex implementation task |
| `review this code` | `/review` skill | Code review pattern |
| `이 코드 리뷰해줘` ("review this code") | `/review` skill | Korean skill match |
| `모든 에러 수정해줘` ("fix all errors") | `/auto` | Korean auto task |
| `tsc --noEmit` | `/pipe` | Shell command |
| `hello` | solo | Simple message |
Supports English, Korean, Japanese, and Chinese.
CJK-aware: shorter messages still trigger correctly.
Override routing anytime with explicit `/` commands.
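A toy version of the routing heuristic, to make the table concrete (illustrative patterns only — the real router is multilingual and far more nuanced):

```shell
# Toy routing heuristic: shell-looking input → pipe, build/fix verbs → auto,
# review phrasing → the /review skill, everything else → solo chat.
route() {
  case "$1" in
    npm\ *|npx\ *|tsc*|git\ *|node\ *) echo "pipe" ;;
    implement*|"fix all"*|refactor*)   echo "auto" ;;
    review*)                           echo "review" ;;
    *)                                 echo "solo" ;;
  esac
}
```

For example, `route "npm test"` prints `pipe` and `route "hello"` prints `solo`.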
| Tool | Description |
|---|---|
| `list_files` | List files and directories |
| `read_file` | Read a text file (size guard) |
| `write_file` | Create or overwrite a file |
| `edit_file` | Replace a unique string |
| `search_text` | Search patterns (no injection) |
| `run_shell` | Shell commands (dangerous blocked) |
| `glob` | Find files by pattern (ReDoS-safe) |
| `web_fetch` | Fetch URL (SSRF-protected) |
```
paw mcp add --transport http github https://api.github.com/mcp \
  --header "Authorization:Bearer token"
paw mcp add --transport stdio memory -- npx -y @modelcontextprotocol/server-memory
paw mcp list
paw mcp remove github
```

```
╭─ MCP Server Manager ──────────────────╮
│ ✓ github → 12 tool(s)                 │
│ ✓ memory → 9 tool(s)                  │
│ > Add server                          │
│   Remove server                       │
│   Back                                │
│ ↑↓ navigate  Enter select  Esc back   │
╰───────────────────────────────────────╯
```
Supports stdio, HTTP, SSE. Tools auto-injected into all providers. Failed connections show error and aren't saved.
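The `paw mcp add` commands above persist into `.mcp.json`. A plausible shape for that file — the schema here is an assumption for illustration, not documented:

```json
{
  "servers": {
    "github": {
      "transport": "http",
      "url": "https://api.github.com/mcp",
      "headers": { "Authorization": "Bearer token" }
    },
    "memory": {
      "transport": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```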
| Command | Description |
|---|---|
| `/help` | Show all commands |
| `/status` | Providers, usage, cost |
| `/settings` | Provider management (↑↓) |
| `/model` | Model catalog & switch (↑↓) |
| `/team` | Team dashboard (↑↓) |
| `/skills` | List all skills (built-in + custom) |
| `/hooks` | List loaded hooks and events |
| `/ask <provider> <prompt>` | Query a specific provider |
| `/tools` | Built-in + MCP tools |
| `/mcp` | MCP server manager (↑↓) |
| `/git` | Status + diff + log |
| `/sessions` | List past sessions |
| `/session` | Current session ID |
| `/history` | Export chat to markdown |
| `/compact` | Compress conversation |
| `/init` | Generate CONTEXT.md |
| `/doctor` | Diagnostics |
| `/clear` | Reset conversation |
| `/exit` | Quit |
| `/auto <task>` | Autonomous agent mode |
| `/pipe <cmd>` | Feed shell output to AI (fix/watch) |
| Key | Action |
|---|---|
| ↑↓ | Navigate menus |
| Enter | Select / execute autocomplete |
| Tab | Autocomplete (fill only) |
| Esc | Go back / quit |
| Ctrl+L | Clear conversation |
| Ctrl+K | Compact conversation |
```
anthropic:2r 1.5k $0.003   codex:5r   ollama:3r 8.2k   mcp: 1
TEAM/gpt-5.4  turns: 2  mcp: off  local
```
- Shell: dangerous patterns blocked (rm -rf /, mkfs, etc.)
- Search: no shell injection (uses execFile, not shell)
- Files: symlink traversal protection (realpath check)
- Web: SSRF blocked (private IPs, metadata endpoints)
- MCP: safe env allowlist (API keys not leaked to child processes)
- Credentials: mode 0600
- Glob: ReDoS-safe regex conversion
| File | Purpose |
|---|---|
| `~/.paw/credentials.json` | API keys (0600) |
| `~/.paw/sessions/*.json` | Session history (0600) |
| `~/.paw/team-scores.json` | Team performance scores |
| `~/.paw/skills/*.json` | User-wide custom skills |
| `~/.paw/hooks.json` | User-wide hooks |
| `.paw/skills/*.json` | Project-scoped custom skills |
| `.paw/hooks.json` | Project-scoped hooks |
| `.mcp.json` | MCP config |
| `.env` | Environment variables (optional) |
```
paw --list           # Show saved credentials
paw --logout         # Remove all saved keys
paw --logout codex   # Remove a specific key
```

```
you   explain the structure of this project
=^.^= says:
This project has the following structure...

you   /model codex 1
~ codex/gpt-5.4 (effort: medium)

you   /status
~ Active: codex/gpt-5.4
  Usage: codex/gpt-5.4  500 in / 300 out (free)

you   /review src/auth.ts
=^.^= Reviewing for bugs, security, and best practices...

you   /commit
=^.^= feat(auth): add JWT token validation with expiry check

you   /explain
=^.^= This module handles...
```
```
# .paw/hooks.json
{
  "hooks": [
    { "event": "post-tool", "command": "npm test --silent", "name": "auto-test" }
  ]
}
# → tests run automatically after every tool call
```

```
you   /mode team
you   implement JWT auth
=^.^= Planning (codex/gpt-5.4)...
=^.^= Implementing (codex/gpt-5.4)...
=^.^= Reviewing (codex/gpt-5.4)...
=^.^= Testing (ollama/qwen3)...
=^.^= Optimizing (codex/gpt-5.4)...
Total: 21400ms
```

```
paw "remember: secret code is TIGER42"
# Later, in any terminal:
paw --continue "what is the secret code?"
# → "The secret code is TIGER42"
```

```
you   /ask codex refactor this function
=^.^= [codex] Here's the refactored version...
you   /ask ollama review this code
=^.^= [ollama] LGTM with one suggestion...
```

```
you   analyze this codebase
[Fallback: ollama/qwen3]
Rate limit hit. Switched automatically.
```
- Initial release → Multi-provider REPL with Ink UI, 8 tools, cat theme
- MCP support → stdio/HTTP/SSE transports, interactive manager, CLI commands
- Team mode → 5-agent pipeline with parallel execution, efficiency scoring
- Auto-detect → Codex login, no startup prompt needed
- Arrow-key UI → All panels redesigned for ↑↓ + Enter + Esc
- Plan-aware models → Subscription-based filtering, live Ollama detection
- Codex provider → Replaced the OpenAI API with the Codex CLI (ChatGPT subscription)
- Effort levels → Configurable per model and per team role
- Sessions → Auto-save, resume, real-time sync across terminals
- Korean IME → Native stdin handling, smooth CJK input
- Security audit → 14 vulnerabilities fixed (injection, SSRF, symlink, permissions)
- `paw` CLI → 3-character global command
- Anthropic removed → Moved to a separate plugin, jhcdev/paw-anthropic
- Skills system → 7 built-in skills + user/project custom skills via JSON files
- Hooks system → Event-driven automation with 7 lifecycle events and shell command execution
- Anthropic provider → API key mode with per-token pricing
- `/auto` mode → Autonomous plan→execute→verify→fix agent loop
- `/pipe` mode → Shell output → AI analysis/fix/watch
- Smart Router → Auto-detect the best mode from message content (multilingual)
MIT