## Implementation Plan

### Context

Investigation revealed two root causes for the buddy reasoning model issues:

Key finding: We do NOT need client-side

### Deliverables

1. Remove hardcoded model + respect server default (from issue 110)
2. Fix activity indicator (from issue 110)
3. Increase buddy max_tokens (from issue 111)
4. Buddy response truncation + residual think-tag strip (from issue 111)
5. Buddy troubleshooting docs

### Files to Modify

### Testing Approach

buddy-manager.test.ts updates:
### Risks
Plan created by mach6 |
… and docs:

- Remove hardcoded llama3.2 model constant; use first available Ollama model
- Remove llama3.2/3.1 preference in pickOllamaModel()
- Bump maxTokens 256 → 2048 for reasoning model headroom
- Bump Ollama timeout 15s → 30s
- Fix activity indicator: showThinking() default was null, never rendered
- Add response truncation at 300 words with ...[truncated]
- Return null for empty/whitespace-only responses
- Add buddy.md docs with setup, commands, and reasoning model troubleshooting
- Add tests for truncation, thinking block filtering, model selection
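The truncation and think-tag handling called out above could look roughly like this. This is a minimal sketch, not the project's actual implementation; all names (`stripThinkTags`, `truncateWords`, `processResponse`) are placeholders:

```typescript
// Hypothetical helpers sketching the planned response post-processing.

/** Remove any residual <think>...</think> blocks a reasoning model emits. */
function stripThinkTags(text: string): string {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}

/** Cut a response down to `maxWords` words, appending the truncation marker. */
function truncateWords(text: string, maxWords = 300): string {
  const words = text.split(/\s+/).filter((w) => w.length > 0);
  if (words.length <= maxWords) return text;
  return words.slice(0, maxWords).join(" ") + " ...[truncated]";
}

/** Returns null for empty/whitespace-only responses, per the plan. */
function processResponse(raw: string): string | null {
  const cleaned = truncateWords(stripThinkTags(raw));
  return cleaned.length > 0 ? cleaned : null;
}
```

Stripping think tags before truncating matters: otherwise a long chain-of-thought block could consume the entire word budget before any visible answer survives.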
## Progress Update

All deliverables implemented:

1. Remove hardcoded model —
2. Fix activity indicator —
3. Increase max_tokens — 256 → 2048 for reasoning model headroom. Also bumped Ollama timeout 15s → 30s.
4. Response truncation — Added
5. Buddy docs — New

Tests: 10 new tests — truncation unit tests (6), response processing integration tests (4: truncation, thinking block filtering, whitespace handling, model selection without preference). All 1947 tests pass.

Commit:

Progress tracked by mach6 |
Reasoning models can take well over 30s for chain-of-thought. Timeout errors no longer poison the Ollama status cache since the server is fine — the model is just slow.
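The cache rule described above can be sketched as a pure classifier. This is an illustrative sketch under assumed error shapes (an aborted `fetch()` surfaces as an error named `"AbortError"`, while a refused connection typically surfaces as a `TypeError`), not the project's actual code:

```typescript
// Sketch: only connection-level failures should mark Ollama as down.

type OllamaStatus = "up" | "down";

function statusAfterError(err: unknown): OllamaStatus {
  if (err instanceof Error && err.name === "AbortError") {
    // Timeout: the server is fine, the model is just slow — don't poison the cache.
    return "up";
  }
  // Connection refused / DNS failure / crash: cache Ollama as unavailable.
  return "down";
}
```

Keeping timeouts out of the "down" state means a single slow reasoning-model request doesn't suppress all buddy activity until the cache expires.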
- Add /buddy model and /buddy model <name> subcommands
- Buddy won't react until a model is configured — shows nudge message
- Model choice persisted to buddy.json (ollamaModel field)
- /buddy model lists available Ollama models
- Validates model is installed before setting
- Show warning on hatch/show if no model configured
- Update docs with new command and setup flow
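The "validates model is installed before setting" step, combined with the prefix matching mentioned in the later test list, might resolve a requested name along these lines. A sketch only — `resolveModel` and its exact matching rules are assumptions, not the real API:

```typescript
// Hypothetical resolver: exact match wins, otherwise a unique prefix match
// (e.g. "llama3" → "llama3:8b"); ambiguous or unknown names are rejected.

function resolveModel(requested: string, installed: string[]): string | null {
  const exact = installed.find((m) => m === requested);
  if (exact) return exact;
  const prefixed = installed.filter((m) => m.startsWith(requested));
  return prefixed.length === 1 ? prefixed[0] : null;
}
```

Returning `null` for ambiguous prefixes (rather than picking one) keeps `/buddy model <name>` predictable: the user is told the name didn't match instead of silently getting a guess.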
The wiggle fallback silently hid every failure — no model configured, Ollama down, timeouts, crashes. Now:

- No model configured → buddy says 'No Ollama model set! Run /buddy model'
- Ollama unavailable → returns null (no speech)
- respondToNameCall returns null on failure, same as react
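That failure contract can be sketched as follows — a minimal illustration of the shape, with all names (`react`, `chat`, the nudge string's constant name) assumed rather than taken from the real code:

```typescript
// Sketch of the failure contract: missing model → visible nudge,
// any Ollama failure → silence (null), never a fake fallback reaction.

const NO_MODEL_NUDGE = "No Ollama model set! Run /buddy model";

async function react(
  model: string | undefined,
  chat: (model: string) => Promise<string>,
): Promise<string | null> {
  if (!model) return NO_MODEL_NUDGE; // configuration problem: tell the user
  try {
    return await chat(model);
  } catch {
    return null; // Ollama down / timeout / crash: no speech, no wiggle
  }
}
```

The key distinction is that a configuration problem is actionable by the user (so it speaks), while a runtime failure is not (so it stays quiet instead of masking the error with a canned animation).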
## Code Review

### Critical

Finding 1:

### Important

Finding 2:
Finding 3: No auto-pick from available models (completeness)
Finding 4:
Finding 5:
Finding 6:
Finding 7: Timeout vs connection-error cache distinction is untested (test-review)

### Suggestions

Finding 8:
Finding 9:
Finding 10: Residual
Finding 11:
Finding 12:
Finding 13: Test description mismatch (code-review)

### Strengths
Agents run: code-reviewer, error-auditor, test-reviewer, completeness-checker

Reviewed by mach6 |
## Review Assessment

### Classifications
Additional finding from manual testing:
Simplifier suggestions (low priority, no behavior change):
### Action Plan
Assessment by mach6 |
## Progress Update

Fixed 6 review findings + 3 simplifier suggestions:

- Finding 4 — NO_MODEL_MESSAGE leaking into speech bubbles:
- Finding 14 — reroll/hatch lose ollamaModel: Both
- Finding 5 — setOllamaModel silently discards when no buddy:
- Finding 9 — /buddy model missing from autocomplete: Added
- Finding 6 — handleModelCommand zero test coverage: Added 7 new tests covering all branches: Ollama down, current model display, no model set, model set via prefix match, model not found, no buddy hatched, Ollama down during set.
- Finding 13 — Test description mismatch: "truncates long names to 12 chars" → "8 chars".
- Simplifier: Removed redundant explicit defaults in BuddyController construction, extracted double

All 1955 tests pass (0 failures).

Commit:

Progress tracked by mach6 |
## Code Review

### Critical

No critical findings.

### Important

Finding 1: Dead catch block with wrong DOMException name in ollamaChat()
Finding 2: "No models installed" conflated with "Ollama not running" in user messages
Finding 3: No test for timeout vs connection error cache distinction
Finding 4: hatch() and reroll() ollamaModel preservation is untested

### Suggestions

Finding 5: ollamaChat() calls loadStored() from disk on every invocation
Finding 6: Orphaned stale comment on OLLAMA_MODEL_BASE
Finding 7: getModelNudge() has no test coverage
Finding 8: showThinking() bug fix has no regression test

### Strengths
Agents run: code-reviewer, error-auditor, test-reviewer, completeness-checker

Reviewed by mach6 |
## Review Assessment

### Classifications
Simplifier suggestions (low priority, no behavior change):
### Action Plan
Assessment by mach6 |
## Progress Update

Fixed 8 review findings + 3 simplifier suggestions from review round 2:

- Finding 1 — Dead catch block with wrong DOMException name: Removed wrong
- Finding 2 — "No models installed" conflated with "not running":
- Finding 3 — No timeout vs connection error cache tests: Replaced impossible
- Finding 4 — hatch/reroll ollamaModel preservation untested: Added 3 tests: hatch preserves model, hatch without model stays undefined, reroll preserves model. All assert both in-memory state and disk persistence.
- Finding 5 — ollamaChat calls loadStored() every invocation: Replaced
- Finding 6 — Orphaned stale comment: Removed duplicate JSDoc block on
- Finding 7 — getModelNudge() untested: Added 3 tests: returns null when model configured, returns nudge when no model, returns nudge when no buddy.
- Finding 8 — showThinking() fix untested: Added 4 regression tests: default label, custom label, hideThinking clears indicator, narrow mode rendering.
- Simplifier: Removed dead outer try/catch in

All 1966 tests pass (0 failures).

Commit:

Progress tracked by mach6 |
Closes #110
Closes #111
Overhaul of the buddy Ollama integration:
Implementation plan posted as a comment below.