
Buddy: Ollama overhaul — model selection, thinking indicator, reasoning model support#112

Merged
m-aebrer merged 9 commits into master from feature/issue-110-111-buddy-ollama-overhaul on Apr 7, 2026

Conversation

m-aebrer (Collaborator) commented Apr 7, 2026

Closes #110
Closes #111

Overhaul of the buddy Ollama integration:

  • Remove hardcoded model, respect server default
  • Fix activity indicator (never shows during Ollama calls)
  • Increase max_tokens for reasoning model compatibility
  • Add response truncation + residual think-tag stripping
  • Document Modelfile fix for reasoning models

Implementation plan posted as a comment below.

m-aebrer (Collaborator, Author) commented Apr 7, 2026

Implementation Plan

Context

Investigation revealed two root causes for buddy reasoning model issues:

  1. Ollama model registration: phi4-mini-reasoning's Modelfile template lacks {{.Thinking}}, so Ollama does not detect it as thinking-capable. Adding thinking tags to the template fixes structured field separation — Ollama's parser handles <think> tag extraction server-side when properly configured.

  2. Buddy system gaps: Hardcoded model, low token budget, no response length guard, broken activity indicator.

Key finding: We do NOT need client-side <think> tag parsing in the provider. Once a model's Modelfile includes {{.Thinking}}, Ollama separates reasoning into a structured reasoning field that our openai-completions provider already handles. The fix belongs on the model/Ollama side, not in dreb's streaming pipeline.

Deliverables

1. Remove hardcoded model + respect server default (from issue 110)

  • Remove OLLAMA_MODEL constant and pickOllamaModel() preference for llama3.2
  • Query Ollama for OLLAMA_DEFAULT_MODEL (via the API) and use that if set
  • If no default, pick from available models (no preference ordering)
  • If no models installed, surface a meaningful message
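The selection rule above is small enough to sketch. The `/api/tags` response shape is the real Ollama list-models endpoint; the helper name and types are invented for illustration:

```typescript
// Shape of Ollama's /api/tags response, reduced to the field we need.
type OllamaTags = { models?: { name: string }[] };

// Pick the first available model with no preference ordering; null means
// "no models installed" and should surface a meaningful message upstream.
function pickFirstModel(tags: OllamaTags): string | null {
  return tags.models?.[0]?.name ?? null;
}

// Hypothetical usage against a local Ollama server:
// const tags: OllamaTags =
//   await (await fetch("http://localhost:11434/api/tags")).json();
// const model = pickFirstModel(tags);
```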

2. Fix activity indicator (from issue 110)

  • The onThinkingStart/onThinkingEnd callbacks bracket the entire Ollama request as a "buddy is working" UX signal — not tied to model reasoning
  • Debug why the existing wiring in buddy-controller.ts / interactive-mode.ts / buddy-component.ts never results in a visible indicator
  • Likely a rendering or state update issue

3. Increase buddy max_tokens (from issue 111)

  • Bump from 256 to 2048+ — it's all local, no cost, reasoning models need headroom for thinking + response

4. Buddy response truncation + residual think-tag strip (from issue 111)

  • Truncate final response text to ~300 words, append ...[truncated] if exceeded
  • Regex strip any residual <think>...</think> as a cheap one-line safety net (belt-and-suspenders for edge cases where Ollama doesn't handle it)
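The two steps above can be sketched together. The PR does ship a `truncateResponse()` helper, but this combined function, its name, and the exact regex are illustrative assumptions, not the shipped code:

```typescript
// Strip residual <think>...</think> blocks, then cap the reply at maxWords.
// Returns null for thinking-only or empty responses (caller shows a fallback).
function sanitizeBuddyResponse(text: string, maxWords = 300): string | null {
  const stripped = text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
  if (stripped.length === 0) return null;
  const words = stripped.split(/\s+/);
  return words.length <= maxWords
    ? stripped
    : words.slice(0, maxWords).join(" ") + " ...[truncated]";
}
```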

5. Buddy troubleshooting docs

  • Document the Modelfile fix for reasoning models whose templates don't include {{.Thinking}}
  • Explain how Ollama detects thinking capability (thinking.InferTags looks for {{.Thinking}} in the template, or a registered parser, or model family heuristics)
  • Include example fixed Modelfile template
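For a feel of what that fixed template looks like, here is a purely illustrative sketch. The chat-role tokens are guesses and vary per model; the point of interest is only the `{{ if .Thinking }}` block that lets Ollama detect thinking capability:

```
FROM phi4-mini-reasoning

TEMPLATE """{{ if .System }}<|system|>{{ .System }}<|end|>{{ end }}<|user|>{{ .Prompt }}<|end|><|assistant|>{{ if .Thinking }}<think>{{ .Thinking }}</think>{{ end }}{{ .Response }}"""
```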

Files to Modify

| File | Changes |
| --- | --- |
| packages/coding-agent/src/core/buddy/buddy-manager.ts | Remove hardcoded model, query server default, bump max_tokens, add truncation + think-tag strip fallback |
| packages/coding-agent/src/core/buddy/buddy-controller.ts | Debug/fix activity indicator callbacks |
| packages/coding-agent/src/modes/interactive/components/buddy-component.ts | Debug/fix activity indicator rendering |
| packages/coding-agent/src/modes/interactive/interactive-mode.ts | Debug/fix activity indicator wiring |
| packages/coding-agent/test/buddy-manager.test.ts | Update mocks (no more llama3.2 references), add truncation + strip tests |
| Buddy docs (TBD location) | Troubleshooting guide for reasoning model Modelfiles |

Testing Approach

buddy-manager.test.ts updates:

  • No hardcoded model references in mocks
  • Server default model respected when available
  • Response with <think> tags stripped before display
  • Response over 300 words truncated with ...[truncated]
  • Short response passed through unchanged
  • Response that is only thinking with no answer returns fallback

Risks

  • Activity indicator: Need to explore the rendering pipeline before committing to a fix — could be timing, state, or rendering
  • Ollama default model API: Need to verify how OLLAMA_DEFAULT_MODEL is exposed (may need to query /api/show or environment)
  • Reasoning model quality: phi4-mini-reasoning is math-focused and outputs \boxed{} formatting — may not be ideal as a buddy model regardless of fixes, but that is a user choice

Plan created by mach6

… and docs

- Remove hardcoded llama3.2 model constant; use first available Ollama model
- Remove llama3.2/3.1 preference in pickOllamaModel()
- Bump maxTokens 256 → 2048 for reasoning model headroom
- Bump Ollama timeout 15s → 30s
- Fix activity indicator: showThinking() default was null, never rendered
- Add response truncation at 300 words with ...[truncated]
- Return null for empty/whitespace-only responses
- Add buddy.md docs with setup, commands, and reasoning model troubleshooting
- Add tests for truncation, thinking block filtering, model selection
m-aebrer (Collaborator, Author) commented Apr 7, 2026

Progress Update

All deliverables implemented:

1. Remove hardcoded model — OLLAMA_MODEL constant replaced with OLLAMA_MODEL_BASE (no hardcoded id/name). pickOllamaModel() no longer prefers llama3.2/3.1 — uses first available model.

2. Fix activity indicator — showThinking() in buddy-component.ts defaulted the label to null via label ?? null, which meant the render check thinkingLabel !== null was always false. Changed to label ?? "thinking".

3. Increase max_tokens — 256 → 2048 for reasoning model headroom. Also bumped Ollama timeout 15s → 30s.

4. Response truncation — Added truncateResponse() helper, truncates at 300 words with ...[truncated]. Applied in ollamaChat(). Also returns null for empty/whitespace responses.

5. Buddy docs — New docs/buddy.md with setup guide, command reference, and troubleshooting section covering reasoning model Modelfile fixes (how Ollama detects thinking capability via {{.Thinking}} in template, how to create a fixed Modelfile).
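The activity-indicator fix in item 2 reduces to a nullish-coalescing default. A minimal sketch with a hypothetical class shape, not the actual buddy-component.ts code:

```typescript
class BuddyIndicator {
  // null means "render nothing", mirroring the `thinkingLabel !== null` guard.
  thinkingLabel: string | null = null;

  showThinking(label?: string): void {
    // The buggy version used `label ?? null`: calling showThinking() with no
    // label left thinkingLabel null, so the indicator never rendered.
    this.thinkingLabel = label ?? "thinking";
  }

  hideThinking(): void {
    this.thinkingLabel = null;
  }

  render(): string {
    return this.thinkingLabel !== null ? `[${this.thinkingLabel}]` : "";
  }
}

const indicator = new BuddyIndicator();
indicator.showThinking();
console.log(indicator.render()); // prints "[thinking]"
```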

No <think> tag parsing added — investigation confirmed that fixing the Ollama Modelfile template to include {{.Thinking}} is the proper fix. Ollama's built-in parser handles tag extraction server-side when properly configured.

Tests: 10 new tests — truncation unit tests (6), response processing integration tests (4: truncation, thinking block filtering, whitespace handling, model selection without preference). All 1947 tests pass.

Commit: ca466f8


Progress tracked by mach6

m-aebrer added 3 commits April 7, 2026 10:42
Reasoning models can take well over 30s for chain-of-thought. Timeout
errors no longer poison the Ollama status cache since the server is
fine — the model is just slow.
- Add /buddy model and /buddy model <name> subcommands
- Buddy won't react until a model is configured — shows nudge message
- Model choice persisted to buddy.json (ollamaModel field)
- /buddy model lists available Ollama models
- Validates model is installed before setting
- Show warning on hatch/show if no model configured
- Update docs with new command and setup flow
The wiggle fallback silently hid every failure — no model configured,
Ollama down, timeouts, crashes. Now:
- No model configured → buddy says 'No Ollama model set! Run /buddy model'
- Ollama unavailable → returns null (no speech)
- respondToNameCall returns null on failure, same as react
m-aebrer marked this pull request as ready for review April 7, 2026 15:03
m-aebrer (Collaborator, Author) commented Apr 7, 2026

Code Review

Critical

Finding 1: <think>...</think> tag stripping not implemented (completeness)
Issue 111 explicitly requires stripping <think> tags from response text — either at the provider level or in ollamaChat(). The PR deliberately omits this, arguing the Modelfile fix is the proper approach. However, the issue acceptance criteria are unambiguous: dreb must handle models whose Modelfile lacks {{.Thinking}}. Users hitting the original bug will still see raw chain-of-thought in speech bubbles.

Important

Finding 2: OLLAMA_DEFAULT_MODEL server default not respected (completeness)
Issue 110 requires querying the Ollama API for a server-default model. checkOllama() only hits /api/tags — no code path queries for a default. The implementation plan listed this as deliverable 1 but the progress comment silently dropped it.

Finding 3: No auto-pick from available models (completeness)
Issue 110 says "if no default is set, pick from available models." The PR instead requires explicit user configuration via /buddy model. This is a UX regression for users who just want the buddy to work — hatching now produces a non-functional buddy until they discover and run /buddy model.

Finding 4: NO_MODEL_MESSAGE leaks into automated reactions (error-audit)
ollamaChat() returns a truthy string ("No Ollama model set!...") when no model is configured. react() passes it through as a speech bubble quip. This fires on every tool error, agent response, and idle timeout — up to once per minute — displaying a config error as buddy speech. Should return null for automated reactions.

Finding 5: setOllamaModel() silently discards preference when no buddy exists (code-review, error-audit)
Running /buddy model <name> before hatching returns "Buddy model set to: X" but loadStored() returns null so nothing persists. The success message is a lie — preference is lost on next session.

Finding 6: handleModelCommand has zero test coverage (test-review)
/buddy model and /buddy model <name> have 6 distinct branches, all untested. No cases in buddy-controller.test.ts.

Finding 7: Timeout vs connection-error cache distinction is untested (test-review)
The PR adds special handling — timeouts preserve Ollama cache, connection errors invalidate it. Only the connection-error path is tested. No test throws DOMException("Aborted", "AbortError") to verify cache preservation on timeout.

Suggestions

Finding 8: showThinking() label fix is untested (test-review)
The one-line fix (label ?? null → label ?? "thinking") has no test coverage. A regression reverting this would go undetected.

Finding 9: /buddy model missing from autocomplete completions (code-review)
getArgumentCompletions lists pet/reroll/off/stats but not model. Users typing /buddy <tab> get no hint the primary new feature exists.

Finding 10: Residual llama3.2 hardcode in error message (completeness)
buddy-manager.ts line ~97: "No models installed. Run: ollama pull llama3.2" still names a specific model. Should be generic (e.g. "ollama pull <model>").

Finding 11: pickOllamaModel — stored-but-uninstalled model path untested (test-review)
No test for when storedModel is set but not in the available list. Same error message as "never configured" — potentially confusing.

Finding 12: getModelNudge() has no tests (test-review)
Two branches (model set vs not set), neither tested.

Finding 13: Test description mismatch (code-review)
Test says "truncates long names to 12 chars" but asserts <= 8. Description should match the actual limit.

Strengths

  • Clean activity indicator fix — the label ?? null → label ?? "thinking" change is a precise root-cause fix
  • Comprehensive truncation logic with good unit tests (6 cases) and integration tests (4 cases)
  • Well-structured handleModelCommand with proper Ollama availability checks and model validation
  • Good separation: OLLAMA_MODEL_BASE as a template without id/name is cleaner than the old hardcoded constant
  • Timeout increase (15s→120s) and smart cache invalidation (only on connection errors, not timeouts) are thoughtful
  • Thorough buddy.md documentation with troubleshooting section
  • Test refactoring to use test-model instead of llama3.2 is consistent

Agents run: code-reviewer, error-auditor, test-reviewer, completeness-checker


Reviewed by mach6

m-aebrer (Collaborator, Author) commented Apr 7, 2026

Review Assessment

Review comment

Classifications

| Finding | Classification | Reasoning |
| --- | --- | --- |
| 1: `<think>` tag stripping not implemented | False positive | Deliberate design decision — Ollama handles this server-side when the Modelfile is correctly configured. Issue 111 has been updated to reflect this. Adding client-side parsing would be fragile and mask misconfigured models. |
| 2: OLLAMA_DEFAULT_MODEL not respected | False positive | OLLAMA_DEFAULT_MODEL does not exist as a queryable Ollama API concept. No endpoint exposes a server default. Issue 110 has been updated. |
| 3: No auto-pick from available models | False positive | Auto-pick was intentionally rejected — it caused the original bug (pulling a new model silently changed the buddy). Explicit /buddy model selection is the correct design. Issue 110 has been updated. |
| 4: NO_MODEL_MESSAGE leaks into automated reactions | Genuine issue | ollamaChat() returns a truthy "No Ollama model set!" string → react() passes it through → triggerReaction() shows it as a speech bubble on every tool error, agent response, and idle timeout. Config errors should not appear as automated quips. |
| 5: setOllamaModel silently discards when no buddy | Genuine issue | /buddy model <name> before hatching returns success but loadStored() is null so nothing persists. User sees "Buddy model set to: X" but the choice is lost. |
| 6: handleModelCommand zero test coverage | Genuine issue | 6 branches, all untested. New feature needs test coverage. |
| 7: Timeout vs connection-error distinction untested | Nitpick | Low risk — straightforward branching logic. The behavior is correct; the gap is minor. |
| 8: showThinking label fix untested | Nitpick | Simple one-line UI state setter. Low regression risk. |
| 9: /buddy model missing from autocomplete | Genuine issue | Primary new feature is invisible to tab-completion. Quick fix. |
| 10: Residual llama3.2 in error message | Nitpick | It is a suggestion of a popular model in a help message, not a hardcoded default. Reasonable. |
| 11: pickOllamaModel stored-but-uninstalled untested | Nitpick | 3-line function, trivially correct by inspection. |
| 12: getModelNudge untested | Nitpick | 2-line method, trivially correct. |
| 13: Test description "12 chars" vs 8 | Genuine issue | Misleading spec — easy fix. |

Additional finding from manual testing:

| Finding | Classification | Reasoning |
| --- | --- | --- |
| 14: reroll/hatch lose ollamaModel | Genuine issue | Both reroll() and hatch() build a new StoredCompanion without carrying over stored?.ollamaModel. Rerolling resets the model choice, requiring the user to re-run /buddy model. The model is a backend preference, not tied to the buddy identity. |

Simplifier suggestions (low priority, no behavior change):

| Suggestion | Notes |
| --- | --- |
| Remove redundant explicit defaults in BuddyController construction (interactive-mode.ts) | Values match documented defaults — passing them is a maintenance hazard |
| Extract getState() to a local variable to eliminate the ! assertion (buddy-controller.ts line 379) | Cleaner, avoids double call |
| Remove redundant \|\| null in react() | ollamaChat() already returns text \|\| null — the extra guard is a no-op |

Action Plan

  1. Fix NO_MODEL_MESSAGE leaking into automated reactions (finding 4) — In react(), return null when ollamaChat() returns NO_MODEL_MESSAGE so config errors only surface on explicit name-calls, not as automated speech bubbles.

  2. Preserve ollamaModel across reroll/hatch (finding 14) — Carry over stored?.ollamaModel in both reroll() and hatch() when building the new StoredCompanion.

  3. Guard setOllamaModel against missing buddy (finding 5) — Either check buddy existence in handleModelCommand() and return a warning, or have setOllamaModel() create a minimal stub.

  4. Add model to buddy autocomplete (finding 9) — Add to getArgumentCompletions subcommands array.

  5. Add handleModelCommand tests (finding 6) — Cover: /buddy model with/without Ollama, /buddy model <valid>, /buddy model <invalid>, /buddy model without hatched buddy.

  6. Fix test description typo (finding 13) — "12 chars" → "8 chars".

  7. Apply simplifier suggestions — Remove redundant defaults, extract getState local, remove redundant || null.
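The guard in action item 1 can be sketched as follows. The constant value and the helper are hypothetical; the PR ultimately shapes this differently, but the filtering idea is the same:

```typescript
// Hypothetical message constant, as described in the finding.
const NO_MODEL_MESSAGE = "No Ollama model set! Run /buddy model";

// Automated reactions stay silent on configuration errors; only explicit
// name-calls should surface the nudge. Sketch of the react() guard.
function filterAutomatedReaction(response: string | null): string | null {
  if (response === null || response === NO_MODEL_MESSAGE) return null;
  return response;
}
```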


Assessment by mach6

m-aebrer (Collaborator, Author) commented Apr 7, 2026

Progress Update

Fixed 6 review findings + 3 simplifier suggestions:

Finding 4 — NO_MODEL_MESSAGE leaking into speech bubbles: ollamaChat() now returns null when no model is configured instead of a truthy error string. Config nudge is already surfaced via getModelNudge() on hatch/show — no need to repeat it as automated quips.

Finding 14 — reroll/hatch lose ollamaModel: Both reroll() and hatch() now carry over stored?.ollamaModel when building the new StoredCompanion. Model choice persists across rerolls.

Finding 5 — setOllamaModel silently discards when no buddy: handleModelCommand() now checks getState() and hasStoredBuddy() before attempting to set model. Returns a warning directing users to hatch first.

Finding 9 — /buddy model missing from autocomplete: Added model to the buddy subcommands array in getArgumentCompletions.

Finding 6 — handleModelCommand zero test coverage: Added 7 new tests covering all branches: Ollama down, current model display, no model set, model set via prefix match, model not found, no buddy hatched, Ollama down during set.

Finding 13 — Test description mismatch: "truncates long names to 12 chars" → "8 chars".

Simplifier: Removed redundant explicit defaults in BuddyController construction, extracted double getState() to local variable eliminating ! assertion, removed redundant || null in react().

All 1955 tests pass (0 failures).

Commit: 0076537


Progress tracked by mach6

m-aebrer (Collaborator, Author) commented Apr 7, 2026

Code Review

Critical

No critical findings.

Important

Finding 1: Dead catch block with wrong DOMException name in ollamaChat()
buddy-manager.ts lines 357–365. The catch block checks err.name === "AbortError" but AbortSignal.timeout() fires "TimeoutError", not "AbortError". With the current condition, timeouts would invalidate the cache — the opposite of the stated intent. Additionally, completeSimple may never actually throw (the openai-completions provider resolves with stopReason: "error" or "aborted" instead of rejecting), making this entire catch block dead code. The actual working behavior likely comes from the response.stopReason checks below. Either fix the error name to "TimeoutError" for defensive coverage, or remove the dead catch block if completeSimple truly never rejects.
Sources: error-auditor, code-reviewer
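The DOMException naming claim is easy to verify in isolation (Node 18+, per the WHATWG DOM spec):

```typescript
// AbortSignal.timeout() aborts with a DOMException named "TimeoutError",
// not "AbortError", so a check for err.name === "AbortError" never matches
// a timeout.
async function timeoutName(): Promise<string> {
  const signal = AbortSignal.timeout(10);
  await new Promise((resolve) => setTimeout(resolve, 50));
  // signal.reason holds the exception the signal aborted with
  return (signal.reason as Error).name;
}

timeoutName().then((name) => console.log(name)); // prints "TimeoutError"
```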

Finding 2: "No models installed" conflated with "Ollama not running" in user messages
buddy-controller.ts handleModelCommand(). checkOllama() correctly distinguishes "Ollama down" from "Ollama running but no models" at the data layer (returns { available: false, error: "No models installed..." }), but handleModelCommand ignores status.error and always returns "Ollama is not running. Start it with: ollama serve" — factually wrong when Ollama IS running but has no models. The status.error field with the correct pull instructions is available but unused.
Source: completeness-checker
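The shape of the fix is simple to sketch. The status type and helper name are hypothetical, inferred from the finding:

```typescript
// Hypothetical shape of the checkOllama() result described in the finding.
type OllamaStatus = { available: boolean; error?: string };

// Prefer the specific error from checkOllama() over a generic message, so
// "no models installed" guidance is not replaced by "Ollama is not running".
function unavailableMessage(status: OllamaStatus): string {
  return status.error ?? "Ollama is not running. Start it with: ollama serve";
}
```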

Finding 3: No test for timeout vs connection error cache distinction
buddy-manager.test.ts. The PR introduces smart cache invalidation (timeouts preserve cache, connection errors clear it) but only the connection error path is tested. No test throws a DOMException with the timeout name to verify cache preservation. This is compounded by finding 1 — the wrong error name means the behavior is also wrong, and a test would have caught it.
Source: test-reviewer

Finding 4: hatch() and reroll() ollamaModel preservation is untested
buddy-manager.test.ts. "Preserves ollamaModel across reroll/hatch" is a stated feature, and the implementation carries it forward via conditional spread. But no test asserts state.ollamaModel after hatch or reroll. The writeStoredBuddy helper always writes ollamaModel: "test-model", so the fixture is there — the assertion is missing. If someone drops the spread, all tests still pass.
Source: test-reviewer

Suggestions

Finding 5: ollamaChat() calls loadStored() from disk on every invocation
buddy-manager.ts lines 344–345. ollamaChat is the hot path (every reaction, name-call, idle trigger). It reads buddy.json from disk on every call to get the model name, even though this.state.ollamaModel is always up-to-date (setOllamaModel keeps both in sync). All callers already guard this.state is non-null before reaching ollamaChat. Use this.state.ollamaModel or this.getOllamaModel() instead.
Source: code-reviewer

Finding 6: Orphaned stale comment on OLLAMA_MODEL_BASE
buddy-manager.ts lines 24–25. Two consecutive JSDoc blocks — the old /** Ollama model config for buddy reactions */ was left behind when renaming the constant. Only the last block attaches to the declaration.
Source: code-reviewer

Finding 7: getModelNudge() has no test coverage
buddy-controller.ts. Two branches (model configured vs not), neither tested. This is the only mechanism that prompts users to configure a model after hatching.
Source: test-reviewer

Finding 8: showThinking() bug fix has no regression test
buddy-component.ts. The label ?? null to label ?? "thinking" fix is the root cause fix for the activity indicator bug (issue 110), but has zero test coverage. No BuddyComponent tests exist in the suite.
Source: test-reviewer

Strengths

  • Clean activity indicator root-cause fix — precise one-line change
  • Comprehensive truncation logic with 6 unit tests and 4 integration tests
  • Well-structured handleModelCommand with proper Ollama availability checks and model validation
  • Smart cache invalidation concept (timeout vs connection error) is the right design, even though the implementation has a bug
  • Thorough buddy.md documentation with troubleshooting section for reasoning model Modelfiles
  • Test refactoring from llama3.2 to test-model is consistent throughout
  • ollamaModel preserved across reroll/hatch — correct design (model is a backend pref, not tied to buddy identity)
  • 7 new controller tests and 10 new manager tests show good testing discipline

Agents run: code-reviewer, error-auditor, test-reviewer, completeness-checker


Reviewed by mach6

m-aebrer (Collaborator, Author) commented Apr 7, 2026

Review Assessment

Review comment

Classifications

| Finding | Classification | Reasoning |
| --- | --- | --- |
| 1: Dead catch block with wrong DOMException name | Genuine issue | Confirmed both claims. AbortSignal.timeout() fires TimeoutError, not AbortError. Additionally, completeSimple() never throws — the openai-completions provider catches all errors internally and resolves with stopReason: "error" or "aborted". The catch block is dead code. Behavior works by accident via the stopReason checks below. |
| 2: "No models installed" conflated with "Ollama not running" | Genuine issue | Confirmed. checkOllama() returns distinct error strings for each case, but handleModelCommand() ignores status.error and hardcodes "Ollama is not running" for both. Users with Ollama running but no models get incorrect guidance. |
| 3: No test for timeout vs connection error cache distinction | Genuine issue | Confirmed. No test exercises the cache preservation/invalidation distinction. The existing "returns null when completeSimple throws" test uses mockRejectedValue, which is an impossible scenario since completeSimple never throws. |
| 4: hatch/reroll ollamaModel preservation untested | Genuine issue | Confirmed. writeStoredBuddy sets ollamaModel: "test-model" but no hatch or reroll test asserts the value survives. The conditional spread could be dropped silently. |
| 5: ollamaChat calls loadStored() every invocation | Nitpick | this.state is always non-null when ollamaChat() runs, and setOllamaModel() keeps both disk and in-memory in sync. The extra disk read is redundant but not a correctness issue — it is a small JSON file per buddy interaction. |
| 6: Orphaned stale comment on OLLAMA_MODEL_BASE | Nitpick | Confirmed. Two consecutive JSDoc blocks; first is stale. Cosmetic only. |
| 7: getModelNudge() has no test coverage | Genuine issue | Confirmed. New method in this PR, two branches, neither tested. This is the only mechanism that prompts users to configure a model after hatching. |
| 8: showThinking() bug fix has no regression test | Genuine issue | Confirmed. The root-cause fix for the activity indicator bug (issue 110) has zero test coverage. No BuddyComponent tests exist. A regression reverting "thinking" back to null would go undetected. |

Simplifier suggestions (low priority, no behavior change):

| Suggestion | Notes |
| --- | --- |
| Remove dead outer try/catch in react() and respondToNameCall() | ollamaChat handles all failures internally; the outer catch is unreachable |
| Extract mountAndRevealBuddy() helper in interactive-mode.ts | Same 3-method sequence (mountBuddy + showBuddyStatsPanel + checkAndWarnOllama) repeated in reroll/hatch/show cases |
| Remove redundant duplicate lifecycle test | "should start and load existing buddy" is a strict subset of the test below it |

Action Plan

  1. Remove dead catch block, handle stopReason correctly in ollamaChat() (finding 1) — Remove the try/catch around completeSimple. Check response.stopReason for "error" (invalidate cache) vs "aborted" (preserve cache). This makes error handling explicit and correct.

  2. Use status.error in handleModelCommand() (finding 2) — Replace hardcoded "Ollama is not running" with status.error so users see correct guidance for "no models" vs "not running".

  3. Fix impossible completeSimple-throws test, add stopReason-based cache tests (finding 3) — Replace mockRejectedValue test with stopReason-based tests: one for "error" (cache invalidated) and one for "aborted" (cache preserved).

  4. Add ollamaModel preservation assertions to hatch/reroll tests (finding 4) — Assert state.ollamaModel equals stored value after both operations.

  5. Add getModelNudge() tests (finding 7) — Two branches: model configured (returns null) and not configured (returns nudge string).

  6. Add showThinking() regression test (finding 8) — Verify showThinking() with no args sets a non-null label, preventing the original bug from recurring.

  7. Apply simplifier suggestions — Remove dead outer try/catch, extract mountAndRevealBuddy helper, remove redundant test.


Assessment by mach6

m-aebrer (Collaborator, Author) commented Apr 7, 2026

Progress Update

Fixed 8 review findings + 3 simplifier suggestions from review round 2:

Finding 1 — Dead catch block with wrong DOMException name: Removed wrong AbortError check (was dead code since completeSimple never throws). Simplified catch to safety net. Added explicit stopReason: "aborted" handling for timeouts.

Finding 2 — "No models installed" conflated with "not running": handleModelCommand now uses status.error from checkOllama() instead of hardcoded "Ollama is not running" message. Users with Ollama running but no models now get correct guidance.

Finding 3 — No timeout vs connection error cache tests: Replaced impossible mockRejectedValue test with two stopReason-based tests: "error" verifies cache invalidation (fetch called twice), "aborted" verifies cache preservation (fetch called once).

Finding 4 — hatch/reroll ollamaModel preservation untested: Added 3 tests: hatch preserves model, hatch without model stays undefined, reroll preserves model. All assert both in-memory state and disk persistence.

Finding 5 — ollamaChat calls loadStored() every invocation: Replaced loadStored()?.ollamaModel with this.state?.ollamaModel — no unnecessary disk I/O.

Finding 6 — Orphaned stale comment: Removed duplicate JSDoc block on OLLAMA_MODEL_BASE.

Finding 7 — getModelNudge() untested: Added 3 tests: returns null when model configured, returns nudge when no model, returns nudge when no buddy.

Finding 8 — showThinking() fix untested: Added 4 regression tests: default label, custom label, hideThinking clears indicator, narrow mode rendering.

Simplifier: Removed dead outer try/catch in react() and respondToNameCall(). Extracted mountAndRevealBuddy() helper from repeated 3-method sequence in interactive-mode.ts. Removed redundant subset lifecycle test. Updated controller test mocks with error field.

All 1966 tests pass (0 failures).

Commit: 3dcbc43


Progress tracked by mach6

m-aebrer merged commit 3f259d2 into master Apr 7, 2026
2 checks passed
m-aebrer deleted the feature/issue-110-111-buddy-ollama-overhaul branch April 7, 2026 17:12