feat(runner): upgrade to UBI 10, add glab CLI, pin versions, add freshness workflow #1091
jeremyeder wants to merge 17 commits into main from
Conversation
Comprehensive update of the ambient-runner:

Dependencies (pyproject.toml):
- Bump all 17 Python deps to latest PyPI releases
- Migrate [tool.uv] dev-dependencies to [dependency-groups] dev
- Add pytest-asyncio asyncio_mode = "auto" for v1.x compatibility
- Update requires-python from >=3.11 to >=3.12

MCP servers (.mcp.json):
- Bump workspace-mcp from 1.6.1 to 1.14.2

Container images (Dockerfile):
- Upgrade base from ubi9/python-311 to ubi10/ubi (pinned by digest)
- Python 3.11 to 3.12 (RHEL 10 system default)
- Node.js 20 to 22 LTS (explicit module stream)
- Add --break-system-packages for PEP 668 on RHEL 10
- Remove 37 lines of commented-out Playwright code

State-sync (state-sync/Dockerfile):
- Bump Alpine from 3.19 to 3.21

Housekeeping:
- Modernize type hints across 21 files (Optional -> X | None, etc.)
- Fix test_google_drive_tool_invocation to use @pytest.mark.skip
- Add docs/UPDATE_PROCEDURE.md for repeatable maintenance

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
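In pyproject.toml terms, the dependency-group migration and asyncio mode described above look roughly like this (a sketch only; the actual dependency names and versions in the PR differ):

```toml
[project]
requires-python = ">=3.12"

# PEP 735 dependency groups replace the legacy [tool.uv] dev-dependencies list
[dependency-groups]
dev = [
  "pytest",
  "pytest-asyncio",
  "httpx",
]

# pytest-asyncio v1.x: auto mode runs async tests without per-test decorators
[tool.pytest.ini_options]
asyncio_mode = "auto"
```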
- Add Zod schema + React form matching claude-agent-sdk 0.1.48 types
- Integrate into create-session-dialog behind `advanced-agent-options` flag
- Add agentOptions field to CreateAgenticSessionRequest type
- Refine UPDATE_PROCEDURE.md with form sync steps and automation strategy

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Address code review findings:
- Split 1087-line options-form.tsx into 11 sub-components under _components/
- Replace bare <label> elements with Shadcn <Label> throughout
- Use stable IDs instead of array index keys in list editors
- Add TODO comment documenting backend agentOptions handler gap

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add agentOptions to CreateAgenticSessionRequest in agentic-session.ts (dialog imports from this file, not api/sessions.ts)
- Use z.input for ClaudeAgentOptionsForm type so zodResolver input type matches the form type parameter (fields with .default() are optional)
- Bump CI Python version from 3.11 to 3.12 to match requires-python

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add httpx to CI test dependencies (moved to dependency-groups in pyproject.toml, not installed by pip install -e .)
- Fix UBI 10 base image digest to use OCI manifest index format (buildx needs the multi-arch index digest, not the single manifest)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Install git before gh CLI (gh RPM depends on git package)
- Update observability tests to use standalone setup_bridge_observability() (method was refactored from bridge instance method to module function)
- Add httpx to CI test dependencies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove dnf module enable for nodejs (RHEL 10 ships Node.js directly in AppStream, not as a module stream)
- Install all optional deps in CI (pip install -e ".[all]") so tests requiring claude-agent-sdk can run

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The interrupt endpoint returns "No active session manager" instead of including the thread_id in the error. Relax assertion to match. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
validate_vertex_credentials_file now checks that the credentials file exists. Provide a temporary file in tests that use Vertex mode. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…dule)

The test imports from a top-level `auth` module that doesn't exist. Skip the entire file with a clear reason until migrated to the new ambient_runner.platform.auth module path.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…flow

- Add glab (GitLab CLI) binary to the runner image
- Pin all runner tools with explicit versions via Dockerfile ARGs: gh 2.74.0, glab 1.52.0, uv 0.7.8, pre-commit 4.2.0, gemini-cli 0.1.17
- Switch gh from dnf repo install to versioned binary download
- Add weekly CI workflow (runner-tool-versions.yml) that checks all components for updates and opens a PR when newer versions are available
- Covers: base image digest, gh, glab, uv, pre-commit, gemini-cli

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Collapse 5 separate fetch steps into 1 with concurrent background curls
- Collapse 6 separate sed update steps into 1 combined step
- Remove eval-based variable indirection (injection risk)
- Update base image references from ubi9/python-311 to ubi10/ubi
- Merge gh + glab Dockerfile installs into single RUN layer

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough

Adds an advanced Agent Options UI and Zod-backed form, propagates agentOptions in session creation, modernizes Python typing across ambient-runner, updates Dockerfiles and pinned tools (including a weekly runner-image freshness workflow), adds SDD preflight and runner constitution/spec files, bumps runner to Python 3.12, and updates tests and docs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as Frontend User
    participant Dialog as Create Session Dialog
    participant Form as AgentOptionsForm (Zod)
    participant API as Backend Sessions API
    User->>Dialog: Open dialog, enable advanced options
    Dialog->>Form: Render AgentOptionsFields (editors)
    User->>Form: Edit agentOptions fields
    Form->>Form: onChange updates internal state
    User->>Dialog: Submit
    Dialog->>Form: trigger() validate agentOptions
    Form-->>Dialog: valid / invalid
    alt valid
        Dialog->>API: POST /sessions { ..., agentOptions }
        API-->>Dialog: 201 Created
        Dialog->>User: Success
    else invalid
        Dialog->>User: Show validation errors
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
components/runners/ambient-runner/ag_ui_claude_sdk/adapter.py (1)
430-443: ⚠️ Potential issue | 🟠 Major

`build_options()` accepts `thread_id` but never uses it. The parameter is documented as "for session resumption lookup" (line 441) but is never referenced in the method body. Callers like `bridge.py:157-158` pass it expecting thread-based resumption behavior, but only `resume_from` is actually used (line 544). This API contract mismatch creates misleading caller expectations. Either implement thread-based session lookup or remove `thread_id` from the signature and docstring.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/runners/ambient-runner/ag_ui_claude_sdk/adapter.py` around lines 430 - 443, The build_options method declares and documents a thread_id parameter for session resumption but never uses it; update the implementation so thread-based lookup is actually performed or remove the parameter and docs to match behavior. Locate build_options in ag_ui_claude_sdk.adapter (the function that currently accepts input_data, thread_id, resume_from) and either: (A) implement thread lookup by using thread_id to resolve the session/thread (same lookup path used for resume_from/session id resolution) and feed the resolved session history into the existing resume logic, ensuring callers like bridge.py that pass thread_id get the expected resumption, or (B) remove thread_id from the signature and the docstring (and update any callers such as bridge.py to stop passing thread_id) so the API contract matches the actual resume_from-based behavior.

components/runners/ambient-runner/ag_ui_claude_sdk/handlers.py (1)
137-151: 🛠️ Refactor suggestion | 🟠 Major
`parent_tool_use_id` parameter is accepted but unused in emitted events. The function receives this parameter and documents it for nested results, but never includes it in the `ToolCallEndEvent` or `ToolCallResultEvent` payloads. This differs from `handle_tool_use_block`, which preserves parent linkage via `parent_message_id`. Either remove the parameter or include it in emitted events to maintain parent-child relationships explicitly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/runners/ambient-runner/ag_ui_claude_sdk/handlers.py` around lines 137 - 151, The handler for ToolResultBlock currently accepts parent_tool_use_id but never attaches it to emitted events; either remove the parameter or, preferably, mirror handle_tool_use_block by including parent linkage: when constructing ToolCallEndEvent and ToolCallResultEvent add a parent_message_id or parent_tool_use_id field populated from the parent_tool_use_id parameter (or map parent_tool_use_id -> parent_message_id if events expect that name) so nested tool results preserve parent-child relationships; update all places in this function where ToolCallEndEvent and ToolCallResultEvent are instantiated to pass the parent id.

components/runners/state-sync/Dockerfile (1)
1-14: ⚠️ Potential issue | 🔴 Critical

The Alpine migration is incomplete; this image will neither build nor start. `dnf` and `git-core` are Fedora/RHEL specific, so the build breaks as soon as this Dockerfile runs on Alpine. On top of that, `components/runners/state-sync/sync.sh` still uses `#!/bin/bash`, but Alpine does not ship `bash` by default, so the `ENTRYPOINT` fails even if the package install is fixed. Both `.github/workflows/components-build-deploy.yml` and `.github/workflows/prod-release-deploy.yaml` build this Dockerfile, so this blocks CI and release images.

Proposed fix
```diff
 FROM alpine:3.21
-RUN dnf install -y rclone git-core jq && \
-    dnf clean all
+RUN apk add --no-cache bash rclone git jq
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/runners/state-sync/Dockerfile` around lines 1 - 14, The Dockerfile uses Fedora tools (dnf, git-core) and the container lacks bash, causing builds and ENTRYPOINT (/usr/local/bin/sync.sh) to fail; replace the install step to use Alpine's package manager (apk add --no-cache rclone git jq bash) and/or remove bash dependency by changing the scripts (sync.sh and hydrate.sh) shebangs to /bin/sh and ensuring they are POSIX-compliant; keep the COPY of hydrate.sh and sync.sh and the chmod +x step, and verify ENTRYPOINT still points to /usr/local/bin/sync.sh after you either install bash or update the script shebangs.

components/runners/ambient-runner/ambient_runner/bridge.py (1)
155-157: ⚠️ Potential issue | 🟠 Major

Return the stored context from the base accessor. Line 157 persists `self._context`, but the base `context` property still hardcodes `None`. Any code using the `PlatformBridge` interface will never see the initialized context, which breaks the contract this property documents.

Proposed fix
```diff
 @property
 def context(self) -> RunnerContext | None:
     """The current ``RunnerContext``, or ``None`` before ``set_context()``."""
-    return None
+    return self._context
```
Also applies to: 234-237
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/runners/ambient-runner/ambient_runner/bridge.py` around lines 155 - 157, The base context property currently returns None instead of the stored value, so update the property getter(s) to return self._context (with the correct Optional[RunnerContext] typing) rather than None; locate the context property in bridge.py and replace the hardcoded None return with return self._context, and apply the same change to the other context accessor instance referenced around lines 234-237 so both PlatformBridge implementations expose the initialized _context set by set_context.

components/runners/ambient-runner/ambient_runner/bridges/langgraph/bridge.py (1)
87-95: ⚠️ Potential issue | 🟠 Major

Don't silently ignore `thread_id`. The base interface now exposes thread-scoped interruption, but this implementation always calls `interrupt()` without using the supplied ID. That gives callers the wrong contract and can cancel the wrong run once multiple threads are active. If LangGraph cannot target a specific thread yet, reject non-None IDs explicitly instead of ignoring them.

Proposed fix
```diff
 async def interrupt(self, thread_id: str | None = None) -> None:
     """Interrupt the current LangGraph execution."""
     if self._adapter is None:
         raise RuntimeError("LangGraphBridge: no adapter to interrupt")
+    if thread_id is not None:
+        raise NotImplementedError(
+            "LangGraphBridge.interrupt() does not support thread_id"
+        )
     if hasattr(self._adapter, "interrupt"):
         await self._adapter.interrupt()
```
Verify each finding against the current code and only fix it if needed. In `@components/runners/ambient-runner/ambient_runner/bridges/langgraph/bridge.py` around lines 87 - 95, The interrupt method on LangGraphBridge currently ignores the thread_id parameter; update ambient_runner/bridges/langgraph/bridge.py so interrupt(self, thread_id: str | None = None) rejects non-None thread_id when the underlying adapter does not support thread-scoped interruption instead of silently ignoring it: check if thread_id is not None and if self._adapter has an interrupt callable that accepts a thread_id argument (or otherwise supports thread-scoped cancelation), call that with the id; if the adapter only supports global interrupt (self._adapter.interrupt without args) raise a clear exception (e.g., ValueError or NotImplementedError) indicating thread-scoped interrupt is unsupported; keep the existing behavior of calling self._adapter.interrupt() when thread_id is None and the adapter only supports global interrupt.

components/runners/ambient-runner/ambient_runner/bridges/gemini_cli/mcp.py (1)
140-159: 🛠️ Refactor suggestion | 🟠 Major
`setup_gemini_mcp()` return type should be `str`, not `str | None`. Line 153 always provides a fallback empty dict, line 157 unconditionally injects `ambient-feedback`, and line 159 calls `write_gemini_settings()` which returns `str`. The function never returns `None`, making the `str | None` annotation and docstring claim incorrect. This misleads callers and violates the type contract.

Proposed fix
```diff
 def setup_gemini_mcp(
     context: RunnerContext,
     cwd_path: str,
-) -> str | None:
+) -> str:
     """End-to-end MCP setup: load config, write settings file, write commands.

     Args:
         context: Runner context.
         cwd_path: Working directory.

     Returns:
-        Path to the written settings file, or None if no MCP servers.
+        Path to the written settings file.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/runners/ambient-runner/ambient_runner/bridges/gemini_cli/mcp.py` around lines 140 - 159, The function setup_gemini_mcp always constructs a settings dict (via build_gemini_mcp_settings with a fallback), injects an ambient-feedback entry (_build_feedback_server_entry), and calls write_gemini_settings which returns a string, so update setup_gemini_mcp to reflect a non-optional return: change the type annotation from str | None to str, adjust the docstring return description to state it always returns the settings file path (str), and ensure any callers relying on None are updated if needed; locate setup_gemini_mcp, build_gemini_mcp_settings, _build_feedback_server_entry, and write_gemini_settings to make these edits.

components/runners/ambient-runner/Dockerfile (1)
42-46: ⚠️ Potential issue | 🟠 Major

Use the pinned uv.lock instead of resolving from pyproject.toml constraints. `pip3 install '/app/ambient-runner[all]'` resolves dependencies from pyproject.toml version constraints (e.g., `fastapi>=0.135.1`), which allows transitive dependencies to drift between builds. Since `uv.lock` is checked in and `uv` is already installed in the image, use `uv pip install --locked '/app/ambient-runner[all]'` to ensure reproducible builds with pinned transitive dependencies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/runners/ambient-runner/Dockerfile` around lines 42 - 46, Replace the current unpinned install command that runs pip3 install '/app/ambient-runner[all]' with a locked install using uv so transitive dependencies are pinned; specifically, when installing the ambient-runner package (the RUN pip3 install line in the Dockerfile) switch to using uv pip install --locked '/app/ambient-runner[all]' so the checked-in uv.lock is honored and builds are reproducible.

components/frontend/src/types/agentic-session.ts (1)
228-250: 🛠️ Refactor suggestion | 🟠 Major

Consolidate duplicated `CreateAgenticSessionRequest` type definitions. This file and `components/frontend/src/types/api/sessions.ts` both define `CreateAgenticSessionRequest`. Keeping two request contracts will drift and cause subtle API mismatches over time (especially around evolving fields like `agentOptions`).

Recommend extracting one canonical type and reusing it in both call-site and API layers.
As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/frontend/src/types/agentic-session.ts` around lines 228 - 250, The CreateAgenticSessionRequest type is duplicated between this file and components/frontend/src/types/api/sessions.ts; extract a single canonical definition (e.g., export type CreateAgenticSessionRequest = { ... }) into a shared types module (for example components/frontend/src/types/session-types.ts or add to an existing central types barrel), update both this file and components/frontend/src/types/api/sessions.ts to import and re-export/use that single type, and ensure all fields (including agentOptions, repos, activeWorkflow, environmentVariables, etc.) are preserved so callers and API layers reference the exact same symbol.

components/runners/ambient-runner/tests/test_e2e_api.py (1)
397-403: ⚠️ Potential issue | 🟡 Minor

Keep a minimal semantic check on `detail`, not just key presence. Right now this passes even if `detail` is empty or non-string. Strengthen it to assert a non-empty string while staying message-agnostic.

Proposed fix
```diff
 data = resp.json()
 assert "detail" in data
+assert isinstance(data["detail"], str)
+assert data["detail"].strip() != ""
```
Verify each finding against the current code and only fix it if needed. In `@components/runners/ambient-runner/tests/test_e2e_api.py` around lines 397 - 403, The test test_interrupt_returns_structured_error currently only checks that "detail" exists; change it to assert that data["detail"] is a non-empty string (e.g., use isinstance(data.get("detail"), str) and assert data["detail"].strip() != "") so the error detail is semantically useful while remaining message-agnostic; update the assertions in that test to perform these two checks instead of only key presence.

components/runners/ambient-runner/ambient_runner/endpoints/run.py (1)
31-40: ⚠️ Potential issue | 🟠 Major

Do not silently drop object-shaped `context`. Line 31 accepts `dict` for `context`, but Line 40 drops it by coercing non-lists to `[]`. This can lose caller-provided context without any error.

💡 Proposed fix
```diff
-context_list = self.context if isinstance(self.context, list) else []
+if isinstance(self.context, list):
+    context_list = self.context
+elif isinstance(self.context, dict):
+    context_list = [self.context]
+else:
+    context_list = []
```
Verify each finding against the current code and only fix it if needed. In `@components/runners/ambient-runner/ambient_runner/endpoints/run.py` around lines 31 - 40, The to_run_agent_input method currently discards object-shaped context by coercing non-lists to an empty list; update the logic so that self.context is preserved: if self.context is a list use it, if it's a dict wrap it into a single-element list (e.g., context_list = [self.context]), and if None default to []; ensure this change is applied where context_list is constructed in to_run_agent_input so callers' dict contexts are not lost when building the RunAgentInput.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/runner-tool-versions.yml:
- Around line 57-97: The background fetches launched with fetch_version ... &
should capture each child PID and wait on them individually so a single failed
fetch causes the job to fail deterministically; modify the section that starts
the concurrent fetches to save each background PID (e.g. pids+=( $! ) after each
fetch_version invocation), then loop over the pids array and run wait "$pid" for
each, checking the exit status and echoing which fetch (use the corresponding
tool name tracked alongside the PID) failed and exit 1 if any wait returns
non‑zero; use the existing fetch_version function name and the PIDs to map
failures back to specific tools so the workflow fails fast and prints which API
call failed.
In
`@components/frontend/src/components/claude-agent-options/_components/agents-editor.tsx`:
- Around line 24-26: addAgent currently builds keys as `agent-${entries.length +
1}`, which can collide after deletions; change addAgent to generate a truly
unique key instead (e.g. use crypto.randomUUID() when available or fall back to
a timestamp or an incrementing counter held in component state/closure) and use
that key when calling onChange({ ...value, [newKey]: { description: "", prompt:
"" } }); keep removeAgent and onChange usage the same but ensure the newKey is
unique (reference: addAgent, removeAgent, value, onChange).
In
`@components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx`:
- Around line 54-61: The delete Button rendering an icon-only control (Button
with Trash2) lacks an accessible name; update the Button in key-value-editor.tsx
to include an accessible label (e.g., aria-label or aria-labelledby) that
describes the action and target (for example "Remove entry" or `Remove ${k}` if
`k` is the key), or add visually-hidden text inside the button, and ensure the
onClick handler (removeEntry) remains unchanged; locate the Button element and
Trash2 icon in the component and add the aria attribute or hidden label
accordingly.
- Around line 19-20: The component currently treats object keys as row
identifiers which causes addEntry/no-op, key-collision overwrites, remounts on
edits, nullable env values, and an inaccessible delete button; refactor to use
stable row IDs (e.g., maintain an array of {id, key, value} or a parallel id
map) so entries (derived from value) are rendered with id as React key (not the
editable key), change addEntry to append a new unique id entry instead of adding
an empty string key, update onChange handlers for key edits to rename safely
(detect collisions and either prevent/merge or disallow duplicate keys) and emit
non-null empty strings for the env field (never null) to match schema, and make
the delete control accessible by adding an aria-label or visible label to the
delete button.
In
`@components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx`:
- Around line 33-45: The addServer function currently uses
server-${entries.length + 1} which can collide after deletions and
updateServerName will silently overwrite an existing key; change addServer to
compute the next unused default key by scanning Object.keys(value) for existing
server-N names and incrementing N until an unused name is found, then call
onChange with that unique key and a McpFormServer default; modify
updateServerName to check if newName already exists in value and, if so, do not
overwrite (either return early or surface an error/validation) instead of
clobbering the entry, otherwise perform the rename by copying entries into next
and calling onChange(next).
In
`@components/frontend/src/components/claude-agent-options/_components/output-format-field.tsx`:
- Around line 23-38: The textarea's local rawJson state and jsonError are out of
sync with external form value and don't propagate parse failures to the form;
add a useEffect that watches the value prop and updates rawJson = value ?
JSON.stringify(value, null, 2) : "" so external resets/defaults refresh the
editor, and modify handleChange so that when JSON.parse(text) throws you
setJsonError(...) AND call onChange(undefined) (or a provided onInvalid callback
if available) to clear the form value and surface the validation error to the
form instead of silently keeping the last good object; keep the successful parse
path calling onChange(parsed) and setJsonError(null).
In
`@components/frontend/src/components/claude-agent-options/_components/string-list-editor.tsx`:
- Around line 48-55: The delete button in string-list-editor.tsx is icon-only
and lacks an accessible name; update the Button (the one rendering <Trash2 />
with onClick={() => removeItem(i)}) to provide an accessible label (e.g.,
aria-label or aria-labelledby, or include a visually hidden span with the row
index or value) that describes the action and which item will be removed so
screen readers can identify the button's purpose and target.
In `@components/frontend/src/components/claude-agent-options/options-form.tsx`:
- Around line 119-145: The number inputs for the form fields (e.g., the
FormField for "max_budget_usd" and the other optional numeric field using
field.onChange) need guarding against NaN: replace the direct
Number(e.target.value) usage with reading e.target.valueAsNumber and check
Number.isNaN(valueAsNumber) before calling field.onChange; if it is NaN, call
field.onChange(undefined) (or the empty state) otherwise pass the valid number,
and ensure the controlled value uses field.value ?? "" so the input never
receives NaN.
In `@components/frontend/src/components/claude-agent-options/schema.ts`:
- Around line 149-152: The outputFormatSchema currently allows partial objects
because both fields are optional; update outputFormatSchema so that both
properties are required by removing .optional() from the type and schema entries
(i.e., make type: z.literal("json_schema") and schema: z.record(z.unknown())
required on the z.object) so any provided output_format must include both
fields; adjust any related validators that construct or validate output_format
to expect both fields accordingly.
In `@components/frontend/src/types/api/sessions.ts`:
- Around line 143-148: The agentOptions field is currently dropped by the
backend; update the frontend to avoid sending agentOptions unless the backend
supports it: detect the backend capability (e.g., via a feature flag check for
advanced-agent-options or a capability API) and only include agentOptions in the
request payload when that support is present, otherwise omit agentOptions or
surface a validation error to the user (using ClaudeAgentOptionsForm validation
path) so submissions aren’t silently accepted; also add explicit client-side
rejection or UX feedback when agentOptions is present but backend support is
absent to prevent false-success behavior and ensure the backend handler that
writes into the AgenticSession CR spec will actually receive the field.
In `@components/runners/ambient-runner/ambient_runner/bridges/claude/bridge.py`:
- Line 15: The code imports Any and AsyncIterator but removed Optional, yet the
stop_task method signature still uses Optional[str]; update the stop_task
annotation (the stop_task method in the Claude bridge) to use the modern union
syntax str | None instead of Optional[str] so the type name resolves without
importing Optional, and ensure any other occurrences in the same file use str |
None consistently.
In `@components/runners/ambient-runner/Dockerfile`:
- Around line 16-22: The Dockerfile RUN block that downloads and extracts gh and
glab (using GH_VERSION and GLAB_VERSION) should download each archive to a
temporary file, fetch the corresponding published checksum file (or release
signature), verify the archive with sha256sum -c (or use gh release verify for
gh when available for v2.74.0+), and only extract to /usr/local/bin if
verification succeeds; then set executable bits and remove temp files. Update
the RUN sequence that currently pipes directly into tar to instead perform: curl
-o /tmp/gh.tar.gz and curl -o /tmp/gh.sha256 (or signature), verify, extract gh
to /usr/local/bin, repeat for glab, handle verification failures by failing the
build, and keep the existing chmod step after successful extraction.
In `@components/runners/ambient-runner/tests/test_gemini_auth.py`:
- Around line 141-145: The test has two problems: the ctx = _make_context(...)
call is mis-indented outside the async test function and must be moved inside
the async test block so the test can be collected, and the environment dict used
to build contexts contains a duplicate key GOOGLE_APPLICATION_CREDENTIALS (the
second entry overrides the first) — remove the duplicate entry or rename/merge
the intended values so only one GOOGLE_APPLICATION_CREDENTIALS is present;
update the usages around _make_context and setup_gemini_cli_auth to reference
the single correct key so the test logic behaves as intended.
In `@components/runners/ambient-runner/tests/test_git_identity.py`:
- Around line 17-23: Remove the obsolete migration block: delete the
sys.path.insert(0, str(Path(__file__).parent.parent)) line and the pytestmark =
pytest.mark.skip(...) statement (and any unused references to sys or Path in
that top-of-file block); the tests already import ambient_runner.platform.auth
so the sys.path hack and skip marker are unnecessary—remove the entire block
containing sys.path.insert and pytestmark to allow normal test collection.
---
Outside diff comments:
In `@components/runners/ambient-runner/ambient_runner/bridges/langgraph/bridge.py`:
- Around line 87-95: The interrupt method on LangGraphBridge currently ignores
the thread_id parameter; update ambient_runner/bridges/langgraph/bridge.py so
interrupt(self, thread_id: str | None = None) rejects non-None thread_id when
the underlying adapter does not support thread-scoped interruption instead of
silently ignoring it: check if thread_id is not None and if self._adapter has an
interrupt callable that accepts a thread_id argument (or otherwise supports
thread-scoped cancelation), call that with the id; if the adapter only supports
global interrupt (self._adapter.interrupt without args) raise a clear exception
(e.g., ValueError or NotImplementedError) indicating thread-scoped interrupt is
unsupported; keep the existing behavior of calling self._adapter.interrupt()
when thread_id is None and the adapter only supports global interrupt.
In `@components/runners/ambient-runner/ambient_runner/endpoints/run.py`:
- Around line 31-40: The to_run_agent_input method currently discards
object-shaped context by coercing non-lists to an empty list; update the logic
so that self.context is preserved: if self.context is a list use it, if it's a
dict wrap it into a single-element list (e.g., context_list = [self.context]),
and if None default to []; ensure this change is applied where context_list is
constructed in to_run_agent_input so callers' dict contexts are not lost when
building the RunAgentInput.
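The corrected `context_list` construction is small enough to show in full; the helper name is illustrative, since in the real code this logic lives inline in `to_run_agent_input`:

```python
def normalize_context(context) -> list:
    """Sketch of the corrected context handling for to_run_agent_input."""
    if context is None:
        return []
    if isinstance(context, list):
        return context
    if isinstance(context, dict):
        # Wrap object-shaped context instead of silently dropping it.
        return [context]
    return []  # other shapes still coerce to empty, as before
```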
In `@components/runners/ambient-runner/Dockerfile`:
- Around line 42-46: Replace the current unpinned install command that runs pip3
install '/app/ambient-runner[all]' with a locked install using uv so transitive
dependencies are pinned; specifically, when installing the ambient-runner
package (the RUN pip3 install line in the Dockerfile) switch to using uv pip
install --locked '/app/ambient-runner[all]' so the checked-in uv.lock is honored
and builds are reproducible.
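A locked install might look like the fragment below. This is a sketch only: the paths and extras are assumptions, and the exact uv invocation depends on the uv version and project layout (`uv sync --locked` is the documented way to install exactly what `uv.lock` records and to fail the build if the lockfile is stale).

```dockerfile
# Sketch: assumes uv.lock sits next to pyproject.toml in /app/ambient-runner.
WORKDIR /app/ambient-runner
RUN pip3 install --no-cache-dir uv && \
    uv sync --locked --all-extras --no-dev  # fails if uv.lock is out of date
```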
In `@components/runners/ambient-runner/tests/test_e2e_api.py`:
- Around line 397-403: The test test_interrupt_returns_structured_error
currently only checks that "detail" exists; change it to assert that
data["detail"] is a non-empty string (e.g., use isinstance(data.get("detail"),
str) and assert data["detail"].strip() != "") so the error detail is
semantically useful while remaining message-agnostic; update the assertions in
that test to perform these two checks instead of only key presence.
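The strengthened assertions look like this; `data` is stubbed here, since in the real test it comes from `response.json()`:

```python
def test_interrupt_returns_structured_error():
    # Stub for response.json(); the message text is illustrative only.
    data = {"detail": "no active run to interrupt"}
    detail = data.get("detail")
    assert isinstance(detail, str)   # detail must be a string...
    assert detail.strip() != ""      # ...and non-empty, but message-agnostic
```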
In `@components/runners/state-sync/Dockerfile`:
- Around line 1-14: The Dockerfile uses Fedora tools (dnf, git-core) and the
container lacks bash, causing builds and ENTRYPOINT (/usr/local/bin/sync.sh) to
fail; replace the install step to use Alpine's package manager (apk add
--no-cache rclone git jq bash) and/or remove bash dependency by changing the
scripts (sync.sh and hydrate.sh) shebangs to /bin/sh and ensuring they are
POSIX-compliant; keep the COPY of hydrate.sh and sync.sh and the chmod +x step,
and verify ENTRYPOINT still points to /usr/local/bin/sync.sh after you either
install bash or update the script shebangs.
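The apk-based variant can be sketched as below. Keeping bash avoids touching the scripts' shebangs; alternatively, drop `bash` from the package list and convert `sync.sh`/`hydrate.sh` to POSIX `/bin/sh`.

```dockerfile
FROM alpine:3.21
# apk is Alpine's package manager (dnf is Fedora/RHEL), and package names are
# Alpine's: "git", not "git-core". bash is installed so existing #!/bin/bash
# shebangs keep working.
RUN apk add --no-cache rclone git jq bash
COPY hydrate.sh sync.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/hydrate.sh /usr/local/bin/sync.sh
ENTRYPOINT ["/usr/local/bin/sync.sh"]
```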
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: fb3cd08a-783a-4e66-a813-b87ddcb8135d
⛔ Files ignored due to path filters (1)
components/runners/ambient-runner/uv.lock is excluded by `!**/*.lock`
📒 Files selected for processing (50)
.github/workflows/runner-tool-versions.yml
.github/workflows/unit-tests.yml
components/frontend/src/components/claude-agent-options/_components/agents-editor.tsx
components/frontend/src/components/claude-agent-options/_components/hooks-editor.tsx
components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx
components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx
components/frontend/src/components/claude-agent-options/_components/output-format-field.tsx
components/frontend/src/components/claude-agent-options/_components/plugins-editor.tsx
components/frontend/src/components/claude-agent-options/_components/sandbox-field.tsx
components/frontend/src/components/claude-agent-options/_components/section.tsx
components/frontend/src/components/claude-agent-options/_components/string-list-editor.tsx
components/frontend/src/components/claude-agent-options/_components/system-prompt-field.tsx
components/frontend/src/components/claude-agent-options/_components/thinking-field.tsx
components/frontend/src/components/claude-agent-options/index.ts
components/frontend/src/components/claude-agent-options/options-form.tsx
components/frontend/src/components/claude-agent-options/schema.ts
components/frontend/src/components/create-session-dialog.tsx
components/frontend/src/types/agentic-session.ts
components/frontend/src/types/api/sessions.ts
components/runners/ambient-runner/.mcp.json
components/runners/ambient-runner/Dockerfile
components/runners/ambient-runner/ag_ui_claude_sdk/adapter.py
components/runners/ambient-runner/ag_ui_claude_sdk/handlers.py
components/runners/ambient-runner/ag_ui_claude_sdk/reasoning_events.py
components/runners/ambient-runner/ag_ui_claude_sdk/utils.py
components/runners/ambient-runner/ag_ui_gemini_cli/adapter.py
components/runners/ambient-runner/ag_ui_gemini_cli/types.py
components/runners/ambient-runner/ambient_runner/bridge.py
components/runners/ambient-runner/ambient_runner/bridges/claude/bridge.py
components/runners/ambient-runner/ambient_runner/bridges/claude/mcp.py
components/runners/ambient-runner/ambient_runner/bridges/claude/session.py
components/runners/ambient-runner/ambient_runner/bridges/gemini_cli/bridge.py
components/runners/ambient-runner/ambient_runner/bridges/gemini_cli/mcp.py
components/runners/ambient-runner/ambient_runner/bridges/gemini_cli/session.py
components/runners/ambient-runner/ambient_runner/bridges/langgraph/bridge.py
components/runners/ambient-runner/ambient_runner/endpoints/feedback.py
components/runners/ambient-runner/ambient_runner/endpoints/run.py
components/runners/ambient-runner/ambient_runner/middleware/tracing.py
components/runners/ambient-runner/ambient_runner/observability_models.py
components/runners/ambient-runner/ambient_runner/platform/config.py
components/runners/ambient-runner/ambient_runner/platform/context.py
components/runners/ambient-runner/ambient_runner/platform/utils.py
components/runners/ambient-runner/docs/UPDATE_PROCEDURE.md
components/runners/ambient-runner/pyproject.toml
components/runners/ambient-runner/tests/test_bridge_claude.py
components/runners/ambient-runner/tests/test_e2e_api.py
components/runners/ambient-runner/tests/test_gemini_auth.py
components/runners/ambient-runner/tests/test_git_identity.py
components/runners/ambient-runner/tests/test_google_drive_e2e.py
components/runners/state-sync/Dockerfile
```ts
const entries = Object.entries(value);
const addEntry = () => onChange({ ...value, "": "" });
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Find and read the key-value-editor.tsx file
find . -name "key-value-editor.tsx" -type f
```
Repository: ambient-code/platform
Length of output: 155
🏁 Script executed:
```shell
# Get the file size first to decide how to read it
wc -l components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx
```
Repository: ambient-code/platform
Length of output: 156
🏁 Script executed:
```shell
# Read the file with line numbers to inspect the mentioned lines
cat -n components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx
```
Repository: ambient-code/platform
Length of output: 2516
🏁 Script executed:
```shell
# Also check the schema and options-form files for context
find . -path "*/claude-agent-options/*" -name "schema.ts" -o -path "*/claude-agent-options/*" -name "options-form.tsx" | head -5
```
Repository: ambient-code/platform
Length of output: 207
🏁 Script executed:
```shell
cat -n components/frontend/src/components/claude-agent-options/schema.ts | head -200
```
Repository: ambient-code/platform
Length of output: 8554
🏁 Script executed:
```shell
cat -n components/frontend/src/components/claude-agent-options/options-form.tsx
```
Repository: ambient-code/platform
Length of output: 20385
Use stable row identifiers instead of editable keys to prevent data loss and unnecessary remounts.
Line 20 creates a no-op (empty key already exists if added twice), Lines 26-35 silently overwrite entries during key collisions, and Line 41 uses the mutable key as the React key. This causes repeated "Add" clicks to fail, renames to lose data when colliding with existing keys, and the input row to remount on every keystroke.
Additionally, Line 52 can emit null for cleared values, but the env field schema (schema.ts:184) only allows strings, causing a type mismatch. Only extra_args (schema.ts:185) permits nullable values.
Finally, the delete button (Lines 54-62) is icon-only without an accessible name.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx`
around lines 19 - 20, The component currently treats object keys as row
identifiers which causes addEntry/no-op, key-collision overwrites, remounts on
edits, nullable env values, and an inaccessible delete button; refactor to use
stable row IDs (e.g., maintain an array of {id, key, value} or a parallel id
map) so entries (derived from value) are rendered with id as React key (not the
editable key), change addEntry to append a new unique id entry instead of adding
an empty string key, update onChange handlers for key edits to rename safely
(detect collisions and either prevent/merge or disallow duplicate keys) and emit
non-null empty strings for the env field (never null) to match schema, and make
the delete control accessible by adding an aria-label or visible label to the
delete button.
```ts
const addServer = () => {
  onChange({ ...value, [`server-${entries.length + 1}`]: { type: "stdio", command: "", args: [], env: {} } });
};
const removeServer = (name: string) => {
  const next = { ...value };
  delete next[name];
  onChange(next);
};
const updateServerName = (oldName: string, newName: string) => {
  const next: Record<string, McpFormServer> = {};
  for (const [k, v] of Object.entries(value)) next[k === oldName ? newName : k] = v;
  onChange(next);
};
```
Default-name generation and renames can overwrite another server silently.
server-${entries.length + 1} is not unique after deletions, and updateServerName will clobber an existing entry if the user renames into an occupied key. Both paths lose config with no warning. Generate the next unused default name and reject duplicate renames instead of overwriting.
Possible guard

```diff
+ const nextServerName = () => {
+   let i = 1;
+   while (`server-${i}` in value) i += 1;
+   return `server-${i}`;
+ };
+
  const addServer = () => {
-   onChange({ ...value, [`server-${entries.length + 1}`]: { type: "stdio", command: "", args: [], env: {} } });
+   onChange({ ...value, [nextServerName()]: { type: "stdio", command: "", args: [], env: {} } });
  };
  const removeServer = (name: string) => {
    const next = { ...value };
    delete next[name];
    onChange(next);
  };
  const updateServerName = (oldName: string, newName: string) => {
+   if (newName !== oldName && newName in value) return;
    const next: Record<string, McpFormServer> = {};
    for (const [k, v] of Object.entries(value)) next[k === oldName ? newName : k] = v;
    onChange(next);
  };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const nextServerName = () => {
  let i = 1;
  while (`server-${i}` in value) i += 1;
  return `server-${i}`;
};
const addServer = () => {
  onChange({ ...value, [nextServerName()]: { type: "stdio", command: "", args: [], env: {} } });
};
const removeServer = (name: string) => {
  const next = { ...value };
  delete next[name];
  onChange(next);
};
const updateServerName = (oldName: string, newName: string) => {
  if (newName !== oldName && newName in value) return;
  const next: Record<string, McpFormServer> = {};
  for (const [k, v] of Object.entries(value)) next[k === oldName ? newName : k] = v;
  onChange(next);
};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx`
around lines 33 - 45, The addServer function currently uses
server-${entries.length + 1} which can collide after deletions and
updateServerName will silently overwrite an existing key; change addServer to
compute the next unused default key by scanning Object.keys(value) for existing
server-N names and incrementing N until an unused name is found, then call
onChange with that unique key and a McpFormServer default; modify
updateServerName to check if newName already exists in value and, if so, do not
overwrite (either return early or surface an error/validation) instead of
clobbering the entry, otherwise perform the rename by copying entries into next
and calling onChange(next).
```ts
// Typed as Record<string, unknown> because the backend accepts arbitrary JSON.
// The frontend validates via ClaudeAgentOptionsForm (Zod schema) before sending.
// TODO: Backend handler in components/backend/ must unmarshal this field and write
// it into the AgenticSession CR spec. Until then, Go encoding/json silently drops
// the field. Safe while the `advanced-agent-options` flag defaults to off.
agentOptions?: Record<string, unknown>;
```
Avoid shipping a silently dropped request field.
The comment confirms agentOptions is currently ignored by backend unmarshalling. That creates false-success behavior if advanced-agent-options is enabled: users can submit options that never apply.
Please gate sending this field on backend support (or return explicit validation/rejection until backend persists it).
As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/frontend/src/types/api/sessions.ts` around lines 143 - 148, The
agentOptions field is currently dropped by the backend; update the frontend to
avoid sending agentOptions unless the backend supports it: detect the backend
capability (e.g., via a feature flag check for advanced-agent-options or a
capability API) and only include agentOptions in the request payload when that
support is present, otherwise omit agentOptions or surface a validation error to
the user (using ClaudeAgentOptionsForm validation path) so submissions aren’t
silently accepted; also add explicit client-side rejection or UX feedback when
agentOptions is present but backend support is absent to prevent false-success
behavior and ensure the backend handler that writes into the AgenticSession CR
spec will actually receive the field.
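The gating the review asks for can be sketched as a payload builder. `supportsAgentOptions` is a hypothetical capability flag; the real check might query a feature-flag service or a backend capabilities endpoint.

```typescript
// Sketch only: the capability shape and error wording are assumptions.
type Capabilities = { supportsAgentOptions: boolean };

export function buildCreateSessionPayload(
  base: Record<string, unknown>,
  agentOptions: Record<string, unknown> | undefined,
  caps: Capabilities,
): Record<string, unknown> {
  if (agentOptions && !caps.supportsAgentOptions) {
    // Fail loudly instead of letting Go's encoding/json drop the field,
    // which would otherwise look like a successful submission.
    throw new Error("Advanced agent options are not supported by this backend");
  }
  return agentOptions ? { ...base, agentOptions } : base;
}
```

The dialog would catch this error and surface it through the existing form-validation path rather than submitting.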
```dockerfile
# Install GitHub CLI and GitLab CLI (binary downloads, pinned)
RUN ARCH=$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/') && \
    curl -fsSL "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${ARCH}.tar.gz" \
      | tar -xz -C /usr/local/bin --strip-components=2 "gh_${GH_VERSION}_linux_${ARCH}/bin/gh" && \
    curl -fsSL "https://gitlab.com/gitlab-org/cli/-/releases/v${GLAB_VERSION}/downloads/glab_${GLAB_VERSION}_linux_${ARCH}.tar.gz" \
      | tar -xz -C /usr/local/bin --strip-components=1 bin/glab && \
    chmod +x /usr/local/bin/gh /usr/local/bin/glab
```
🧩 Analysis chain
🏁 Script executed:
```shell
fd -t f "Dockerfile" | grep -i ambient-runner
```
Repository: ambient-code/platform
Length of output: 109
🏁 Script executed:
```shell
cat -n components/runners/ambient-runner/Dockerfile
```
Repository: ambient-code/platform
Length of output: 3850
🏁 Script executed:
```shell
fd -t f "uv.lock" components/runners/ambient-runner/
```
Repository: ambient-code/platform
Length of output: 106
🏁 Script executed:
```shell
# Check if uv.lock is actually used anywhere
rg "uv.lock" components/runners/ambient-runner/
# Check ambient-runner structure
ls -la components/runners/ambient-runner/
```
Repository: ambient-code/platform
Length of output: 1131
🏁 Script executed:
```shell
# Check the pyproject.toml to understand the project setup
cat -n components/runners/ambient-runner/pyproject.toml
# Check the UPDATE_PROCEDURE.md for context on how dependencies should be managed
cat -n components/runners/ambient-runner/docs/UPDATE_PROCEDURE.md
```
Repository: ambient-code/platform
Length of output: 12690
🏁 Script executed:
```shell
# Check if GitHub and GitLab publish checksums for their releases
# Let's look at what verification methods are available
web_search_query="GitHub CLI releases checksums sha256 verification"
```
Repository: ambient-code/platform
Length of output: 47
🌐 Web query:
GitHub CLI releases checksums verification
💡 Result:
To verify GitHub CLI (gh) release checksums:
1. Download the desired release assets from https://github.com/cli/cli/releases, including the binary archive (e.g., gh_2.88.0_linux_amd64.tar.gz) and gh_VERSION_checksums.txt.
2. Verify using standard tools like sha256sum: `sha256sum --check gh_2.88.0_checksums.txt`. This checks all listed assets against their SHA256 hashes in the file.

Since version 2.50.0, releases also include cryptographically signed build provenance attestations for stronger verification. If gh is already installed, verify a downloaded binary with `gh attestation verify PATH_TO_BINARY.tar.gz -R cli/cli`, or perform release-level verification with `gh release verify vVERSION -R cli/cli` (verifies the release has valid attestations; use `gh release verify-asset vVERSION PATH_TO_BINARY.tar.gz` for specific assets). These methods ensure integrity against tampering. Checksums are provided per release (e.g., confirmed in the v2.51.0 and v2.16.1 asset lists), and attestation verification uses Sigstore public-good PKI tracing back to the build workflow.
Citations:
- 1: gh release create --checksums cli/cli#1711
- 2: https://cli.github.com/manual/gh_release_verify
- 3: https://docs.github.com/en/code-security/supply-chain-security/understanding-your-software-supply-chain/verifying-the-integrity-of-a-release
- 4: https://cli.github.com/manual/gh_attestation_verify
- 5: https://github.com/cli/cli/releases
- 6: https://github.com/cli/cli
- 7: https://github.com/cli/cli/releases/tag/v2.51.0
- 8: https://github.com/cli/cli/releases/tag/v2.16.1
- 9: https://github.com/cli/cli/releases/tag/v2.19.0
- 10: https://pkg.go.dev/github.com/cli/cli/v2@v2.88.1
🌐 Web query:
GitLab CLI glab releases checksums verification
💡 Result:
GitLab CLI (glab) releases provide checksums.txt files for verifying downloaded binary integrity. Download the latest release from https://gitlab.com/gitlab-org/cli/-/releases. Assets include platform-specific binaries (e.g., glab_X.Y.Z_linux_amd64.tar.gz) and checksums.txt containing SHA256 hashes. To verify:
1. Download the desired binary and checksums.txt.
2. Run `sha256sum -c checksums.txt --ignore-missing` (for example, the v1.51.0 checksums.txt lists a hash for glab_1.51.0_linux_amd64.tar.gz).

glab has no built-in command for release binary checksum verification (unlike `glab securefile download` for project secure files, which auto-verifies checksums); use system tools like sha256sum/shasum. Official docs recommend downloading from the releases page but do not explicitly detail verification steps; the presence of checksums.txt indicates standard practice. The archived legacy GitHub releases (profclems/glab) also shipped checksums.txt. `glab release download` fetches assets but does not verify checksums (an open issue covers secure files, not general releases). Attestation verification via `glab attestation verify` is experimental for SLSA provenance (GitLab.com only, requires cosign), not checksums.
Citations:
- 1: https://docs.gitlab.com/cli/release/download/
- 2: https://docs.gitlab.com/cli/
- 3: https://gitlab.com/gitlab-org/cli/-/blob/main/docs/installation_options.md?ref_type=heads
- 4: https://github.com/profclems/glab/releases
- 5: https://gitlab.com/gitlab-org/cli/-/tags
- 6: https://gitlab.com/gitlab-org/cli
- 7: https://gitlab.com/gitlab-org/cli/-/releases/v1.51.0
🏁 Script executed:
```shell
# Check if glab 1.52.0 supports attestation verification
web_search_query="glab 1.52.0 attestation verify support"
```
Repository: ambient-code/platform
Length of output: 47
🌐 Web query:
glab attestation verify SLSA support version history
💡 Result:
The glab attestation verify command verifies SLSA provenance attestations for artifacts in GitLab projects. It is experimental, requires the cosign binary, and works only on GitLab.com. Usage: glab attestation verify <project_id> <artifact_path>. It fetches Sigstore bundles via the Attestations API and verifies them against SLSA provenance (predicate_type: https://slsa.dev/provenance/v1). SLSA support in glab was added in version 1.81.0 (January 2026) with commit 508116ae: "feat(attestation): implement attestation verification in glab". Latest glab version is 1.90.0 (March 2026). GitLab generates SLSA Level 1 provenance automatically for CI artifacts, with Level 3 in development (phased rollout planned, including glab enhancements). Verify using cosign or slsa-verifier tools.
Citations:
- 1: https://docs.gitlab.com/cli/attestation/verify/
- 2: https://rpm.pbone.net/changelog_idpl_141414480_com_glab-1.81.0-1.1.x86_64.rpm.html
- 3: https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/slsa_level_3
- 4: https://docs.gitlab.com/api/attestations/
- 5: https://docs.gitlab.com/ci/pipeline_security/slsa/
- 6: https://docs.gitlab.com/ci/pipeline_security/slsa/provenance_v1/
Verify the gh and glab archives before extracting them.
These lines stream privileged CLIs straight from the network into /usr/local/bin with no checksum validation. Both tools publish checksums with their releases: download each to a temp file, verify with sha256sum -c, then extract. For gh (v2.74.0+), you can also use stronger cryptographic attestation verification with gh release verify.
🧰 Tools
🪛 Hadolint (2.14.0)
[warning] 17-17: Set the SHELL option -o pipefail before RUN with a pipe in it. If you are using /bin/sh in an alpine image or if your shell is symlinked to busybox then consider explicitly setting your SHELL to /bin/ash, or disable this check
(DL4006)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/runners/ambient-runner/Dockerfile` around lines 16 - 22, The
Dockerfile RUN block that downloads and extracts gh and glab (using GH_VERSION
and GLAB_VERSION) should download each archive to a temporary file, fetch the
corresponding published checksum file (or release signature), verify the archive
with sha256sum -c (or use gh release verify for gh when available for v2.74.0+),
and only extract to /usr/local/bin if verification succeeds; then set executable
bits and remove temp files. Update the RUN sequence that currently pipes
directly into tar to instead perform: curl -o /tmp/gh.tar.gz and curl -o
/tmp/gh.sha256 (or signature), verify, extract gh to /usr/local/bin, repeat for
glab, handle verification failures by failing the build, and keep the existing
chmod step after successful extraction.
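The verified download the review describes might look like the fragment below for gh (glab follows the same pattern with its checksums.txt). This is a sketch: it assumes the published checksum-file naming shown in the web-query results above, which should be confirmed against the pinned versions.

```dockerfile
# Sketch: download to /tmp, verify against the published SHA256 list, then extract.
RUN ARCH=$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/') && \
    cd /tmp && \
    curl -fsSLO "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${ARCH}.tar.gz" && \
    curl -fsSLO "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_checksums.txt" && \
    sha256sum --check --ignore-missing "gh_${GH_VERSION}_checksums.txt" && \
    tar -xzf "gh_${GH_VERSION}_linux_${ARCH}.tar.gz" -C /usr/local/bin \
        --strip-components=2 "gh_${GH_VERSION}_linux_${ARCH}/bin/gh" && \
    chmod +x /usr/local/bin/gh && \
    rm -f /tmp/gh_*
```

A failed checksum makes `sha256sum --check` exit non-zero, which fails the build, as the review requires.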
- KeyValueEditor: use stable ref-based React keys (fixes focus loss on key rename), prevent empty-key collision on repeated add
- McpServersEditor: same stable key fix, use monotonic counter for collision-safe server naming
- Dockerfile: consolidate 4 dnf install layers into 1, move python3-devel removal into same layer as pip install (actually reclaims space)
- Workflow: consolidate 6 sequential sed calls into single invocation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
♻️ Duplicate comments (4)
components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx (3)
77-85: ⚠️ Potential issue | 🟠 Major

Add an accessible name to the delete button.

The icon-only destructive control is unlabeled for assistive technologies.

Proposed fix

```diff
  <Button
    type="button"
    variant="ghost"
    size="icon"
    className="h-8 w-8 shrink-0"
+   aria-label={`Remove ${k || "entry"}`}
    onClick={() => removeEntry(i)}
  >
```

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx` around lines 77 - 85, The delete Button rendering the Trash2 icon in key-value-editor.tsx is missing an accessible name; update the Button (the one calling removeEntry(i)) to provide an accessible label by adding an aria-label (e.g., aria-label="Remove entry") or include visually hidden text/Tooltip tied to the Button so screen readers can announce it; ensure the label is descriptive and kept in sync with removeEntry semantics and the destructive nature of the control.
75-75: ⚠️ Potential issue | 🟠 Major

Avoid emitting `null` for cleared values in this shared editor path.

Converting empty input to `null` breaks the downstream `env`/`headers` contracts (Record<string, string>) and can cause validation failures.

Proposed fix

```diff
- onChange={(e) => updateEntry(i, k, e.target.value || null)}
+ onChange={(e) => updateEntry(i, k, e.target.value)}
```

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx` at line 75, The onChange currently converts empty inputs to null (onChange={(e) => updateEntry(i, k, e.target.value || null)}), which violates the Record<string,string> contract; change it to pass an empty string instead of null (e.g., onChange={(e) => updateEntry(i, k, e.target.value)} or explicitly map empty value to ""), so updateEntry (and any consumers expecting headers/env) always receives a string and no nulls are emitted.
49-58: ⚠️ Potential issue | 🟠 Major

Prevent silent overwrite when renaming to an existing key.

`updateEntry` can clobber another entry when `newKey` already exists, causing data loss.

Proposed fix

```diff
  const updateEntry = (index: number, newKey: string, newVal: string | null) => {
+   const oldKey = entries[index][0];
+   if (newKey !== oldKey && newKey in value) return;
    const next: Record<string, string | null> = {};
    for (let i = 0; i < entries.length; i++) {
      if (i === index) {
        next[newKey] = newVal;
      } else {
        next[entries[i][0]] = entries[i][1];
      }
    }
    onChange(next);
  };
```

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx` around lines 49 - 58, The updateEntry function can silently overwrite an existing entry when newKey equals an existing key; modify updateEntry to detect if newKey already exists (by scanning entries or checking entries.some(e => e[0] === newKey)) and handle it explicitly—either prevent the rename and bail out (e.g., return without calling onChange or surface an error/validation callback) or merge/replace intentionally (e.g., preserve both values or prompt). Update references in updateEntry (which builds next and calls onChange) so you don't blindly set next[newKey] when that key already exists; choose and implement one clear behavior (prevent rename or handle merge) and ensure onChange receives the correct resulting map.

components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx (1)
60-65: ⚠️ Potential issue | 🟠 Major

Guard against duplicate server names during rename.

Renaming to an existing key silently overwrites another server entry.

Proposed fix

```diff
  const updateServerName = (index: number, newName: string) => {
+   const oldName = entries[index][0];
+   if (newName !== oldName && newName in value) return;
    const next: Record<string, McpFormServer> = {};
    for (let i = 0; i < entries.length; i++) {
      next[i === index ? newName : entries[i][0]] = entries[i][1];
    }
    onChange(next);
  };
```

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx` around lines 60 - 65, The rename logic in updateServerName allows silently overwriting an existing server when newName matches another entry key; update updateServerName to detect if newName already exists in entries (except the current index) and abort or surface an error instead of building next and calling onChange; use entries and McpFormServer to check keys, and only proceed to construct the next Record<string, McpFormServer> and call onChange when the newName is unique (or provide a clear validation callback/return value to the caller).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/runner-tool-versions.yml:
- Around line 218-301: The workflow currently skips the entire "Create branch,
commit, and open PR" step when an automation PR already exists
(steps.existing_pr.outputs.pr_exists), so subsequent runs never update the
branch; change the logic to always update/push the auto/update-runner-image
branch when changes exist and only conditionally create a PR: keep the
BRANCH/Git commands (git checkout -b "$BRANCH", git add "$DOCKERFILE", git
commit, git push -u origin "$BRANCH") and remove or relax the if-condition that
prevents the step from running when a PR exists, and replace the unconditional
gh pr create call with conditional behavior that uses gh pr create only when
steps.existing_pr.outputs.pr_exists == 'false' and uses gh pr edit (or gh pr
reopen/patch) to update the existing PR when it is 'true', referencing BRANCH
and the gh pr create/gh pr edit commands to locate the change.
In
`@components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx`:
- Around line 86-88: The delete Button rendering the Trash2 icon is missing an
accessible name; update the Button in mcp-servers-editor.tsx (the Button that
calls removeServer(i) and renders <Trash2 />) to include an explicit accessible
label (e.g., aria-label or aria-labelledby) that describes the action,
preferably including the server identity (like aria-label={`Remove server
${server.name ?? i + 1}`}), or add visually hidden text inside the button;
ensure the label is unique/meaningful for screen readers and retains the
existing onClick that calls removeServer(i).
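The accessible-name suggestion above can be sketched as a small label builder (the function name is hypothetical; the suggested fallback mirrors the `server.name ?? i + 1` pattern from the comment):

```typescript
// Hypothetical helper producing an accessible name for the delete button;
// falls back to the 1-based position when the server has no name yet.
function removeServerLabel(name: string | undefined, index: number): string {
  const identity = name && name.trim() !== "" ? name : index + 1;
  return `Remove server ${identity}`;
}
```

In JSX this would be wired up as `<Button aria-label={removeServerLabel(name, i)} onClick={() => removeServer(i)}><Trash2 /></Button>`, keeping the existing click handler intact.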
---
Duplicate comments:
In
`@components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx`:
- Around line 77-85: The delete Button rendering the Trash2 icon in
key-value-editor.tsx is missing an accessible name; update the Button (the one
calling removeEntry(i)) to provide an accessible label by adding an aria-label
(e.g., aria-label="Remove entry") or include visually hidden text/Tooltip tied
to the Button so screen readers can announce it; ensure the label is descriptive
and kept in sync with removeEntry semantics and the destructive nature of the
control.
- Line 75: The onChange currently converts empty inputs to null (onChange={(e)
=> updateEntry(i, k, e.target.value || null)}), which violates the
Record<string,string> contract; change it to pass an empty string instead of
null (e.g., onChange={(e) => updateEntry(i, k, e.target.value)} or explicitly
map empty value to ""), so updateEntry (and any consumers expecting headers/env)
always receives a string and no nulls are emitted.
- Around line 49-58: The updateEntry function can silently overwrite an existing
entry when newKey equals an existing key; modify updateEntry to detect if newKey
already exists (by scanning entries or checking entries.some(e => e[0] ===
newKey)) and handle it explicitly—either prevent the rename and bail out (e.g.,
return without calling onChange or surface an error/validation callback) or
merge/replace intentionally (e.g., preserve both values or prompt). Update
references in updateEntry (which builds next and calls onChange) so you don't
blindly set next[newKey] when that key already exists; choose and implement one
clear behavior (prevent rename or handle merge) and ensure onChange receives the
correct resulting map.
In
`@components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx`:
- Around line 60-65: The rename logic in updateServerName allows silently
overwriting an existing server when newName matches another entry key; update
updateServerName to detect if newName already exists in entries (except the
current index) and abort or surface an error instead of building next and
calling onChange; use entries and McpFormServer to check keys, and only proceed
to construct the next Record<string, McpFormServer> and call onChange when the
newName is unique (or provide a clear validation callback/return value to the
caller).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: c4cf0ebb-0fac-4c31-a2c4-fbb990099c25
📒 Files selected for processing (4)
- .github/workflows/runner-tool-versions.yml
- components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx
- components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx
- components/runners/ambient-runner/Dockerfile
```yaml
- name: Create branch, commit, and open PR
  if: steps.diff.outputs.any == 'true' && steps.existing_pr.outputs.pr_exists == 'false'
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    CUR_BASE: ${{ steps.current.outputs.base_digest }}
    LAT_BASE: ${{ steps.latest.outputs.base_digest }}
    CUR_GH: ${{ steps.current.outputs.gh }}
    LAT_GH: ${{ steps.latest.outputs.gh }}
    CUR_GLAB: ${{ steps.current.outputs.glab }}
    LAT_GLAB: ${{ steps.latest.outputs.glab }}
    CUR_UV: ${{ steps.current.outputs.uv }}
    LAT_UV: ${{ steps.latest.outputs.uv }}
    CUR_PC: ${{ steps.current.outputs.pre_commit }}
    LAT_PC: ${{ steps.latest.outputs.pre_commit }}
    CUR_GEM: ${{ steps.current.outputs.gemini_cli }}
    LAT_GEM: ${{ steps.latest.outputs.gemini_cli }}
    UPD_BASE: ${{ steps.diff.outputs.base }}
    UPD_GH: ${{ steps.diff.outputs.gh }}
    UPD_GLAB: ${{ steps.diff.outputs.glab }}
    UPD_UV: ${{ steps.diff.outputs.uv }}
    UPD_PC: ${{ steps.diff.outputs.pc }}
    UPD_GEM: ${{ steps.diff.outputs.gem }}
  run: |
    BRANCH="auto/update-runner-image"

    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"

    git push origin --delete "$BRANCH" 2>&1 || true

    git checkout -b "$BRANCH"
    git add "$DOCKERFILE"

    # Build a short summary for the commit title
    CHANGED=""
    [ "$UPD_BASE" = "true" ] && CHANGED="${CHANGED}base-image "
    [ "$UPD_GH" = "true" ] && CHANGED="${CHANGED}gh "
    [ "$UPD_GLAB" = "true" ] && CHANGED="${CHANGED}glab "
    [ "$UPD_UV" = "true" ] && CHANGED="${CHANGED}uv "
    [ "$UPD_PC" = "true" ] && CHANGED="${CHANGED}pre-commit "
    [ "$UPD_GEM" = "true" ] && CHANGED="${CHANGED}gemini-cli "
    CHANGED=$(echo "$CHANGED" | xargs | tr ' ' ', ')

    git commit -m "chore(runner): update ${CHANGED}

    Automated weekly runner image freshness update."

    git push -u origin "$BRANCH"

    # Build PR body with version table
    status() {
      if [ "$1" = "true" ]; then echo "\`$2\` -> \`$3\`"; else echo "\`$2\` (up to date)"; fi
    }

    PR_BODY="## Runner Image Freshness Update

    | Component | Status |
    |-----------|--------|
    | Base image (ubi10/ubi) | $(status "$UPD_BASE" "${CUR_BASE:0:12}…" "${LAT_BASE:0:12}…") |
    | gh (GitHub CLI) | $(status "$UPD_GH" "$CUR_GH" "$LAT_GH") |
    | glab (GitLab CLI) | $(status "$UPD_GLAB" "$CUR_GLAB" "$LAT_GLAB") |
    | uv | $(status "$UPD_UV" "$CUR_UV" "$LAT_UV") |
    | pre-commit | $(status "$UPD_PC" "$CUR_PC" "$LAT_PC") |
    | gemini-cli | $(status "$UPD_GEM" "$CUR_GEM" "$LAT_GEM") |

    ### Components not version-pinned (updated with base image)

    git, jq, Node.js, Go (go-toolset) — installed via dnf from UBI 10 AppStream.
    Their versions advance when the base image digest is updated.

    ## Test Plan

    - [ ] Container image builds successfully
    - [ ] Runner starts and all tools are accessible
    - [ ] Smoke test: \`gh version\`, \`glab version\`, \`uv --version\`, \`gemini --version\`

    ---
    *Auto-generated by runner-tool-versions workflow*"

    gh pr create \
      --title "chore(runner): freshen runner image (${CHANGED})" \
      --body "$PR_BODY" \
      --base main \
      --head "$BRANCH"
```
Existing freshness PRs stop receiving new updates.
Once auto/update-runner-image is already open, this entire step is skipped, so later runs detect newer upstream versions but never commit or push them to the branch. That leaves the single automation PR stale instead of keeping it current.
🔧 Suggested fix

```diff
-      - name: Create branch, commit, and open PR
-        if: steps.diff.outputs.any == 'true' && steps.existing_pr.outputs.pr_exists == 'false'
+      - name: Update branch and open PR if needed
+        if: steps.diff.outputs.any == 'true'
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@
         run: |
           BRANCH="auto/update-runner-image"
@@
-          git push origin --delete "$BRANCH" 2>&1 || true
-
-          git checkout -b "$BRANCH"
+          git checkout -B "$BRANCH"
           git add "$DOCKERFILE"
@@
-          git push -u origin "$BRANCH"
+          git push -u --force-with-lease origin "$BRANCH"
@@
-          gh pr create \
-            --title "chore(runner): freshen runner image (${CHANGED})" \
-            --body "$PR_BODY" \
-            --base main \
-            --head "$BRANCH"
+          if [ "${{ steps.existing_pr.outputs.pr_exists }}" = "false" ]; then
+            gh pr create \
+              --title "chore(runner): freshen runner image (${CHANGED})" \
+              --body "$PR_BODY" \
+              --base main \
+              --head "$BRANCH"
+          fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- name: Update branch and open PR if needed
  if: steps.diff.outputs.any == 'true'
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    CUR_BASE: ${{ steps.current.outputs.base_digest }}
    LAT_BASE: ${{ steps.latest.outputs.base_digest }}
    CUR_GH: ${{ steps.current.outputs.gh }}
    LAT_GH: ${{ steps.latest.outputs.gh }}
    CUR_GLAB: ${{ steps.current.outputs.glab }}
    LAT_GLAB: ${{ steps.latest.outputs.glab }}
    CUR_UV: ${{ steps.current.outputs.uv }}
    LAT_UV: ${{ steps.latest.outputs.uv }}
    CUR_PC: ${{ steps.current.outputs.pre_commit }}
    LAT_PC: ${{ steps.latest.outputs.pre_commit }}
    CUR_GEM: ${{ steps.current.outputs.gemini_cli }}
    LAT_GEM: ${{ steps.latest.outputs.gemini_cli }}
    UPD_BASE: ${{ steps.diff.outputs.base }}
    UPD_GH: ${{ steps.diff.outputs.gh }}
    UPD_GLAB: ${{ steps.diff.outputs.glab }}
    UPD_UV: ${{ steps.diff.outputs.uv }}
    UPD_PC: ${{ steps.diff.outputs.pc }}
    UPD_GEM: ${{ steps.diff.outputs.gem }}
  run: |
    BRANCH="auto/update-runner-image"

    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"

    git checkout -B "$BRANCH"
    git add "$DOCKERFILE"

    # Build a short summary for the commit title
    CHANGED=""
    [ "$UPD_BASE" = "true" ] && CHANGED="${CHANGED}base-image "
    [ "$UPD_GH" = "true" ] && CHANGED="${CHANGED}gh "
    [ "$UPD_GLAB" = "true" ] && CHANGED="${CHANGED}glab "
    [ "$UPD_UV" = "true" ] && CHANGED="${CHANGED}uv "
    [ "$UPD_PC" = "true" ] && CHANGED="${CHANGED}pre-commit "
    [ "$UPD_GEM" = "true" ] && CHANGED="${CHANGED}gemini-cli "
    CHANGED=$(echo "$CHANGED" | xargs | tr ' ' ', ')

    git commit -m "chore(runner): update ${CHANGED}

    Automated weekly runner image freshness update."

    git push -u --force-with-lease origin "$BRANCH"

    # Build PR body with version table
    status() {
      if [ "$1" = "true" ]; then echo "\`$2\` -> \`$3\`"; else echo "\`$2\` (up to date)"; fi
    }

    PR_BODY="## Runner Image Freshness Update

    | Component | Status |
    |-----------|--------|
    | Base image (ubi10/ubi) | $(status "$UPD_BASE" "${CUR_BASE:0:12}…" "${LAT_BASE:0:12}…") |
    | gh (GitHub CLI) | $(status "$UPD_GH" "$CUR_GH" "$LAT_GH") |
    | glab (GitLab CLI) | $(status "$UPD_GLAB" "$CUR_GLAB" "$LAT_GLAB") |
    | uv | $(status "$UPD_UV" "$CUR_UV" "$LAT_UV") |
    | pre-commit | $(status "$UPD_PC" "$CUR_PC" "$LAT_PC") |
    | gemini-cli | $(status "$UPD_GEM" "$CUR_GEM" "$LAT_GEM") |

    ### Components not version-pinned (updated with base image)

    git, jq, Node.js, Go (go-toolset) — installed via dnf from UBI 10 AppStream.
    Their versions advance when the base image digest is updated.

    ## Test Plan

    - [ ] Container image builds successfully
    - [ ] Runner starts and all tools are accessible
    - [ ] Smoke test: \`gh version\`, \`glab version\`, \`uv --version\`, \`gemini --version\`

    ---
    *Auto-generated by runner-tool-versions workflow*"

    if [ "${{ steps.existing_pr.outputs.pr_exists }}" = "false" ]; then
      gh pr create \
        --title "chore(runner): freshen runner image (${CHANGED})" \
        --body "$PR_BODY" \
        --base main \
        --head "$BRANCH"
    fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/runner-tool-versions.yml around lines 218 - 301, The
workflow currently skips the entire "Create branch, commit, and open PR" step
when an automation PR already exists (steps.existing_pr.outputs.pr_exists), so
subsequent runs never update the branch; change the logic to always update/push
the auto/update-runner-image branch when changes exist and only conditionally
create a PR: keep the BRANCH/Git commands (git checkout -b "$BRANCH", git add
"$DOCKERFILE", git commit, git push -u origin "$BRANCH") and remove or relax the
if-condition that prevents the step from running when a PR exists, and replace
the unconditional gh pr create call with conditional behavior that uses gh pr
create only when steps.existing_pr.outputs.pr_exists == 'false' and uses gh pr
edit (or gh pr reopen/patch) to update the existing PR when it is 'true',
referencing BRANCH and the gh pr create/gh pr edit commands to locate the
change.
- bridge.py: change Optional[str] to str | None (missing import)
- state-sync/Dockerfile: use apk instead of dnf (Alpine base)
- test_gemini_auth.py: fix indentation, remove duplicate dict key and unused var
- test_git_identity.py: remove broken sys.path hack and stale skip marker
- agents-editor: unique key gen after deletions + aria-label
- key-value-editor: aria-label + emit empty string not null
- string-list-editor: aria-label on delete buttons
- mcp-servers-editor: aria-label on delete buttons
- schema.ts: make outputFormat fields required
- thinking-field: NaN guard with valueAsNumber
- output-format-field: sync rawJson on external reset, clear on error
- options-form: NaN guards on all number inputs
- workflow: track PIDs for background fetches, update stale PRs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
components/runners/ambient-runner/tests/test_gemini_auth.py (1)
220-224: ⚠️ Potential issue | 🟠 Major

Use the existing `tmp_path` credential file instead of creating a second unmanaged temp file.

This test now mixes two credential paths (`creds_file` and `cred_path`) and leaves a `delete=False` temp file behind. Reuse `creds_file` for the context to keep setup deterministic and avoid file leakage in CI runs.

As per coding guidelines, "`**`: Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

Suggested fix

```diff
-    with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as f:
-        cred_path = f.name
     ctx = _make_context(
         USE_VERTEX="1",
-        GOOGLE_APPLICATION_CREDENTIALS=cred_path,
+        GOOGLE_APPLICATION_CREDENTIALS=str(creds_file),
         GOOGLE_CLOUD_PROJECT="my-project",
         GOOGLE_CLOUD_LOCATION="us-central1",
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/runners/ambient-runner/tests/test_gemini_auth.py` around lines 220 - 224, The test currently creates an extra unmanaged temp file (cred_path) and sets GOOGLE_APPLICATION_CREDENTIALS to it; instead reuse the existing tmp_path credential file variable (creds_file) when calling _make_context so the context uses creds_file for GOOGLE_APPLICATION_CREDENTIALS and avoid creating a second tempfile with delete=False; update the variables passed to _make_context (the USE_VERTEX and GOOGLE_APPLICATION_CREDENTIALS entries) to reference creds_file instead of cred_path and remove the tempfile creation logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/runner-tool-versions.yml:
- Around line 262-279: The workflow fails with --force-with-lease because there
is no remote-tracking ref to validate; before creating/checking out and pushing
the branch (around git checkout -B "$BRANCH" and git push -u --force-with-lease
origin "$BRANCH"), fetch the remote branch to establish a remote-tracking ref
(e.g., run a git fetch origin "$BRANCH" or git fetch origin) and then base the
local branch off origin/"$BRANCH" if it exists so --force-with-lease can verify
the lease; add that fetch step immediately prior to git checkout -B "$BRANCH"
and ensure the subsequent git push uses the fetched remote-tracking ref.
In
`@components/frontend/src/components/claude-agent-options/_components/agents-editor.tsx`:
- Around line 41-42: The agent list is using the editable agent "name" as the
React key in entries.map(([name, agent]) => ...) which causes the agent card to
remount on every rename; change the key to a stable, non-editable id (the same
stable-ID pattern used in KeyValueEditor and StringListEditor) so UI elements
keep focus during edits. Locate the map in agents-editor.tsx (entries.map ...)
and replace the key usage with the agent's stable id (e.g., agent.id or the
stable-id field generated where agents are created/loaded) or wire in the same
stable-id generation helper used by KeyValueEditor/StringListEditor, ensuring
all references to the unique identifier are preserved and not the mutable name.
In
`@components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx`:
- Line 99: The KeyValueEditor onChange currently casts its output to
Record<string,string> which may hide nulls and cause mcpServerFormConfigSchema
(which expects z.string()) to reject the env record; change the update path
(where KeyValueEditor value is passed into updateServer/updateEntry) to sanitize
the incoming e (from KeyValueEditor) by filtering out any keys whose values are
null or undefined and only pass a Record<string,string> with string values to
updateServer (and ensure the env property on the server object is set to that
sanitized record before calling updateServer).
In `@components/runners/ambient-runner/ambient_runner/bridges/claude/bridge.py`:
- Line 15: Replace the deprecated typing.AsyncIterator import with the modern
collections.abc.AsyncIterator: update the import statement that currently uses
"from typing import Any, AsyncIterator" in the Claude bridge (symbol: file-level
import in bridge.py) to import Any from typing and AsyncIterator from
collections.abc; apply the same change wherever typing.AsyncIterator is used
across the codebase so all async iterator type hints reference
collections.abc.AsyncIterator instead of typing.AsyncIterator.
In `@components/runners/state-sync/Dockerfile`:
- Around line 1-3: The Dockerfile currently installs rclone, git and jq but
omits bash and sqlite3 which are required by the ENTRYPOINT script sync.sh (uses
bash-specific syntax and calls sqlite3 for WAL checkpointing); update the RUN
apk add --no-cache command to include bash and sqlite (sqlite-dev is not
necessary unless building) so sync.sh can run correctly and the WAL checkpoint
step (sqlite3) executes as expected.
---
Outside diff comments:
In `@components/runners/ambient-runner/tests/test_gemini_auth.py`:
- Around line 220-224: The test currently creates an extra unmanaged temp file
(cred_path) and sets GOOGLE_APPLICATION_CREDENTIALS to it; instead reuse the
existing tmp_path credential file variable (creds_file) when calling
_make_context so the context uses creds_file for GOOGLE_APPLICATION_CREDENTIALS
and avoid creating a second tempfile with delete=False; update the variables
passed to _make_context (the USE_VERTEX and GOOGLE_APPLICATION_CREDENTIALS
entries) to reference creds_file instead of cred_path and remove the tempfile
creation logic.
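The stable-ID pattern the agents-editor comment references (keys minted once, never derived from the editable name) can be sketched as follows; the helper and field names are illustrative, not the actual component API:

```typescript
// Hypothetical stable-key wrapper: ids are minted once when entries are
// loaded or created and never change when the user renames an entry,
// so React keys stay stable and inputs keep focus during edits.
let nextEntryId = 0;

interface KeyedEntry<T> {
  id: number;   // stable React key
  name: string; // editable; NOT used as the key
  value: T;
}

function withStableIds<T>(record: Record<string, T>): KeyedEntry<T>[] {
  return Object.entries(record).map(([name, value]) => ({
    id: nextEntryId++,
    name,
    value,
  }));
}
```

The list would then render `entries.map((e) => <Card key={e.id} …>)` instead of keying on the mutable name.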
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 858939e6-c72b-4094-ab05-cf88444fc405
📒 Files selected for processing (12)
- .github/workflows/runner-tool-versions.yml
- components/frontend/src/components/claude-agent-options/_components/agents-editor.tsx
- components/frontend/src/components/claude-agent-options/_components/key-value-editor.tsx
- components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx
- components/frontend/src/components/claude-agent-options/_components/output-format-field.tsx
- components/frontend/src/components/claude-agent-options/_components/string-list-editor.tsx
- components/frontend/src/components/claude-agent-options/_components/thinking-field.tsx
- components/frontend/src/components/claude-agent-options/options-form.tsx
- components/frontend/src/components/claude-agent-options/schema.ts
- components/runners/ambient-runner/ambient_runner/bridges/claude/bridge.py
- components/runners/ambient-runner/tests/test_gemini_auth.py
- components/runners/state-sync/Dockerfile
```shell
git checkout -B "$BRANCH"
git add "$DOCKERFILE"

# Build a short summary for the commit title
CHANGED=""
[ "$UPD_BASE" = "true" ] && CHANGED="${CHANGED}base-image "
[ "$UPD_GH" = "true" ] && CHANGED="${CHANGED}gh "
[ "$UPD_GLAB" = "true" ] && CHANGED="${CHANGED}glab "
[ "$UPD_UV" = "true" ] && CHANGED="${CHANGED}uv "
[ "$UPD_PC" = "true" ] && CHANGED="${CHANGED}pre-commit "
[ "$UPD_GEM" = "true" ] && CHANGED="${CHANGED}gemini-cli "
CHANGED=$(echo "$CHANGED" | xargs | tr ' ' ', ')

git commit -m "chore(runner): update ${CHANGED}

Automated weekly runner image freshness update."

git push -u --force-with-lease origin "$BRANCH"
```
🧩 Analysis chain
🌐 Web query:
Does git push --force-with-lease origin still protect an existing remote branch when the local clone has not fetched origin/&lt;branch&gt;?
💡 Result:
No, git push --force-with-lease origin does not protect an existing remote branch if the local clone has not fetched origin/&lt;branch&gt;. In this scenario, there is no local remote-tracking branch (refs/remotes/origin/&lt;branch&gt;), so Git cannot perform the safety check against the expected remote ref value. The push will fail with an error, preventing the force push and thereby protecting the remote branch from being overwritten blindly.
Citations:
- 1: https://stackoverflow.com/questions/52823692/git-push-force-with-lease-vs-force
- 2: https://git-scm.com/docs/git-push.html
- 3: https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-push.html
🌐 Web query:
git push --force-with-lease behavior missing remote-tracking branch existing remote branch documentation
💡 Result:
When using git push --force-with-lease to an existing remote branch where the corresponding local remote-tracking branch (e.g., origin/branch) is missing, the push fails with an error. This occurs because --force-with-lease requires the local remote-tracking branch for the ref being updated to determine the expected remote value. Without it, Git cannot perform the safety check and treats it as an error.

Official Git documentation states: "--force-with-lease alone, without specifying the details, will protect all remote refs that are going to be updated by requiring their current value to be the same as the remote-tracking branch we have for them." It further notes: "it is an error if we do not have such a remote-tracking branch."

A common cause is the remote branch being deleted (no longer advertised by the remote), but a stale local remote-tracking ref remaining. git fetch alone does not remove these; use git fetch --prune or git remote prune origin to clean up stale tracking branches, after which the push will fail with "stale info" (confirming no tracking branch exists).

Workarounds:
- Use git push --force (less safe, overwrites unconditionally).
- Explicitly specify the expected value: git push --force-with-lease=branch:0000000000000000000000000000000000000000 branch (assumes remote branch is at zero-oid, i.e., non-existent; use actual expected commit if known).
- Fetch to create/update the tracking branch if the remote branch exists.

Example error: ! [rejected] branch -> branch (stale info)
Citations:
- 1: https://stackoverflow.com/questions/52823692/git-push-force-with-lease-vs-force
- 2: https://git-scm.com/docs/git-push
- 3: https://git-scm.com/docs/git-push.html
- 4: https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-push.html
- 5: https://stackoverflow.com/questions/56191415/why-is-git-push-force-with-lease-failing-with-rejected-stale-info-even
Fetch the remote-tracking branch before pushing with --force-with-lease.
When the branch already exists on the remote (e.g., from a previous workflow run), --force-with-lease will fail with a "stale info" error because the local clone has no remote-tracking ref to validate against. This causes the workflow to fail on updates to an existing automation PR branch.
🔧 Proposed fix

```diff
       git config user.name "github-actions[bot]"
       git config user.email "github-actions[bot]@users.noreply.github.com"
+      git fetch origin "$BRANCH":"refs/remotes/origin/$BRANCH" || true
       git checkout -B "$BRANCH"
       git add "$DOCKERFILE"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
git fetch origin "$BRANCH":"refs/remotes/origin/$BRANCH" || true
git checkout -B "$BRANCH"
git add "$DOCKERFILE"

# Build a short summary for the commit title
CHANGED=""
[ "$UPD_BASE" = "true" ] && CHANGED="${CHANGED}base-image "
[ "$UPD_GH" = "true" ] && CHANGED="${CHANGED}gh "
[ "$UPD_GLAB" = "true" ] && CHANGED="${CHANGED}glab "
[ "$UPD_UV" = "true" ] && CHANGED="${CHANGED}uv "
[ "$UPD_PC" = "true" ] && CHANGED="${CHANGED}pre-commit "
[ "$UPD_GEM" = "true" ] && CHANGED="${CHANGED}gemini-cli "
CHANGED=$(echo "$CHANGED" | xargs | tr ' ' ', ')

git commit -m "chore(runner): update ${CHANGED}

Automated weekly runner image freshness update."

git push -u --force-with-lease origin "$BRANCH"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/runner-tool-versions.yml around lines 262 - 279, The
workflow fails with --force-with-lease because there is no remote-tracking ref
to validate; before creating/checking out and pushing the branch (around git
checkout -B "$BRANCH" and git push -u --force-with-lease origin "$BRANCH"),
fetch the remote branch to establish a remote-tracking ref (e.g., run a git
fetch origin "$BRANCH" or git fetch origin) and then base the local branch off
origin/"$BRANCH" if it exists so --force-with-lease can verify the lease; add
that fetch step immediately prior to git checkout -B "$BRANCH" and ensure the
subsequent git push uses the fetched remote-tracking ref.
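The fetch-before-push sequence described above can be sketched end to end in a throwaway repository. The branch name, paths, and identity below are illustrative, not taken from the workflow:

```shell
#!/usr/bin/env bash
# Sketch: why --force-with-lease needs a remote-tracking ref, demonstrated
# against a throwaway local "origin". Branch name and paths are illustrative.
set -euo pipefail
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init
git push -q origin HEAD:main
BRANCH=auto/runner-versions
git push -q origin HEAD:"$BRANCH"   # branch already exists on the remote

# The fetch establishes refs/remotes/origin/$BRANCH so the lease can be verified.
git fetch -q origin "$BRANCH":"refs/remotes/origin/$BRANCH"
git checkout -q -B "$BRANCH" "origin/$BRANCH"
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m update
git push -q -u --force-with-lease origin "$BRANCH"
echo lease-ok
```

Without the `git fetch` step, `--force-with-lease` with no explicit expected value has no remote-tracking ref to compare against and refuses the push.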
components/frontend/src/components/claude-agent-options/_components/agents-editor.tsx
Outdated
| </div> | ||
| <div> | ||
| <Label className="text-xs text-muted-foreground">Environment</Label> | ||
| <KeyValueEditor value={server.env ?? {}} onChange={(e) => updateServer(name, { ...server, env: e as Record<string, string> })} /> |
Unsafe type cast may propagate null values to schema validation.
KeyValueEditor's onChange callback explicitly types its output as Record<string, string | null>, and the updateEntry function can pass null values directly into the record. The cast e as Record<string, string> suppresses the type error but the mcpServerFormConfigSchema validates that env contains z.string() values only, which will reject records with null values.
Filter out null values before passing to updateServer:
🐛 Proposed fix
-<KeyValueEditor value={server.env ?? {}} onChange={(e) => updateServer(name, { ...server, env: e as Record<string, string> })} />
+<KeyValueEditor value={server.env ?? {}} onChange={(e) => updateServer(name, { ...server, env: Object.fromEntries(Object.entries(e).filter(([, v]) => v !== null)) as Record<string, string> })} />
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@components/frontend/src/components/claude-agent-options/_components/mcp-servers-editor.tsx`
at line 99, The KeyValueEditor onChange currently casts its output to
Record<string,string> which may hide nulls and cause mcpServerFormConfigSchema
(which expects z.string()) to reject the env record; change the update path
(where KeyValueEditor value is passed into updateServer/updateEntry) to sanitize
the incoming e (from KeyValueEditor) by filtering out any keys whose values are
null or undefined and only pass a Record<string,string> with string values to
updateServer (and ensure the env property on the server object is set to that
sanitized record before calling updateServer).
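The null-filtering approach suggested above can be isolated into a small helper. The names here (`sanitizeEnv`, the sample record) are illustrative, not the component's actual API:

```typescript
// Illustrative helper: drop null entries so a Record<string, string | null>
// coming out of a key/value editor satisfies a schema that only accepts
// string values.
function sanitizeEnv(
  env: Record<string, string | null>,
): Record<string, string> {
  return Object.fromEntries(
    Object.entries(env).filter(
      (entry): entry is [string, string] => entry[1] !== null,
    ),
  );
}

const raw: Record<string, string | null> = { PATH: "/usr/bin", TOKEN: null };
const clean = sanitizeEnv(raw);
console.log(clean); // { PATH: '/usr/bin' }
```

The type predicate in the filter callback lets TypeScript narrow the entries to `[string, string]`, so no unsafe cast is needed at the call site.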
| from typing import Any, AsyncIterator, Optional | ||
| | ||
| from ag_ui.core import BaseEvent, EventType, RunAgentInput, RunStartedEvent, RunFinishedEvent | ||
| from typing import Any, AsyncIterator |
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify where AsyncIterator is imported from typing vs collections.abc
echo "Typing-based AsyncIterator imports:"
rg -n 'from typing import .*AsyncIterator|from typing import AsyncIterator' -C2
echo
echo "collections.abc AsyncIterator imports:"
rg -n 'from collections\.abc import .*AsyncIterator|from collections\.abc import AsyncIterator' -C2
Repository: ambient-code/platform
Consider modernizing to collections.abc.AsyncIterator for Python 3.12+ alignment.
This import is functionally correct, but typing.AsyncIterator has been a deprecated alias since Python 3.9 (PEP 585 made the collections.abc generics subscriptable). For consistency, apply this change across the codebase (currently 14+ files use typing.AsyncIterator).
🧰 Tools
🪛 Ruff (0.15.7)
[warning] 15-15: Import from collections.abc instead: AsyncIterator
Import from collections.abc
(UP035)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/runners/ambient-runner/ambient_runner/bridges/claude/bridge.py` at
line 15, Replace the deprecated typing.AsyncIterator import with the modern
collections.abc.AsyncIterator: update the import statement that currently uses
"from typing import Any, AsyncIterator" in the Claude bridge (symbol: file-level
import in bridge.py) to import Any from typing and AsyncIterator from
collections.abc; apply the same change wherever typing.AsyncIterator is used
across the codebase so all async iterator type hints reference
collections.abc.AsyncIterator instead of typing.AsyncIterator.
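The modernized import pattern looks like this. This is a minimal sketch of the style, not the bridge's actual code:

```python
# Minimal sketch of the modern import style: AsyncIterator comes from
# collections.abc, while Any stays in typing.
import asyncio
from collections.abc import AsyncIterator
from typing import Any


async def events(payloads: list[Any]) -> AsyncIterator[Any]:
    """Yield each payload in turn, as a provider bridge might stream events."""
    for payload in payloads:
        yield payload


async def main() -> list[Any]:
    return [event async for event in events([1, "run_started", {"done": True}])]


collected = asyncio.run(main())
print(collected)  # [1, 'run_started', {'done': True}]
```

Note that `AsyncIterator` is used purely as a return annotation here; at runtime the generator works identically under either import, which is why Ruff flags this as a trivial UP035 cleanup.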
- state-sync/Dockerfile: add bash and sqlite packages (scripts require them) - agents-editor: use stable IDs for React keys instead of editable name Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Introduces Spec-Driven Development (SDD) enforcement infrastructure: - SDD manifest (.specify/sdd-manifest.yaml) declaring managed components and their paths, constitutions, specs, and enforcement mode - Runner constitution (.specify/constitutions/runner.md) with 6 principles extracted from PR #1091 patterns (version pinning, freshness automation, dependency procedures, layer discipline, schema sync, bridge modularity) - Runner spec (.specify/specs/runner.md) documenting component boundary, current state, maintenance workflows, and change protocol - SDD preflight CI job (.github/workflows/sdd-preflight.yml) that checks PRs for managed-path modifications and warns or blocks accordingly The runner is the first component onboarded to SDD in warn mode. Additional components can be added to the manifest to gradually migrate the codebase. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
| File | Component | Mode |
|---|---|---|
| .github/workflows/runner-tool-versions.yml | runner | warn |
| components/frontend/src/components/claude-agent-options/index.ts | runner | warn |
| components/frontend/src/components/claude-agent-options/options-form.tsx | runner | warn |
| components/frontend/src/components/claude-agent-options/schema.ts | runner | warn |
| components/runners/ambient-runner/.mcp.json | runner | warn |
| components/runners/ambient-runner/Dockerfile | runner | warn |
| components/runners/ambient-runner/pyproject.toml | runner | warn |
| components/runners/ambient-runner/uv.lock | runner | warn |
No action required — these components are in warn mode. Consider using the component's agent workflow for future changes.
📖 Specs: Runner Spec · Runner Constitution
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/sdd-preflight.yml:
- Around line 69-75: The globstar conversion is being overwritten by the
single-star conversion when building the regex in the regex variable; update the
conversion order in the code that builds regex (the pipeline that processes
"$pattern" before appending to "$PATTERN_FILE") so that '**' is preserved and
translated to '.*' before you translate single '*' to '[^/]*' (e.g., convert
'**' to a temporary placeholder, run the single-star sed, then replace the
placeholder with '.*'), ensuring patterns like
components/runners/ambient-runner/** correctly match nested files.
- Around line 84-93: The loop currently breaks on the first matching pattern
which can misclassify files when multiple patterns match; instead, iterate all
patterns without breaking, record matches for the current changed_file, track
whether any matching mode is "enforce" (e.g., set a flag enforce_found), and
after the loop append a single row for that file to VIOLATIONS if enforce_found
is true otherwise to WARNINGS; reference the existing loop (while IFS=$'\t' read
-r regex comp mode; do), the variables changed_file, mode, VIOLATIONS, and
WARNINGS, and remove the break so all regex matches are evaluated and
enforcement takes precedence.
In @.specify/sdd-manifest.yaml:
- Around line 27-30: The SDD manifest's paths list omits the session-dialog
integration file, so changes to create-session-dialog.tsx can bypass SDD checks;
update the paths array (next to existing entries like
components/frontend/src/components/claude-agent-options/** and
components/runners/ambient-runner/**) to include
components/frontend/src/components/create-session-dialog.tsx (or a glob covering
that file) so agentOptions wiring changes are covered by the manifest.
In @.specify/specs/runner.md:
- Around line 55-83: Normalize blank lines around the Markdown headings and
tables to satisfy MD022/MD058: ensure there is exactly one blank line before and
after each top-level heading (e.g., "Base Image", "Maintenance Workflows") and
before/after each table (the "Pinned Tools" and "Key Dependencies" tables) and
subheadings like "Weekly: Tool Freshness (`runner-tool-versions.yml`)" and
"Monthly: Dependency Bump (`UPDATE_PROCEDURE.md`)"; update the runner.md content
so headings and tables have a single blank line separation from surrounding
paragraphs/lists to pass the markdownlint rules.
- Around line 18-51: The markdown uses fenced code blocks for two directory
trees without a language, triggering MD040; update the two blocks that start
with "components/runners/ambient-runner/" and
"components/frontend/src/components/claude-agent-options/" in
.specify/specs/runner.md to add a language token (e.g., text) after the opening
``` so each becomes ```text; ensure both opening fences are changed and no other
content is modified.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: dd586722-1e75-4dba-b2f7-ff194842fa9a
📒 Files selected for processing (4)
- .github/workflows/sdd-preflight.yml
- .specify/constitutions/runner.md
- .specify/sdd-manifest.yaml
- .specify/specs/runner.md
| while IFS=$'\t' read -r comp mode pattern; do | ||
| # Escape regex special chars in the pattern, then convert globs | ||
| regex=$(printf '%s' "$pattern" \ | ||
| | sed 's/[.+^${}()|[\]]/\\&/g' \ | ||
| | sed 's/\*\*/.*/g' \ | ||
| | sed 's/\*/[^\/]*/g') | ||
| printf '%s\t%s\t%s\n' "$regex" "$comp" "$mode" >> "$PATTERN_FILE" |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
pattern='components/runners/ambient-runner/**'
old_regex=$(printf '%s' "$pattern" \
| sed 's/[.+^${}()|[\]]/\\&/g' \
| sed 's/\*\*/.*/g' \
| sed 's/\*/[^\/]*/g')
new_regex=$(printf '%s' "$pattern" \
| sed 's/[.+^${}()|[\]]/\\&/g' \
| sed 's/\*\*/__GLOBSTAR__/g' \
| sed 's/\*/[^\/]*/g' \
| sed 's/__GLOBSTAR__/.*/g')
printf 'old_regex=%s\nnew_regex=%s\n\n' "$old_regex" "$new_regex"
for f in \
'components/runners/ambient-runner/main.py' \
'components/runners/ambient-runner/ambient_runner/bridges/claude/bridge.py'
do
old_match=$(printf '%s' "$f" | grep -qE "^${old_regex}$" && echo match || echo miss)
new_match=$(printf '%s' "$f" | grep -qE "^${new_regex}$" && echo match || echo miss)
printf '%s\told:%s\tnew:%s\n' "$f" "$old_match" "$new_match"
done
🏁 Script executed:
cat -n .github/workflows/sdd-preflight.yml | sed -n '65,80p'
🏁 Script executed:
grep -B 20 "COMPONENT_PATHS=" .github/workflows/sdd-preflight.yml | head -30
🏁 Script executed:
sed -n '30,70p' .github/workflows/sdd-preflight.yml
🏁 Script executed:
cat .specify/sdd-manifest.yaml 2>/dev/null || echo "Manifest file not found in default location"
`**` glob handling is broken, causing deeply nested files to bypass enforcement.
The second sed call on line 74 overwrites the .* pattern created by the globstar conversion, downgrading it to [^/]*. This breaks enforcement for the runner component's components/runners/ambient-runner/** pattern—nested files like ambient_runner/bridges/claude/bridge.py are incorrectly missed.
🔧 Proposed fix (preserve globstar before single-star conversion)
regex=$(printf '%s' "$pattern" \
| sed 's/[.+^${}()|[\]]/\\&/g' \
- | sed 's/\*\*/.*/g' \
- | sed 's/\*/[^\/]*/g')
+ | sed 's/\*\*/__GLOBSTAR__/g' \
+ | sed 's/\*/[^\/]*/g' \
+  | sed 's/__GLOBSTAR__/.*/g')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/sdd-preflight.yml around lines 69 - 75, The globstar
conversion is being overwritten by the single-star conversion when building the
regex in the regex variable; update the conversion order in the code that builds
regex (the pipeline that processes "$pattern" before appending to
"$PATTERN_FILE") so that '**' is preserved and translated to '.*' before you
translate single '*' to '[^/]*' (e.g., convert '**' to a temporary placeholder,
run the single-star sed, then replace the placeholder with '.*'), ensuring
patterns like components/runners/ambient-runner/** correctly match nested files.
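The placeholder technique can be checked in isolation. The pattern and test path below are taken from the discussion above:

```shell
#!/usr/bin/env bash
# Demonstrates converting '**' via a temporary placeholder so the
# single-star sed cannot downgrade it to [^/]*.
set -euo pipefail
pattern='components/runners/ambient-runner/**'
regex=$(printf '%s' "$pattern" \
  | sed 's/[.+^${}()|[\]]/\\&/g' \
  | sed 's/\*\*/__GLOBSTAR__/g' \
  | sed 's/\*/[^\/]*/g' \
  | sed 's/__GLOBSTAR__/.*/g')
echo "regex: $regex"

file='components/runners/ambient-runner/ambient_runner/bridges/claude/bridge.py'
if printf '%s' "$file" | grep -qE "^${regex}$"; then
  echo "match"
fi
```

Because `**` is swapped out before the single-star substitution runs and restored afterwards, the resulting regex ends in `.*` and matches arbitrarily nested paths.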
| while IFS=$'\t' read -r regex comp mode; do | ||
| if printf '%s' "$changed_file" | grep -qE "^${regex}$"; then | ||
| row="| \`${changed_file}\` | **${comp}** | ${mode} |" | ||
| if [ "$mode" = "enforce" ]; then | ||
| VIOLATIONS="${VIOLATIONS}${row}"$'\n' | ||
| else | ||
| WARNINGS="${WARNINGS}${row}"$'\n' | ||
| fi | ||
| break | ||
| fi |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
changed_file='components/frontend/src/components/create-session-dialog.tsx'
patterns=$'components/frontend/src/components/.*\trunner\twarn\ncomponents/frontend/src/components/create-session-dialog\\.tsx\trunner\tenforce'
# Current behavior (first match wins)
current_mode=""
while IFS=$'\t' read -r regex comp mode; do
if printf '%s' "$changed_file" | grep -qE "^${regex}$"; then
current_mode="$mode"
break
fi
done <<< "$patterns"
# Desired behavior (enforce wins)
desired_mode=""
while IFS=$'\t' read -r regex comp mode; do
if printf '%s' "$changed_file" | grep -qE "^${regex}$"; then
if [ "$mode" = "enforce" ]; then
desired_mode="enforce"
break
elif [ -z "$desired_mode" ]; then
desired_mode="$mode"
fi
fi
done <<< "$patterns"
printf 'current=%s\ndesired=%s\n' "$current_mode" "$desired_mode"
🏁 Script executed:
# First, let's verify the file exists and check the actual code at lines 84-93
cat -n .github/workflows/sdd-preflight.yml | sed -n '80,100p'
🏁 Script executed:
# Find where PATTERN_FILE is created/populated
cat -n .github/workflows/sdd-preflight.yml | grep -B5 -A5 "PATTERN_FILE"
🏁 Script executed:
# Find where COMPONENT_PATHS is defined
cat -n .github/workflows/sdd-preflight.yml | head -80
🏁 Script executed:
# Let's check if the sdd-manifest.yaml file exists and look at its structure
if [ -f ".specify/sdd-manifest.yaml" ]; then
cat ".specify/sdd-manifest.yaml"
else
echo "File not found, checking for manifest files"
find . -name "*sdd-manifest*" -o -name "*manifest*" | head -20
fi
🏁 Script executed:
# Let's verify if there are any overlapping patterns in the current manifest
# by checking if any file would match multiple patterns
cat > /tmp/check_overlap.sh << 'EOF'
#!/bin/bash
# Sample files that might exist
test_files=(
"components/runners/ambient-runner/README.md"
"components/frontend/src/components/claude-agent-options/index.ts"
".github/workflows/runner-tool-versions.yml"
"components/frontend/src/components/some-other-file.tsx"
)
# Current patterns from the manifest (converted to regex format)
patterns=(
"components/runners/ambient-runner/.*\trunner\twarn"
"components/frontend/src/components/claude-agent-options/.*\trunner\twarn"
"\\.github/workflows/runner-tool-versions\\.yml\trunner\twarn"
)
echo "=== Checking for pattern overlaps ==="
for file in "${test_files[@]}"; do
echo ""
echo "File: $file"
match_count=0
while IFS=$'\t' read -r regex comp mode; do
if printf '%s' "$file" | grep -qE "^${regex}$"; then
echo " ✓ Matches: regex=$regex, comp=$comp, mode=$mode"
((match_count++))
fi
done <<< "$(printf '%s\n' "${patterns[@]}")"
if [ $match_count -eq 0 ]; then
echo " ✗ No matches"
elif [ $match_count -gt 1 ]; then
echo " ⚠ WARNING: Multiple matches!"
fi
done
EOF
bash /tmp/check_overlap.sh
🏁 Script executed:
# Let me check the actual pattern conversion to understand overlap better
# Simulate what the workflow does with the actual manifest patterns
cat > /tmp/test_patterns.sh << 'EOF'
#!/bin/bash
set -euo pipefail
MANIFEST=".specify/sdd-manifest.yaml"
DEFAULT_MODE=$(yq '.default-mode // "warn"' "$MANIFEST")
echo "Default mode: $DEFAULT_MODE"
echo ""
echo "=== Extracting COMPONENT_PATHS ==="
COMPONENT_PATHS=$(yq -r '
.managed-components | to_entries[] |
.key as $comp |
(.value.mode // "'"$DEFAULT_MODE"'") as $mode |
.value.paths[] |
$comp + "\t" + $mode + "\t" + .
' "$MANIFEST")
echo "$COMPONENT_PATHS"
echo ""
echo "=== Converting to regex patterns ==="
while IFS=$'\t' read -r comp mode pattern; do
regex=$(printf '%s' "$pattern" \
| sed 's/[.+^${}()|[\]]/\\&/g' \
| sed 's/\*\*/.*/g' \
| sed 's/\*/[^\/]*/g')
printf 'Regex: %-60s | Comp: %-10s | Mode: %s\n' "$regex" "$comp" "$mode"
done <<< "$COMPONENT_PATHS"
echo ""
echo "=== Checking for overlapping patterns ==="
# Check if any two patterns could match the same file
regex_array=()
comp_array=()
mode_array=()
while IFS=$'\t' read -r comp mode pattern; do
regex=$(printf '%s' "$pattern" \
| sed 's/[.+^${}()|[\]]/\\&/g' \
| sed 's/\*\*/.*/g' \
| sed 's/\*/[^\/]*/g')
regex_array+=("$regex")
comp_array+=("$comp")
mode_array+=("$mode")
done <<< "$COMPONENT_PATHS"
# Check if any patterns have different modes
echo "Number of patterns: ${#regex_array[@]}"
unique_modes=$(printf '%s\n' "${mode_array[@]}" | sort -u)
echo "Unique modes: $unique_modes"
# Check if there are overlaps and different modes
if [ "${#regex_array[@]}" -gt 1 ]; then
mode_count=$(printf '%s\n' "${mode_array[@]}" | sort -u | wc -l)
if [ "$mode_count" -gt 1 ]; then
echo "⚠ MIXED MODES DETECTED - potential precedence issue"
else
echo "✓ All patterns have same mode - no precedence risk"
fi
fi
EOF
bash /tmp/test_patterns.sh
🏁 Script executed:
# Let's parse the manifest directly without yq
cat > /tmp/parse_manifest.py << 'EOF'
import yaml
with open('.specify/sdd-manifest.yaml', 'r') as f:
manifest = yaml.safe_load(f)
default_mode = manifest.get('default-mode', 'warn')
print(f"Default mode: {default_mode}\n")
print("=== Managed Components and Paths ===")
for comp, comp_data in manifest.get('managed-components', {}).items():
mode = comp_data.get('mode', default_mode)
paths = comp_data.get('paths', [])
print(f"\nComponent: {comp}")
print(f" Mode: {mode}")
print(f" Paths:")
for path in paths:
print(f" - {path}")
print("\n=== Pattern Analysis ===")
all_patterns = []
for comp, comp_data in manifest.get('managed-components', {}).items():
mode = comp_data.get('mode', default_mode)
paths = comp_data.get('paths', [])
for path in paths:
all_patterns.append((comp, mode, path))
print(f"Total patterns: {len(all_patterns)}")
unique_modes = set(m for _, m, _ in all_patterns)
print(f"Unique modes: {unique_modes}")
if len(unique_modes) > 1:
print("\n⚠ MIXED MODES DETECTED - potential precedence issue if patterns overlap")
else:
print("\n✓ All patterns have the same mode - no precedence risk with current data")
# Check for potential overlaps (patterns where one is a prefix of another)
print("\n=== Checking for pattern overlaps ===")
overlap_found = False
for i, (comp1, mode1, path1) in enumerate(all_patterns):
for j, (comp2, mode2, path2) in enumerate(all_patterns):
if i < j:
# Check if path1 could match path2 (considering globs)
path1_simple = path1.replace('**', '*')
path2_simple = path2.replace('**', '*')
if path1_simple != path2_simple:
# Check if they could conceptually overlap
if path1.startswith(path2.split('*')[0]) or path2.startswith(path1.split('*')[0]):
print(f"Potential overlap:")
print(f" Pattern 1: {path1} (mode: {mode1})")
print(f" Pattern 2: {path2} (mode: {mode2})")
overlap_found = True
if not overlap_found:
print("No obvious overlaps in current patterns")
EOF
python3 << 'PYSCRIPT'
import yaml
with open('.specify/sdd-manifest.yaml', 'r') as f:
manifest = yaml.safe_load(f)
default_mode = manifest.get('default-mode', 'warn')
print(f"Default mode: {default_mode}\n")
print("=== Managed Components and Paths ===")
for comp, comp_data in manifest.get('managed-components', {}).items():
mode = comp_data.get('mode', default_mode)
paths = comp_data.get('paths', [])
print(f"\nComponent: {comp}")
print(f" Mode: {mode}")
print(f" Paths:")
for path in paths:
print(f" - {path}")
print("\n=== Pattern Analysis ===")
all_patterns = []
for comp, comp_data in manifest.get('managed-components', {}).items():
mode = comp_data.get('mode', default_mode)
paths = comp_data.get('paths', [])
for path in paths:
all_patterns.append((comp, mode, path))
print(f"Total patterns: {len(all_patterns)}")
unique_modes = set(m for _, m, _ in all_patterns)
print(f"Unique modes: {unique_modes}")
if len(unique_modes) > 1:
print("\n⚠ MIXED MODES DETECTED - potential precedence issue if patterns overlap")
else:
print("\n✓ All patterns have the same mode - no precedence risk with current data")
PYSCRIPT
🏁 Script executed:
# Let's verify the reproduction case is actually applicable by understanding
# how patterns would be ordered if we had overlapping modes
cat > /tmp/test_precedence.sh << 'EOF'
#!/bin/bash
set -euo pipefail
# Simulate what would happen if we had overlapping patterns with different modes
# This mirrors a realistic scenario where multiple components are enabled
# Scenario: Both runner (warn) and a hypothetical component (enforce) patterns
patterns=$(cat <<'PATTERNS'
components/frontend/src/components/.* runner warn
components/frontend/src/components/create-session-dialog\.tsx backend enforce
PATTERNS
)
test_file="components/frontend/src/components/create-session-dialog.tsx"
echo "=== Testing file: $test_file ==="
echo ""
echo "Patterns (in order):"
echo "$patterns"
echo ""
# Current behavior (first match wins)
echo "=== Current behavior (first match wins) ==="
current_result=""
while IFS=$'\t' read -r regex comp mode; do
if printf '%s' "$test_file" | grep -qE "^${regex}$"; then
current_result="$mode (from $comp)"
echo "Pattern: $regex"
echo " Matches: YES"
echo " Result: $mode (from $comp)"
echo " Action: BREAK"
break
else
echo "Pattern: $regex"
echo " Matches: NO"
fi
done <<< "$patterns"
echo ""
echo "Final result: $current_result"
echo ""
# Desired behavior (enforce takes precedence)
echo "=== Desired behavior (enforce wins) ==="
desired_result=""
while IFS=$'\t' read -r regex comp mode; do
if printf '%s' "$test_file" | grep -qE "^${regex}$"; then
echo "Pattern: $regex"
echo " Matches: YES"
if [ "$mode" = "enforce" ]; then
desired_result="enforce (from $comp)"
echo " Mode is enforce: breaking with result=$mode"
break
elif [ -z "$desired_result" ]; then
desired_result="$mode (from $comp)"
echo " Storing: $mode, continuing..."
else
echo " Already have result, skipping"
fi
else
echo "Pattern: $regex"
echo " Matches: NO"
fi
done <<< "$patterns"
echo ""
echo "Final result: $desired_result"
echo ""
echo "=== Impact ==="
echo "Current: $current_result"
echo "Desired: $desired_result"
if [ "$current_result" != "$desired_result" ]; then
echo "⚠ PRECEDENCE BUG: Enforcement would be incorrectly downgraded"
fi
EOF
bash /tmp/test_precedence.sh
First-match break can downgrade an enforce hit to warn.
When patterns overlap across components with different enforcement modes, breaking on the first match makes violations order-dependent. A file matching both a broader "warn" pattern and a more specific "enforce" pattern will be incorrectly classified as a warning. As more components are onboarded with mixed modes, this latent bug will cause enforcement boundaries to be unexpectedly downgraded.
The proposed fix correctly evaluates all matching patterns and prioritizes "enforce" results, ensuring enforcement takes precedence regardless of pattern order.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/sdd-preflight.yml around lines 84 - 93, The loop currently
breaks on the first matching pattern which can misclassify files when multiple
patterns match; instead, iterate all patterns without breaking, record matches
for the current changed_file, track whether any matching mode is "enforce"
(e.g., set a flag enforce_found), and after the loop append a single row for
that file to VIOLATIONS if enforce_found is true otherwise to WARNINGS;
reference the existing loop (while IFS=$'\t' read -r regex comp mode; do), the
variables changed_file, mode, VIOLATIONS, and WARNINGS, and remove the break so
all regex matches are evaluated and enforcement takes precedence.
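The "enforce wins" classification described above can be sketched as follows. The pattern lines, component name, and file path are illustrative stand-ins for the manifest-derived data:

```shell
#!/usr/bin/env bash
# Sketch: evaluate every matching pattern instead of breaking on the first
# hit, so an "enforce" match always outranks an overlapping "warn" match.
set -euo pipefail
PATTERNS=$(printf '%s\t%s\t%s\n' \
  'components/frontend/src/components/[^/]*\.tsx' frontend warn \
  'components/frontend/src/components/create-session-dialog\.tsx' frontend enforce)

changed_file='components/frontend/src/components/create-session-dialog.tsx'
matched=false
enforce_found=false
while IFS=$'\t' read -r regex comp mode; do
  if printf '%s' "$changed_file" | grep -qE "^${regex}$"; then
    matched=true
    if [ "$mode" = "enforce" ]; then
      enforce_found=true
    fi
  fi
done <<< "$PATTERNS"

result=none
if [ "$matched" = true ]; then
  if [ "$enforce_found" = true ]; then result=violation; else result=warning; fi
fi
echo "$result"   # violation: the specific enforce pattern takes precedence
```

Because the loop no longer breaks, classification is independent of pattern order: the file matches both the broad warn pattern and the specific enforce pattern, and the enforce flag decides the outcome.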
| paths: | ||
| - components/runners/ambient-runner/** | ||
| - components/frontend/src/components/claude-agent-options/** | ||
| - .github/workflows/runner-tool-versions.yml |
Broaden managed paths to include the session-dialog integration point.
agentOptions wiring is also enforced through components/frontend/src/components/create-session-dialog.tsx; leaving it out creates an SDD coverage gap where boundary-sensitive changes can bypass preflight warnings/enforcement.
🔧 Proposed scope fix
paths:
- components/runners/ambient-runner/**
- components/frontend/src/components/claude-agent-options/**
+ - components/frontend/src/components/create-session-dialog.tsx
- .github/workflows/runner-tool-versions.yml
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.specify/sdd-manifest.yaml around lines 27 - 30, The SDD manifest's paths
list omits the session-dialog integration file, so changes to
create-session-dialog.tsx can bypass SDD checks; update the paths array (next to
existing entries like components/frontend/src/components/claude-agent-options/**
and components/runners/ambient-runner/**) to include
components/frontend/src/components/create-session-dialog.tsx (or a glob covering
that file) so agentOptions wiring changes are covered by the manifest.
| ``` | ||
| components/runners/ambient-runner/ | ||
| ├── Dockerfile # Runner container image | ||
| ├── main.py # FastAPI entry point | ||
| ├── pyproject.toml # Python dependencies | ||
| ├── uv.lock # Resolved dependency lock | ||
| ├── .mcp.json # MCP server configuration | ||
| ├── ag_ui_claude_sdk/ # Claude AG-UI adapter | ||
| ├── ag_ui_gemini_cli/ # Gemini AG-UI adapter | ||
| ├── ambient_runner/ # Core runner package | ||
| │ ├── bridges/ # Provider bridges | ||
| │ │ ├── claude/ | ||
| │ │ ├── gemini_cli/ | ||
| │ │ └── langgraph/ | ||
| │ ├── endpoints/ # FastAPI routes | ||
| │ ├── middleware/ # Request middleware | ||
| │ └── platform/ # Platform integration | ||
| ├── tests/ # Test suite | ||
| └── docs/ | ||
| └── UPDATE_PROCEDURE.md # Maintenance procedure | ||
| | ||
| .github/workflows/ | ||
| └── runner-tool-versions.yml # Automated freshness checks | ||
| ``` | ||
| | ||
| ### Supporting Frontend Paths | ||
| | ||
| ``` | ||
| components/frontend/src/components/claude-agent-options/ | ||
| ├── schema.ts # Zod schema (mirrors SDK types) | ||
| ├── options-form.tsx # Main form component | ||
| ├── index.ts # Barrel exports | ||
| └── _components/ # Per-section editors | ||
| ``` |
Add explicit languages to fenced code blocks (MD040).
Both tree blocks should declare a language (for example text) to satisfy markdownlint.
📝 Suggested markdown fix
-```
+```text
components/runners/ambient-runner/
...
-```
+```
-```
+```text
components/frontend/src/components/claude-agent-options/
...
-```
+```🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 18-18: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 45-45: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.specify/specs/runner.md around lines 18 - 51, The markdown uses fenced code
blocks for two directory trees without a language, triggering MD040; update the
two blocks that start with "components/runners/ambient-runner/" and
"components/frontend/src/components/claude-agent-options/" in
.specify/specs/runner.md to add a language token (e.g., text) after the opening
``` so each becomes ```text; ensure both opening fences are changed and no other
content is modified.
| ### Base Image | ||
| - **UBI 10** (`registry.access.redhat.com/ubi10/ubi@sha256:...`) | ||
| - Python 3.12 (system default), Node.js (AppStream), Go (go-toolset) | ||
| | ||
| ### Pinned Tools | ||
| | Tool | Dockerfile ARG | Purpose | | ||
| |------|---------------|---------| | ||
| | gh | `GH_VERSION` | GitHub CLI for repo operations | | ||
| | glab | `GLAB_VERSION` | GitLab CLI for repo operations | | ||
| | uv | `UV_VERSION` | Python package management | | ||
| | pre-commit | `PRE_COMMIT_VERSION` | Git hook framework | | ||
| | gemini-cli | `GEMINI_CLI_VERSION` | Google Gemini CLI | | ||
|
|
||
| ### Key Dependencies | ||
| | Package | Constraint | Role | | ||
| |---------|-----------|------| | ||
| | claude-agent-sdk | `>=0.1.50` | Claude Code agent SDK | | ||
| | anthropic | `>=0.86.0` | Anthropic API client | | ||
| | mcp | `>=1.9.2` | Model Context Protocol | | ||
| | ag-ui-protocol | `>=0.6.2` | AG-UI event protocol | | ||
|
|
||
| ## Maintenance Workflows | ||
|
|
||
| ### Weekly: Tool Freshness (`runner-tool-versions.yml`) | ||
| - Checks all pinned tools against upstream registries | ||
| - Opens a PR if any component has a newer version | ||
| - Does not auto-merge | ||
|
|
||
| ### Monthly: Dependency Bump (`UPDATE_PROCEDURE.md`) |
Normalize blank lines around headings and tables (MD022/MD058).
The current section/table layout violates markdownlint spacing rules and may fail doc lint checks.
📝 Suggested spacing fix
### Base Image
+
- **UBI 10** (`registry.access.redhat.com/ubi10/ubi@sha256:...`)
- Python 3.12 (system default), Node.js (AppStream), Go (go-toolset)
### Pinned Tools
+
| Tool | Dockerfile ARG | Purpose |
|------|---------------|---------|
| gh | `GH_VERSION` | GitHub CLI for repo operations |
...
| gemini-cli | `GEMINI_CLI_VERSION` | Google Gemini CLI |
### Key Dependencies
+
| Package | Constraint | Role |
|---------|-----------|------|
| claude-agent-sdk | `>=0.1.50` | Claude Code agent SDK |
...
### Weekly: Tool Freshness (`runner-tool-versions.yml`)
+
- Checks all pinned tools against upstream registries
...
### Monthly: Dependency Bump (`UPDATE_PROCEDURE.md`)
+
- Bumps all Python dependencies to latest stable🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 55-55: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 59-59: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 60-60: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
[warning] 68-68: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 69-69: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
[warning] 78-78: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 83-83: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.specify/specs/runner.md around lines 55 - 83, Normalize blank lines around
the Markdown headings and tables to satisfy MD022/MD058: ensure there is exactly
one blank line before and after each top-level heading (e.g., "Base Image",
"Maintenance Workflows") and before/after each table (the "Pinned Tools" and
"Key Dependencies" tables) and subheadings like "Weekly: Tool Freshness
(`runner-tool-versions.yml`)" and "Monthly: Dependency Bump
(`UPDATE_PROCEDURE.md`)"; update the runner.md content so headings and tables
have a single blank line separation from surrounding paragraphs/lists to pass
the markdownlint rules.
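The MD022 normalization the review asks for can be sketched as a small pass over the file's lines. This is a hedged heuristic, assuming only ATX (`#`-prefixed) headings, not a replacement for markdownlint's own fixer.

```python
# Sketch: ensure one blank line before and after each ATX heading (MD022).
def blank_pad_headings(lines: list[str]) -> list[str]:
    out: list[str] = []
    for i, line in enumerate(lines):
        is_heading = line.lstrip().startswith("#")
        if is_heading and out and out[-1].strip():
            out.append("")  # blank line above the heading
        out.append(line)
        if is_heading and i + 1 < len(lines) and lines[i + 1].strip():
            out.append("")  # blank line below the heading
    return out

print(blank_pad_headings(["### Base Image", "- **UBI 10**"]))
# → ['### Base Image', '', '- **UBI 10**']
```

The same idea extends to MD058 by treating a line starting with `|` as a table row and padding the first and last rows of each run.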
Summary
Combines PR #845 and PR #1045 into a single PR, with improvements from code review.
Runner / Dockerfile
- `ARG` versions: gh, glab, uv, pre-commit, gemini-cli
- `RUN` layer (single ARCH computation)
Dependencies
Frontend
- `advanced-agent-options` Unleash flag
Freshness workflow
- New workflow (`runner-tool-versions.yml`) that checks all runner components for updates
- `eval` usage
Version table
- `ARG GH_VERSION=2.74.0`
- `ARG GLAB_VERSION=1.52.0`
- `ARG UV_VERSION=0.7.8`
- `ARG PRE_COMMIT_VERSION=4.2.0`
- `ARG GEMINI_CLI_VERSION=0.1.17`
Test plan
- `python3 --version` shows 3.12, `node --version` shows v22
- `gh version`, `glab version`, `uv --version`, `gemini --version` all work inside the container
- `advanced-agent-options` flag is enabled
Supersedes #845 and #1045.
🤖 Generated with Claude Code
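The version table above is what the freshness workflow compares against upstream. The core of that comparison can be sketched as a semver tuple check; this is a hypothetical helper, not the actual `runner-tool-versions.yml` logic, and the `pins` dict is illustrative.

```python
# Hypothetical sketch: is a Dockerfile ARG pin behind the latest release?
def is_stale(pinned: str, latest: str) -> bool:
    """True if the upstream release is newer than the pinned version."""
    to_tuple = lambda v: tuple(int(p) for p in v.lstrip("v").split("."))
    return to_tuple(latest) > to_tuple(pinned)

pins = {"gh": "2.74.0", "glab": "1.52.0", "uv": "0.7.8"}
# In the real workflow, `latest` would come from the upstream release API,
# and a stale pin triggers a PR rather than an auto-merge.
print(is_stale(pins["gh"], "v2.75.0"))  # → True
```

Tuple comparison handles the `4.1.9` vs `4.2.0` case correctly, which naive string comparison would not.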