diff --git a/docs/c1-chart.md b/docs/c1-chart.md
new file mode 100644
index 0000000..b52f5ac
--- /dev/null
+++ b/docs/c1-chart.md
@@ -0,0 +1,46 @@
+```mermaid
+graph LR
+ subgraph Actors[" Actors "]
+ direction TB
+ SRE["SRE / DevOps\n[Person]"]
+ OnCall["On-Call Operator\n[Person · HITL]"]
+ end
+
+ subgraph External[" External Systems "]
+ direction TB
+ Prom["Prometheus\nAlertManager\n[External Monitor]"]
+ Platforms["Chat Platforms\nSlack · MS Teams · Google Chat\n[Inbound Event Sources]"]
+ end
+
+ subgraph Core[" Alert Routing Infrastructure [Software System] "]
+ Rail["Extension Tool · L0 Agent · Dapr\nDedup · Rate Limit · Route · Reply"]
+ end
+
+ subgraph Downstream[" Downstream "]
+ direction TB
+ L1["L1 Agents\n[Specialist Automation · A2A JSON-RPC]"]
+ StateStore["Dapr State Store\nRedis 7\n[Fingerprints · Rate Counters]"]
+ Broker["Dapr Pub/Sub\nRabbitMQ\n[alerts-inbound · alerts-outbound]"]
+ end
+
+ SRE -->|"configures alert rules"| Prom
+ Prom -->|"webhook · HTTPS"| Platforms
+ Platforms -->|"Event JSON · Sync · HTTPS"| Rail
+ Rail -->|"formatted reply · HTTPS"| Platforms
+ OnCall -->|"HITL approval"| Rail
+ Rail -->|"A2A JSON-RPC · HTTPS"| L1
+ Rail <-->|"[State] get/set/expire · RESP3"| StateStore
+ Rail <-->|"[Pub/Sub] AMQP 0-9-1"| Broker
+
+ classDef actor fill:#1a2a1a,stroke:#00E5A0,stroke-width:1.5px,color:#00E5A0
+ classDef external fill:#1a1a2a,stroke:#00C9FF,stroke-width:1.5px,color:#00C9FF
+ classDef core fill:#0d2a1a,stroke:#00E5A0,stroke-width:2.5px,color:#00E5A0
+ classDef infra fill:#2a1a00,stroke:#FF9900,stroke-width:1.5px,color:#FF9900
+ classDef agents fill:#1e1a2a,stroke:#C084FC,stroke-width:1.5px,color:#C084FC
+
+ class SRE,OnCall actor
+ class Prom,Platforms external
+ class Rail core
+ class StateStore,Broker infra
+ class L1 agents
+```
diff --git a/docs/c2-chart.md b/docs/c2-chart.md
new file mode 100644
index 0000000..a710c85
--- /dev/null
+++ b/docs/c2-chart.md
@@ -0,0 +1,83 @@
+```mermaid
+graph LR
+ %% ── External Systems
+    Prom["Prometheus\nAlertManager\n[External Monitor]"]
+    Platforms["Chat Platforms\nSlack · Teams · GChat\n[Inbound Sources]"]
+
+ %% ── System Boundary
+ subgraph Boundary["System Boundary — Alert Routing Infrastructure"]
+ direction TB
+
+ %% Extension Tool
+ subgraph ET["Extension Tool"]
+ direction TB
+
+      GW["Message Gateway\nPlatform Adapter"]
+      VP["Validation Pipeline\nSchema · Dedup · Rate Limiter"]
+      DaprET["Dapr Sidecar\nPub: alerts-inbound · Sub: alerts-outbound\nState: fingerprints · counters"]
+
+ GW --> VP --> DaprET
+ end
+
+ %% Dapr State Store
+ subgraph State["Dapr State Store · Redis 7"]
+      Redis[("Redis 7\nFingerprints · Rate Counters")]
+ end
+
+ %% Dapr Pub/Sub Broker
+ subgraph MsgBroker["Dapr Pub/Sub · RabbitMQ"]
+ direction TB
+ CH1["alerts-inbound"]
+ CH2["alerts-outbound"]
+ end
+
+ %% L0 Agent
+ subgraph L0["L0 Agent"]
+ direction TB
+      QM["Queue Manager\nAlert Consumer · Payload Parser"]
+      A2A["A2A Server\nauto → L1 dispatch\nhitl → HITL approval"]
+      RH["Response Handler\nFormatter · Router"]
+      DaprL0["Dapr Sidecar\nSub: alerts-inbound · Pub: alerts-outbound"]
+
+ DaprL0 --> QM --> A2A --> RH --> DaprL0
+ end
+
+    PlatOut["Outbound Platform APIs\nSlack Web API · Teams Graph API · GChat REST\n[Block Kit · Adaptive Card · Card JSON]"]
+ end
+
+ %% L1 Agents
+    L1["L1 Agents\n[A2A JSON-RPC · HTTPS]"]
+
+ %% ── Flows
+ Prom -->|"webhook · HTTPS"| Platforms
+ Platforms -->|"[Inbound] Event JSON · Sync"| GW
+ DaprET <-->|"[State] Redis RESP3 · get/set/expire"| Redis
+ DaprET -->|"[Pub/Sub] AMQP 0-9-1 publish · alerts-inbound"| CH1
+ CH1 -->|"[Pub/Sub] AMQP 0-9-1 deliver · alerts-inbound"| DaprL0
+ DaprL0 -.->|"[Pub/Sub] AMQP 0-9-1 publish · alerts-outbound"| CH2
+ CH2 -.->|"[Pub/Sub] AMQP 0-9-1 deliver · alerts-outbound"| DaprET
+ DaprET -.->|"[Outbound] ResponsePayload · HTTP callback"| GW
+ GW -.->|"[Outbound] formatted reply"| PlatOut
+ A2A <-.->|"A2A JSON-RPC · HTTPS · Async"| L1
+
+ %% ── Styling
+ classDef external fill:#1a1a2a,stroke:#00C9FF,stroke-width:1.5px,color:#00C9FF
+ classDef gateway fill:#0d1a2a,stroke:#00C9FF,stroke-width:1px,color:#00C9FF
+ classDef validate fill:#1a0d2a,stroke:#7B61FF,stroke-width:1px,color:#7B61FF
+ classDef dapr fill:#1e0d2a,stroke:#7B61FF,stroke-width:1.5px,color:#C084FC
+ classDef broker fill:#2a1a00,stroke:#FF9900,stroke-width:1px,color:#FF9900
+ classDef redis fill:#1a0d1a,stroke:#7B61FF,stroke-width:1.5px,color:#7B61FF
+ classDef l0core fill:#0d2a1a,stroke:#00E5A0,stroke-width:1px,color:#00E5A0
+ classDef outbound fill:#2a0d14,stroke:#FF4D6D,stroke-width:1px,color:#FF4D6D
+ classDef agents fill:#1e1a2a,stroke:#C084FC,stroke-width:1.5px,color:#C084FC
+
+ class Prom,Platforms external
+ class GW gateway
+ class VP validate
+ class DaprET,DaprL0 dapr
+ class CH1,CH2 broker
+ class Redis redis
+ class QM,A2A,RH l0core
+ class PlatOut outbound
+ class L1 agents
+```
diff --git a/docs/c3-l0-agent.md b/docs/c3-l0-agent.md
new file mode 100644
index 0000000..2dc39ff
--- /dev/null
+++ b/docs/c3-l0-agent.md
@@ -0,0 +1,53 @@
+```mermaid
+graph LR
+ subgraph L0[" L0 Agent — Component Detail "]
+ direction LR
+
+ DaprL0["Dapr Sidecar\n[Sub: alerts-inbound\nPub: alerts-outbound]"]
+
+ subgraph QM[" Queue Manager "]
+ direction TB
+ AC["Alert Consumer\n[Dapr /subscribe callback\n→ RawAlert]"]
+ PP["Payload Parser\n[Normalise · Enrich\nScore severity\n→ NormalisedAlert\n{routingHint: auto|hitl}]"]
+ AC --> PP
+ end
+
+ subgraph Core[" Core "]
+ A2A["A2A Server\n[Central core · A2A JSON-RPC · HTTPS\nauto → dispatches to L1 Agents\nhitl → HITL operator approval\n→ AgentResponse · HumanResponse]"]
+ end
+
+ subgraph RH[" Response Handler "]
+ direction TB
+ FM["Formatter\n[Slack → Block Kit\nTeams → Adaptive Card\nGChat → Card JSON]"]
+ RT["Router\n[Publishes to Dapr\n→ RoutedResponse]"]
+ FM --> RT
+ end
+
+ DaprL0 -->|"RawAlert"| AC
+ PP -->|"NormalisedAlert"| A2A
+ A2A -.->|"AgentResponse / HumanResponse"| FM
+ RT -.->|"RoutedResponse"| DaprL0
+ end
+
+ InboundBroker["Dapr Pub/Sub\n[alerts-inbound]"]
+ OutboundBroker["Dapr Pub/Sub\n[alerts-outbound]"]
+ L1["L1 Agents\n[Specialist Automation]"]
+
+ InboundBroker -->|"[Pub/Sub] AMQP 0-9-1 deliver · alerts-inbound"| DaprL0
+ DaprL0 -.->|"[Pub/Sub] AMQP 0-9-1 publish · alerts-outbound"| OutboundBroker
+ A2A <-.->|"A2A JSON-RPC · HTTPS · Async"| L1
+
+ classDef dapr fill:#1e0d2a,stroke:#C084FC,stroke-width:2px,color:#C084FC
+ classDef queuemgr fill:#0d1a2a,stroke:#00C9FF,stroke-width:1.5px,color:#00C9FF
+ classDef a2a fill:#0d2a1a,stroke:#00E5A0,stroke-width:2.5px,color:#00E5A0
+ classDef response fill:#2a0d14,stroke:#FF4D6D,stroke-width:1.5px,color:#FF4D6D
+ classDef broker fill:#2a1a00,stroke:#FF9900,stroke-width:1px,color:#FF9900
+ classDef agents fill:#1e1a2a,stroke:#C084FC,stroke-width:1.5px,color:#C084FC
+
+ class DaprL0 dapr
+ class AC,PP queuemgr
+ class A2A a2a
+ class FM,RT response
+ class InboundBroker,OutboundBroker broker
+ class L1 agents
+```
diff --git a/docs/c3-message-gateway.md b/docs/c3-message-gateway.md
new file mode 100644
index 0000000..e1782cb
--- /dev/null
+++ b/docs/c3-message-gateway.md
@@ -0,0 +1,42 @@
+```mermaid
+flowchart TB
+ subgraph Platforms
+ SL[Slack webhook]
+ TM[Teams webhook]
+ DC[Discord websocket]
+ end
+
+  subgraph Adapters["Adapters — BaseAdapter: verify() · normalize() · monitor() · send()"]
+
+ AD["SlackAdapter · TeamsAdapter · DiscordAdapter"]
+ end
+
+ subgraph NL["Normalization layer"]
+    IM["InboundMessage\nplatform-agnostic canonical schema"]
+ end
+
+ subgraph GW["Gateway — hub-and-spoke"]
+    SUP["_supervise()\nauto-restart with backoff"]
+    DISP["_dispatch()\nfan-out to handlers"]
+    ROUTE["send()\nroute OutboundMessage"]
+    HEALTH["health\nper-platform status"]
+ end
+
+ subgraph Core["Core / Dispatcher — main.py"]
+    DAPR["publish_to_dapr()\nDapr broker publish"]
+    REPLY["OutboundMessage\nplatform-agnostic reply"]
+ end
+
+ SL --> AD
+ TM --> AD
+ DC --> AD
+
+ AD --> IM
+
+ IM --> SUP
+ SUP --> DISP
+ DISP --> DAPR
+ DAPR --> REPLY
+ REPLY --> ROUTE
+ ROUTE --> AD
+```
diff --git a/helm-chart/templates/agent-deployment.yaml b/helm-chart/templates/agent-deployment.yaml
index 0343bb4..9f73ef9 100644
--- a/helm-chart/templates/agent-deployment.yaml
+++ b/helm-chart/templates/agent-deployment.yaml
@@ -22,6 +22,10 @@ spec:
metadata:
labels:
app: {{ printf "agent-%s" .name }}
+ {{- if .annotations }}
+ annotations:
+ {{- toYaml .annotations | nindent 8 }}
+ {{- end }}
spec:
containers:
- name: {{ printf "agent-%s" .name }}
diff --git a/helm-chart/values.yaml b/helm-chart/values.yaml
index ce15451..ef00990 100644
--- a/helm-chart/values.yaml
+++ b/helm-chart/values.yaml
@@ -31,8 +31,12 @@ pinecone:
agents:
- name: l0
enabled: true
- image: 01community/agent-l0:v1
+ image: 01community/agent-l0:v1.2
containerPort: 3000
+ annotations:
+ dapr.io/enabled: "true"
+ dapr.io/app-id: "l0-agent"
+ dapr.io/app-port: "3000"
env:
# Application settings
NODE_ENV: production
@@ -42,6 +46,11 @@ agents:
L2_AGENT_BASE_URL: http://agent-l2.01cloud.svc.cluster.local:10002
L1_AGENT_BASE_PORT: "10001"
L0_AGENT_PORT: "10000"
+ DAPR_PUBSUB_NAME: pubsub
+ DAPR_INBOUND_TOPIC_NAME: inbound-alerts
+ DAPR_OUTBOUND_TOPIC_NAME: outbound-alerts
+ DAPR_DECISION_TOPIC_NAME: user-decisions
+ GATEWAY_SHARED_SECRET: "dev-secret-do-not-use-in-production"
# Prometheus Alertmanager
PROMETHEUS_ALERT_URL: http://prometheus-kube-prometheus-alertmanager.monitoring.svc.cluster.local:9093/api/v2/alerts
# Runtime settings
@@ -95,7 +104,7 @@ agents:
# TRACELOOP_ENABLED: "false"
# TRACELOOP_BASE_URL: http://otel-collector-collector.opentelemetry.svc.cluster.local:4318
# STM (Short-Term Memory) settings
- STM_ENABLE_POSTGRES: "true"
+ STM_ENABLE_POSTGRES: "false"
STM_ENABLE_VECTOR: "false"
# Pinecone Vector Database
PINECONE_INDEX_NAME: level-1-agent
diff --git a/k8s-agent/level-0-agent/app/routes/dapr.$.tsx b/k8s-agent/level-0-agent/app/routes/dapr.$.tsx
new file mode 100644
index 0000000..cbf87b7
--- /dev/null
+++ b/k8s-agent/level-0-agent/app/routes/dapr.$.tsx
@@ -0,0 +1,330 @@
+import { type ActionFunctionArgs, type LoaderFunctionArgs } from "react-router";
+import crypto from "node:crypto";
+import { ClientFactory } from "@a2a-js/sdk/client";
+import { v4 as uuidv4 } from "uuid";
+
+const DAPR_HTTP_PORT = process.env.DAPR_HTTP_PORT || "3500";
+const DAPR_PUBSUB_NAME = process.env.DAPR_PUBSUB_NAME || "pubsub";
+const DAPR_INBOUND_TOPIC_NAME =
+ process.env.DAPR_INBOUND_TOPIC_NAME || "inbound-alerts";
+const DAPR_OUTBOUND_TOPIC_NAME =
+ process.env.DAPR_OUTBOUND_TOPIC_NAME ||
+ process.env.DAPR_OUTBOUND_TOPIC ||
+ "outbound-alerts";
+const DAPR_DECISION_TOPIC_NAME =
+ process.env.DAPR_DECISION_TOPIC_NAME || "user-decisions";
+const GATEWAY_SHARED_SECRET =
+ process.env.GATEWAY_SHARED_SECRET || "replace-with-a-shared-secret-at-least-32-chars";
+const AGENT_BASE_URL = process.env.AGENT_BASE_URL || "http://localhost:10001";
+
+type DecodedContextId = {
+ v: number;
+ p: string;
+ c: string;
+ t: string | null;
+ m: string;
+ u: string;
+ ts: string;
+};
+
+type DaprCloudEvent<T> = {
+ data?: T;
+};
+
+type InboundAlertPayload = {
+ text?: string | null;
+ context_id?: string;
+};
+
+type UserDecisionPayload = {
+ context_id?: string;
+ decisions?: unknown[];
+};
+
+type StreamPublishPayload = {
+ platform: string;
+ channel: string;
+ thread_id?: string;
+ text: string | null;
+ context_id: string;
+ interrupt_payload?: unknown;
+};
+
+function truncateContextId(contextId?: string | null) {
+ if (!contextId) {
+ return "";
+ }
+ return contextId.slice(0, 20);
+}
+
+function verifyAndParseContextId(
+ contextId: string,
+ secret: string,
+): DecodedContextId | null {
+ if (!contextId || !contextId.startsWith("cid_v1_")) {
+ return null;
+ }
+
+ try {
+ const [encodedPayload, encodedSignature] = contextId.slice(7).split(".");
+ if (!encodedPayload || !encodedSignature) {
+ return null;
+ }
+
+ const payloadBuffer = Buffer.from(encodedPayload, "base64url");
+ const signature = Buffer.from(encodedSignature, "base64url");
+ const expectedSignature = crypto
+ .createHmac("sha256", secret)
+ .update(payloadBuffer)
+ .digest();
+
+ if (
+ signature.length !== expectedSignature.length ||
+ !crypto.timingSafeEqual(signature, expectedSignature)
+ ) {
+ return null;
+ }
+
+ return JSON.parse(payloadBuffer.toString("utf8")) as DecodedContextId;
+ } catch (error) {
+ console.error("[Dapr Action] Failed to parse context_id", error);
+ return null;
+ }
+}
+
+async function publishToGateway(payload: StreamPublishPayload) {
+ const url = `http://localhost:${DAPR_HTTP_PORT}/v1.0/publish/${DAPR_PUBSUB_NAME}/${DAPR_OUTBOUND_TOPIC_NAME}`;
+
+ try {
+ const response = await fetch(url, {
+ method: "POST",
+ headers: { "Content-Type": "application/json" },
+ body: JSON.stringify(payload),
+ });
+
+ if (!response.ok) {
+ console.error("[Dapr Action] Gateway publish failed", {
+ status: response.status,
+ contextId: truncateContextId(payload.context_id),
+ });
+ }
+ } catch (error) {
+ console.error("[Dapr Action] Error publishing to Dapr", error);
+ }
+}
+
+function extractParts(event: any): any[] {
+ if (Array.isArray(event?.message?.parts)) {
+ return event.message.parts;
+ }
+ if (Array.isArray(event?.status?.message?.parts)) {
+ return event.status.message.parts;
+ }
+ return [];
+}
+
+function extractText(event: any): string | null {
+ const text = extractParts(event)
+ .filter((part) => part?.kind === "text" && typeof part.text === "string")
+ .map((part) => part.text)
+ .join("");
+
+ return text || null;
+}
+
+function extractInterruptPayload(event: any): unknown | null {
+ if (event?.kind === "interrupt" && event?.interrupt) {
+ return event.interrupt;
+ }
+
+ if (event?.kind === "status-update" && event?.status?.state === "input-required") {
+ const interruptPart = extractParts(event).find(
+ (part) => part?.kind === "data" && part?.data?.interrupt,
+ );
+ if (interruptPart) {
+ return interruptPart.data.interrupt;
+ }
+ }
+
+ return null;
+}
+
+function buildOutboundPayload(
+ event: any,
+ route: DecodedContextId,
+ contextId: string,
+): StreamPublishPayload | null {
+ const text = extractText(event);
+ const interruptPayload = extractInterruptPayload(event);
+
+ if (!text && !interruptPayload) {
+ return null;
+ }
+
+ return {
+ platform: route.p,
+ channel: route.c,
+ thread_id: route.t || undefined,
+ text: text ?? null,
+ context_id: contextId,
+ ...(interruptPayload ? { interrupt_payload: interruptPayload } : {}),
+ };
+}
+
+async function forwardStream(
+  stream: AsyncIterable<unknown>,
+ route: DecodedContextId,
+ contextId: string,
+) {
+ for await (const event of stream) {
+ const payload = buildOutboundPayload(event, route, contextId);
+ if (!payload) {
+ continue;
+ }
+ await publishToGateway(payload);
+ }
+}
+
+async function processInboundAlert(data: InboundAlertPayload) {
+ const contextId = data.context_id;
+ if (!contextId) {
+ console.error("[Dapr Action] Missing context_id on inbound alert");
+ return;
+ }
+
+ const route = verifyAndParseContextId(contextId, GATEWAY_SHARED_SECRET);
+ if (!route) {
+ console.error("[Dapr Action] Invalid context_id", truncateContextId(contextId));
+ return;
+ }
+
+ if (typeof data.text !== "string" || data.text.length === 0) {
+ console.error(
+ "[Dapr Action] Inbound alert missing text",
+ truncateContextId(contextId),
+ );
+ return;
+ }
+
+ try {
+ const factory = new ClientFactory();
+ const client = await factory.createFromUrl(AGENT_BASE_URL);
+ const stream = client.sendMessageStream({
+ message: {
+ messageId: uuidv4(),
+ contextId,
+ role: "user",
+ parts: [{ kind: "text", text: data.text }],
+ kind: "message",
+ },
+ });
+
+    await forwardStream(stream as AsyncIterable<unknown>, route, contextId);
+ } catch (error) {
+ console.error("[Dapr Action] Failed to process inbound alert", error);
+ }
+}
+
+async function processUserDecisions(data: UserDecisionPayload) {
+ const contextId = data.context_id;
+ if (!contextId) {
+ console.error("[Dapr Action] Missing context_id on user decision");
+ return;
+ }
+
+ const route = verifyAndParseContextId(contextId, GATEWAY_SHARED_SECRET);
+ if (!route) {
+ console.error("[Dapr Action] Invalid context_id", truncateContextId(contextId));
+ return;
+ }
+
+ if (!Array.isArray(data.decisions) || data.decisions.length === 0) {
+ console.error(
+ "[Dapr Action] User decision missing decisions array",
+ truncateContextId(contextId),
+ );
+ return;
+ }
+
+ try {
+ const factory = new ClientFactory();
+ const client = await factory.createFromUrl(AGENT_BASE_URL);
+ const stream = client.sendMessageStream({
+ message: {
+ messageId: uuidv4(),
+ contextId,
+ role: "user",
+ parts: [
+ {
+ kind: "data",
+ data: { decisions: data.decisions },
+ },
+ ],
+ kind: "message",
+ },
+ });
+
+    await forwardStream(stream as AsyncIterable<unknown>, route, contextId);
+ } catch (error) {
+ console.error("[Dapr Action] Failed to process user decision", error);
+ }
+}
+
+export async function loader({ request }: LoaderFunctionArgs) {
+ const url = new URL(request.url);
+ const path = url.pathname;
+
+ if (path.includes("subscribe")) {
+ return new Response(
+ JSON.stringify([
+ {
+ pubsubname: DAPR_PUBSUB_NAME,
+ topic: DAPR_INBOUND_TOPIC_NAME,
+ route: `/dapr/${DAPR_INBOUND_TOPIC_NAME}`,
+ },
+ {
+ pubsubname: DAPR_PUBSUB_NAME,
+ topic: DAPR_DECISION_TOPIC_NAME,
+ route: `/dapr/${DAPR_DECISION_TOPIC_NAME}`,
+ },
+ ]),
+ {
+ headers: { "Content-Type": "application/json" },
+ },
+ );
+ }
+
+ if (path.includes("config")) {
+ return new Response(JSON.stringify({}), {
+ headers: { "Content-Type": "application/json" },
+ });
+ }
+
+ return new Response("Not Found", { status: 404 });
+}
+
+export async function action({ request }: ActionFunctionArgs) {
+ const url = new URL(request.url);
+ const path = url.pathname.replace(/\/$/, "");
+
+  let cloudEvent: DaprCloudEvent<InboundAlertPayload | UserDecisionPayload>;
+ try {
+ cloudEvent = (await request.json()) as DaprCloudEvent<
+ InboundAlertPayload | UserDecisionPayload
+ >;
+ } catch {
+ return new Response("Invalid JSON", { status: 400 });
+ }
+
+ if (path === `/dapr/${DAPR_INBOUND_TOPIC_NAME}`) {
+ void processInboundAlert((cloudEvent.data || {}) as InboundAlertPayload);
+ return new Response("OK", { status: 200 });
+ }
+
+ if (path === `/dapr/${DAPR_DECISION_TOPIC_NAME}`) {
+ void processUserDecisions((cloudEvent.data || {}) as UserDecisionPayload);
+ return new Response("OK", { status: 200 });
+ }
+
+ return new Response("Not Found", { status: 404 });
+}
diff --git a/k8s-agent/message-gateway/.agent/commands/speckit.analyze.md b/k8s-agent/message-gateway/.agent/commands/speckit.analyze.md
new file mode 100644
index 0000000..98b04b0
--- /dev/null
+++ b/k8s-agent/message-gateway/.agent/commands/speckit.analyze.md
@@ -0,0 +1,184 @@
+---
+description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
+---
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Goal
+
+Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.
+
+## Operating Constraints
+
+**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
+
+**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.
+
+## Execution Steps
+
+### 1. Initialize Analysis Context
+
+Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
+
+- SPEC = FEATURE_DIR/spec.md
+- PLAN = FEATURE_DIR/plan.md
+- TASKS = FEATURE_DIR/tasks.md
+
+Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).
+For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+
+### 2. Load Artifacts (Progressive Disclosure)
+
+Load only the minimal necessary context from each artifact:
+
+**From spec.md:**
+
+- Overview/Context
+- Functional Requirements
+- Non-Functional Requirements
+- User Stories
+- Edge Cases (if present)
+
+**From plan.md:**
+
+- Architecture/stack choices
+- Data Model references
+- Phases
+- Technical constraints
+
+**From tasks.md:**
+
+- Task IDs
+- Descriptions
+- Phase grouping
+- Parallel markers [P]
+- Referenced file paths
+
+**From constitution:**
+
+- Load `.specify/memory/constitution.md` for principle validation
+
+### 3. Build Semantic Models
+
+Create internal representations (do not include raw artifacts in output):
+
+- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
+- **User story/action inventory**: Discrete user actions with acceptance criteria
+- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
+- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
+
+### 4. Detection Passes (Token-Efficient Analysis)
+
+Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
+
+#### A. Duplication Detection
+
+- Identify near-duplicate requirements
+- Mark lower-quality phrasing for consolidation
+
+#### B. Ambiguity Detection
+
+- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
+- Flag unresolved placeholders (TODO, TKTK, ???, ``, etc.)
+
+#### C. Underspecification
+
+- Requirements with verbs but missing object or measurable outcome
+- User stories missing acceptance criteria alignment
+- Tasks referencing files or components not defined in spec/plan
+
+#### D. Constitution Alignment
+
+- Any requirement or plan element conflicting with a MUST principle
+- Missing mandated sections or quality gates from constitution
+
+#### E. Coverage Gaps
+
+- Requirements with zero associated tasks
+- Tasks with no mapped requirement/story
+- Non-functional requirements not reflected in tasks (e.g., performance, security)
+
+#### F. Inconsistency
+
+- Terminology drift (same concept named differently across files)
+- Data entities referenced in plan but absent in spec (or vice versa)
+- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
+- Conflicting requirements (e.g., one requires Next.js while another specifies Vue)
+
+### 5. Severity Assignment
+
+Use this heuristic to prioritize findings:
+
+- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
+- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
+- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
+- **LOW**: Style/wording improvements, minor redundancy not affecting execution order
+
+### 6. Produce Compact Analysis Report
+
+Output a Markdown report (no file writes) with the following structure:
+
+## Specification Analysis Report
+
+| ID | Category | Severity | Location(s) | Summary | Recommendation |
+|----|----------|----------|-------------|---------|----------------|
+| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
+
+(Add one row per finding; generate stable IDs prefixed by category initial.)
+
+**Coverage Summary Table:**
+
+| Requirement Key | Has Task? | Task IDs | Notes |
+|-----------------|-----------|----------|-------|
+
+**Constitution Alignment Issues:** (if any)
+
+**Unmapped Tasks:** (if any)
+
+**Metrics:**
+
+- Total Requirements
+- Total Tasks
+- Coverage % (requirements with >=1 task)
+- Ambiguity Count
+- Duplication Count
+- Critical Issues Count
+
+### 7. Provide Next Actions
+
+At end of report, output a concise Next Actions block:
+
+- If CRITICAL issues exist: Recommend resolving before `/speckit.implement`
+- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
+- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"
+
+### 8. Offer Remediation
+
+Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
+
+## Operating Principles
+
+### Context Efficiency
+
+- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
+- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
+- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
+- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts
+
+### Analysis Guidelines
+
+- **NEVER modify files** (this is read-only analysis)
+- **NEVER hallucinate missing sections** (if absent, report them accurately)
+- **Prioritize constitution violations** (these are always CRITICAL)
+- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
+- **Report zero issues gracefully** (emit success report with coverage statistics)
+
+## Context
+
+$ARGUMENTS
diff --git a/k8s-agent/message-gateway/.agent/commands/speckit.checklist.md b/k8s-agent/message-gateway/.agent/commands/speckit.checklist.md
new file mode 100644
index 0000000..b7624e2
--- /dev/null
+++ b/k8s-agent/message-gateway/.agent/commands/speckit.checklist.md
@@ -0,0 +1,295 @@
+---
+description: Generate a custom checklist for the current feature based on user requirements.
+---
+
+## Checklist Purpose: "Unit Tests for English"
+
+**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.
+
+**NOT for verification/testing**:
+
+- ❌ NOT "Verify the button clicks correctly"
+- ❌ NOT "Test error handling works"
+- ❌ NOT "Confirm the API returns 200"
+- ❌ NOT checking if code/implementation matches the spec
+
+**FOR requirements quality validation**:
+
+- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
+- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
+- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
+- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
+- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases)
+
+**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Execution Steps
+
+1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
+ - All file paths must be absolute.
+   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+
+2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
+ - Be generated from the user's phrasing + extracted signals from spec/plan/tasks
+ - Only ask about information that materially changes checklist content
+ - Be skipped individually if already unambiguous in `$ARGUMENTS`
+ - Prefer precision over breadth
+
+ Generation algorithm:
+ 1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
+ 2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
+ 3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit.
+ 4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
+ 5. Formulate questions chosen from these archetypes:
+ - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
+ - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
+ - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
+ - Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
+ - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
+ - Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")
+
+ Question formatting rules:
+ - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
+ - Limit to A–E options maximum; omit table if a free-form answer is clearer
+ - Never ask the user to restate what they already said
+ - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."
+
+ Defaults when interaction impossible:
+ - Depth: Standard
+ - Audience: Reviewer (PR) if code-related; Author otherwise
+ - Focus: Top 2 relevance clusters
+
+ Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow‑ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if user explicitly declines more.
+
+3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
+ - Derive checklist theme (e.g., security, review, deploy, ux)
+ - Consolidate explicit must-have items mentioned by user
+ - Map focus selections to category scaffolding
+ - Infer any missing context from spec/plan/tasks (do NOT hallucinate)
+
+4. **Load feature context**: Read from FEATURE_DIR:
+ - spec.md: Feature requirements and scope
+ - plan.md (if exists): Technical details, dependencies
+ - tasks.md (if exists): Implementation tasks
+
+ **Context Loading Strategy**:
+ - Load only necessary portions relevant to active focus areas (avoid full-file dumping)
+ - Prefer summarizing long sections into concise scenario/requirement bullets
+ - Use progressive disclosure: add follow-on retrieval only if gaps detected
+ - If source docs are large, generate interim summary items instead of embedding raw text
+
+5. **Generate checklist** - Create "Unit Tests for Requirements":
+ - Create `FEATURE_DIR/checklists/` directory if it doesn't exist
+ - Generate unique checklist filename:
+ - Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
+ - Format: `[domain].md`
+ - File handling behavior:
+ - If file does NOT exist: Create new file and number items starting from CHK001
+ - If file exists: Append new items to existing file, continuing from the last CHK ID (e.g., if last item is CHK015, start new items at CHK016)
+ - Never delete or replace existing checklist content - always preserve and append
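
   The ID-continuation rule above can be sketched as follows (illustrative Python, not part of the command itself; `next_chk_id` is a hypothetical helper name):

   ```python
import re
from pathlib import Path

def next_chk_id(path: Path) -> int:
    """Return the number for the next checklist item: 1 for a new file,
    otherwise one past the highest CHK ID already present."""
    if not path.exists():
        return 1
    ids = [int(m.group(1)) for m in re.finditer(r"\bCHK(\d{3,})\b", path.read_text())]
    return max(ids) + 1 if ids else 1
   ```

   So a file whose last item is CHK015 yields CHK016 for the first appended item, and a missing file starts at CHK001.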
+
+ **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
+ Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
+ - **Completeness**: Are all necessary requirements present?
+ - **Clarity**: Are requirements unambiguous and specific?
+ - **Consistency**: Do requirements align with each other?
+ - **Measurability**: Can requirements be objectively verified?
+ - **Coverage**: Are all scenarios/edge cases addressed?
+
+ **Category Structure** - Group items by requirement quality dimensions:
+ - **Requirement Completeness** (Are all necessary requirements documented?)
+ - **Requirement Clarity** (Are requirements specific and unambiguous?)
+ - **Requirement Consistency** (Do requirements align without conflicts?)
+ - **Acceptance Criteria Quality** (Are success criteria measurable?)
+ - **Scenario Coverage** (Are all flows/cases addressed?)
+ - **Edge Case Coverage** (Are boundary conditions defined?)
+ - **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
+ - **Dependencies & Assumptions** (Are they documented and validated?)
+ - **Ambiguities & Conflicts** (What needs clarification?)
+
+ **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:
+
+ ❌ **WRONG** (Testing implementation):
+ - "Verify landing page displays 3 episode cards"
+ - "Test hover states work on desktop"
+ - "Confirm logo click navigates home"
+
+ ✅ **CORRECT** (Testing requirements quality):
+ - "Are the exact number and layout of featured episodes specified?" [Completeness]
+ - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
+ - "Are hover state requirements consistent across all interactive elements?" [Consistency]
+ - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
+ - "Is the fallback behavior specified when logo image fails to load?" [Edge Cases]
+ - "Are loading states defined for asynchronous episode data?" [Completeness]
+ - "Does the spec define visual hierarchy for competing UI elements?" [Clarity]
+
+ **ITEM STRUCTURE**:
+ Each item should follow this pattern:
+ - Question format asking about requirement quality
+ - Focus on what's WRITTEN (or not written) in the spec/plan
+ - Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
+ - Reference spec section `[Spec §X.Y]` when checking existing requirements
+ - Use `[Gap]` marker when checking for missing requirements
+
+ **EXAMPLES BY QUALITY DIMENSION**:
+
+ Completeness:
+ - "Are error handling requirements defined for all API failure modes? [Gap]"
+ - "Are accessibility requirements specified for all interactive elements? [Completeness]"
+ - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"
+
+ Clarity:
+ - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
+ - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
+ - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"
+
+ Consistency:
+ - "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
+ - "Are card component requirements consistent between landing and detail pages? [Consistency]"
+
+ Coverage:
+ - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
+ - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
+ - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"
+
+ Measurability:
+ - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
+ - "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"
+
+ **Scenario Classification & Coverage** (Requirements Quality Focus):
+ - Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios
+ - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
+ - If scenario class missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
+ - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"
+
+ **Traceability Requirements**:
+ - MINIMUM: ≥80% of items MUST include at least one traceability reference
+ - Each item should reference: spec section `[Spec §X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
+   - If no ID system exists in the spec, add the item: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
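
   The ≥80% minimum can be checked with a sketch like this (illustrative Python, not part of the command; names are hypothetical):

   ```python
import re

# A traceability reference is either a spec-section citation ("Spec §...")
# or one of the quality markers inside brackets.
TRACE = re.compile(r"Spec §|\[[^\]]*\b(?:Gap|Ambiguity|Conflict|Assumption)\b")

def traceability_ratio(items: list[str]) -> float:
    """Fraction of checklist items carrying at least one traceability reference."""
    if not items:
        return 1.0
    return sum(1 for item in items if TRACE.search(item)) / len(items)
   ```

   An item tagged only `[Consistency]` would not count toward the minimum; one tagged `[Consistency, Spec §FR-3]` would.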
+
+ **Surface & Resolve Issues** (Requirements Quality Problems):
+ Ask questions about the requirements themselves:
+ - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
+ - Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
+ - Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]"
+ - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
+ - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"
+
+ **Content Consolidation**:
+ - Soft cap: If raw candidate items > 40, prioritize by risk/impact
+ - Merge near-duplicates checking the same requirement aspect
+ - If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"
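
   The low-impact merge rule can be sketched as (illustrative Python, not part of the command; `consolidate_edge_cases` is a hypothetical name):

   ```python
def consolidate_edge_cases(edge_items: list[str], soft_cap: int = 5) -> list[str]:
    """Collapse an overlong list of low-impact edge-case topics into a
    single combined coverage question; leave short lists untouched."""
    if len(edge_items) <= soft_cap:
        return list(edge_items)
    merged = ", ".join(edge_items)
    return [f"Are edge cases {merged} addressed in requirements? [Coverage]"]
   ```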
+
+ **🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
+ - ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
+ - ❌ References to code execution, user actions, system behavior
+ - ❌ "Displays correctly", "works properly", "functions as expected"
+ - ❌ "Click", "navigate", "render", "load", "execute"
+ - ❌ Test cases, test plans, QA procedures
+ - ❌ Implementation details (frameworks, APIs, algorithms)
+
+ **✅ REQUIRED PATTERNS** - These test requirements quality:
+ - ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
+ - ✅ "Is [vague term] quantified/clarified with specific criteria?"
+ - ✅ "Are requirements consistent between [section A] and [section B]?"
+ - ✅ "Can [requirement] be objectively measured/verified?"
+ - ✅ "Are [edge cases/scenarios] addressed in requirements?"
+ - ✅ "Does the spec define [missing aspect]?"
+
+6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If the template is unavailable, fall back to: an H1 title, purpose/created meta lines, and `##` category sections containing `- [ ] CHK### ` lines with globally incrementing IDs starting at CHK001.
+
+7. **Report**: Output the full path to the checklist file, the item count, and whether the run created a new file or appended to an existing one. Summarize:
+ - Focus areas selected
+ - Depth level
+ - Actor/timing
+ - Any explicit user-specified must-have items incorporated
+
+**Important**: Each `/speckit.checklist` command invocation uses a short, descriptive checklist filename and either creates a new file or appends to an existing one. This allows:
+
+- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
+- Simple, memorable filenames that indicate checklist purpose
+- Easy identification and navigation in the `checklists/` folder
+
+To avoid clutter, use descriptive types and clean up obsolete checklists when done.
+
+## Example Checklist Types & Sample Items
+
+**UX Requirements Quality:** `ux.md`
+
+Sample items (testing the requirements, NOT the implementation):
+
+- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
+- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
+- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
+- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
+- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
+- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"
+
+**API Requirements Quality:** `api.md`
+
+Sample items:
+
+- "Are error response formats specified for all failure scenarios? [Completeness]"
+- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
+- "Are authentication requirements consistent across all endpoints? [Consistency]"
+- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
+- "Is versioning strategy documented in requirements? [Gap]"
+
+**Performance Requirements Quality:** `performance.md`
+
+Sample items:
+
+- "Are performance requirements quantified with specific metrics? [Clarity]"
+- "Are performance targets defined for all critical user journeys? [Coverage]"
+- "Are performance requirements under different load conditions specified? [Completeness]"
+- "Can performance requirements be objectively measured? [Measurability]"
+- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"
+
+**Security Requirements Quality:** `security.md`
+
+Sample items:
+
+- "Are authentication requirements specified for all protected resources? [Coverage]"
+- "Are data protection requirements defined for sensitive information? [Completeness]"
+- "Is the threat model documented and requirements aligned to it? [Traceability]"
+- "Are security requirements consistent with compliance obligations? [Consistency]"
+- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"
+
+## Anti-Examples: What NOT To Do
+
+**❌ WRONG - These test implementation, not requirements:**
+
+```markdown
+- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
+- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
+- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
+- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
+```
+
+**✅ CORRECT - These test requirements quality:**
+
+```markdown
+- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
+- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
+- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
+- [ ] CHK004 - Are the selection criteria for related episodes documented? [Gap, Spec §FR-005]
+- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
+- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
+```
+
+**Key Differences:**
+
+- Wrong: Tests if the system works correctly
+- Correct: Tests if the requirements are written correctly
+- Wrong: Verification of behavior
+- Correct: Validation of requirement quality
+- Wrong: "Does it do X?"
+- Correct: "Is X clearly specified?"
diff --git a/k8s-agent/message-gateway/.agent/commands/speckit.clarify.md b/k8s-agent/message-gateway/.agent/commands/speckit.clarify.md
new file mode 100644
index 0000000..f2a9696
--- /dev/null
+++ b/k8s-agent/message-gateway/.agent/commands/speckit.clarify.md
@@ -0,0 +1,181 @@
+---
+description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
+handoffs:
+ - label: Build Technical Plan
+ agent: speckit.plan
+ prompt: Create a plan for the spec. I am building with...
+---
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Outline
+
+Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
+
+Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
+
+Execution steps:
+
+1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode; the PowerShell variant uses `-Json -PathsOnly`). Parse the minimal JSON payload fields:
+ - `FEATURE_DIR`
+ - `FEATURE_SPEC`
+ - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
+ - If JSON parsing fails, abort and instruct user to re-run `/speckit.specify` or verify feature branch environment.
+   - For arguments containing single quotes (e.g., "I'm Groot"), use escape syntax: 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
+
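   The parse-and-abort logic can be sketched as (illustrative Python, not part of the command; `parse_paths` is a hypothetical helper):

   ```python
import json

def parse_paths(stdout: str):
    """Parse FEATURE_DIR and FEATURE_SPEC from the script's JSON output.

    Returns None when parsing fails, signalling the caller to abort and
    instruct the user to re-run /speckit.specify."""
    try:
        payload = json.loads(stdout)
        return payload["FEATURE_DIR"], payload["FEATURE_SPEC"]
    except (json.JSONDecodeError, KeyError):
        return None
   ```

   A `None` result corresponds to the abort case above: stop and surface the re-run instruction rather than guessing paths.
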
+2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
+
+ Functional Scope & Behavior:
+ - Core user goals & success criteria
+ - Explicit out-of-scope declarations
+ - User roles / personas differentiation
+
+ Domain & Data Model:
+ - Entities, attributes, relationships
+ - Identity & uniqueness rules
+ - Lifecycle/state transitions
+ - Data volume / scale assumptions
+
+ Interaction & UX Flow:
+ - Critical user journeys / sequences
+ - Error/empty/loading states
+ - Accessibility or localization notes
+
+ Non-Functional Quality Attributes:
+ - Performance (latency, throughput targets)
+ - Scalability (horizontal/vertical, limits)
+ - Reliability & availability (uptime, recovery expectations)
+ - Observability (logging, metrics, tracing signals)
+ - Security & privacy (authN/Z, data protection, threat assumptions)
+ - Compliance / regulatory constraints (if any)
+
+ Integration & External Dependencies:
+ - External services/APIs and failure modes
+ - Data import/export formats
+ - Protocol/versioning assumptions
+
+ Edge Cases & Failure Handling:
+ - Negative scenarios
+ - Rate limiting / throttling
+ - Conflict resolution (e.g., concurrent edits)
+
+ Constraints & Tradeoffs:
+ - Technical constraints (language, storage, hosting)
+ - Explicit tradeoffs or rejected alternatives
+
+ Terminology & Consistency:
+ - Canonical glossary terms
+ - Avoided synonyms / deprecated terms
+
+ Completion Signals:
+ - Acceptance criteria testability
+ - Measurable Definition of Done style indicators
+
+ Misc / Placeholders:
+ - TODO markers / unresolved decisions
+ - Ambiguous adjectives ("robust", "intuitive") lacking quantification
+
+ For each category with Partial or Missing status, add a candidate question opportunity unless:
+ - Clarification would not materially change implementation or validation strategy
+ - Information is better deferred to planning phase (note internally)
+
+3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
+ - Maximum of 5 total questions across the whole session.
+ - Each question must be answerable with EITHER:
+ - A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
+ - A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
+ - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
+ - Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
+ - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
+ - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
+ - If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
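
   The (Impact * Uncertainty) cutoff can be sketched as (illustrative Python, not part of the command; the `impact`/`uncertainty` fields are hypothetical internal scores):

   ```python
def select_questions(candidates: list[dict], limit: int = 5) -> list[dict]:
    """Keep the top `limit` candidates ranked by Impact * Uncertainty.

    Each candidate is assumed to carry numeric `impact` and `uncertainty`
    scores (e.g., 1-5) assigned during the coverage scan."""
    ranked = sorted(
        candidates,
        key=lambda c: c["impact"] * c["uncertainty"],
        reverse=True,
    )
    return ranked[:limit]
   ```

   A high-impact, highly uncertain area (e.g., security posture) therefore outranks several low-impact stylistic questions.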
+
+4. Sequential questioning loop (interactive):
+ - Present EXACTLY ONE question at a time.
+ - For multiple‑choice questions:
+ - **Analyze all options** and determine the **most suitable option** based on:
+ - Best practices for the project type
+ - Common patterns in similar implementations
+ - Risk reduction (security, performance, maintainability)
+ - Alignment with any explicit project goals or constraints visible in the spec
+ - Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
+     - Format as: `**Recommended:** Option [X] - [brief justification]`
+ - Then render all options as a Markdown table:
+
+ | Option | Description |
+ |--------|-------------|
+ | A |