Helixent is a blue rabbit that writes code. It includes an Agent Loop, a Coding Agent, and a nice CLI.
- **Model Foundation**
  - A stable core `Model` abstraction plus provider-facing contracts, designed to keep model integrations clean and reusable.
  - Multiple models are supported.
- **Agent Loop (Middleware-Ready)**
  - A reusable ReAct-style agent loop.
  - First-class middleware support for extending behavior (state, tool orchestration, skills, etc.).
  - Human-in-the-loop support for approval of tool calls.
  - See Middleware.
- **Skills Support**
  - The standard agent skill format is supported.
  - Skills are discovered and loaded from:
    - `~/.agents/skills`
    - `~/.helixent/skills`
    - `${current_project}/.agents/skills`
    - `${current_project}/.helixent/skills`
  - Duplicate skill names in different folders are allowed.
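For illustration, the discovery order above can be sketched as a small path-resolution helper. This is not Helixent's actual implementation; `projectRoot` stands in for `${current_project}`:

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Returns the skill search paths in the discovery order listed above:
// user-level directories first, then project-level directories.
function skillSearchPaths(projectRoot: string): string[] {
  return [
    join(homedir(), ".agents", "skills"),
    join(homedir(), ".helixent", "skills"),
    join(projectRoot, ".agents", "skills"),
    join(projectRoot, ".helixent", "skills"),
  ];
}
```

Because duplicate skill names are allowed across folders, a loader walking these paths in order can keep every match rather than deduplicating.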
- **Long-term memory**
  - Project root `AGENTS.md` support: if an `AGENTS.md` exists at the repository root, it is automatically picked up as project guidance.
- **Coding Agent**
  - A coding-focused agent layer with practical tools (e.g. `bash`, `read_file`, `write_file`, `str_replace`, `list_files`, `glob_search`, `grep_search`, `apply_patch`, `file_info`, `mkdir`, `move_path`, etc.) for developer workflows.
  - Todo-list-based plan mode is supported.
- **CLI**
  - A CLI (with TUI support) for running agents interactively and iterating quickly.
Helixent is available on npm, so you can install it globally, or run it via npx without installing.

Install globally and run:

```shell
npm install -g helixent@latest
cd path/to/your/project
helixent
helixent --help
```

Or run via npx:

```shell
cd path/to/your/project
npx helixent@latest
npx helixent --help
```

Helixent stores your CLI configuration in:
```
~/.helixent/config.yaml
```
List, add, or remove configured models:

```shell
helixent config model list
helixent config model add
helixent config model remove <model_name>
```

Or select from the list of configured models:

```shell
helixent config model remove
```

Set the default model by name:

```shell
helixent config model set-default <model_name>
```

Or select from the list of configured models:

```shell
helixent config model set-default
```

This section shows how to build Helixent from source and link the helixent CLI into your global PATH on macOS.
```shell
bun install
```

All pushes and pull requests run `bun run check` in GitHub Actions. Local commits are also blocked by the pre-commit hook until the same check passes.
Run in development:

```shell
bun run dev
```

Build a standalone binary:

```shell
bun run build:bin
```

After the build completes, you should have:
```
dist/bin/helixent
```
Make sure your changes pass all the linting, type checking, and tests by running:
```shell
bun run check
```

Or run tests only:

```shell
bun run test
```

This check also runs automatically in the pre-commit hook. It makes committing a little slower, but we think it's worth it: in an AI-dominated GitHub universe, we should be able to handle the last mile of code quality.
Helixent is organized into three layers, plus a community area for third-party integrations.
```
src/
├── foundation/   # Layer 1 – Core primitives
├── agent/        # Layer 2 – Agent loop
├── coding/       # Layer 3 – Coding agent (domain-specific)
└── community/    # Third-party integrations (e.g. OpenAI)
```
Core primitives that everything else builds on:
- Model — A unified abstraction over LLM providers. Define a model once, swap providers without changing agent code.
- Message — A single transcript type that flows end-to-end through the system — the single source of truth for the conversation.
- Tool — Tool definitions and execution plumbing (the "actions" an agent can invoke).
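To make the `Message` primitive concrete, here is an illustrative sketch of its shape, inferred from the usage example later in this document. Field names beyond the ones that appear there are assumptions, not Helixent's actual type definitions:

```typescript
// Content blocks of the kinds the usage example handles:
// plain text, model thinking, and tool invocations.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string }
  | { type: "tool_use"; name: string; input: Record<string, unknown> };

// A single transcript message: one type flowing end-to-end.
type Message = {
  role: "user" | "assistant";
  content: ContentBlock[];
};

const userMessage: Message = {
  role: "user",
  content: [{ type: "text", text: "Create a hello world web server." }],
};
```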
A reusable ReAct-style agent loop:
- Maintains state over a conversation transcript.
- Orchestrates "think → act → observe" steps in a loop.
- Invokes tool calls in parallel and feeds observations back into the next reasoning step.
- Supports middleware for extending behavior (see below).
This layer depends only on Foundation and remains generic — not tied to any specific domain.
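The loop's control flow can be sketched in a few lines. The types and function names below are simplified stand-ins for illustration, not Helixent's API:

```typescript
type ToolCall = { name: string; input: unknown };
type ModelTurn = { text: string; toolCalls: ToolCall[] };

// A minimal "think → act → observe" loop: invoke the model, run any
// requested tool calls in parallel, feed the observations back, and
// stop when the model produces no tool calls.
async function runLoop(
  think: (transcript: string[]) => Promise<ModelTurn>,
  act: (call: ToolCall) => Promise<string>,
  userMessage: string,
): Promise<string> {
  const transcript = [userMessage];
  while (true) {
    const turn = await think(transcript);          // think
    transcript.push(turn.text);
    if (turn.toolCalls.length === 0) return turn.text; // no tool calls: stop
    const observations = await Promise.all(turn.toolCalls.map(act)); // act
    transcript.push(...observations);              // observe
  }
}
```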
A domain-specific agent built on top of the generic agent loop, pre-configured with coding-oriented tools (`read_file`, `write_file`, `str_replace`, `bash`, `list_files`, `glob_search`, `grep_search`, `apply_patch`, `file_info`, `mkdir`, `move_path`, etc.) and the skills middleware.
Optional, decoupled adapters that implement Foundation interfaces for specific providers:
- `community/openai` — `OpenAIModelProvider` backed by the `openai` SDK, compatible with any OpenAI-compatible endpoint.
Here is a complete example that creates a coding agent using an OpenAI-compatible provider:
```typescript
import { createCodingAgent } from "helixent/coding";
import { OpenAIModelProvider } from "helixent/community/openai";
import { Model } from "helixent/foundation";

// 1. Set up a model provider (any OpenAI-compatible endpoint works)
const provider = new OpenAIModelProvider({
  baseURL: "https://api.openai.com/v1",
  apiKey: process.env.OPENAI_API_KEY,
});

// 2. Create a model instance with your preferred options
const model = new Model("gpt-4o", provider, {
  max_tokens: 16 * 1024,
  thinking: { type: "enabled" },
});

// 3. Create the agent — tools and skills are wired up automatically
const agent = await createCodingAgent({ model });

// 4. Stream the agent's response
const stream = await agent.stream({
  role: "user",
  content: [{ type: "text", text: "Create a hello world web server in the current directory." }],
});

for await (const message of stream) {
  for (const content of message.content) {
    if (content.type === "thinking" && content.thinking) {
      console.info("💡", content.thinking);
    } else if (content.type === "text" && content.text) {
      console.info(content.text);
    } else if (content.type === "tool_use") {
      console.info("🔧", content.name, content.input.description ?? "");
    }
  }
}
```

Helixent provides a middleware system that lets you observe and mutate the agent's behavior at every stage of the loop. Middleware hooks are invoked sequentially in array order.
| Hook | When it runs |
|---|---|
| `beforeAgentRun` | Once after the user message is appended, before the first step |
| `afterAgentRun` | Once when the agent is about to stop (no tool calls) |
| `beforeAgentStep` | At the start of each step, before the model is invoked |
| `afterAgentStep` | At the end of each step, after all tool calls complete |
| `beforeModel` | Before the model context is sent to the provider |
| `afterModel` | After the model response is received |
| `beforeToolUse` | Immediately before a tool is invoked |
| `afterToolUse` | Immediately after a tool invocation resolves |
Each hook receives the current context and can return a partial update to merge back in, or void to leave it unchanged.
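As a sketch of what a middleware might look like, here is a hypothetical logging middleware built against the hook table above. The `Context` type and the exact middleware object shape are assumptions for illustration, not Helixent's actual interfaces:

```typescript
// Hypothetical context shape: just enough fields for this example.
type Context = { messages: unknown[]; toolName?: string };

const loggingMiddleware = {
  // Runs immediately before a tool is invoked. Returning a partial
  // update would merge it back into the context; returning nothing
  // leaves the context unchanged.
  beforeToolUse(ctx: Context): Partial<Context> | void {
    console.info("about to invoke tool:", ctx.toolName);
  },
  // Runs immediately after a tool invocation resolves.
  afterToolUse(ctx: Context): void {
    console.info("tool resolved:", ctx.toolName);
  },
};
```

Because hooks run sequentially in array order, a middleware like this placed first in the array observes every tool call before any later middleware can mutate the context.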
Agent loops are inherently asynchronous — the model thinks, tools execute, results stream back, often in parallel. JavaScript/TypeScript has native async/await baked into the language and runtime, making concurrent orchestration straightforward without the callback gymnastics or asyncio boilerplate you'd face in Python.
Among JS runtimes, we chose Bun specifically because:
- Same runtime as Claude Code — Bun powers Claude Code and a growing number of TypeScript-first tools. It's built for speed, and a compiled build is a single native executable.
- Performance — HTTP, filesystem I/O, and cold starts are all noticeably faster than Node's, which adds up when an agent loop issues dozens of tool calls per run.
- Standalone executables — `bun build --compile` outputs one self-contained binary. Shipping a CLI is as simple as handing users a single file, with no separate runtime install.
- Batteries included — Test runner, bundler, and TypeScript support ship with Bun, so there's no separate toolchain to wire up.
- Sub-agent — Spawn child agents from within a run to handle subtasks independently, each with its own context and tool set.
- Agent Team — Multi-agent collaboration where agents can coordinate, delegate, and share results to tackle complex problems together.
- Print Mode — A Claude Code-style rendering mode that streams the agent's thinking, tool calls, and outputs in a rich, human-friendly terminal UI.
- Sessioning — A local, file-based session store for the agent's context and history.
