A demo application built on the Wippy framework that showcases the
butschster/telegram package. It implements an LLM-powered Telegram bot
that routes user messages to specialized child processes — chat sessions, one-shot tasks, and scheduled reminders — with
a real-time web dashboard for monitoring.
```
Telegram User
        │
        ▼
┌─────────────────────────────────┐
│  Telegram Webhook (POST)        │ ← validates secret, parses update
│  /telegram/webhook              │
└───────────┬─────────────────────┘
            ▼
┌─────────────────────────────────┐
│  Text Handler (entry.lua)       │ ← spawns per-user router process
└───────────┬─────────────────────┘
            ▼
┌─────────────────────────────────┐
│  Router Process (router.lua)    │ ← LLM classifies message intent
│  One per Telegram user          │
└───────┬───────┬───────┬─────────┘
        ▼       ▼       ▼
   ┌───────┐ ┌─────┐ ┌────────┐
   │ Chat  │ │Task │ │Reminder│ ← child processes
   └───────┘ └─────┘ └────────┘
```
The router uses an LLM to classify each incoming message and decide whether to:
- Reply directly to the user
- Propose spawning a new child process (chat, task, or reminder)
- Route a message to an existing child process
- Stop a running child process
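The concrete decision format lives in `src/router/router.lua`; as a purely hypothetical sketch (field names are illustrative, not taken from the source), a structured-output decision from the classifier could look like:

```json
{
  "action": "spawn_child",
  "template": "reminder",
  "reason": "User asked to be reminded about a meeting",
  "reply": "Sure, I'll set up a reminder for you."
}
```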
All session events are published to an event bus and consumed by a dashboard process, which broadcasts them over WebSocket to a browser-based monitoring UI.
- Wippy Runtime installed
- A Telegram bot token (from @BotFather)
- A publicly accessible URL for the webhook (e.g., via ngrok, Cloudflare Tunnel, or a public server)
- An OpenAI-compatible LLM API key
```shell
wippy init
wippy install
```

Create a `.env` file in the project root:

```shell
cp .env.example .env
```

Or create it manually:
```env
# ── Telegram ───────────────────────────────────────────
TELEGRAM_BOT_TOKEN=<your-bot-token>
TELEGRAM_WEBHOOK_URL=<your-public-url>/telegram/webhook
TELEGRAM_WEBHOOK_SECRET=<any-random-string>

# ── LLM (OpenAI-compatible API) ───────────────────────
OPENAI_API_KEY=<your-api-key>
OPENAI_BASE_URL=<your-api-base-url>
```
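`TELEGRAM_WEBHOOK_SECRET` can be any random string; one convenient way to generate one (an illustration, not a requirement of the app) is with `openssl`:

```shell
# Print a 64-character hex string suitable for use as the webhook secret
openssl rand -hex 32
```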
| Variable | Required | Description |
|---|---|---|
| `TELEGRAM_BOT_TOKEN` | Yes | Bot token obtained from @BotFather |
| `TELEGRAM_WEBHOOK_URL` | Yes | Full public URL where Telegram will send updates. Must end with `/telegram/webhook` |
| `TELEGRAM_WEBHOOK_SECRET` | No | Secret token for webhook request validation. Recommended for production |
| `OPENAI_API_KEY` | Yes | API key for your OpenAI-compatible LLM provider |
| `OPENAI_BASE_URL` | Yes | Base URL of the LLM API (e.g., `https://api.openai.com/v1` or any compatible endpoint) |
The app reads these from a `.env` file via the `env.storage.file` entry. The file is auto-created on first run if it doesn't exist and is set to mode `0600` for security. The `.env` file is git-ignored.
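You can confirm the resulting permissions after the first run; this check assumes a Linux system with GNU coreutils:

```shell
# Show the octal permission bits of the .env file; 600 means owner read/write only
stat -c '%a' .env
```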
```shell
wippy run register-webhook
```

This verifies your bot token (via `getMe`) and registers the webhook URL with Telegram.

To remove the webhook later:

```shell
wippy run delete-webhook
```

Then start the application:

```shell
wippy run -c
```

The HTTP server starts on port 8080 with the following endpoints:
| Endpoint | Description |
|---|---|
| `POST /telegram/webhook` | Telegram webhook receiver |
| `GET /dashboard` | Web monitoring dashboard (SPA) |
| `GET /api/sessions` | REST API — list active sessions |
| `GET /api/sessions/{chat_id}` | REST API — session detail |
| `WS /ws/dashboard` | WebSocket — real-time session updates |
Send a message to your bot in Telegram. The router will classify it and respond or propose spawning a child process.
The default configuration in src/_llm.yaml registers a model via the OpenAI-compatible provider. You can change the
model name and provider settings there:
```yaml
# src/_llm.yaml
entries:
  - name: glm
    kind: registry.entry
    meta:
      type: llm.model
      name: glm-4.7
      title: GLM 4.7
      class: [ dev, chat, smart, fast ]
      capabilities: [ generate, tool_use, structured_output ]
      priority: 50
      providers:
        - id: wippy.llm.openai:provider
          provider_model: "glm-4.7"
```

To use a different model (e.g., GPT-4o), change `name`, `title`, and `provider_model` accordingly.
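For example, pointing the same entry at GPT-4o might look like the following sketch (keep whatever surrounding structure your `src/_llm.yaml` actually uses; only the three fields named above change):

```yaml
entries:
  - name: gpt4o
    kind: registry.entry
    meta:
      type: llm.model
      name: gpt-4o
      title: GPT-4o
      class: [ dev, chat, smart, fast ]
      capabilities: [ generate, tool_use, structured_output ]
      priority: 50
      providers:
        - id: wippy.llm.openai:provider
          provider_model: "gpt-4o"
```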
The router can spawn three types of child processes:
| Template | Description | Behavior |
|---|---|---|
| chat | Conversational AI session | 30-minute inactivity TTL, maintains conversation history |
| task | One-shot task runner | Executes a single LLM task, reports result, auto-exits |
| reminder | Scheduled notification | Supports once, daily, and interval schedules with timezone support |
Templates are registered in src/templates/_index.yaml and discovered at runtime via the registry.
Open http://localhost:8080/dashboard in your browser to access the monitoring UI.
Features:
- Sessions sidebar — lists all active user sessions with message/child counts
- Messages tab — chat-style view of all messages with routing decisions
- Children tab — active child processes with their template, description, and status
- Events tab — real-time event log (session, message, child, router events)
- Live updates over WebSocket with auto-reconnect
```
src/
├── _index.yaml         # App infrastructure (HTTP server, process host, env)
├── _llm.yaml           # LLM model registry entry
├── _telegram.yaml      # Telegram package dependency wiring
├── router/
│   ├── _index.yaml     # Text handler + router process definitions
│   ├── entry.lua       # Per-user router spawning on incoming text
│   └── router.lua      # LLM-based message classifier and child manager
├── templates/
│   ├── _index.yaml     # Template registry + agent specs + process defs
│   ├── chat.lua        # Chat session process
│   ├── task.lua        # One-shot task process
│   └── reminder.lua    # Scheduled reminder process
└── web/
    ├── _index.yaml     # API routes, WebSocket, dashboard service
    ├── api.lua         # REST API handlers
    ├── ws_connect.lua  # WebSocket connection handler
    ├── dashboard.lua   # Dashboard event aggregator process
    └── public/
        └── index.html  # SPA dashboard UI
```