torrent_grain is a long-running Node.js + TypeScript service that watches HagiCode release indexes, selects cache targets with a recent versions + pinned policy, downloads assets with torrent-first hybrid transfer, verifies sha256, persists a local catalog, exposes read-only status APIs, and now includes a React dashboard for local viewing.
- Polls the default desktop and server indexes:
  - `https://index.hagicode.com/desktop/index.json`
  - `https://index.hagicode.com/server/index.json`
- Normalizes `versions[].assets[]` and only keeps torrent-capable assets with complete hybrid metadata: `torrentUrl`, `infoHash`, `webSeeds`, `sha256`, `directUrl`
- Selects cache targets by source + channel, then keeps the most recent versions in each channel window
- Includes all torrent-capable assets inside the selected versions and preserves extra pinned versions
- Downloads through `webtorrent` first, then falls back to `webSeeds` or `directUrl`
- Verifies `sha256` before promoting files into cache
- Persists catalog state in `catalog.json` and restores verified cache on restart
- Applies cleanup rules by capacity, entry count, retention window, and pinned whitelist
- Exposes read-only JSON endpoints for health, cumulative traffic, and cache visibility
- Ships a React dashboard that visualizes service mode, active transfers, cumulative traffic, targets, cache entries, and diagnostics
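The hybrid-metadata eligibility rule can be sketched as a TypeScript type guard. The asset shape below is an assumption based on the field names listed above, not the service's actual types:

```typescript
// Hypothetical asset shape mirroring the hybrid-metadata fields above.
interface ReleaseAsset {
  torrentUrl?: string;
  infoHash?: string;
  webSeeds?: string[];
  sha256?: string;
  directUrl?: string;
}

// An asset is torrent-capable only when every hybrid field is present;
// incomplete assets are dropped during normalization.
function isTorrentCapable(a: ReleaseAsset): boolean {
  return Boolean(
    a.torrentUrl &&
      a.infoHash &&
      a.webSeeds &&
      a.webSeeds.length > 0 &&
      a.sha256 &&
      a.directUrl
  );
}
```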
- Node.js 20+
- npm 10+
```bash
cd repos/torrent_grain
npm install
npm run build
npm run start
```

After `npm run start`, open:

- `http://127.0.0.1:32101/` - React dashboard
- `http://127.0.0.1:32101/health`
- `http://127.0.0.1:32101/status`
- `http://127.0.0.1:32101/targets`
For development:

```bash
cd repos/torrent_grain
npm run dev:server
npm run dev:ui
```

Or start both together:

```bash
cd repos/torrent_grain
npm run dev:all
```

Development URLs:

- `http://127.0.0.1:32101` - backend API
- `http://127.0.0.1:32102` - React dashboard via Vite proxy
Use the built-in peer scripts when you want to verify that peer 2 can fetch from peer 1 with separate ports and cache directories.
Reset the demo cache first:

```bash
cd repos/torrent_grain
npm run dev:peer:reset
```

Start both peers with one command:

```bash
cd repos/torrent_grain
npm run dev:peer:demo
```

Or start them separately:

```bash
cd repos/torrent_grain
npm run dev:peer:1
npm run dev:peer:2
```

Demo layout:

- peer1 - `http://127.0.0.1:32101`, data dir `./.data-peer-1`
- peer2 - `http://127.0.0.1:32111`, data dir `./.data-peer-2`, with HTTP fallback disabled
`npm run dev:peer:demo` starts peer1 first, waits until it becomes live, then tries to wait for the first verified cache entry before launching peer2. peer2 is started with `TORRENT_GRAIN_HTTP_FALLBACK_ENABLED=false`, so if it cannot get data from the torrent swarm it will fail instead of silently downloading from the origin.
Recommended checks:

- open `http://127.0.0.1:32111/status` and confirm `peerCount` grows above `0`
- open `http://127.0.0.1:32101/status` and confirm `uploadRate` rises above `0`
- open `http://127.0.0.1:32101/targets` and `http://127.0.0.1:32111/targets` to compare cache states
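These manual checks can be automated with a small script. This is a hedged sketch: it assumes `/status` exposes top-level `peerCount` and `uploadRate` numbers, which is an assumption based on the checks above rather than a documented schema:

```typescript
// Assumed partial shape of a peer's /status payload.
interface PeerStatus {
  peerCount?: number;
  uploadRate?: number;
}

// The swarm looks healthy when the consumer sees at least one peer
// and the seeder is actually uploading.
function swarmLooksHealthy(consumer: PeerStatus, seeder: PeerStatus): boolean {
  return (consumer.peerCount ?? 0) > 0 && (seeder.uploadRate ?? 0) > 0;
}

// Fetch both demo peers (ports from the demo layout above) and evaluate them.
async function checkPeers(): Promise<boolean> {
  const [seeder, consumer] = await Promise.all([
    fetch("http://127.0.0.1:32101/status").then((r) => r.json() as Promise<PeerStatus>),
    fetch("http://127.0.0.1:32111/status").then((r) => r.json() as Promise<PeerStatus>),
  ]);
  return swarmLooksHealthy(consumer, seeder);
}
```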
If you want the built React dashboard on `/`, run `npm run build` once before the peer demo so the static UI is available from each peer port.
Configuration is environment-variable driven.
| Variable | Default | Description |
|---|---|---|
| `TORRENT_GRAIN_HOST` | `0.0.0.0` | HTTP bind host |
| `TORRENT_GRAIN_PORT` | `32101` | HTTP bind port |
| `TORRENT_GRAIN_DATA_DIR` | `.data` under project root | Root directory for cache, temp files, quarantine, and catalog |
| `TORRENT_GRAIN_POLL_INTERVAL_MS` | `300000` | Source polling interval |
| `TORRENT_GRAIN_STALL_TIMEOUT_MS` | `45000` | Torrent stall timeout before fallback |
| `TORRENT_GRAIN_CONCURRENCY` | `2` | Maximum parallel cache jobs |
| `TORRENT_GRAIN_CACHE_CAPACITY` | 50 GiB | Cache capacity limit, supports `b`/`kb`/`kib`/`mb`/`mib`/`gb`/`gib` suffixes |
| `TORRENT_GRAIN_MAX_ENTRIES` | `20` | Maximum retained cache entries |
| `TORRENT_GRAIN_RETENTION_DAYS` | `30` | Maximum retention window for non-pinned entries |
| `TORRENT_GRAIN_SHARING_ENABLED` | `true` | Whether verified cache should reseed |
| `TORRENT_GRAIN_HTTP_FALLBACK_ENABLED` | `true` | Whether `webSeeds` / `directUrl` fallback is allowed when torrent transfer stalls or fails |
| `TORRENT_GRAIN_UPLOAD_LIMIT_KIB` | `20480` | Upload cap for the embedded torrent runtime |
| `TORRENT_GRAIN_VERSION` | `0.1.0` | Service version string |
| `TORRENT_GRAIN_SOURCES` | built-in desktop + server list | JSON array of source objects |
```json
[
  {
    "id": "desktop",
    "kind": "desktop",
    "label": "HagiCode Desktop",
    "indexUrl": "https://index.hagicode.com/desktop/index.json",
    "enabled": true,
    "latestPerGroup": 2,
    "pinnedVersions": ["v1.0.0"]
  },
  {
    "id": "server",
    "kind": "server",
    "label": "HagiCode Server/Web",
    "indexUrl": "https://index.hagicode.com/server/index.json",
    "enabled": true,
    "latestPerGroup": 2,
    "pinnedVersions": []
  }
]
```

`latestPerGroup` means: keep the most recent N versions for each source + channel, and include all torrent-capable assets in those versions.
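A minimal sketch of that recent-versions + pinned selection policy. The version record shape (including the `releasedAt` timestamp used for ordering) is a hypothetical assumption, not the service's actual model:

```typescript
// Hypothetical normalized version record from a source index.
interface IndexedVersion {
  version: string;
  channel: string;
  releasedAt: string; // ISO timestamp, used here to order versions
}

// Keep the most recent `latestPerGroup` versions per channel,
// then preserve any pinned versions even outside that window.
function selectVersions(
  versions: IndexedVersion[],
  latestPerGroup: number,
  pinnedVersions: string[]
): string[] {
  const byChannel = new Map<string, IndexedVersion[]>();
  for (const v of versions) {
    const group = byChannel.get(v.channel) ?? [];
    group.push(v);
    byChannel.set(v.channel, group);
  }
  const selected = new Set<string>();
  for (const group of byChannel.values()) {
    group
      .sort((a, b) => b.releasedAt.localeCompare(a.releasedAt))
      .slice(0, latestPerGroup)
      .forEach((v) => selected.add(v.version));
  }
  for (const pin of pinnedVersions) selected.add(pin);
  return [...selected].sort();
}
```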
Inside `TORRENT_GRAIN_DATA_DIR` the service creates:

- `cache/` - verified cache files that may be reseeded
- `temp/` - transfer scratch space
- `quarantine/` - files that fail `sha256`
- `catalog.json` - persistent catalog and service state
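For illustration, the size suffixes accepted by `TORRENT_GRAIN_CACHE_CAPACITY` could be parsed along these lines. This is a hypothetical sketch under the assumption that decimal suffixes use powers of 1000 and binary suffixes powers of 1024, not the service's actual parser:

```typescript
// Multipliers for the documented suffixes: b/kb/kib/mb/mib/gb/gib.
const SIZE_UNITS: Record<string, number> = {
  b: 1,
  kb: 1000, kib: 1024,
  mb: 1000 ** 2, mib: 1024 ** 2,
  gb: 1000 ** 3, gib: 1024 ** 3,
};

// Parse strings like "50 GiB" or "512mb" into a byte count.
function parseByteSize(input: string): number {
  const m = /^(\d+(?:\.\d+)?)\s*(b|kib|kb|mib|mb|gib|gb)$/i.exec(input.trim());
  if (!m) throw new Error(`unrecognized size: ${input}`);
  return Math.floor(Number(m[1]) * SIZE_UNITS[m[2].toLowerCase()]);
}
```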
All endpoints are read-only GET requests.
Returns liveness, readiness, last successful scan time, enabled source count, and recent source failures.
Returns normalized service mode, current-run cumulative traffic, and active task snapshots:

`idle`, `discovering`, `downloading`, `fallback`, `verifying`, `shared`, `error`
Each active task may include progress bytes, download rate, upload rate, peer count, and last error.
The top-level payload also includes:
- `totalDownloadedBytes` - cumulative download traffic since the current service run started
- `totalUploadedBytes` - cumulative upload traffic since the current service run started
- `trafficStartedAt` - timestamp for the current traffic statistics window
- `trafficUpdatedAt` - most recent timestamp when traffic totals changed
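These traffic fields map directly onto a TypeScript interface. `shareRatio` below is a hypothetical helper for consumers of the payload, not part of the API itself:

```typescript
// The documented top-level traffic fields of the /status payload.
interface TrafficTotals {
  totalDownloadedBytes: number;
  totalUploadedBytes: number;
  trafficStartedAt: string;
  trafficUpdatedAt: string;
}

// Upload/download ratio for the current service run;
// undefined when nothing has been downloaded yet.
function shareRatio(t: TrafficTotals): number | undefined {
  if (t.totalDownloadedBytes === 0) return undefined;
  return t.totalUploadedBytes / t.totalDownloadedBytes;
}
```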
Returns:
- current target plan
- local cache entries
- retention decision per target
- metadata validation state
- stable diagnostic codes
The dashboard is implemented with React + Vite and lives under `ui/`.

- `npm run build` builds both the Node service and the dashboard
- `npm run start` serves the built dashboard from `/`
- `npm run dev:ui` runs Vite with API proxying for `/health`, `/status`, and `/targets`
The dashboard focuses on:
- service readiness and current mode
- active transfer throughput and peer count
- cumulative downloaded/uploaded traffic with the current service-run window
- planner targets and metadata readiness
- cache catalog state
- recent diagnostics and source failures
Build the production image:

```bash
cd repos/torrent_grain
docker build -t torrent-grain:local .
```

Run it with a persistent volume mounted to `/data`:

```bash
docker run --rm \
  -p 32101:32101 \
  -e TORRENT_GRAIN_DATA_DIR=/data \
  -v $(pwd)/.data:/data \
  torrent-grain:local
```

Validate the health probe:

```bash
curl http://127.0.0.1:32101/health
```

The production image contract is:
- container port `32101`
- persistent data mount `/data`
- `TORRENT_GRAIN_DATA_DIR=/data` as the documented container default
- `GET /health` as the readiness/liveness probe target
- runtime entrypoint `node dist/index.js`
The container startup path is automatic:

- load config
- recover `catalog.json`
- restore verified cache entries
- run an immediate scan
- continue background monitoring
- serve the built dashboard at `/`
Torrent Grain ships two independent registry workflows:

- DockerHub: `.github/workflows/docker-build-dockerhub.yml`
- Aliyun ACR: `.github/workflows/docker-build-aliyun-acr.yml`
Both workflows support:

- tag push release
- `workflow_dispatch`
- `repository_dispatch`
- platform selector: `all`, `linux-amd64`, `linux-arm64`
Release drafting is handled separately:

- `.github/workflows/release-drafter.yml` refreshes the GitHub draft release on `main` pushes and PR updates
- `.github/release-drafter.yml` resolves the next `v*` draft version from PR labels
- the release draft updates notes only; Docker publish still happens on version tag push
- DockerHub image: `docker.io/newbe36524/torrent-grain`
- Aliyun ACR image: `<ALIYUN_ACR_REGISTRY>/<ALIYUN_ACR_NAMESPACE>/torrent-grain`
If your namespace differs, override it through repository secrets without changing the workflow file.
Stable versions such as `1.2.3` publish:

- `1.2.3`
- `1.2`
- `1`
- `latest`

Pre-release versions such as `1.2.3-rc.1` publish only:

- `1.2.3-rc.1`

`latest` is never updated by a pre-release build.
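The tag fan-out rules above can be expressed as a small function. This is a sketch of the documented rules, not the workflows' actual implementation, and it assumes plain semver input:

```typescript
// Compute the Docker tag set for a version string such as
// "1.2.3" (stable) or "1.2.3-rc.1" (pre-release).
function dockerTags(version: string): string[] {
  const isPreRelease = version.includes("-");
  if (isPreRelease) return [version]; // pre-releases never move latest
  const [major, minor] = version.split(".");
  return [version, `${major}.${minor}`, major, "latest"];
}
```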
DockerHub workflow:

| Secret | Required | Description |
|---|---|---|
| `DOCKERHUB_USERNAME` or `DOCKER_USERNAME` | yes | DockerHub login user |
| `DOCKERHUB_TOKEN` or `DOCKER_PASSWORD` | yes | DockerHub access token |
| `DOCKERHUB_NAMESPACE` | no | Image namespace, defaults to login user and falls back to `newbe36524` |
Aliyun ACR workflow:

| Secret | Required | Description |
|---|---|---|
| `ALIYUN_ACR_REGISTRY` | yes | Registry endpoint, for example `registry.cn-hangzhou.aliyuncs.com` |
| `ALIYUN_ACR_NAMESPACE` | yes | Target namespace / repository group |
| `ALIYUN_ACR_USERNAME` | yes | Registry login user |
| `ALIYUN_ACR_PASSWORD` | yes | Registry login password or token |
Build all platforms for the package version declared in `package.json`:

- open the workflow in GitHub Actions
- leave `version` empty
- keep `platform=all`

Rebuild only linux/arm64 for a specific release:

- set `version=1.2.3`
- set `platform=linux-arm64`
For automation, `repository_dispatch` may send `client_payload.version` and `client_payload.platform`, and the workflows resolve tags with the same rules as tag-triggered releases.
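A hedged example of assembling such a dispatch body. The `event_type` value is an assumption (the workflows define their own trigger name); the target endpoint is GitHub's standard `POST /repos/{owner}/{repo}/dispatches` API, which requires an `Authorization: Bearer <token>` header:

```typescript
// Build the JSON body for a repository_dispatch request carrying
// the documented client_payload fields.
// NOTE: "docker-build" is a placeholder event_type, not confirmed by this repo.
function buildDispatchBody(version: string, platform: string): string {
  return JSON.stringify({
    event_type: "docker-build",
    client_payload: { version, platform },
  });
}
```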
- The current server/web cache target is discovered from `https://index.hagicode.com/server/index.json`.
- Hybrid cache eligibility depends on complete metadata in the index; incomplete assets are ignored entirely and do not appear in the dashboard.
- The fallback path currently refetches the whole asset from `webSeeds` or `directUrl` instead of resuming piece-level gaps.
- The service is designed for a single persistent instance with a mounted volume; shared multi-instance catalog coordination is out of scope.