# DecentDB
DecentDB is an embedded relational database engine built with Nim, focused on durable ACID writes, fast reads, and predictable correctness.
It targets a single process with one writer and many concurrent readers under snapshot isolation, implementing a PostgreSQL-like SQL dialect (via libpg_query) on top of a fixed-page B+Tree storage engine and a write-ahead log (WAL) for durability.
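The one-writer/many-readers model under snapshot isolation can be illustrated with a toy versioned key-value store. This is a conceptual sketch in plain Python, not DecentDB's API or implementation: a reader captures the current version at transaction start and never observes later writes.

```python
class VersionedStore:
    """Toy MVCC store: each key maps to a list of (version, value) entries."""

    def __init__(self):
        self.version = 0
        self.data = {}  # key -> list of (version, value), oldest first

    def write(self, key, value):
        # Single writer: every write creates a new, globally ordered version.
        self.version += 1
        self.data.setdefault(key, []).append((self.version, value))

    def snapshot(self):
        # A reader's snapshot is simply the version at transaction start.
        return self.version

    def read(self, key, snapshot_version):
        # Return the newest value written at or before the snapshot.
        for version, value in reversed(self.data.get(key, [])):
            if version <= snapshot_version:
                return value
        return None

store = VersionedStore()
store.write("balance", 100)
snap = store.snapshot()       # a reader begins its transaction here
store.write("balance", 250)   # the writer commits concurrently
print(store.read("balance", snap))              # 100 (reader keeps its snapshot)
print(store.read("balance", store.snapshot()))  # 250 (a new reader sees the commit)
```

The key property shown: the older reader's result is stable regardless of concurrent writes, which is what lets one writer and many readers proceed without blocking each other.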
## Features

- 🔒 ACID Transactions - Write-ahead logging with crash-safe recovery
- 🌳 B+Tree Storage - Efficient tables and secondary indexes with page caching
- 🐘 PostgreSQL-like SQL - Familiar DDL/DML syntax with JOINs (INNER, LEFT, RIGHT, FULL OUTER, CROSS, NATURAL), CTEs (including WITH RECURSIVE), subqueries, window functions, and rich types (UUID, DECIMAL, native TIMESTAMP)
- 🕒 Native TIMESTAMP Type - DATE/TIMESTAMP columns stored as int64 microseconds since the Unix epoch (UTC); correct `ORDER BY` and `EXTRACT(YEAR|MONTH|DAY|HOUR|MINUTE|SECOND FROM ...)`, with native bind/read in all bindings
- 👥 Concurrent Reads - Snapshot isolation allows multiple readers with one writer
- 🔎 Trigram Index - Fast text search for `LIKE '%pattern%'` queries
- 🧪 Comprehensive Testing - Unit tests, property tests, crash injection, and differential testing
- 🔄 Foreign Key Constraints - Automatic indexing and referential integrity enforcement
- 📊 Rich Query Support - Aggregates (including DISTINCT), subqueries (FROM, EXISTS, scalar), UPSERT, set operations, generated columns, and scalar functions (string, math, UUID, JSON)
- ⚡ Triggers - AFTER and INSTEAD OF triggers for complex logic
- 💾 Savepoints - Nested transaction control with SAVEPOINT, RELEASE, and ROLLBACK TO
- 🧠 In-Memory Database - Ephemeral `:memory:` databases for caching and testing, with `save-as` (CLI) / `saveAs` (embedded API) to snapshot to disk
- 📦 Single-File DB + WAL Sidecar - Primary `.ddb` file with a `-wal` sidecar log for durability
- 🌐 Cross-Platform - Runs on Linux, macOS, and Windows
- 🚀 Bulk Load Operations - Optimized high-performance data loading
- 🛠️ Rich CLI Tool - Unified command-line interface for all database operations
- 📁 Import/Export Tools - CSV and JSON data import/export capabilities
- 🧩 Parameterized Queries - Safe parameter binding to prevent SQL injection
- 🧾 Transaction Support - BEGIN, COMMIT, ROLLBACK for atomic operations
- 📋 Temporary Objects - Session-scoped TEMP tables and views
- 🏗️ EF Core Provider - Full Entity Framework Core integration with LINQ translation, migrations, and NodaTime support
- 🔌 DBeaver Support - Connect to `.ddb` files from DBeaver via the in-process JNI-backed JDBC driver; browse tables, run queries, and render ER diagrams. See the DBeaver guide.
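The documented TIMESTAMP representation (int64 microseconds since the Unix epoch, UTC) can be sanity-checked in plain Python. This mirrors the storage format described above, not a DecentDB API; since the encoding is a monotonically increasing integer, sorting the stored values gives correct chronological `ORDER BY`.

```python
from datetime import datetime, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_micros(dt: datetime) -> int:
    """Encode an aware datetime as int64 microseconds since the Unix epoch (UTC)."""
    delta = dt.astimezone(timezone.utc) - EPOCH
    return delta.days * 86_400_000_000 + delta.seconds * 1_000_000 + delta.microseconds

def from_micros(us: int) -> datetime:
    """Decode int64 microseconds back into a UTC datetime."""
    return datetime.fromtimestamp(us / 1_000_000, tz=timezone.utc)

ts = datetime(2025, 1, 1, tzinfo=timezone.utc)
micros = to_micros(ts)
print(micros)  # 1735689600000000
assert from_micros(micros) == ts
```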
## Language Bindings

| Language | Toolkit | Description | Documentation |
|---|---|---|---|
| C# | ADO.NET + Dapper + MicroOrm (LINQ) + EF Core | Embedded ADO.NET provider, LINQ Micro-ORM, and EF Core integration with DbContext, migrations, and NodaTime support | decentdb.org/api/dotnet |
| Java | JDBC (JNI-backed, in-process) | JDBC driver for connecting to `.ddb` files from Java and tools like DBeaver | decentdb.org/api/jdbc |
| Go | database/sql driver | Embedded database/sql driver with $N positional parameters | decentdb.org/api/go |
| Nim | Native Nim API | Native embedded API for Nim applications | decentdb.org/api/nim-api |
| Node.js | N-API + Knex | Embedded native addon + Knex client for building/issuing queries | decentdb.org/api/node |
| Python 3 | SQLAlchemy | Embedded DB-API driver + SQLAlchemy dialect | decentdb.org/api/python |
Related tool: MehSQL - a native cross-platform DecentDB SQL query tool.
## Benchmarks

### How this chart is produced
- The chart is generated from benchmark runs using `nimble bench_embedded_chart`.
- Values are normalized vs SQLite (baseline = 1.0).
- For "lower is better" metrics (latency, DB size), the score is inverted so higher bars mean better.
- Full methodology and raw results live in `benchmarks/embedded_compare/`.
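The normalization rule above can be expressed as a small helper. This is an illustrative sketch of the arithmetic described in the bullets; the function name and the sample numbers are made up, not real benchmark results.

```python
def normalized_score(value: float, sqlite_value: float, lower_is_better: bool) -> float:
    """Normalize a metric vs SQLite (baseline = 1.0).

    Lower-is-better metrics (latency, DB size) are inverted so that a
    taller bar always means a better result.
    """
    if lower_is_better:
        return sqlite_value / value  # e.g. half the latency -> score 2.0
    return value / sqlite_value      # e.g. double the throughput -> score 2.0

# Hypothetical inputs: 5 ms latency vs SQLite's 10 ms; 20k ops/s vs SQLite's 10k
print(normalized_score(5.0, 10.0, lower_is_better=True))        # 2.0
print(normalized_score(20_000, 10_000, lower_is_better=False))  # 2.0
```

Either way, SQLite itself always scores exactly 1.0, which is what makes the bars comparable across metrics.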
### Supported engines
- DecentDB (native API)
- SQLite (via C API)
- DuckDB (via C API), when the library is available
### Regenerate
```shell
# Run full benchmark pipeline (run + aggregate + chart)
nimble bench_embedded_pipeline

# Or generate just the chart from existing results:
nimble bench_embedded_chart
```

## Building from Source

Prerequisites:

- Nim (includes `nim` and `nimble`)
- libpg_query (C library + headers)
- Python 3 (optional; for the test harness)
```shell
nimble build

# Optionally: install into ~/.nimble/bin
nimble install
```

## Quick Start

```shell
# Create and query a database
# Note: DecentDB auto-assigns an id when you omit a single INT64 PRIMARY KEY column on INSERT.
decentdb exec --db ./my.ddb --sql "CREATE TABLE users (id INT PRIMARY KEY, name TEXT NOT NULL, email TEXT UNIQUE)"
decentdb exec --db ./my.ddb --sql "INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com') RETURNING id"
decentdb exec --db ./my.ddb --sql "SELECT * FROM users"
```

Start an interactive SQL shell:

```shell
decentdb repl --db ./my.ddb
```

More SQL examples:

```shell
# Create tables with constraints
decentdb exec --db ./my.ddb --sql "CREATE TABLE orders (
  id INT PRIMARY KEY,
  user_id INT REFERENCES users(id),
  amount FLOAT64,
  created_at TIMESTAMP
)"

# Insert data
decentdb exec --db ./my.ddb --sql "INSERT INTO orders VALUES (1, 1, 99.99, TIMESTAMP '2025-01-01 00:00:00')"

# Query with JOINs
decentdb exec --db ./my.ddb --sql "SELECT u.name, SUM(o.amount) AS total
FROM users u
JOIN orders o ON u.id = o.user_id
GROUP BY u.name"

# Text search with trigram index
decentdb exec --db ./my.ddb --sql "CREATE INDEX idx_users_name ON users USING trigram(name)"
decentdb exec --db ./my.ddb --sql "SELECT * FROM users WHERE name LIKE '%ali%'"
```

## Import and Export

```shell
# Import CSV data
decentdb import --table users --input data.csv --db ./my.ddb

# Export to JSON
decentdb export --table users --output users.json --db ./my.ddb --format=json

# Bulk load large datasets
decentdb bulk-load --table users --input large_dataset.csv --db ./my.ddb
```

There are several tools for DecentDB that provide importing and converting from other databases; read more here.

## Maintenance

```shell
# Force WAL checkpoint
decentdb checkpoint --db ./my.ddb

# View database statistics
decentdb stats --db ./my.ddb

# Collect planner statistics (row counts / index cardinality)
decentdb exec --db ./my.ddb --sql "ANALYZE"

# Rebuild an index
decentdb rebuild-index --index idx_users_name --db ./my.ddb

# Rebuild all indexes
decentdb rebuild-indexes --db ./my.ddb
```

DecentDB provides a unified CLI tool. See `decentdb --help` for all commands.

Common commands:

- `exec` - Execute SQL statements
- `repl` - Interactive SQL shell
- `import` / `export` - Data transfer
- `bulk-load` - High-performance data loading
- `checkpoint` - WAL maintenance
- `save-as` - Snapshot backup to a new on-disk file
- `list-tables` / `describe` - Schema introspection
- `rebuild-index` / `rebuild-indexes` - Index maintenance
- `dump` - Export database as SQL
## Documentation

- User Guide - SQL reference, tutorials, and examples
- Nim API - Embedded API documentation
- Architecture - Design and implementation details
- Contributing - Development guidelines
## Architecture

DecentDB is organized into focused modules:
- VFS - OS I/O abstraction with fault injection support
- Pager - Fixed-size pages, LRU cache, and freelist management
- WAL - Append-only log, crash recovery, and checkpointing
- B+Tree - Table storage and secondary indexes
- Record - Typed value encoding with overflow pages
- Catalog - Schema metadata management
- SQL/Planner/Exec - Query parsing, planning, and execution
- Search - Trigram inverted index for text search
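The idea behind the Search module's trigram inverted index can be sketched conceptually: index every 3-character substring of each value, then answer a `LIKE '%pattern%'` query by intersecting the posting lists of the pattern's trigrams and verifying the survivors. This is a toy illustration of the technique, not DecentDB's actual implementation.

```python
from collections import defaultdict

def trigrams(text: str) -> set:
    """All 3-character substrings, lowercased for case-insensitive matching."""
    t = text.lower()
    return {t[i:i + 3] for i in range(len(t) - 2)}

class TrigramIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # trigram -> set of row ids
        self.rows = {}                    # row id -> original value

    def insert(self, row_id: int, value: str):
        self.rows[row_id] = value
        for tg in trigrams(value):
            self.postings[tg].add(row_id)

    def search(self, pattern: str) -> list:
        """Rows matching LIKE '%pattern%': intersect posting lists for the
        pattern's trigrams, then verify candidates with a substring check."""
        tgs = trigrams(pattern)
        if not tgs:  # patterns shorter than 3 chars fall back to a full scan
            return sorted(r for r, v in self.rows.items() if pattern.lower() in v.lower())
        candidates = set.intersection(*(self.postings[tg] for tg in tgs))
        return sorted(r for r in candidates if pattern.lower() in self.rows[r].lower())

idx = TrigramIndex()
idx.insert(1, "Alice")
idx.insert(2, "Malice")
idx.insert(3, "Bob")
print(idx.search("ali"))  # [1, 2]
```

The intersection step is why such an index accelerates infix patterns that a B+Tree index cannot: only rows containing every trigram of the pattern are ever touched by the final verification pass.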
## Development

```shell
# Run tests
nimble test

# Run benchmarks
nimble bench

# Lint code
nimble lint
```

DecentDB can generate a unit test coverage report using gcov.
```shell
# Generate coverage (requires gcov)
bash scripts/coverage_nim.sh

# Alternative: run coverage in smaller batches
bash scripts/coverage_batch.sh
```

Outputs:

- `build/coverage/summary.txt` (human-readable summary)
- `build/coverage/summary.json` (machine-readable summary)
- `build/coverage/gcov/` (raw per-test `.gcov` files)
See the Contributing Guide for the development workflow and guidelines.
## License

Apache-2.0. See LICENSE.

