14-stage fusion pipeline for LLM token compression: reversible compression, AST-aware code analysis, and intelligent content routing. Zero LLM inference cost. MIT licensed.
python tree-sitter developer-tools llm-tools llm-compression prompt-compression context-compression ai-infrastructure ai-agent-tools token-compression context-pruning context-window-optimization openclaw llm-token-compression llm-cost-reduction llm-context-compression claw-compactor reversible-compression ast-code-analysis fusion-pipeline
Updated Mar 19, 2026 - Python