KnowBear is a search-driven AI app that explains any topic at multiple readability levels. It supports two runtime modes:

- `fast` for low-latency responses
- `ensemble` for higher-quality synthesis
Both modes use the same retrieval entry point and enrich prompts with live web context before generation.
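The split between the two modes can be sketched roughly as below; the function and model names are illustrative stand-ins, not the app's actual routing code:

```python
# Sketch of fast vs. ensemble dispatch. "model-a"/"model-b", generate(),
# and judge() are hypothetical placeholders for the real provider calls.

def generate(prompt: str, model: str) -> str:
    # Stand-in for a real LLM call through the routing layer.
    return f"[{model}] {prompt}"

def judge(candidates: list[str]) -> str:
    # Stand-in judge: a real judge model would rank candidates;
    # here we simply pick the longest answer.
    return max(candidates, key=len)

def explain(prompt: str, mode: str = "fast") -> str:
    if mode == "fast":
        # Single low-latency model call.
        return generate(prompt, model="model-a")
    # Ensemble: query several models, then let a judge select one.
    candidates = [generate(prompt, model=m) for m in ("model-a", "model-b")]
    return judge(candidates)
```

Both paths would receive the same retrieval-enriched prompt before generation.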
Features:

- Search-driven explanation workflow
- `fast` mode: quick response path
- `ensemble` mode: multi-model generation with judge selection
- Streaming responses over SSE
- Export as `.txt` or `.md`
- Rate limiting: 5 requests/hour per IP
Architecture:

- Frontend: React + Vite UI for search, mode selection, streaming, and export
- Backend: FastAPI API for query, stream, export, and health endpoints
- LLM routing layer: provider abstraction for model routing/judging
- LiteLLM proxy: optional external gateway integration point (not bundled in this repo)
- Search API integration: Exa, Tavily, and Serper for retrieval context
API endpoints:

- `GET /api/pinned` -> curated starter topics
- `POST /api/query` -> generate one or more levels
- `POST /api/query/stream` -> stream generated text
- `POST /api/export` -> export as `txt` or `md`
- `GET /api/health` -> service status
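A client of `POST /api/query/stream` consumes Server-Sent Events. Assuming plain `data:` lines separated by blank lines (the actual payload framing may differ), a minimal parser looks like:

```python
def parse_sse(raw: str) -> list[str]:
    """Extract data payloads from a raw SSE stream.

    Each SSE event is one or more `data:` lines followed by a blank line.
    """
    events, buf = [], []
    for line in raw.splitlines():
        if line.startswith("data:"):
            buf.append(line[len("data:"):].lstrip())
        elif not line and buf:
            events.append("\n".join(buf))  # blank line closes the event
            buf = []
    if buf:
        events.append("\n".join(buf))  # flush a trailing, unterminated event
    return events
```

In practice a browser client would use `EventSource` instead of hand-parsing, but the wire format is the same.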
Setup:

npm install
pip install -r api/requirements.txt

Copy `.env.example` to `.env` and set the required keys.
Required:
- `GROQ_API_KEY`
- `VITE_API_URL`
Recommended:
- `GEMINI_API_KEY` (judge/fallback path)
- `TAVILY_API_KEY`
- `SERPER_API_KEY`
- `EXA_API_KEY`
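A startup check for these variables might look like the following sketch; the variable lists mirror this section, but the `check_env` helper itself is hypothetical:

```python
import os

REQUIRED = ("GROQ_API_KEY", "VITE_API_URL")
RECOMMENDED = ("GEMINI_API_KEY", "TAVILY_API_KEY", "SERPER_API_KEY", "EXA_API_KEY")

def check_env() -> list[str]:
    """Return the names of required variables that are missing or empty."""
    missing = [name for name in REQUIRED if not os.environ.get(name)]
    for name in RECOMMENDED:
        if not os.environ.get(name):
            # Optional keys only degrade features (judging, extra search providers).
            print(f"note: optional {name} is unset")
    return missing
```

Failing fast on an empty return list keeps misconfiguration errors out of request handlers.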
Backend:
python3 -m uvicorn main:app --reload

Frontend:
npm run dev

Usage:

- Open the app at `http://localhost:5173/app` (or your configured frontend URL).
- Enter a topic.
- Choose a mode:
  - `fast` for speed
  - `ensemble` for stronger quality
- Read the streamed response and export if needed.
Testing:

npm run type-check
npm test -- --run
python3 -m compileall -q api
python3 -c "import main; print(bool(main.app))"