## Quick reference
The tables below summarise key dimensions. Direct alternatives compete in the same category. Complementary tools solve adjacent problems and work alongside jCodeMunch.
### Direct Alternatives — tools in the same category
| | jCodeMunch + jDocMunch | Raw File Tools (Read/Grep/Glob/Bash) | mcp-server-filesystem | RepoMapper | Pharaoh | GitNexus | Serena | GrapeRoot (Dual-Graph) | vexp | code-review-graph | cymbal | Context+ | Axon | SocratiCode |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Token reduction on code exploration | ✓ ~95% | ✗ 0% (baseline) | ✗ 0% | ~ Token-budgeted map (not retrieval) | ~ Graph queries replace file reads (no benchmark published) | ~ Graph queries; no benchmarks published | ~ Symbol-level tools reduce reads; no token benchmarks published | ~ 30–45% cost reduction (80-prompt benchmark); pre-loads context, not symbol-level retrieval | ~ 65–70% claimed; no published methodology or reproducible benchmark | ~ 8.2× avg on commit-scoped reviews (6 repos, 13 commits, published raw data); 0.7× on small single-file Express changes (graph context exceeds raw file); 49× claimed for large monorepos | ~ 17–100% fewer tokens vs ripgrep (self-reported); baseline is ripgrep, not raw file reads — not directly comparable to jCodeMunch's 58–100× benchmark; no published methodology or reproducible test harness | ~ "99% accuracy" claimed; no token-reduction benchmark published; no reproducible methodology | ~ Precomputed graph returns "complete context in one tool call"; claims token efficiency via fewer agent hops; no published benchmark or reproducible methodology | ~ 61% fewer tokens, 84% fewer calls, 37× faster than standard AI grep (self-reported benchmark); baseline is grep, not raw file reads |
| Symbol-level extraction (functions, classes) | ✓ 70+ languages (incl. YAML/Ansible, Razor/Blazor, SQL/dbt, Erlang, Fortran, Pascal, MATLAB, Ada, COBOL, Zig, PowerShell) | ✗ Whole-file only | ✗ Whole-file only | ~ Signatures only, no retrieval | ~ Signatures + graph nodes; TypeScript & Python only | ✓ 12 languages; graph nodes + call edges | ✓ 30+ languages via LSP; type-aware cross-file references | ~ Symbols & imports extracted for graph ranking; no on-demand per-symbol retrieval | ~ 30 languages via tree-sitter; skeleton generation strips bodies (70–90%); no named on-demand per-symbol retrieval | ~ 19 languages + Jupyter/Databricks notebooks via tree-sitter; graph nodes (functions, classes, imports) + edges (calls, inheritance, test coverage); no named on-demand per-symbol retrieval | ✓ 22 languages via tree-sitter; named on-demand per-symbol retrieval (cymbal show); Go binary, no Python runtime required | ✓ 43 languages via tree-sitter; AST extraction with semantic search; spectral clustering groups related files | ~ 3 languages (Python, JavaScript, TypeScript) via tree-sitter; graph nodes for functions, classes, imports; KuzuDB graph storage with Cypher queries | ~ 18+ languages via ast-grep; AST-aware chunking at function/class boundaries; Qdrant vector store |
| Doc section search | ✓ via jDocMunch | ✗ Whole-file only | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Requires pre-indexing | ✓ One-time, incremental; SHA-based freshness managed automatically via freshness_mode config (relaxed/strict); list_repos exposes git_head for agent-side freshness reasoning; index_file for single-file surgical updates; watch-claude auto-discovers Claude Code worktrees | ✓ None needed | ✓ None needed | ~ Per-query map generation | ~ Hosted backend; auto-updates on push via webhook | ~ One-time + auto-reindex on git commit via hook | ~ LSP servers spin up on first use; indexing latency per language | ~ One-time graph build; real-time watcher keeps index fresh | ~ One-time + real-time AST diff watcher; cross-repo tracking; session memory persists across restarts | ~ One-time build (~10s / 500 files); incremental re-index on file save + git commit (<2s on 2,900-file repos via SHA-256 diff) | ~ One-time index; JIT freshness — mtime+size fast path auto-detects changed files before every query; no watch daemon needed | ~ One-time indexing; embedding cache on disk; no incremental update details published | ~ One-time axon analyze . (~5s); real-time watcher (--watch) keeps index fresh; no incremental partial re-index — full re-analyze on change | ~ Auto-index on first use; per-branch separate vector collections; requires Docker (Qdrant + Ollama containers) |
| Works with AI agents (MCP) | ✓ Native MCP server (stdio, SSE, streamable-http); Claude Code hook integration (PreToolUse/PostToolUse → index-file); 5 built-in MCP prompt templates (workflow, explore, assess, triage, trace) | ~ Via MCP tool calls | ✓ Native MCP server | ✓ Native MCP server | ✓ Native MCP server (SSE) | ✓ MCP + Claude Code PreToolUse/PostToolUse hooks | ✓ Native MCP server; also OpenAPI for non-MCP clients | ✓ Native MCP server; supports 6 AI assistants (Claude Code, Codex CLI, Gemini CLI, Cursor, OpenCode, GitHub Copilot) | ✓ Native MCP; auto-generates config files; VS Code extension + npm CLI; 12 AI agents supported | ✓ Native MCP server; auto-configures Claude Code, Cursor, Windsurf, Zed, Continue, OpenCode on install; 5 built-in MCP prompt templates | ✗ CLI subprocess, not an MCP server; agent calls via shell-out or Docker; ships a CLAUDE.md policy block instructing agents to prefer cymbal over Read/Grep/Glob/Bash | ✓ Native MCP server; 17 tools across discovery, analysis, code ops, version control, and memory/RAG; supports 12 platforms including Claude Code, Cursor, VS Code Copilot | ✓ Native MCP server (axon serve --watch); also exposes REST API + interactive web dashboard at localhost:8420 | ✓ Native MCP server (stdio); Cursor, VS Code, and Windsurf integration; Docker-based multi-container setup |
| Import graph / reference tracing | ✓ find_importers (with has_importers flag), find_references, check_references, get_blast_radius (depth-scored risk + has_test_reach per file), get_changed_symbols, get_dependency_graph; get_untested_symbols (import-graph test reachability); TS/SvelteKit path alias resolution; cross-repo via cross_repo=true on find_importers, get_blast_radius, get_dependency_graph + dedicated get_cross_repo_map | ~ Manual grep | ✗ | ~ Dependency graph for ranking only | ✓ Blast Radius, Reachability, Dependency Paths (graph-native) | ✓ impact, detect_changes, call chain tracing, Cypher queries | ✓ find_referencing_symbols via LSP (type-aware, cross-file) | ~ Import relationships in semantic graph; file + symbol level; no cross-repo call tracing | ~ LSP bridge for type-resolved call graphs; no dedicated blast-radius scoring or git-diff-to-symbol mapping | ✓ Blast-radius with 100% recall (F1 0.54, precision ~0.38 — deliberately conservative); call chain tracing; test coverage gap detection; detect_changes maps diffs to affected functions and flows | ✓ cymbal refs, cymbal importers, cymbal impact (transitive callers, depth cap 5); cymbal trace (downward call graph) | ~ get_blast_radius tool; call-site tracing maps symbol usage; no dedicated import graph or cross-repo tracing | ✓ axon_impact with depth grouping (will break / may break / review) and confidence scores; call chain tracing via KuzuDB graph; Cypher queries for ad-hoc traversal; no cross-repo support | ~ Dependency visualization via Mermaid diagrams; cross-project search; no dedicated blast-radius or import graph tracing tool |
| Write / modify files | ✗ Read-only by design | ✓ | ✓ | ✗ | ✗ Read-only by design | ~ rename tool for coordinated refactoring | ✓ replace_symbol_body, insert_after_symbol, rename (codebase-wide) | ✗ Read-only by design | ✗ Read-only | ✗ Read-only | ✗ Read-only | ~ propose_commit and shadow restore points; undo support without git; not direct file writes | ✗ Read-only by design | ✗ Read-only by design |
| Runs fully offline / local | ✓ Local index, no backend | ✓ | ✓ | ✓ | ✗ Requires hosted Neo4j + OAuth | ✓ Local LadybugDB; browser WASM option | ~ Local; requires language server binaries installed per language | ✓ Fully local; code never leaves machine | ✓ Fully local; no code leaves machine; no account required for Starter | ✓ Local SQLite in .code-review-graph/; no external database; no cloud dependency | ✓ Go binary; local .cymbal/index.db; no external services; no account required | ~ Local index + disk-cached embeddings; requires Ollama or OpenAI-compatible API for embeddings | ✓ Fully local; KuzuDB + local embeddings (BAAI/bge-small-en-v1.5); no API keys; no data leaves machine | ~ Local processing but requires Docker (Qdrant + Ollama containers); code stays on machine; no cloud dependency |
| Commercial use permitted | ✓ Paid license available | ✓ Built-in tools | ✓ MIT | ✓ MIT | ~ Parser MIT; MCP server paid tier | ✗ PolyForm Noncommercial — commercial use prohibited | ✓ MIT | ~ Launchers: Apache 2.0; Graph engine: Proprietary (PyPI-distributed) | ~ Starter free but capped (2,000 nodes, 8 calls/day); commercial scale requires Pro ($19/mo) | ✓ MIT — no node caps, no call limits | ✓ MIT | ✓ MIT | ✓ MIT | ~ AGPL-3.0 — copyleft; commercial license available separately |
| License | Free non-commercial; paid commercial | N/A (built-in tools) | MIT | MIT | Parser: MIT; MCP server: free / $27/mo Pro | PolyForm Noncommercial 1.0.0 | MIT | Launchers: Apache 2.0; Engine: Proprietary | Proprietary SaaS; Starter free (capped); Pro $19/mo; Team $29/user/mo | MIT | MIT | MIT | MIT | AGPL-3.0 (commercial license available) |
| Dead code detection | ✓ find_dead_code — free; confidence-scored; cascading dead-code chains; entry-point heuristics | ✗ | ✗ | ✗ | ~ Pro tier ($27/mo) | ✗ | ✗ | ✗ | ✗ | ~ Refactoring tools include dead code detection; no dedicated confidence-scored cascading analysis or entry-point heuristics | ✗ | ~ run_static_analysis tool; no dedicated dead code detection | ✓ Multi-pass dead code: zero callers → framework exemptions → override pass → Protocol conformance → Protocol stubs; 3 languages only | ✗ |
| Semantic / hybrid search | ✓ Opt-in BM25+vector (embed_repo); 3 providers: sentence-transformers, Gemini (task-aware), OpenAI; pure BM25 when disabled | ✗ | ✗ | ✗ | ✗ | ✓ BM25 + embeddings + RRF — native | ~ LSP type inference (not embedding-based) | ✗ | ~ FTS5 full-text + TF-IDF; no BM25+vector hybrid mode | ~ Optional vector embeddings via sentence-transformers, Gemini, or MiniMax; FTS5 keyword+vector hybrid; enabled separately from core graph | ✗ FTS5 keyword search only; no vector or embedding layer | ✓ Embeddings via Ollama or OpenAI-compatible APIs with disk caching; semantic search across file headers and identifiers | ✓ BM25 (KuzuDB FTS) + 384-dim vector (BAAI/bge-small-en-v1.5) + Levenshtein fuzzy; fused via Reciprocal Rank Fusion; results grouped by execution flow | ✓ Dense vector (Qdrant) + BM25 sparse; fused via Reciprocal Rank Fusion; Ollama embeddings (local); per-branch separate collections |
| Token-budgeted retrieval | ✓ get_ranked_context (BM25 + PageRank strategies) + get_context_bundle budget params | ✗ | ✗ | ~ Map-based (not retrieval) | ✗ | ✗ | ✗ | ~ Pre-loading (not retrieval) | ✗ | ✗ Graph returns blast-radius context set; no token-budget parameter on retrieval | ✗ | ✗ No token-budget parameter on retrieval | ✗ No token-budget parameter on retrieval | ✗ No token-budget parameter on retrieval |
| Works alongside the others | ✓ Complements all of them | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
### Complementary Tools — different problems, same ecosystem
| | jCodeMunch + jDocMunch | RTK | lean-ctx | Context Mode | OpenViking | ClawMem | mem0 | LanceDB | QMD | Obsidian | chonkify | Aegis | Caliber | Citadel | codesight | repowise |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Token reduction on code exploration | ✓ ~95% | ~ N/A (different problem) | ~ Entropy filtering, signature mode, and aggressive AST stripping reduce read tokens; no symbol index — code exploration still requires file reads | ~ BM25 text search over intercepted output; no structured code retrieval | ✗ Agent memory system; no code exploration tools | ✗ Agent memory system; not designed for code exploration | ✗ Memory & personalization layer; no code navigation | ✗ Vector database infrastructure; no code-specific tooling | ✗ Doc/notes search only; no code navigation or symbol extraction | ✗ Note-taking app; no code navigation or symbol extraction | ✗ Document compression library; no code exploration tools | ✗ Architecture governance layer; no code exploration or symbol extraction | ✗ Config management layer; no code exploration or symbol extraction | ✗ Orchestration harness; no code exploration or symbol extraction | ~ Architecture-level scan (routes, schemas, middleware chains, import graphs) — not symbol-level retrieval; no token benchmarks published | ~ LLM-generated wiki articles answer high-level questions without file reads; no on-demand symbol retrieval or published token benchmarks |
| Token reduction on terminal output | ~ Not the focus | ✓ ~89 % avg | ✓ 60–95% via shell hook (90+ patterns, 34 command categories) | ✓ ~98% on shell/log/web output (their primary feature) | ✗ Not the focus | ✗ Not the focus; reduces session bloat via decay & dedup | ✗ Not the focus | ✗ Not the focus | ✗ Not the focus | ✗ Not the focus | ✗ Not the focus | ✗ Not the focus | ✗ Not the focus | ✗ Not the focus | ✗ Not the focus | ✗ Not the focus | ✗ Not the focus |
| Agent memory / cross-session continuity | ✗ Not the focus | ✗ | ~ ctx_session + ctx_knowledge provide cross-session task/decision persistence (CCP protocol) | ~ Session state snapshot via PreCompact hook | ✓ L0/L1/L2 tiered memory; skill library; auto session compression | ✓ Hybrid search vault; typed decay; causal links; cross-session handoffs | ✓ Multi-level adaptive memory (user / session / agent state) | ✗ Storage primitive; no memory semantics | ✗ Knowledge base retrieval, not session memory | ~ Vault functions as persistent knowledge store; no agent memory API | ✗ Not the focus | ~ Observation layer learns from agent edits and PR merges over time; not traditional session memory | ✓ Session learning hooks capture corrections, gotchas, and patterns into CALIBER_LEARNINGS.md | ✓ Campaign persistence — phases, decisions, and continuation state survive across sessions | ✗ Per-session in-memory scan; no cross-session persistence | ~ Wiki articles persist across sessions; no agent memory API |
| Requires pre-indexing | ✓ One-time, incremental | ✓ None needed | ✓ None needed; compression is stateless per-call; session knowledge accumulates automatically | ~ No upfront step; auto-indexes tool output on flow-through via hooks | ~ LLM-driven; organized on first ingest, updated as agent works | ~ No upfront step; memory captured automatically via hooks | ~ No upfront step; memories accumulate as the agent interacts | ~ Vectors must be pre-computed externally and loaded | ~ One-time embed step; re-run after adding new docs | ✗ No indexing API; files are created and read via the GUI or filesystem | ~ Embedding pass required per compression call; local model ~419 MB or cloud API | ~ Knowledge base must be manually populated via aegis_import_doc; no auto-scan | ~ One-time caliber init scan; re-run caliber refresh as codebase evolves; auto-refresh hooks available | ~ No indexing; /do setup scaffolds per-project config on first run | ~ One-shot scan per session (~2s startup); no persistent index between sessions | ~ One-time LLM-assisted wiki generation; re-run repowise refresh as codebase evolves |
| Works with AI agents (MCP) | ✓ Native MCP server | ~ Hook-based, not MCP | ✓ 24 MCP tools; lean-ctx init --agent claude-code one-command setup | ✓ Native MCP server + PreToolUse/PostToolUse/PreCompact/SessionStart hooks | ~ Python SDK + agent framework; MCP integration not documented | ✓ 28 MCP tools + Claude Code hooks + native OpenClaw plugin | ~ Python + TypeScript SDK; LangGraph & CrewAI integrations; no native MCP server | ~ REST API + Python/TS/Rust SDKs; LangChain & LlamaIndex integrations; no native MCP server | ✓ Native MCP server (query, get, multi_get, status) | ~ Community MCP plugins available; no official MCP server from Obsidian | ✗ No MCP server; standalone library and CLI only | ✓ Native MCP server; dual-surface (agent read-only + admin approval-gated) | ~ CLI tool, not an MCP server; auto-discovers and configures MCP servers for your project | ~ Claude Code plugin, not an MCP server; orchestrates agents and hooks within Claude Code | ✓ Native MCP server; zero-install via npx codesight | ✓ Native MCP server; pip install repowise |
| Runs fully offline / local | ✓ Local index, no backend | ✓ | ✓ Single Rust binary; zero dependencies; no network calls | ✓ Local SQLite index; no network calls | ✗ Requires external LLM provider; network required | ~ Fully local but requires 4–11 GB VRAM; WSL2 on Windows | ✗ Self-hosted requires vector DB + PostgreSQL + LLM API keys | ✓ Embedded library; no external services required | ~ Local GGUF models via node-llama-cpp; VRAM required for semantic reranking | ✓ Core app fully local; Sync is optional paid cloud add-on | ~ Local SentenceTransformers supported; requires ~419 MB model download + VRAM | ✓ Fully local SQLite; optional SLM (llama.cpp) runs locally; no external services | ~ Scoring is fully local; generation requires your LLM provider (Claude Code seat, Cursor seat, or API key) | ✓ Fully local Node.js plugin; no external services or API keys required beyond Claude Code itself | ✓ Fully local TypeScript; zero dependencies; no network calls at runtime | ~ Local SQLite + LanceDB; wiki generation requires an LLM API key (Anthropic, OpenAI, or local Ollama) | |
| Commercial use permitted | ✓ Paid license available | ✓ MIT | ✓ MIT | ~ Internal & commercial use OK; SaaS/managed service prohibited (ELv2) | ✓ Apache 2.0 | ✓ MIT | ~ Apache 2.0 self-hosted (free); hosted platform = paid (pricing undisclosed) | ✓ Apache 2.0 (OSS free; cloud/enterprise paid) | ✓ MIT | ✓ Core app free including commercial; commercial license $50/user/yr (voluntary) | ✗ Evaluation-only; commercial use requires paid license from author | ✓ ISC license — permissive, commercial use permitted | ✓ MIT | ✓ MIT | ✓ MIT | ~ AGPL-3.0 — commercial use permitted, but any hosted derivative must be open-sourced | |
| License | Free non-commercial; paid commercial | MIT (free); $15/dev/mo cloud | MIT (free) | Elastic License 2.0 (ELv2) | Apache 2.0 | MIT | Apache 2.0 (self-hosted free); hosted platform paid | Apache 2.0 (OSS free); cloud/enterprise paid | MIT | Proprietary freeware; Sync $4/mo; Publish $8/mo; Commercial license $50/user/yr (optional) | Proprietary (evaluation-only); commercial license contact: th@chonkydb.com | ISC (open source, permissive) | MIT | MIT | MIT | AGPL-3.0 | |
| Works alongside jCodeMunch | ✓ | ✓ Covers terminal output; jCodeMunch covers code reads | ✓ Compresses file reads + terminal output; jCodeMunch adds the semantic indexing lean-ctx lacks | ✓ Covers session output bloat; jCodeMunch covers code reads | ✓ Agent memory layer; jCodeMunch is code navigation layer | ✓ Agent memory layer; jCodeMunch is code navigation layer | ✓ Agent memory layer; jCodeMunch is code navigation layer | ✓ Vector search infrastructure; jCodeMunch is structured code navigation | ✓ Doc/notes knowledge search; jCodeMunch + jDocMunch handle code and structured docs | ✓ Obsidian vault .md files are directly indexable by jDocMunch for agent retrieval | ✓ PDF compression upstream of jDocMunch; fills jDocMunch's PDF gap | ✓ Architecture governance layer; jCodeMunch is live code structure layer — natural pairing | ✓ Caliber configures jCodeMunch as an MCP server; jCodeMunch is its recommended code exploration piece | ✓ Citadel orchestrates the workflow; jCodeMunch powers the code reads — /review and /refactor skills get dramatically cheaper | ✓ Architectural orientation layer; jCodeMunch provides the symbol-level retrieval codesight lacks — orient with codesight, then drill with jCodeMunch | ✓ Wiki Q&A and doc generation layer; jCodeMunch delivers precise live symbol retrieval alongside the static wiki |
## Raw file tools — Read, Grep, Glob, Bash
Every AI coding environment ships with tools to read files and search text. They work. They just cost a lot of tokens — because they return entire files when you only needed one function.
- Read a file → get the entire file (even if you need 10 lines)
- Grep returns lines but no surrounding structure or type info
- No symbol index — agent must re-read files each session
- No import graph — tracing call chains requires many tool calls
- No section-level doc access — doc files read in full
- Token cost scales with codebase size, not query complexity

jCodeMunch replaces those whole-file reads with index lookups:

- search_symbols returns matching symbols with signatures — no file read needed
- get_symbol returns the exact implementation, nothing more
- Index is built once and reused — incremental updates on change
- find_importers and find_references trace the call graph in one call
- jDocMunch delivers section-level doc retrieval across .md, .rst, .ipynb, HTML
- Token cost is flat and tiny regardless of codebase size
Express.js (34 files) — ~58× efficiency | FastAPI (156 files) — ~100× efficiency | Gin (40 files) — ~66× efficiency
Workflow measured: search_symbols (top 5) + get_symbol ×3 vs. concatenating all source files.
Full methodology and raw data: benchmarks/
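The arithmetic behind those ratios is easy to reproduce in spirit. A minimal sketch, assuming a local checkout and tiktoken installed — the repo path and the placeholder search/symbol payloads below are hypothetical stand-ins, not the published harness:

```python
# Rough recreation of the benchmark arithmetic: tokens to concatenate every
# source file vs. tokens for one search plus three symbol bodies.
from pathlib import Path
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def tokens(text: str) -> int:
    return len(enc.encode(text))

# Baseline: the agent reads (or concatenates) every source file.
repo = Path("express")  # hypothetical local checkout
baseline = sum(tokens(p.read_text(errors="ignore")) for p in repo.rglob("*.js"))

# Symbol-level workflow: one search result list plus three function bodies.
search_hits = "authenticate — lib/auth.js:42\nverifyToken — lib/auth.js:88\n..."
bodies = ["function authenticate(req, res, next) { /* ~40 lines */ }"] * 3
workflow = tokens(search_hits) + sum(tokens(b) for b in bodies)

print(f"baseline={baseline:,} workflow={workflow:,} ratio={baseline / workflow:.0f}x")
```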
## mcp-server-filesystem
Anthropic ships an official mcp-server-filesystem that exposes file system operations — read, write, list, search — as MCP tools. It is the "default" MCP option for many Claude Desktop users.
- read_file returns the full file content — same token cost as native Read
- search_files does regex over raw text — no structural awareness
- No symbol index, no AST parsing, no language awareness
- write_file and edit_file are available — it is a read/write tool
- No import graph, no reference tracing, no doc section search
- Zero setup — ships with Claude Desktop, no indexing step

How jCodeMunch compares:

- get_symbol returns the exact function body — not the whole file
- search_symbols understands types, signatures, and language constructs
- AST-based parsing for 70+ languages — finds things grep cannot
- Read-only by design — predictable, safe for agent use
- Import graph and reference tracing built into the index
- Requires a one-time `index_folder` or `index_repo` call
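To make the contrast concrete, here is a minimal MCP stdio client asking for one symbol instead of one file. The tool name follows the list above; the launch command and the `"name"` argument key are assumptions, so check your install's actual schema:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Assumed launch command; configure whatever your install documents.
    server = StdioServerParameters(command="jcodemunch-mcp")
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # One function body comes back — not the whole file that
            # read_file (or the native Read tool) would return.
            result = await session.call_tool("get_symbol", {"name": "authenticate"})
            print(result.content)

asyncio.run(main())
```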
## RepoMapper
RepoMapper is an open-source Python MCP server that generates a token-budgeted
"map" of a repository by applying PageRank to a dependency graph built with
Tree-sitter — the same algorithm Aider uses internally. Given a token budget
(e.g. --map-tokens 2048), it selects the most important files and
surface-level signatures to fill that window.
- PageRank over a dependency graph identifies the most-referenced files
- Binary search fills the token budget to within 15% of the specified limit
- Tree-sitter extracts signatures — surfaces class/function names in the map
- Prioritises "chat files" (active) then "mentioned files" then everything else
- Single `repo_map` tool — simple API, low learning curve
- MIT-licensed, free for all uses; based on Aider's proven RepoMap algorithm

How jCodeMunch compares:
- search_symbols finds a function by name — no map to scan, no signatures to skim
- get_symbol returns the complete implementation body, not just the signature
- Index is built once; subsequent queries are O(1) and sub-millisecond
- find_importers and find_references trace call graphs across the whole repo
- get_symbol_importance ranks symbols by full PageRank or in-degree — on demand, without generating a static map
- get_ranked_context assembles a token-budgeted context bundle ranked by BM25 + PageRank combined score
- jDocMunch handles documentation — section search across .md, .rst, .ipynb, HTML
- 67 tools covering outlines, content, search, context bundles, import graphs, and dead-code detection
search_symbols("authenticate")) and gets a precise answer.
Summarisers are great for "What matters here?" — retrievers are great for
"Where is this, exactly?" Both questions arise in a real coding session;
they are not in competition.
jCodeMunch has since added ranking of its own (`get_symbol_importance`, `get_ranked_context`, and the `sort_by="centrality"` param on `search_symbols`), so the algorithmic distinction has narrowed. The interface difference remains: RepoMapper produces a pre-generated map; jCodeMunch answers on-demand queries.
Once the agent knows what it needs, a single `search_symbols` call costs a fraction of any map-based approach. RepoMapper shines at the beginning of a session when the agent needs a ranked overview before it knows what to ask for. The two tools are complementary: RepoMapper to orient, jCodeMunch to navigate.
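The ranking idea RepoMapper borrows from Aider fits in a few lines. A conceptual sketch — greedy budget fill instead of RepoMapper's binary search, with made-up edges and a flat per-file token cost:

```python
import networkx as nx

# Import edges: app.py imports auth.py, etc. (hypothetical repo)
g = nx.DiGraph([
    ("app.py", "auth.py"),
    ("app.py", "db.py"),
    ("auth.py", "db.py"),
    ("tests.py", "auth.py"),
])
rank = nx.pagerank(g)  # heavily-imported files score highest

budget, selected = 2048, []
for f in sorted(rank, key=rank.get, reverse=True):
    cost = 300  # pretend each file's signature block costs ~300 tokens
    if budget >= cost:
        selected.append(f)
        budget -= cost

print(selected)  # the files whose signatures make it into the map
```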
## Pharaoh
Pharaoh is a two-layer system: an open-source AST parser (pharaoh-parser,
MIT-licensed) that extracts structural metadata from TypeScript and Python using
tree-sitter, and a hosted MCP server (pharaoh-mcp) that loads that
metadata into a Neo4j knowledge graph and exposes 13 architectural tools.
The central design principle: "no source code is ever captured" —
only signatures, hashes, and graph edges.
- Neo4j knowledge graph enables Blast Radius, Reachability, and Dependency Path queries
- Regression Risk Scoring and Dead Code Detection on Pro tier ($27/mo) — jCodeMunch now ships free dead-code detection, narrowing this advantage
- Parser is fully open source (MIT) — "the exact code that runs in production"
- Security-first: no source code captured; constants with secret-like names are skipped
- Auto-updates via GitHub webhook on every push — no manual re-indexing
- TypeScript decorator extraction for DI containers and controller analysis
How jCodeMunch compares:

- 70+ languages vs. Pharaoh's TypeScript and Python only
- Runs entirely offline — local index, no OAuth, no hosted backend required
- get_symbol returns the full function body; Pharaoh intentionally omits source code
- Published benchmarks: 58–100× token efficiency on real production repos
- find_dead_code detects unreachable files and symbols with confidence scoring — free, no Pro tier required
- get_blast_radius (depth-scored) and get_changed_symbols close the gap on impact analysis
- jDocMunch covers the documentation layer — Pharaoh has no equivalent
- v1.44.3 with 2,400+ tests; Pharaoh-Parser launched March 2026 (early stage)
jCodeMunch offers a comparable safeguard: a `trusted_folders` allowlist that restricts which directories the indexer may read — suitable for multi-tenant or data-residency environments.
Pharaoh's MCP tools live in `pharaoh-mcp`, which connects to a hosted Neo4j instance at mcp.pharaoh.so via OAuth. There is no local or self-hosted option documented. For teams with air-gap or data-residency requirements, the open-source parser alone is available — but the MCP tools that make it useful are cloud-only. jCodeMunch runs entirely on your machine with no external calls except optional AI summaries.
On the feature gap: jCodeMunch now ships blast-radius analysis (`get_blast_radius`), dead-code detection (`find_dead_code`), and changed-symbol mapping (`get_changed_symbols`) without a paid tier, closing much of the gap. For teams that need broad language support, offline operation, full source retrieval, or documentation search, jCodeMunch is the stronger choice. Note that Pharaoh is very early stage (launched March 2026); the comparison may look different in six months.
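Pharaoh's "no source code captured" principle is easy to picture: per function, only a signature and a content hash leave the machine. A toy illustration using Python's own ast module — this mirrors the idea only, not Pharaoh's parser or graph schema:

```python
import ast
import hashlib

source = '''
def authenticate(user: str, token: str) -> bool:
    return token == "secret"
'''

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        sig = f"{node.name}({', '.join(a.arg for a in node.args.args)})"
        body = ast.get_source_segment(source, node)
        digest = hashlib.sha256(body.encode()).hexdigest()[:12]
        # Only these two fields would ship to the graph — never `body`.
        print({"signature": sig, "hash": digest})
```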
## GitNexus
GitNexus bills itself as the "nervous system for agent context." It builds a full knowledge graph from your codebase — call edges, inheritance chains, execution flows, functional clusters via Leiden community detection — stored in a local LadybugDB instance and queryable via 7 MCP tools including raw Cypher. A browser-based WebAssembly version requires zero installation. As of early 2026 it has over 15,000 GitHub stars and an active release cadence.
- Full knowledge graph: call edges, inheritance, type references, execution flows
- `impact` tool gives blast radius with depth grouping and confidence scores
- `detect_changes` maps a git diff to affected execution flows
- `rename` plans coordinated multi-file refactoring safely
- Hybrid search: BM25 + semantic embeddings + reciprocal rank fusion
- Browser WASM UI — full analysis without installing anything
- PostToolUse hook auto-reindexes after every git commit in Claude Code
How jCodeMunch compares:

- 70+ languages vs. GitNexus's 12 — covers Erlang, Fortran, SQL, Assembly, XML, and more
- Commercial use permitted — GitNexus's PolyForm NC license prohibits it
- Published token efficiency benchmarks: 58–100× on real production repos
- Opt-in hybrid BM25 + vector search via `search_symbols(semantic=true)` — local sentence-transformers, Gemini, or OpenAI; zero overhead when disabled
- `get_changed_symbols` maps any git diff to added/removed/modified/renamed symbols with optional blast-radius depth
- Simpler architecture — no graph database, no native binary crashes, no ONNX runtime
- jDocMunch covers documentation — GitNexus has no equivalent for .md/.rst/.ipynb
- Stable v1.44.3 with 2,400+ tests; no open SIGSEGV or stale-data issues
GitNexus's `rename` tool (coordinated multi-file refactoring) has no direct equivalent in jCodeMunch. GitNexus's hybrid search is native and always-on; jCodeMunch's is opt-in and requires a one-time `embed_repo` warm-up step. The browser WASM option is also unique — useful for exploring a repo before committing to installing anything. These are real strengths worth acknowledging. Note that `get_changed_symbols` and `get_blast_radius` now cover the "what breaks?" and "map this diff" workflows jCodeMunch previously lacked.
## Serena
Serena is an open-source coding agent toolkit that exposes IDE-level semantic code tools to LLMs via MCP and OpenAPI. Rather than static AST parsing, it spins up real language servers (Pyright, rust-analyzer, typescript-language-server, gopls, etc.) and routes tool calls through them — giving it type-aware cross-file reference resolution, rename-across-codebase, and symbol-level code editing. It also ships memory management, onboarding workflows, and shell execution as first-class tools. With over 21,000 GitHub stars it has attracted strong community attention.
- Type-aware cross-file reference tracking via real language servers (Pyright, rust-analyzer, gopls, etc.)
- `rename_symbol` propagates renames across the entire codebase correctly
- `replace_symbol_body`, `insert_after_symbol` — LLM-driven IDE refactoring
- Memory system: project-scoped and global markdown memory files
- Onboarding, task adherence, and conversation preparation workflow tools
- `execute_shell_command` — shell access without leaving the agent
- Compatible with Claude Code, Cursor, Cline, Roo Code, Codex, Gemini CLI, JetBrains IDEs
How jCodeMunch compares:

- Zero external binaries — tree-sitter grammars bundled; works instantly in CI, containers, unfamiliar machines
- Published token efficiency benchmarks: 58–100× on real production repos (Express, FastAPI, Gin)
- Runs on Python ≥3.10; Serena requires exactly Python 3.11 (pins <3.12)
- No per-language install burden — 70+ languages work out of the box
- Lightweight: no background language server processes, no tmpfs fill, no RAM pressure
- Fast startup — on-demand tree-sitter parsing, no LSP indexing wait
- jDocMunch covers documentation — Serena has no equivalent for .md/.rst/.ipynb search
- Stable v1.44.3 with 2,400+ tests; Serena is v0.1.4 (pre-stable)
Serena's per-language install burden is the flip side of the LSP approach: Rust needs rustup; PHP needs Phpactor; Kotlin's language server spawns zombie processes; Julia's has documented initialization failures; PHP reference finding breaks on Windows. The LSP approach is only as reliable as the language server ecosystem. In CI, containerized, or ephemeral environments this operational cost is significant. jCodeMunch requires no external binaries — tree-sitter grammars are bundled and indexing is self-contained.
Serena's symbol-level editing (`replace_symbol_body`, codebase-wide `rename_symbol`) and the built-in memory + onboarding system have no direct equivalent in jCodeMunch. For long-running interactive sessions on a single configured codebase, Serena's depth is a genuine advantage.
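The zero-daemon alternative is what tree-sitter indexers do: parse one file on demand and walk the syntax tree, with no server process left running. A minimal sketch using recent py-tree-sitter bindings and the Python grammar wheel — grammar package names vary per language:

```python
import tree_sitter_python
from tree_sitter import Language, Parser

# No language server: load the bundled grammar and parse on demand.
parser = Parser(Language(tree_sitter_python.language()))
tree = parser.parse(b"def authenticate(user):\n    return True\n")

for node in tree.root_node.children:
    if node.type == "function_definition":
        name = node.child_by_field_name("name")
        print(name.text.decode())  # -> authenticate
```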
## Dual-Graph (a.k.a. GrapeRoot)
Dual-Graph (v3.9.60) is a local CLI context engine that makes AI coding assistants
cheaper and faster by pre-loading the right files into every prompt. It now supports
six AI assistants: Claude Code, Codex CLI, Gemini CLI, Cursor, OpenCode, and GitHub Copilot.
It builds two data structures: an info_graph.json (a semantic graph of files,
symbols, and import relationships) and a chat_action_graph.json (session memory
recording reads, edits, and queries). Before each turn the graph ranks relevant files and packs
them into the prompt automatically — no extra tool calls required. A persistent
context-store.json carries decisions, tasks, and facts across sessions. The tool
is activated with dgc . (Claude Code), dg . (Codex CLI), or
graperoot . --cursor/--gemini/--opencode/--copilot and runs entirely offline.
Launcher scripts are Apache 2.0; the graph engine (graperoot) is proprietary,
distributed via PyPI.
- Semantic graph extracts files, symbols, and import relationships at project scan time; 11 languages (TS, JS, Python, Go, Swift, Rust, Java, Kotlin, C#, Ruby, PHP)
- Session memory (`chat_action_graph.json`) tracks reads, edits, and queries — context compounds across turns
- Auto pre-loads relevant files before the model sees the prompt — no tool calls needed for basic navigation
- Persistent `context-store.json`: decisions, tasks, and facts carried across sessions
- `CONTEXT.md` support for free-form session notes
- MCP tools for deeper exploration: `graph_read`, `graph_retrieve`, `graph_neighbors`
- Benchmarked: 30–45% cheaper, 16/20 prompts win on cost, quality equal or better at all complexity levels
- Supports 6 AI assistants: Claude Code, Codex CLI, Gemini CLI, Cursor, OpenCode, GitHub Copilot
- Token tracking dashboard (`localhost:8899`); configurable via env vars (`DG_HARD_MAX_READ_CHARS`, etc.)
- Fully local; all data in `<project>/.dual-graph/` (gitignored automatically)
- Launcher scripts: Apache 2.0; graph engine: proprietary (PyPI-distributed)
How jCodeMunch compares:

- Tree-sitter AST parsing — retrieves individual functions and classes, not file blocks
- `search_symbols` + `get_symbol_source`: find any function by name and return its full body in one call
- `find_importers` / `find_references` / `get_blast_radius`: trace call graphs and impact chains across the entire repo
- Published benchmarks: 58–100× token reduction on Express, FastAPI, and Gin repos
- 45+ MCP tools; tool profiles (core/standard/full) + `compact_schemas` to control context budget
- `plan_refactoring` — edit-ready instructions for rename, move, extract, signature changes
- `audit_agent_config` — scans CLAUDE.md/.cursorrules for stale references and token waste
- jDocMunch covers documentation — .md, .rst, .ipynb, and HTML section search
- Zero extra dependencies: tree-sitter grammars bundled, no Node.js required; optional native Rust backend (jmunch-core)
- Paid commercial license; v1.44.3 with 2,400+ tests; 238 releases
search_symbols("authenticate")) and gets the exact symbol body back.
Pre-loading works well when the right files are predictable; retrieval wins when the codebase
is large and the agent knows exactly what it needs. The two strategies are genuinely
complementary — Dual-Graph to orient, jCodeMunch to pinpoint.
Dual-Graph's persistent `context-store.json` — carrying decisions, tasks, and facts between conversations — is a feature jCodeMunch does not offer. The automatic pre-loading also means the model starts each turn with relevant code already in context, eliminating the need for an explicit retrieval call in straightforward sessions. The broad AI assistant support (6 tools including Cursor, Gemini CLI, OpenCode, and Copilot) and the built-in token tracking dashboard are practical workflow additions. For users who want session continuity out of the box across multiple AI assistants, this is a meaningful workflow advantage.
When the agent needs one specific function, a single `search_symbols` call returns exactly that body without injecting anything else — and structural queries like `get_blast_radius`, `find_dead_code`, and `plan_refactoring` have no Dual-Graph equivalent. The licensing split (Apache 2.0 launchers, proprietary engine) is clearer than the previous unlicensed state, but the proprietary engine still limits forkability and auditability.
Running both is practical: Dual-Graph to pre-load context and persist session memory,
jCodeMunch to answer precise symbol and cross-reference queries that the graph pre-loader
would miss.
Note: GrapeRoot's benchmark scores jCodeMunch on code-generation tasks
it was never designed to perform — those numbers do not reflect retrieval quality.
## vexp
vexp is a local-first code context engine for AI coding agents — it parses codebases into ASTs, builds dependency graphs, and serves only the relevant code to the agent's context window. Positioned as a privacy-first, zero-network-call alternative to cloud code intelligence, vexp claims 65–70% token reduction and supports 30 programming languages. It ships a VS Code extension, a standalone npm CLI, and auto-generates MCP configuration files for 12 AI coding agents. Paid SaaS pricing (Starter free with hard caps / Pro $19/mo / Team $29/user/mo) puts it in a different economic category from open-source alternatives.
- Claims 65–70% token reduction; no published benchmark methodology or raw data
- LSP bridge for type-resolved call graphs — deeper semantic reference resolution when language servers are installed
- Cross-repository dependency tracking — follows imports across repo boundaries
- Session memory: persists agent observations and context across coding sessions
- Intent detection adapts search strategy by task type (debug, refactor, modify)
- Skeleton generation strips function bodies, retaining signatures (claims 70–90% body reduction)
- Starter plan: 2,000 node limit, 8 calls/day — Pro ($19/mo) required for real workloads
- Proprietary SaaS — not open source; no published test suite or reproducible benchmarks
- No documentation search equivalent
- No dead code detection, no token-budgeted retrieval, no PageRank/centrality ranking
How jCodeMunch compares:

- Benchmarked ~95% token reduction: 58–100× on Express.js, FastAPI, Gin — tiktoken-measured with published methodology and raw data
- 70+ languages (tree-sitter bundled, zero binary installs) — YAML/Ansible, Razor/Blazor, SQL/dbt/SQLMesh, Erlang, Fortran, and more
- `find_dead_code` — free; confidence-scored with cascading dead-code chains and entry-point auto-detection
- `get_ranked_context` + `get_context_bundle` — true token-budgeted retrieval with BM25, PageRank, and hybrid strategies
- `get_symbol_importance` — PageRank / in-degree centrality across the full import graph
- Opt-in hybrid BM25 + vector search (`embed_repo`); 3 providers: sentence-transformers (local), Gemini (task-aware), OpenAI
- AI summaries via 6 providers: Anthropic, Gemini, OpenAI-compat, MiniMax, GLM-5, OpenRouter (free model); circuit-breaker protection
- `get_changed_symbols` — maps a git diff to affected symbols + downstream blast radius in one call
- `get_blast_radius` — depth-scored risk scoring; per-hop `impact_by_depth` breakdown; `has_test_reach` per confirmed file
- `find_importers` with `has_importers` flag; `check_references`; `get_dependency_graph`; TypeScript/SvelteKit path alias resolution; dynamic `import()` detection; cross-repo: `cross_repo=true` on find_importers / get_blast_radius / get_dependency_graph + dedicated `get_cross_repo_map`
- Fuzzy symbol search (trigram Jaccard + Levenshtein) — catches typos and partial names without extra config
- `index_file` — surgical single-file reindex; Claude Code PostToolUse hook triggers it automatically after every edit
- `watch-claude` — auto-discovers Claude Code worktrees via hook events; `freshness_mode: strict` blocks on stale index
- jDocMunch: section-level search across .md, .rst, .ipynb, HTML — no vexp equivalent
- `suggest_queries`, `get_related_symbols`, `get_class_hierarchy`, `get_symbol_diff`, `search_text`, `search_columns` (dbt/SQLMesh)
- `get_untested_symbols` — import-graph test reachability; finds functions with no evidence of being exercised by any test file
- 5 built-in MCP prompt templates: workflow, explore, assess, triage, trace
- Open source; 2,400+ tests; supply-chain integrity check at startup; `trusted_folders` allowlist
Where vexp wins: the LSP bridge for type-resolved call graphs and the session memory have no direct jCodeMunch equivalent, and jCodeMunch only recently matched its cross-repository tracking (`get_cross_repo_map`, `cross_repo=true` on find_importers / get_blast_radius / get_dependency_graph). Intent detection (search strategy adapts by task type) is a novel UX idea without a direct equivalent. The VS Code extension lowers setup friction compared to a raw MCP server configuration. If these specific capabilities are blockers, vexp is worth evaluating — at its Pro pricing.
Nothing in vexp's published feature set matches jCodeMunch's analysis depth (`find_dead_code`, `get_ranked_context`, `get_changed_symbols`, `get_blast_radius` depth scoring, hybrid semantic search, 6-provider AI summaries), or its open-source economics. vexp's Starter plan — 2,000 node cap and 8 calls/day — is unusable for real codebases without a $19/mo subscription. jCodeMunch has no equivalent caps. For teams that need LSP-level semantic analysis on top of jCodeMunch, pairing with Serena is a better path than switching to vexp.
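vexp's skeleton generation — keep signatures, drop bodies — is a transform you can sketch with Python's ast module alone. This mirrors the idea it describes (70–90% body reduction), not vexp's implementation; the file path is a placeholder:

```python
import ast

source = open("auth.py").read()  # any Python file
tree = ast.parse(source)

for node in ast.walk(tree):
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
        node.body = [ast.Expr(ast.Constant(...))]  # body becomes `...`

print(ast.unparse(tree))  # signatures survive; bodies are gone
```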
## code-review-graph
code-review-graph is an open-source MCP server that builds a persistent SQLite knowledge graph of your
codebase using Tree-sitter, tracks changes incrementally, and surfaces blast-radius context to AI coding
assistants at review time. It auto-configures Claude Code, Cursor, Windsurf, Zed, Continue, and OpenCode
on a single install command, and updates the graph automatically on every file save and git
commit. With 4,300+ GitHub stars, it is one of the highest-visibility tools in this space.
Its benchmarks are published and reproducible: 8.2× average token reduction on commit-scoped reviews
across 6 real repositories — though performance varies significantly by change type, dropping below 1×
on small single-file edits in compact packages like Express.
- 8.2× avg token reduction on commit-scoped reviews; 49× claimed on large Next.js monorepo (27,732 → ~15 files)
- Blast-radius with 100% recall — never misses an impacted file; F1 0.54 / precision ~0.38 (deliberately conservative over-prediction)
- 0.7× on small single-file Express changes — graph context exceeds raw file; acknowledged in their published benchmarks
- 5 built-in MCP prompt templates: review, architecture, debug, onboard, pre-merge (jCodeMunch also ships 5: workflow, explore, assess, triage, trace)
- D3.js interactive force-directed graph visualisation with edge-type toggles
- Community detection via Leiden algorithm + auto-generated Markdown wiki
- Architecture overview map with coupling warnings
- Test coverage gap detection embedded in blast-radius analysis (jCodeMunch now has `get_untested_symbols` + `has_test_reach` in blast radius)
- Incremental re-index in <2s on 2,900-file repos via SHA-256 diff
- Multi-repo registry — search across registered repos
- 19 languages; no YAML/Ansible, Razor/Blazor, SQL/dbt/SQLMesh, Erlang, or Fortran
- No documentation search (no .md/.rst/.ipynb section search equivalent)
- No named per-symbol retrieval — context is always blast-radius sets, not individual symbols
- No token-budget parameter on retrieval
- MRR 0.35 on keyword search; flow detection 33% recall outside Python repos (acknowledged)
How jCodeMunch compares:

- 58–100× token efficiency on full-codebase exploration tasks (Express, FastAPI, Gin — tiktoken-measured, published raw data and harness)
- 70+ languages — YAML/Ansible, Razor/Blazor, SQL/dbt/SQLMesh, Erlang, Fortran, and more
- Named per-symbol retrieval: `get_symbol_source`, `get_symbol_diff`, `get_context_bundle` — read exactly what you need, nothing more
- `get_ranked_context` — token-budgeted retrieval with BM25, PageRank, and hybrid strategies
- `get_blast_radius` — depth-scored risk with per-hop `impact_by_depth` breakdown + `has_test_reach` per confirmed file; `get_changed_symbols` maps a git diff to affected symbols in one call
- `find_dead_code` — free; confidence-scored; cascading dead-code chains; entry-point auto-detection; `get_untested_symbols` — import-graph test reachability analysis
- Opt-in hybrid BM25 + vector search (`embed_repo`); 3 providers: sentence-transformers, Gemini (task-aware), OpenAI
- AI summaries via 6 providers with circuit-breaker protection; `suggest_queries` for unfamiliar repos
- `audit_agent_config` — flags stale symbol references in CLAUDE.md/.cursorrules to prevent token waste
- jDocMunch: section-level search across .md, .rst, .ipynb, HTML — no code-review-graph equivalent
- jDataMunch: database schema exploration, schema drift, data hotspots — no code-review-graph equivalent
- TypeScript/SvelteKit path alias resolution; dynamic `import()` detection
- `watch-claude` auto-discovers Claude Code worktrees; `freshness_mode: strict` blocks on stale index
- 5 built-in MCP prompt templates: workflow, explore, assess, triage, trace — guided workflows for onboarding, review, quality triage, and debugging
- 2,400+ tests; supply-chain integrity check at startup; `trusted_folders` allowlist
jCodeMunch additionally brings `audit_agent_config`, jDocMunch for documentation search, and jDataMunch for data-layer exploration — none of which code-review-graph covers. The two tools are complementary rather than strictly substitutable: code-review-graph's visualisation layer pairs naturally with jCodeMunch's retrieval layer for teams that want both.
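Blast radius in both tools boils down to a breadth-first walk over the reversed import graph, tagging each file with its hop depth. A conceptual sketch with placeholder edges — the real analyses add symbol-level edges, test reach, and risk scoring:

```python
from collections import deque

imports = {  # file -> files it imports (hypothetical repo)
    "app.py": ["auth.py", "db.py"],
    "auth.py": ["db.py"],
    "tests.py": ["auth.py"],
}

# Reverse the edges: who imports me?
importers: dict[str, list[str]] = {}
for src, deps in imports.items():
    for dep in deps:
        importers.setdefault(dep, []).append(src)

def blast_radius(changed: str) -> dict[str, int]:
    depth, queue = {changed: 0}, deque([changed])
    while queue:
        f = queue.popleft()
        for up in importers.get(f, []):
            if up not in depth:
                depth[up] = depth[f] + 1
                queue.append(up)
    return depth

print(blast_radius("db.py"))
# {'db.py': 0, 'app.py': 1, 'auth.py': 1, 'tests.py': 2}
```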
## cymbal
cymbal is a Go CLI for code symbol navigation. It parses your repo with tree-sitter,
stores symbols and imports in a local SQLite/FTS5 database, and answers queries in 9–27ms.
It ships a CLAUDE.md policy block that instructs agents to call
cymbal instead of Read, Grep, Glob, or Bash — the same agent-integration
approach jCodeMunch pioneered. The core commands — search,
show, refs, importers, impact,
context, outline — map closely to jCodeMunch's
search_symbols, get_symbol_source, find_references,
find_importers, get_blast_radius, get_context_bundle,
and get_file_outline. The meaningful difference is delivery: cymbal is a
CLI subprocess; jCodeMunch is a native MCP server. For teams already using Python,
that's a footnote. For teams on Go stacks, it's a real advantage.
- tree-sitter AST → SQLite/FTS5 index; 9–27ms query latency
- Named on-demand symbol retrieval: `cymbal show <symbol>`
- Call graph traversal: `cymbal trace` (down) + `cymbal impact` (up, depth cap 5)
- Go binary — no Python runtime; Homebrew / PowerShell / Docker install
- JIT freshness: auto-detects changed files via mtime+size before every query — no watch daemon
- 22 languages; local `.cymbal/index.db`; fully offline
- CLI subprocess — agent shell-outs via Docker or bash; not an MCP server
- FTS5 keyword search only; no vector/embedding layer
- ~10 commands; no doc section search, no dead code detection, no session analytics
How jCodeMunch compares:

- Native MCP server (stdio, SSE, streamable-http) — tools appear directly in Claude, Cursor, Windsurf, Zed, Continue
- 50+ tools covering symbol retrieval, session context, architectural health, data-layer exploration, and doc search
- Opt-in BM25 + vector hybrid search (`embed_repo`) — 3 embedding providers; FTS5 when disabled
- 70+ languages incl. YAML/Ansible, Razor/Blazor, SQL/dbt, Erlang, Fortran
- Published, reproducible benchmarks: 58–100× token reduction (tiktoken-measured, 3 production repos)
- `find_dead_code`, `get_hotspots`, `get_churn_rate`, `audit_agent_config`
- jDocMunch for section-level doc retrieval; jDataMunch for tabular data
- `pip install jcodemunch-mcp` — works in any MCP-compatible client
As a native MCP server, jCodeMunch needs no shell-out: `search_symbols` and `get_symbol_source` appear alongside the agent's built-in tools with full type signatures and structured return values.
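The delivery difference shows up in how an agent harness wires each tool in. A sketch — the `cymbal search` command is documented above, but its output format here is assumed, and that fragile re-parsing step is exactly what a typed MCP tool result avoids:

```python
import subprocess

# CLI route: shell out, get free-form text back, re-parse it on every call.
out = subprocess.run(
    ["cymbal", "search", "authenticate"],
    capture_output=True, text=True,
).stdout
first_hit = out.splitlines()[0] if out else None  # fragile string handling

# MCP route (conceptually): session.call_tool("search_symbols", {...})
# hands back a structured result object the agent consumes directly.
print(first_hit)
```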
On benchmarks: cymbal reports 17–100% fewer tokens compared to ripgrep. jCodeMunch's 58–100× figure uses a different baseline (concatenating all source files) and a different measurement method (tiktoken against 3 production repos with published raw data). The claims are not directly comparable — ripgrep already reduces context vs. full-file reads, so the baselines diverge significantly.
Where cymbal wins: the JIT freshness model and the `cymbal investigate` adaptive command are genuine additions.
For most teams, jCodeMunch's advantages are decisive: it's a native MCP server (no subprocess), covers 70+ languages vs. 22, offers hybrid semantic search, ships 50+ tools including dead code detection and session analytics, and provides tiktoken-measured production benchmarks rather than self-reported comparisons against ripgrep. If Python is already in your stack, jCodeMunch is the stronger choice. If your team is Go-only and subprocess orchestration is acceptable, cymbal is a credible alternative worth evaluating.
## Context+
Context+ (github.com/ForLoopCodes/contextplus) is a TypeScript MCP server that transforms codebases into searchable, hierarchical feature graphs for AI coding assistants. It combines tree-sitter AST parsing (43 languages), embedding-based semantic search (via Ollama or OpenAI-compatible APIs), spectral clustering, and Obsidian-style wikilink hubs. It also includes shadow restore points for undo functionality and a memory graph for RAG-style context retrieval. At 1.8k stars, it has meaningful adoption.
- 43 languages via tree-sitter; AST extraction with embedding-based search
- Spectral clustering groups semantically related files into navigable clusters
- Obsidian-style wikilink hubs connect features to code locations
- `get_blast_radius` and call-site tracing for impact analysis
- Shadow restore points — undo changes without git involvement
- Memory graph with RAG: `upsert_memory_node`, `search_memory_graph`, `retrieve_with_traversal`
- Requires Ollama or OpenAI-compatible API for embeddings — not fully offline
- No token-reduction benchmarks published; "99% accuracy" claim without methodology
- 17 tools total; no doc section search, no dead code detection, no churn/hotspot analysis
How jCodeMunch compares:

- 70+ languages via tree-sitter — including YAML/Ansible, Razor/Blazor, SQL/dbt, Erlang, Fortran, COBOL
- `search_symbols` + `get_symbol_source` return exact implementations, not graph summaries
- Published, reproducible benchmarks: 58–100× token reduction (tiktoken-measured, 3 production repos)
- `find_importers`, `find_references`, `get_blast_radius` with depth-scored risk and `has_test_reach`
- `find_dead_code` with confidence scoring; `get_hotspots`; `get_churn_rate`; `audit_agent_config`
- Opt-in BM25+vector hybrid search with 3 embedding providers — works fully offline with BM25 alone
- `get_ranked_context` assembles token-budgeted context bundles ranked by BM25 + PageRank
- jDocMunch for section-level doc retrieval; jDataMunch for tabular data exploration
- 50+ tools covering symbols, context, architecture health, session analytics, and cross-repo maps
Where Context+ wins: it supports write operations (`propose_commit`) that jCodeMunch intentionally excludes as a read-only tool. The memory graph (`upsert_memory_node`, `create_relation`, `retrieve_with_traversal`) gives agents persistent, cross-session knowledge that survives context compaction — a capability jCodeMunch does not offer. The spectral clustering and wikilink hubs provide a "feature map" view of a codebase that is useful for orientation. The shadow restore points are a creative alternative to git stash for quick undo.
jCodeMunch's advantages are measurable: 58–100× token reduction (published, reproducible), 70+ languages vs. 43, 50+ tools vs. 17, token-budgeted retrieval, dead code detection, architectural health metrics, and a fully offline mode that requires no external embedding API. For teams that want precise, benchmarked code retrieval with the broadest language and tooling coverage, jCodeMunch is the stronger choice. For teams that want graph-based navigation with persistent memory and don't mind an embedding dependency, Context+ is worth evaluating.
## Axon — Knowledge-Graph Code Intelligence
Axon indexes codebases into a KuzuDB knowledge graph with community detection (Leiden algorithm),
execution flow tracing, and hybrid search (BM25 + vector + fuzzy). It also ships an interactive
web dashboard with force-directed graph visualization at localhost:8420.
662 stars, MIT licensed, Python/JS/TS only (3 languages via tree-sitter).
- KuzuDB graph backend with Cypher query console — powerful for ad-hoc exploration
- Leiden community detection auto-discovers architectural clusters
- Execution flow tracing: detects entry points, traces BFS paths from each
- Multi-pass dead code with Protocol conformance and override awareness
- Hybrid search (BM25 + 384-dim vector + Levenshtein) fused via RRF
- Interactive web UI (Sigma.js + WebGL): force-directed graph, health dashboard, Cypher console
- Python, JavaScript, TypeScript only — 3 languages total
- Heavy dependency footprint: kuzu, igraph, leidenalg, fastembed, fastapi, uvicorn
- No token-budgeted retrieval, no doc search, no cross-repo support
- No published token-reduction benchmark or reproducible methodology
How jCodeMunch compares:

- 70+ languages (incl. YAML, Razor, SQL/dbt, Erlang, Fortran, COBOL, Zig, PowerShell)
- 58–100× token reduction, published with reproducible methodology and raw data
- 50+ tools: blast radius, hotspots, coupling metrics, tectonic plates, signal chains, refactoring planner
- Signal chain discovery: traces gateway-to-leaf pathways with rich labels (POST /api/users, cli:seed-db)
- Tectonic map: 3-signal fusion (structural + behavioral + temporal) community detection
- Token-budgeted retrieval: get_ranked_context packs results into a token budget
- Doc section search via jDocMunch (.md, .rst, .ipynb, HTML)
- Cross-repo dependency tracing via get_cross_repo_map
- Claude Code hook integration (PreToolUse/PostToolUse auto-reindex)
- Lightweight: pure Python, SQLite-backed, no graph database dependency
The web dashboard is a real differentiator for developers exploring code visually — force-directed graphs, community hull overlays, and the Cypher console are features no other MCP tool in this space offers. The Protocol-aware dead code analysis is also more sophisticated than most competitors. If your team is Python/JS/TS-only and wants a visual exploration layer alongside MCP, Axon is worth trying.
jCodeMunch covers 70+ languages, provides 58–100× measured token reduction (published methodology), offers 50+ tools including signal chain discovery (our answer to execution flow tracing — with richer gateway labeling and tectonic plate integration), and requires no graph database runtime. For teams that need broad language support, benchmarked efficiency, and the deepest tooling surface, jCodeMunch is the stronger choice. For Python/JS/TS teams that want a visual graph dashboard, Axon is a credible alternative.
## SocratiCode — Docker-Based Vector Code Intelligence
SocratiCode indexes codebases into per-branch Qdrant vector collections with AST-aware chunking via ast-grep. It uses Ollama for local embeddings and combines dense vector + BM25 retrieval via Reciprocal Rank Fusion. Requires Docker (Qdrant + Ollama containers). 641 stars, AGPL-3.0 licensed, 18+ languages.
- Per-branch separate Qdrant vector collections — true branch isolation
- Hybrid search: dense vector (Ollama) + BM25 sparse, fused via RRF
- AST-aware chunking via ast-grep — splits at function/class boundaries
- Cross-project/repo search across multiple indexed codebases
- DB/API/infra knowledge discovery from config and schema files
- Mermaid dependency visualization diagrams
- Self-reported: 61% fewer tokens, 84% fewer calls, 37× faster than standard AI grep
- Heavy infra: Docker required (Qdrant + Ollama + app containers)
- AGPL-3.0 copyleft — commercial use requires separate license
- No token-budgeted retrieval, no dead code detection, no import graph tracing
- Benchmark baseline is grep, not raw file reads — not directly comparable to AST-based tools
How jCodeMunch compares:

- 70+ languages (incl. YAML, Razor, SQL/dbt, Erlang, Fortran, COBOL, Zig, PowerShell)
- 58–100× token reduction, published with reproducible methodology and raw data
- Branch-aware delta indexing: O(delta) storage per branch, not full collection duplication
- 50+ tools: blast radius, hotspots, coupling metrics, tectonic plates, signal chains, refactoring planner
- Token-budgeted retrieval: get_ranked_context packs results into a token budget
- Doc section search via jDocMunch (.md, .rst, .ipynb, HTML)
- Dead code detection with confidence scoring and cascading chain analysis
- Zero Docker, zero external databases — pure Python, SQLite-backed, pip install
- Claude Code hook integration (PreToolUse/PostToolUse auto-reindex)
- Cross-repo dependency tracing via get_cross_repo_map
The per-branch vector collection approach gives complete branch isolation with no index composition overhead at query time. The Qdrant backend is battle-tested for large-scale vector search, and Ollama integration means embeddings stay fully local. If your team already runs Docker and wants production-grade vector search with branch isolation, SocratiCode is a credible option.
The main caveat: SocratiCode's benchmark baseline is grep, not `cat` / Read (which is what agents actually do), so its numbers are not directly comparable to file-read baselines like jCodeMunch's 58–100×. The Docker requirement (3 containers) also adds significant operational complexity vs. pip-install tools.
However, jCodeMunch covers 4× more languages, achieves 58–100× measured token reduction (published methodology, benchmarked against actual file reads), requires zero Docker infrastructure, and provides a dramatically deeper tool surface: 50+ tools including dead code detection, import graph tracing, blast radius, signal chains, and token-budgeted retrieval — none of which SocratiCode offers. For teams that want the deepest analysis tools with zero-infra setup, jCodeMunch is the stronger choice. For teams already running Docker that prioritize vector search with branch isolation, SocratiCode is worth evaluating.
RTK — Rust Token Killer
RTK is a Rust-based CLI proxy that intercepts terminal command output — pytest, cargo test, git diff — and compresses it before it reaches the AI's context. It claims ~89% average noise removal across 30+ development commands.
- Installs a PreToolUse hook — works transparently with any agent
- Excellent for test runners: pytest output drops from 756 to 24 tokens
- Excellent for git output: git diff drops from ~21,500 to ~1,259 tokens
- Written in Rust — single binary, <10ms overhead, zero dependencies
- MIT-licensed, free for individuals; $15/dev/mo cloud analytics tier
- Does not help with code reading — only with command output
- Answers "where is authenticate()?" without reading a single source file
- Symbol index persists across sessions — no re-reading on restart
- Structured MCP tool responses — agent gets typed results, not filtered text
- Import graph, reference tracing, file outlines all in one index
- jDocMunch handles the documentation side (RTK has no equivalent)
- Does not compress terminal output — that is RTK's lane
RTK cuts the noise from commands the agent runs (git status, pytest, docker logs). jCodeMunch cuts the noise from code the agent reads (get_symbol vs. reading 50 files).
A developer using both would eliminate the two biggest sources of context bloat in a typical coding session.
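The interception pattern is simple enough to sketch. This is not RTK's Rust implementation, just a Python illustration of the idea: run the command, keep the failure and summary lines, drop the banner noise. The prefix and marker heuristics here are invented for the example; real tools ship per-command parsers.

```python
import subprocess

# Invented heuristics for illustration only.
NOISE_PREFIXES = ("platform ", "cachedir:", "rootdir:", "plugins:")
SIGNAL_MARKERS = ("FAILED", "ERROR", "passed", "failed", "error")

def run_compressed(cmd: list[str]) -> str:
    """Run a command; forward only the lines that carry signal."""
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    keep = [line for line in out.splitlines()
            if not line.startswith(NOISE_PREFIXES)
            and any(m in line for m in SIGNAL_MARKERS)]
    return "\n".join(keep) or "(clean run, no failures)"

# A multi-hundred-token pytest dump collapses to the lines that matter:
print(run_compressed(["pytest", "-q"]))
```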
lean-ctx
lean-ctx is a Rust binary that acts as a token-compression layer between your shell/editor and the LLM. It attacks the problem from two sides: a shell hook that intercepts CLI output (git, npm, cargo, docker, k8s, and 30+ more) before it reaches the model, and a 24-tool MCP server that serves files through seven compression modes — map, signatures, diff, aggressive (syntax-stripped), entropy-filtered, and range-limited (lines:N-M). A published real-world session shows 89,800 tokens compressed to ~10,620 — an 88% reduction. It also ships three AI protocols: CEP (adaptive communication), CCP (cross-session task/decision memory), and TDD (token-dense shorthand). One-command agent integration: lean-ctx init --agent claude-code.
- Shell hook: intercepts CLI output and strips noise before it enters context
- MCP file modes: signatures strips bodies, aggressive strips syntax, entropy drops low-information lines
- ctx_delta, ctx_dedup, ctx_fill — cache-aware dedup and delta delivery
- Cross-session memory via ctx_session + ctx_knowledge (CCP protocol)
- Single Rust binary, zero dependencies, <10ms overhead, MIT-licensed
- Does not build a symbol index — it compresses files but can't answer "where is this function referenced?"
- One-time AST index — never reads the same function body twice
- Answers "where is
authenticate()used?" in one MCP call, no file reads - Blast radius, dead code, import graphs — structural queries lean-ctx has no equivalent for
- 70+ languages vs. lean-ctx's 14 (tree-sitter); YAML, SQL/dbt, Razor, Erlang included
- jDocMunch covers doc section retrieval; lean-ctx has no doc equivalent
- Does not compress terminal output — that is lean-ctx's lane (and RTK's)
lean-ctx compresses what flows into the context window on every tool call — file bytes, shell output, git diffs. jCodeMunch eliminates the need to make most of those file reads in the first place — index once, retrieve by symbol forever. Together they attack context bloat from both ends: lean-ctx cuts the fat from reads you do have to make; jCodeMunch eliminates the reads you don't.
Some of lean-ctx's MCP tools (ctx_read, ctx_smart_read, ctx_search) overlap superficially with jCodeMunch, but the underlying approach is completely different: lean-ctx compresses file reads; jCodeMunch replaces them with indexed lookups.
lean-ctx wins on terminal output compression and file-read token density — it does things jCodeMunch doesn't try to do.
jCodeMunch wins on semantic code navigation — symbol search, reference tracing, blast radius, dead code — none of which lean-ctx provides.
A developer using both would eliminate context bloat at every layer.
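What a signatures mode does is easy to picture. lean-ctx is Rust and language-generic; the sketch below is a Python-only illustration using the stdlib ast module, and it keeps just the first header line of each definition.

```python
import ast

def signatures_only(source: str) -> str:
    """Emit def/class header lines with every body stripped."""
    lines = source.splitlines()
    out = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # The header carries the API surface; the body is the token bulk.
            out.append(lines[node.lineno - 1].rstrip() + " ...")
    return "\n".join(out)

src = (
    "def authenticate(user, password):\n"
    "    record = lookup(user)\n"
    "    return record.check(password)\n"
)
print(signatures_only(src))  # def authenticate(user, password): ...
```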
Context Mode
Context Mode (github.com/mksglu/context-mode) is not a GitHub product — it's a third-party MCP server by Mert Köseoğlu. Its tagline: "MCP is the protocol for tool access. We're the virtualization layer for context." It tackles a real problem: every tool call in a long agent session dumps raw output — bash commands, log files, web fetches, GitHub API responses — directly into the context window. After 30 minutes of work, 40%+ of your 200K token budget is consumed by noise. Context Mode installs PreToolUse/PostToolUse hooks that intercept this output before it enters context, route anything over ~5 KB into a local SQLite FTS5 index, and expose a ctx_search tool so the model queries structured results instead of receiving raw blobs. Sessions that previously hit limits in 30 minutes can run for ~3 hours on the same budget.
- Intercepts bash, Read, WebFetch, Grep, Task calls via PreToolUse/PostToolUse hooks — output never enters context raw
- SQLite FTS5 index with BM25 ranking, Porter stemming, trigram fallback, and Levenshtein fuzzy correction
- PreCompact hook captures session state into a priority-tiered XML snapshot (≤2 KB) before auto-compaction fires
- SessionStart hook restores the snapshot — session continuity across context resets
- Hook-enforced: the agent cannot drift back to raw tool output even without explicit instructions
- Language-agnostic — works equally well on logs, web pages, git output, and source files
- Structured symbol extraction: the agent calls search_symbols + get_symbol — raw file content never enters context
- Published, reproducible benchmarks: 58–100× token efficiency on Express, FastAPI, and Gin production repos
- 70+ languages with AST-level understanding — not text search over raw bytes
- search_symbols(fuzzy=true) — trigram Jaccard + Levenshtein fallback with match_type, fuzzy_similarity, and edit_distance fields; no FTS5 required
- find_importers, find_references — structural code navigation, not BM25 approximation
- jDocMunch for documentation — the same philosophy applied to .md/.rst/.ipynb/OpenAPI files
- PyPI package, Python ≥3.10, zero external binaries
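The routing mechanic in miniature (this is not Context Mode's actual schema, just the core idea in stdlib Python): over-threshold output goes into an FTS5 table instead of the context window, and a search tool queries it on demand.

```python
import sqlite3

THRESHOLD = 5 * 1024  # ~5 KB, per Context Mode's description

db = sqlite3.connect("ctx.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS blobs USING fts5(tool, content)")

def route(tool: str, output: str) -> str:
    """Small output passes through; large output is indexed, a stub returned."""
    if len(output) < THRESHOLD:
        return output
    db.execute("INSERT INTO blobs VALUES (?, ?)", (tool, output))
    db.commit()
    return f"[{len(output)} bytes indexed; query with ctx_search]"

def ctx_search(query: str, limit: int = 3) -> list[str]:
    """Return BM25-ranked snippets instead of the raw blob."""
    rows = db.execute(
        "SELECT snippet(blobs, 1, '[', ']', '…', 12) FROM blobs "
        "WHERE blobs MATCH ? ORDER BY bm25(blobs) LIMIT ?",
        (query, limit),
    )
    return [r[0] for r in rows]
```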
OpenViking — by Volcengine (ByteDance)
OpenViking (github.com/volcengine/OpenViking) is an open-source context database for AI agents, built by ByteDance's Volcengine team. Its core idea: instead of dumping all agent memory into a flat vector database, organise it with a filesystem metaphor — hierarchical directories of memories, resources, and skills — with a three-tier loading model. L0 delivers one-sentence summaries (~100 tokens) so the agent decides whether to go deeper; L1 provides planning-level detail (~2 K tokens); L2 loads the full content on demand. The result is an agent that remembers across sessions, learns from past interactions, and avoids context explosion on long tasks.
- L0/L1/L2 tiered loading keeps long-running sessions from exhausting context on memory recall
- Filesystem directory metaphor organises memories, resources, and skills into navigable hierarchy
- Auto session management: compresses conversations and extracts durable long-term memories
- Multi-provider LLM support (Volcengine/Doubao, OpenAI, LiteLLM for Claude/Gemini/DeepSeek/Ollama)
- Embedding search via Volcengine, OpenAI, or Jina — semantic retrieval over stored context
- Retrieval trajectory visualization for debugging and optimisation
- Requires Python 3.10+, Go 1.22+, and a C++ compiler — non-trivial setup
- Depends on an external LLM provider; not offline-capable
- Structured symbol extraction: the agent queries search_symbols + get_symbol rather than reading files
- 70+ languages via tree-sitter AST — not text search, not LLM-driven; deterministic and reproducible
- No external LLM required; AI summaries are optional — core indexing and retrieval is pure local computation
- Zero runtime dependencies beyond Python 3.10+ and bundled tree-sitter grammars
- jDocMunch: section-level retrieval across .md, .rst, .adoc, .ipynb, HTML, OpenAPI, XML
- Published benchmarks: 58–100× token efficiency on real production repos (Express, FastAPI, Gin)
- Does not manage agent memory, learned facts, or cross-session agent state — that is OpenViking's lane
In multi-agent systems, OpenViking provides the persistent memory and skill library while jCodeMunch + jDocMunch provide token-efficient access to the live code and documentation. They are complementary infrastructure at different layers — not alternatives to each other.
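The tiered model is easy to sketch: the agent pays ~100 tokens to decide whether an item is worth ~2K, and only then pays for the full content. Illustrative Python, not OpenViking's API; all names and contents below are invented.

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    summary: str  # L0: one-sentence gist (~100 tokens)
    plan: str     # L1: planning-level detail (~2K tokens)
    full: str     # L2: complete content, loaded only on demand

def load(item: ContextItem, tier: int) -> str:
    """Return the representation for the requested tier (0, 1, or 2)."""
    return (item.summary, item.plan, item.full)[tier]

item = ContextItem(
    summary="Auth module: JWT issuing and validation.",
    plan="Functions: issue_token, verify_token; depends on crypto utils ...",
    full="<entire module source or memory record>",
)
# The agent escalates only when the cheaper tier says it is worth it:
if "JWT" in load(item, 0):
    detail = load(item, 1)
```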
ClawMem — by yoloshii
ClawMem (github.com/yoloshii/ClawMem) is a local, on-device memory system and context engine for AI agents. It targets the same "agent amnesia" problem as OpenViking but takes a different approach: hybrid BM25 + vector search + cross-encoder reranking over a SQLite vault, all running on local GGUF models with no cloud dependency. It ships 28 MCP tools, Claude Code hooks (SessionStart, UserPromptSubmit, Stop, PreCompact), and — notably — a native OpenClaw ContextEngine plugin. Memories have typed lifecycles: decisions and knowledge hubs persist forever; progress notes decay after 45 days; handoffs after 30. Causal links between decisions are discovered automatically.
- Hybrid search: BM25 keyword + vector semantic matching + reciprocal rank fusion + cross-encoder reranking
- Self-evolving memory (A-MEM): automatic keyword extraction, tagging, and causal link discovery
- Typed content lifecycle: decisions/hubs = ∞, handoffs = 30 days, progress notes = 45 days
- Cross-session continuity via automatic handoff generation at session end
- PreCompact hook captures session state into a priority-tiered XML snapshot (≤2 KB) before context resets
- Native OpenClaw ContextEngine plugin — first-class integration, not a workaround
- Requires Bun v1.0+, 3 local GGUF models, 4–11 GB VRAM; WSL2 required on Windows
- Early-stage project (14 stars); API surface may evolve rapidly
- Answers structural questions: "Where is this function?" "What imports this module?" "What symbols changed?"
- Tree-sitter AST extraction across 70+ languages — deterministic, reproducible, no inference required
- No VRAM, no local model downloads, no Bun runtime — pip install and go
- Works on Windows natively (no WSL2 requirement)
- Published benchmarks: 58–100× token reduction on real production repos
- jDocMunch: section-level retrieval across .md, .rst, .adoc, .ipynb, HTML, OpenAPI
- Does not store agent decisions, session history, or cross-session memory — that is ClawMem's domain
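The typed lifecycle reduces to a TTL table. A minimal sketch of the decay rule (not ClawMem's schema; the record shape is invented for the example):

```python
from datetime import datetime, timezone, timedelta

# Lifecycle per ClawMem's description; None means "persists forever".
TTL = {
    "decision": None,
    "hub": None,
    "handoff": timedelta(days=30),
    "progress": timedelta(days=45),
}

def prune(memories: list[dict]) -> list[dict]:
    """Drop memories whose type-specific lifetime has elapsed."""
    now = datetime.now(timezone.utc)
    return [m for m in memories
            if TTL[m["kind"]] is None or now - m["created"] < TTL[m["kind"]]]

vault = [
    {"kind": "decision", "created": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"kind": "progress", "created": datetime(2024, 1, 1, tzinfo=timezone.utc)},
]
print([m["kind"] for m in prune(vault)])  # ['decision'] — the note decayed
```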
mem0 — by mem0ai (YC S24)
mem0 (github.com/mem0ai/mem0) is the most widely adopted AI agent memory layer on GitHub, with 50K+ stars and Y Combinator S24 backing. It maintains multi-level memory — user preferences, session state, and agent-specific knowledge — that persists across interactions and adapts over time. Integrations exist for LangGraph, CrewAI, and other major agent frameworks. It ships as a self-hostable Python/TypeScript library and as a managed hosted platform. The library is open source under Apache 2.0; the hosted platform is a paid commercial product with undisclosed pricing.
- Multi-level memory: user-scoped preferences, session state, and agent-specific knowledge
- Adaptive personalization — memory evolves as the agent interacts, not just static storage
- Claims +26% accuracy, 91% faster responses, 90% fewer tokens vs. naive full-context approaches
- Python + TypeScript SDKs; integrates with LangGraph, CrewAI, and most major agent frameworks
- Self-hostable (Apache 2.0 library) or managed platform for production workloads
- Mandatory external LLM provider (defaults to OpenAI gpt-4.1-nano)
- Self-hosted production setup requires vector DB (Qdrant/Pinecone/Milvus), PostgreSQL, and LLM API keys
- Hosted platform pricing not publicly listed; requires signup or sales contact
- No external LLM required — tree-sitter AST parsing is pure local computation
- No vector database, no PostgreSQL, no infrastructure to manage beyond a pip install
- Published, reproducible benchmarks: 58–100× token efficiency on real production repos
- Works on Windows natively (no WSL2, no Docker, no managed service)
- 70+ programming languages via deterministic AST parsing, not probabilistic LLM memory extraction
- jDocMunch: section-level retrieval across .md, .rst, .adoc, .ipynb, HTML, OpenAPI
- Does not store user preferences, personalization data, or cross-session interaction history — that is mem0's domain
The open-source library (pip install mem0ai) is free under Apache 2.0. What costs money is the managed hosted platform — automatic updates, analytics dashboards, enterprise security, and operational overhead handed off to mem0ai's team. For developers comfortable running their own infrastructure, self-hosted mem0 is free. The real cost is the LLM API calls required for memory extraction and retrieval, and the infrastructure burden of provisioning a vector store and database for production use.
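A hedged usage sketch, based on the Memory.add / Memory.search interface shown in mem0's README; verify against the current docs, since the API has evolved across releases, and note that the library calls an external LLM (OpenAI by default) under the hood.

```python
from mem0 import Memory  # pip install mem0ai; an LLM API key must be configured

m = Memory()
# The library runs an LLM extraction pass over this text and stores the
# durable facts it finds, scoped to the user:
m.add("I prefer concise answers and work mostly in TypeScript.", user_id="alice")

# A later session retrieves by semantic relevance:
hits = m.search("what stack does this user work in?", user_id="alice")
print(hits)  # result shape varies by version; each hit carries a stored memory
```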
LanceDB
LanceDB (github.com/lancedb/lancedb) is an open-source embedded vector database built on the Lance columnar format (Rust core). It handles multimodal data — text, images, video, point clouds, structured metadata — and delivers vector similarity search, full-text search, and SQL queries on the same table. It runs embedded (no server process) or as a managed cloud service. It is infrastructure: a high-performance storage and retrieval layer that other tools — mem0, OpenViking, RAG pipelines — might use as their backend.
- Embedded library — runs in-process, no server to manage; zero-copy architecture
- Vector similarity search + full-text search + SQL on the same table
- Multimodal: text, images, video, point clouds, structured metadata
- Automatic data versioning and schema evolution built in
- GPU-accelerated indexing; handles billions of vectors at petabyte scale
- Python, TypeScript, Rust SDKs; LangChain and LlamaIndex integrations
- Requires external embeddings — LanceDB stores and searches vectors but does not generate them
- No code understanding, no AST parsing, no symbol extraction — code is raw text
- Tree-sitter AST extraction — understands code structure, not just text similarity
- Zero mandatory embedding infrastructure — works out of the box with no vector DB, no cloud account, no embedding budget
- Optional hybrid semantic search via search_symbols(semantic=true) — embeddings stored directly in the existing SQLite index; no separate vector store required
- Symbol lookup is O(1) by name — deterministic exact retrieval, with optional semantic reranking when needed
- Structured results: function signatures, qualified names, parent/child hierarchy, import graphs
- jDocMunch preserves document heading hierarchy — sections are navigated structurally, not just by cosine distance
- One pip install; add the [semantic] extra only if you want embedding search — no Rust toolchain, no external DB
- Not a general-purpose data store — purpose-built for code and documentation, nothing else
jCodeMunch's answer is optional semantic search as a one-line extra (pip install jcodemunch-mcp[semantic]), embeddings stored directly in SQLite alongside the existing index, and exact structural retrieval as the default with no approximate-search false positives. The tools that use LanceDB as a backend (mem0, custom RAG pipelines) sit at a higher layer than LanceDB itself and are closer comparisons to jCodeMunch.
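For orientation, the embedded workflow as LanceDB's README presents it (hedged; check the current docs before relying on exact signatures). The two-dimensional vectors are toy stand-ins for real embeddings.

```python
import lancedb  # pip install lancedb

db = lancedb.connect("./lancedb-data")  # just a directory; no server process
table = db.create_table("chunks", data=[
    {"vector": [0.1, 0.2], "text": "def authenticate(user): ..."},
    {"vector": [0.9, 0.8], "text": "# Deployment guide"},
])
# Vector similarity search over the same table that stores the raw text.
# Generating the query vector is your job: LanceDB only stores and searches.
hits = table.search([0.15, 0.25]).limit(1).to_list()
print(hits[0]["text"])  # 'def authenticate(user): ...'
```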
QMD
QMD (github.com/tobi/qmd) is an on-device CLI search engine for markdown notes, meeting transcripts, documentation, and knowledge bases. It combines BM25 full-text search, vector semantic search, and LLM re-ranking — all running locally via node-llama-cpp and GGUF models. Collections are indexed once; search runs with qmd search (fast BM25), qmd vsearch (semantic), or qmd query (hybrid + reranking, best quality). It also exposes a native MCP server with four tools — query, get, multi_get, and status — making it suitable for agentic workflows. A key feature is the context tree: hierarchical metadata attached to collections that gives LLMs richer signals when selecting which documents to retrieve.
- Collections-based: index any folder of markdown files, meeting notes, or docs
- Three search modes: BM25 keyword (fast), vector semantic, hybrid + LLM reranking (best)
- Context tree: attach hierarchical metadata to collections for richer agent document selection
- Native MCP server: query, get, multi_get, status — designed for agentic flows
- All local: node-llama-cpp with GGUF models; no cloud calls; VRAM required for semantic modes
- CLI-first: qmd search, qmd vsearch, qmd query, qmd get
- Requires a one-time embed step; re-run after adding new documents
- Tree-sitter AST parsing — understands code structure, not just text similarity
- Symbol lookup is deterministic and O(1) by name — no approximate nearest-neighbor
- jDocMunch preserves document heading hierarchy — sections are navigated structurally, not by cosine distance
- No GGUF model, no VRAM required — works on any hardware; optional semantic search uses lightweight sentence-transformers or a cloud API key, not a local inference server
- Structured results: function signatures, qualified names, parent/child hierarchy, import graphs
- One pip install; no Node.js toolchain, no model download
- Not a general knowledge base tool — purpose-built for code repos and technical documentation
jCodeMunch's optional semantic mode (search_symbols(semantic=true)) uses lightweight sentence-transformers or a cloud API key — no local inference server, no VRAM.
Obsidian
Obsidian is a personal knowledge management (PKM) application built on local plain-text markdown vaults. Notes link to each other via [[wikilinks]], forming a navigable graph of ideas. It runs entirely on your device, supports thousands of community plugins, and optionally syncs across devices via Obsidian Sync. It is a human-facing writing and thinking tool — not an indexing library or an MCP server. There is no official MCP integration; community plugins can bridge the gap, but agent access to vault content is not a first-class feature of Obsidian itself. Where jDocMunch fits is here: Obsidian vaults are ordinary folders of .md files, and jDocMunch can index them directly — making the vault's content searchable to AI agents at section granularity without any Obsidian-specific tooling.
- Local markdown vault: plain .md files, no proprietary format lock-in
- Bidirectional [[wikilinks]] and graph view — navigate your knowledge visually
- Canvas for infinite freeform brainstorming boards
- 1,000+ community plugins for tasks, spaced repetition, Dataview queries, diagrams, and more
- Obsidian Sync: E2E encrypted cross-device sync ($4/mo); Publish: instant web publishing ($8/mo)
- No native MCP server; community plugins provide partial agent access
- No indexing API for agents — content is authored via the GUI or filesystem writes
- Not a retrieval library; search is built for humans using the app, not for programmatic agent calls
- Points directly at an Obsidian vault folder — no format conversion, no plugin needed
- Section-level retrieval: returns the specific heading and its content, not the whole file
- Preserves document heading hierarchy — structural navigation, not approximate keyword match
- Native MCP server: agents call search_sections, get_section, get_toc
- No GUI, no sync, no visual graph — purely a retrieval layer for AI agents
- Incremental re-index: run again when vault files change; no continuous background process
- jCodeMunch indexes code repos in the same agent session — one MCP config covers both knowledge and code
Because an Obsidian vault is an ordinary folder of .md files, jDocMunch requires no Obsidian-specific knowledge — the vault is just a folder of markdown, and the .md files in it stay plain text and fully portable.
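The no-conversion claim follows from the format: section-level retrieval over a vault reduces to splitting each note at its headings. A minimal illustration of the idea (not jDocMunch's implementation; the vault path is hypothetical):

```python
import pathlib
import re

def sections(note: pathlib.Path):
    """Yield (heading, body) pairs for one markdown note."""
    text = note.read_text(encoding="utf-8")
    # Split on ATX headings; the capturing group keeps each heading line.
    parts = re.split(r"^(#{1,6}\s.+)$", text, flags=re.MULTILINE)
    for heading, body in zip(parts[1::2], parts[2::2]):
        yield heading.lstrip("#").strip(), body.strip()

vault = pathlib.Path("~/ObsidianVault").expanduser()  # hypothetical vault path
for note in vault.rglob("*.md"):
    for heading, body in sections(note):
        print(f"{note.name} » {heading}: {len(body)} chars")  # index these pairs
```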
chonkify
chonkify is an extractive document compression library aimed at fitting maximum signal into a token budget. Where jDocMunch indexes structured docs for on-demand section retrieval, chonkify compresses entire documents — particularly PDFs, which jDocMunch doesn't handle — before they reach an LLM. The two tools operate at different layers: chonkify is a preprocessing step; jDocMunch is a live retrieval layer.
- → Extractive compression — shrinks documents to fit a token budget
- → Supports .txt, .md, and .pdf (PDF is a genuine differentiator)
- → +59–84% better information recovery than LLMLingua in benchmarks
- ✗ Lossy — some content is discarded in the compression pass
- ✗ No MCP server — standalone library and CLI only
- ✗ Requires embedding model (~419 MB local or cloud API)
- ✗ Python 3.11 only — not available on 3.10 or 3.12+
- ✗ Proprietary license — evaluation-only; commercial use requires paid license
- ✗ Not on PyPI — wheel files only, distributed via GitHub
- ✓ Section-level indexing — AI retrieves only the relevant sections
- ✓ Lossless — returns exact source text, nothing discarded
- ✓ Native MCP server — works in Claude Code, Cursor, OpenCode, and any MCP client
- ✓ No embedding model needed — zero ML dependencies
- ✓ Python 3.10+ — broad compatibility
- ✓ .md, .rst, .adoc, .ipynb, .html, .txt, .yaml/.json (OpenAPI)
- ✓ Open source — pip install jdocmunch-mcp
chonkify and jDocMunch are genuinely complementary. jDocMunch handles your structured documentation corpus (Markdown, RST, OpenAPI specs, notebooks) with zero token waste via live MCP retrieval. chonkify handles PDFs and long unstructured documents before they enter the context window. Together they cover the full document landscape — and chonkify's compressed output can itself be indexed by jDocMunch if you save it as Markdown.
chonkify launched this week. The proprietary license, the Python 3.11-only constraint, and the not-on-PyPI distribution model all add friction. The benchmark numbers are compelling but the test suite is small (5 documents, 2 token budgets). Worth watching — not yet worth building a production pipeline around.
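Extractive compression is conceptually simple: score each unit of text, keep the highest-scoring units until the token budget is spent, then re-emit them in document order. chonkify scores with embeddings; the sketch below substitutes word overlap with a query so it runs with no model, purely to show the shape of the algorithm.

```python
def compress(sentences: list[str], query: str, budget: int) -> str:
    """Keep the most query-relevant sentences that fit the token budget."""
    q = set(query.lower().split())
    ranked = sorted(sentences,
                    key=lambda s: len(q & set(s.lower().split())),
                    reverse=True)
    chosen, used = set(), 0
    for s in ranked:
        cost = len(s.split())  # crude token proxy for the sketch
        if used + cost <= budget:
            chosen.add(s)
            used += cost
    # Re-emit in original order so the extract still reads as a document.
    return " ".join(s for s in sentences if s in chosen)

doc = ["Install with pip.", "The weather was nice.", "Configure the API key."]
print(compress(doc, "install and configure the tool", budget=8))
# 'Install with pip. Configure the API key.'
```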
Aegis
Aegis is a DAG-based Deterministic Context Compiler for AI coding agents. It stores your architecture documents in a SQLite knowledge base, maps them to file paths via dependency edges, and when an agent is about to edit code it returns exactly which guidelines apply — deterministically, with no search or RAG ranking. jCodeMunch answers “what does the code do”; Aegis answers “what rules must the code follow.” The two tools operate at different layers and pair naturally.
- → DAG of dependency edges maps architecture docs to file paths
- → aegis_compile_context returns relevant guidelines before an edit
- → Observation layer learns from agent mistakes and PR merges over time
- → Optional SLM (llama.cpp) for intent tagging — off by default
- ✗ Knows nothing about live code structure — only the docs you feed it
- ✗ Requires manual population of the knowledge base to be useful
- ✗ TypeScript / npm only — no Python client
- ✓ Tree-sitter AST — live symbol extraction across 70+ languages
- ✓ Blast radius, dependency graph, class hierarchy, import tracing
- ✓ Zero setup — index_folder once, query immediately
- ✓ No knowledge base to maintain — always reflects current code
- ✓ pip install jcodemunch-mcp — works in any MCP client
- ✗ No architecture governance — Aegis fills this gap
Run both. Before an edit, call aegis_compile_context to get the architectural constraints, then get_blast_radius or get_context_bundle to understand the live code impact. Aegis governs intent; jCodeMunch maps reality. Neither tool overlaps the other — together they give the agent the full picture before a single line is written.
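The deterministic-compilation idea in miniature: guidelines map to path patterns via explicit edges, and a lookup walks those edges with no ranking and no RAG. An illustrative sketch, not Aegis's schema; the docs and patterns are hypothetical examples.

```python
from fnmatch import fnmatch

# Guideline doc -> path patterns it governs (the graph's edges).
EDGES = {
    "docs/api-guidelines.md": ["src/api/*", "src/routes/*"],
    "docs/db-policy.md": ["src/models/*", "migrations/*"],
}

def compile_context(file_path: str) -> list[str]:
    """Return every guideline whose edge matches the file about to be edited."""
    return [doc for doc, patterns in EDGES.items()
            if any(fnmatch(file_path, pat) for pat in patterns)]

print(compile_context("src/api/users.py"))  # ['docs/api-guidelines.md']
```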
Caliber
Caliber is an AI tooling config manager. It scans your codebase, scores your existing AI setup (deterministically, no LLM), and generates tailored CLAUDE.md, Cursor rules, AGENTS.md, MCP server configs, and agent skills. It also detects config drift as your code evolves and updates everything to match. jCodeMunch is one of the MCP servers Caliber discovers and configures — the two tools operate at completely different layers.
- → Scans repo fingerprint (languages, frameworks, deps) and generates tailored configs
- → Deterministic config scoring — no LLM, no API key needed for caliber score
- → Auto-discovers and configures MCP servers (including jCodeMunch)
- → Session learning hooks capture agent corrections into CALIBER_LEARNINGS.md
- → Auto-refresh on git commit or session end keeps configs current
- → Supports Claude Code, Cursor, and Codex simultaneously
- ✗ Not a code exploration tool — no symbol extraction or AST parsing
- ✗ Generation requires an LLM (your existing seat or API key)
- ✓ Tree-sitter AST — live symbol extraction across 70+ languages
- ✓ 58–100× token reduction vs. raw file reads (real production benchmarks)
- ✓ Blast radius, dependency graph, class hierarchy, import tracing
- ✓ Zero LLM needed — pure deterministic AST parsing
- ✓ Native MCP server — plug into any MCP-compatible client
- ✓ pip install jcodemunch-mcp — no config scaffolding required
- ✗ No config generation or setup management — Caliber fills this gap
Run caliber init once to get a high-quality CLAUDE.md, MCP config, and skills scaffolded for your project — including jCodeMunch auto-configured as your code exploration server. Then let jCodeMunch handle every code query at runtime. Caliber sets the table; jCodeMunch does the work.
One tip: if you use Caliber's CLAUDE.md regeneration, pin the jCodeMunch code exploration policy block in CALIBER_LEARNINGS.md so it survives refreshes.
Citadel
Citadel is an agent orchestration harness for Claude Code. Its /do router classifies your intent and dispatches it to the cheapest capable path — from a one-line fix to a multi-session parallel campaign with persistence, quality gates, and a circuit breaker. jCodeMunch is not an orchestration tool; it is the retrieval layer those agents read through. The framing is simple: Citadel tells Claude how to work; jCodeMunch tells Claude what the code is.
- → /do routes any task to the right tier automatically
- → Campaign persistence — work survives session endings and restarts
- → Parallel agents in isolated git worktrees with discovery relay between waves
- → Circuit breaker: 3 failures → forced strategy change
- → 25 skills: review, test-gen, refactor, debug, research, QA, postmortem
- → 10 hooks: per-file typecheck, quality gate, pre-compaction save, external action gate
- ✗ No code retrieval — agents still read files via Read/Grep/Glob by default
- ✗ Claude Code only — not portable to Cursor or Codex
- ✓ Tree-sitter AST — exact symbols, not whole files
- ✓ 58–100× token reduction on code reads (real production benchmarks)
- ✓ Blast radius, dependency graph, class hierarchy — in one call
- ✓ Works in any MCP client — Claude Code, Cursor, Codex, Windsurf
- ✓ Zero workflow opinions — pure retrieval primitive
- ✓ pip install jcodemunch-mcp
- ✗ No orchestration, routing, or campaign management — Citadel fills this gap
Citadel's most expensive skills — /review, /refactor, /systematic-debugging — involve reading large amounts of code. By default those reads go through raw Read / Grep / Glob calls. Drop jCodeMunch into your MCP config and those same skills consume a fraction of the tokens. Citadel handles the campaign; jCodeMunch handles the reads. The combination stretches your Claude session limit further than either tool can alone — especially relevant after Anthropic's March 2026 peak-hour throttle.
codesight — by Houseofmvps
codesight is a TypeScript MCP server that scans your project once per session and compiles a high-level architectural map: routes, schemas, middleware chains, component relationships, and import graphs. Its 8 tools answer questions like “what does this service do?” and “where does this route flow?” — not “show me the implementation of authenticate().” There is no persistent index; each session starts from a fresh zero-dependency npx codesight scan. A Reddit user summed up the distinction well: “codesight for orientation and architecture, jCodeMunch for precise symbol retrieval.”
- → One-shot scan compiles routes, schemas, middleware chains, and import graphs per session
- → codesight_get_routes, codesight_get_schema, codesight_get_wiki_article answer high-level structural questions fast
- → Zero-install TypeScript CLI — npx codesight, no setup
- → codesight_get_blast_radius traces architectural-level dependency paths
- ✗ No named symbol extraction or on-demand implementation retrieval
- ✗ No import-level call graph tracing or reference search
- ✗ No doc section search
- ✓ AST-extracted symbols — search_symbols + get_symbol_source return exact implementations
- ✓ Persistent SQLite index with SHA-256 freshness — zero re-scan cost per session
- ✓ find_importers, find_references, get_blast_radius — precision import-graph tracing
- ✓ 70+ languages including YAML/Ansible, Razor/Blazor, SQL/dbt, Erlang, Fortran
- ✓ 58–100× token reduction vs. raw file reads (real production benchmarks)
- ✓ jDocMunch for section-level doc retrieval alongside code
- ✗ No architectural overview or wiki article generation — codesight fills this gap
Start with codesight_get_overview to build the mental map, then search_symbols + get_symbol_source to retrieve the specific implementation you want to read or change. Neither tool overlaps the other — they address different questions in the same workflow.
repowise
repowise is a Python MCP server that uses an LLM to generate and maintain a structured wiki from your codebase — domain articles, architecture summaries, risk assessments, and dependency paths. Its 8 tools answer natural-language questions about what the codebase does at a conceptual level. The wiki is built once, stored in SQLite + LanceDB, and can be refreshed incrementally. jCodeMunch answers “give me the implementation of AuthMiddleware.handle”; repowise answers “explain what the authentication flow does and why it was built this way.”
- → get_overview, get_context, get_why answer conceptual questions via pre-generated wiki articles
- → get_risk surfaces architectural risk areas; get_architecture_diagram generates visual maps
- → search_codebase runs semantic search over the generated wiki corpus
- → SQLite + LanceDB persistent storage; web dashboard for browsing
- → get_dependency_path traces high-level module relationships
- ✗ No on-demand symbol extraction; no call-graph tracing at the AST level
- ✗ AGPL-3.0 — hosted derivatives must be open-sourced
- ✓ AST-extracted, byte-offset–indexed symbols — always reflects current code, no LLM in the retrieval path
- ✓ get_symbol_source returns the exact implementation, not a wiki approximation
- ✓ SHA-256 incremental indexing — never stale; one-command re-index on change
- ✓ find_importers, find_references, get_blast_radius — AST-level import graph
- ✓ 70+ languages; no LLM API key required for indexing or retrieval
- ✓ 58–100× token reduction vs. raw file reads (real production benchmarks)
- ✗ No natural-language “why was this built this way” answers — repowise fills this gap
Start a session with repowise's get_overview for context, then let jCodeMunch handle all the precise symbol lookups from there.
LangChain RAG
LangChain is an open-source Python/TypeScript framework for building LLM-powered applications. Its Retrieval-Augmented Generation (RAG) pattern is the most common approach developers reach for when they want an LLM to answer questions about a codebase: chunk the files, embed the chunks, store vectors in a database, then retrieve the closest chunks at query time. LangChain provides the glue — loaders, splitters, embedding wrappers, vector store integrations, and retrieval chains — that wires all of this together.
- Embeds raw file text into vectors — code is treated as prose
- Chunk boundaries are heuristic (character count, line count) — they frequently split functions mid-body
- Retrieves the n nearest chunks by cosine similarity — approximate, not exact
- Requires an embedding model, a vector database, a chunking strategy, and a retrieval chain — real infrastructure overhead
- Index goes stale the moment a file changes; re-embedding is non-trivial at scale
- No understanding of code structure — a function and its docstring may land in separate chunks
- Rich ecosystem: 300+ integrations, chains, agents, and evaluation tools
- Great for semantic search over prose docs; less well-suited to precise code navigation
- Tree-sitter parses source files into an AST — functions, classes, and imports are atomic units, never split mid-body
search_symbols("authenticate")returns the exact implementation body, not the nearest chunk- Opt-in hybrid BM25 + vector search —
search_symbols(semantic=true)combines structural BM25 with cosine similarity;semantic_weightcontrols the blend; zero overhead when disabled (default) - Semantic embeddings are stored in the existing SQLite index — no separate vector DB, no pipeline to wire up
- Three embedding providers: local
sentence-transformers, Gemini, or OpenAI; pure-Python cosine similarity, no numpy required find_references/find_importerstrace call graphs precisely — RAG cannot do this at all- Token usage is deterministic and minimal — you get exactly the symbol you asked for, not the n nearest chunks
- jDocMunch handles documentation (section-level search across .md, .rst, .ipynb, HTML) with the same zero-infra model
- Works natively as an MCP server — Claude, Cursor, Windsurf, Codex call it directly; no chain wiring required
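The mid-body-split failure mode is easy to demonstrate. A stdlib sketch (neither LangChain's splitter nor jCodeMunch's extractor, just the two boundary strategies side by side):

```python
import ast

SRC = '''def authenticate(user, password):
    """Check credentials against the user store."""
    record = lookup(user)
    return record is not None and record.check(password)
'''

# Fixed-size chunking, the default RAG move: the first chunk ends
# mid-docstring, so the embedding sees a truncated fragment.
chunks = [SRC[i:i + 80] for i in range(0, len(SRC), 80)]
print(repr(chunks[0]))

# AST extraction: the function is an atomic unit and survives intact.
fn = ast.parse(SRC).body[0]
print(ast.get_source_segment(SRC, fn))  # the complete, valid function
```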
Where semantic ranking is wanted, search_symbols(semantic=true) delivers hybrid BM25 + vector search over AST-extracted symbols with no separate vector DB required (pip install jcodemunch-mcp[semantic]). The structural advantage remains decisive: jCodeMunch's embeddings are computed over complete, syntactically valid symbols — never arbitrary text chunks — so semantic similarity operates on meaningful code units rather than truncated fragments. Add precise call-graph tracing via find_references and find_importers, and deterministic token cost with no re-embedding pipeline to maintain. A LangChain RAG pipeline that chunks your repo will cost more to set up, more to maintain, more tokens per query, and still return semantically approximate results over partial code fragments. jCodeMunch returns exact symbols — and now also ranks them by semantic similarity when you want it. For teams already running a LangChain stack, jCodeMunch MCP drops in alongside it: use RAG for unstructured non-code corpora, jCodeMunch for all code and documentation lookups.
Ready to cut your token bill?
Free for non-commercial use. Paid licenses for commercial teams.