Every new release — Claude Opus 4.7, GPT, Gemini — makes agentic coding more expensive. jCodeMunch is the model-agnostic MCP server that indexes your codebase once and feeds any AI agent exact functions, classes, and constants instead of whole files.
Stop wasting tokens. Start munching code.
jCodeMunch serves anyone whose AI agents read code — from solo developers to platform teams managing token budgets across entire organizations.
A production Python web framework with 80k+ GitHub stars — routing, dependency injection, automatic OpenAPI generation, and security middleware — indexed directly from GitHub and queried verbatim for this benchmark.
Compare a standard agent's file-based exploration vs. jCodeMunch symbol retrieval for the query: "How does dependency injection work?"
These are verbatim results from the jCodeMunch MCP server querying the indexed fastapi/fastapi codebase.
Each file read floods the context window. jCodeMunch retrieves only the symbol requested.
| File | Lines | Tokens (Traditional) | Tokens (jCodeMunch) | Savings |
|---|---|---|---|---|
| fastapi/routing.py | 883 | 8,836 | ~0 (not needed) | 100% |
| fastapi/dependencies/utils.py | 580 | 5,218 | ~310 (one function) | 94.1% |
| fastapi/security/oauth2.py | 290 | 2,640 | ~90 (one helper) | 96.6% |
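The Savings column follows directly from the two token counts; here is a quick sketch that reproduces it, assuming the formula is simply 1 minus the jCodeMunch tokens divided by the traditional tokens (token figures taken from the table above):

```python
# Reproduce the Savings column from the benchmark table.
# Token counts come from the table; the savings formula
# (1 - munched / traditional) is an assumption for illustration.

rows = [
    ("fastapi/routing.py", 8_836, 0),              # file not needed at all
    ("fastapi/dependencies/utils.py", 5_218, 310), # one function pulled
    ("fastapi/security/oauth2.py", 2_640, 90),     # one helper pulled
]

for path, traditional, munched in rows:
    savings = 100 * (1 - munched / traditional)
    print(f"{path}: {savings:.1f}% saved")
# → 100.0%, 94.1%, 96.6% — matching the table
```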
Calculating the actual dollar impact of context-window waste on a 214K token codebase.
Scale to multiple projects and more queries per day, and the savings multiply accordingly.
A pre-built symbol index lets the MCP server answer code queries in milliseconds with surgical precision.
Run index_code_folder(path) to index code symbols, or index_doc_local(path) (jDocMunch) to index documentation sections. Both build a persistent local index; indexing happens once per project.
Instead of reading files, the AI calls search_symbols(query) or get_symbol(id). The MCP server performs semantic + keyword search against the index in milliseconds.
Only the matching symbol's source code and metadata are returned — not the surrounding file, not unrelated classes. A 6,000-token file read is replaced by a 400-token symbol pull.
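A minimal toy sketch of the idea — not jCodeMunch's actual implementation: a pre-built index maps symbol names to their exact source and metadata, so a lookup returns one definition instead of the file that contains it. The index entry and the naive keyword search below are illustrative stand-ins:

```python
# Toy illustration of symbol-level retrieval (not jCodeMunch internals).
# A pre-built index maps symbol names to exact source spans, so a query
# returns one definition instead of the whole file.

SYMBOL_INDEX = {
    "solve_dependencies": {
        "file": "fastapi/dependencies/utils.py",
        "kind": "function",
        "source": "async def solve_dependencies(...): ...",  # captured at index time
    },
}

def search_symbols(query: str) -> list[str]:
    """Naive keyword match over indexed symbol names."""
    q = query.lower()
    return [name for name in SYMBOL_INDEX if q in name.lower()]

def get_symbol(name: str) -> dict:
    """Return only the requested symbol's source and metadata."""
    return SYMBOL_INDEX[name]

matches = search_symbols("dependencies")
print(matches)                        # → ['solve_dependencies']
print(get_symbol(matches[0])["file"]) # → fastapi/dependencies/utils.py
```

The real server adds semantic search and persistent storage on top, but the payload shape is the point: the agent receives a few hundred tokens of exactly the right code.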
Symbol-level retrieval changes how AI agents interact with code. Here are the workflows where jCodeMunch and jDocMunch deliver the most value.
Pre-built symbol indexes for popular frameworks. A 932 MB React repo becomes a 3 MB pack. A 1.4 GB Node.js monorepo becomes 10.6 MB. Install a pack and your AI agent gets symbol-level access — without cloning the repo or waiting for an index build.
find_importers and get_blast_radius trace across the boundary into framework code.

jcodemunch-mcp install-pack nodejs
Browse all Starter Packs ↗
jCodeMunch munches code. jDocMunch munches documentation — the same surgical retrieval approach, applied to Markdown, READMEs, specs, and any text-based docs in your repo.
search_sections("auth flow") — one call, the right section, nothing else.
pip install git+https://github.com/jgravelle/jdocmunch-mcp.git
Learn more about jDocMunch on GitHub ↗
No code required. jDataMunch indexes your spreadsheets, databases, and data files so AI assistants can answer data questions precisely — without reading entire files or guessing at column names.
Common questions about jCodeMunch, jDocMunch, and MCP-based code retrieval.
Short version: RepoMapper is a ranked repository "map" (great for orientation and "what matters?"), while jCodeMunch is symbol-accurate retrieval (great for "show me the exact code" with tiny token spend). They overlap, but they're optimized for different jobs.
jCodeMunch works with any client that supports the Model Context Protocol (MCP), including:
Antigravity uses a standard MCP config file — setup takes about a minute.
pip install git+https://github.com/jgravelle/jcodemunch-mcp.git

mcp_config.json:
{
"mcpServers": {
"jcodemunch": {
"command": "jcodemunch-mcp",
"env": {
"GITHUB_TOKEN": "ghp_...",
"ANTHROPIC_API_KEY": "sk-ant-..."
}
}
}
}
ANTHROPIC_API_KEY enables AI-generated symbol summaries; GITHUB_TOKEN raises GitHub API rate limits and unlocks private repos.

Larger context windows have diminishing returns for code retrieval:
Absolutely — they're designed as complementary tools. A typical pairing:
Together, they give AI agents surgical access to both code and docs without reading entire files. When an agent needs to understand how authentication works, it can search code symbols for the implementation and search doc sections for the design rationale — all with minimal token spend.
In benchmarks measured with tiktoken cl100k_base across 15 tasks on 3 real repositories, jCodeMunch achieved a 95% average token reduction for code retrieval operations.
Choose a single-product license for code, docs, or data — or get all three in a Munch Trio bundle. All licenses are commercial-use licenses for the specified tier.