Direct Symbol Retrieval

Stop wasting tokens.
Start munching code.

jCodeMunch is a Model Context Protocol (MCP) server that indexes your codebases for surgical AI retrieval. No more flooding context windows with thousands of lines of unrelated files.

278,958,252
Tokens Saved by Participating Users
Since 3/3/2026
$6,973.96 SAVED!
~480
Tokens per Query
99.7%
Avg. Reduction
O(1)
Retrieval Speed
Target Codebase
C:\Web\APIs — Real .NET API Suite

A production .NET 10 API suite with multiple service endpoints, authorization middleware, health checks, FTP/email integrations, and data models — used verbatim for this benchmark.

📁
136
Source .cs files
Excluding build artifacts
📝
565 KB
Total source code
Raw characters
🪙
~141K
Tokens to read all files
@ 4 chars/token
🏗️
8
API sub-projects
Survey, CRUD, GiftCard, Auth…
⚙️
1,031
Lines — Program.cs alone
Massive boilerplate entry point
Benchmark
One Query. Two Worlds.

Compare a standard agent's file-based exploration vs. jCodeMunch symbol retrieval for the query: "How are survey submissions handled?"

Standard Agent (File-based)
// Iterating through likely files...
jCodeMunch MCP (Symbol-based)
// Querying symbol index...
141,450
Tokens — Old Way
~480
Tokens — jCodeMunch
99.7%
Token Reduction
Q&A
How is this better or different from RepoMapper?

Short version: RepoMapper is a ranked repository “map” (great for orientation and “what matters?”), while jCodeMunch is symbol-accurate retrieval (great for “show me the exact code” with tiny token spend). They overlap, but they’re optimized for different jobs.

What RepoMapper does well (and when I’d use it)
  • Generates a “repo map” that highlights important files/definitions and relationships.
  • Prioritizes relevance using Tree-sitter parsing plus a PageRank-like importance ranking.
  • Best for first-pass orientation: “Which files matter for this task?” and “Where should I look next?”
If you want a fast, ranked breadcrumb trail across a new codebase, RepoMapper is a solid compass.
What jCodeMunch does differently (and why it can be “better” for agents)
  • Symbol-first, not file-first: agents search and retrieve functions/classes/methods/constants directly.
  • Byte-accurate retrieval: once indexed, pulling a symbol is O(1) via byte-offset seeking, not “read a file and hope.”
  • Stable symbol IDs: {file_path}::{qualified_name}#{kind} lets an agent “bookmark” code reliably across sessions.
  • More than a map: RepoMapper's flow is map → agent picks files → agent still has to open/read those files for details.
  • jCodeMunch flow: “Search symbols” → “Get symbol” → done. File reads become the exception, not the default.
  • Scaling behavior: maps get bigger as repos grow; targeted symbol pulls stay tiny even in massive repos.
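The stable-ID scheme above is concrete enough to sketch. This minimal Python model shows how an agent might build and parse such IDs; the class and field names are illustrative, not jCodeMunch's actual internals:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SymbolId:
    """Toy model of the {file_path}::{qualified_name}#{kind} ID format."""
    file_path: str
    qualified_name: str
    kind: str  # e.g. "function", "class", "method", "constant"

    def __str__(self) -> str:
        return f"{self.file_path}::{self.qualified_name}#{self.kind}"

    @classmethod
    def parse(cls, raw: str) -> "SymbolId":
        # Split on the LAST "::" and "#" so path separators survive intact.
        path, rest = raw.rsplit("::", 1)
        name, kind = rest.rsplit("#", 1)
        return cls(path, name, kind)

sid = SymbolId("Services/SurveyService.cs", "SurveyService.SubmitSurvey", "method")
```

Because parsing splits on the last `::` and `#`, a Windows path like `C:\Web\APIs\...` (single colons) still round-trips cleanly.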
Can I use both?
Absolutely. A nice pairing is:
  • RepoMapper for “what’s important?” and quick repo orientation.
  • jCodeMunch for “show me the exact code path” and repeatable, low-token retrieval while implementing changes.
How do I integrate jCodeMunch with Google Antigravity?

Antigravity uses a standard MCP config file — setup takes about a minute.

Step-by-step setup
  • Install the server: pip install git+https://github.com/jgravelle/jcodemunch-mcp.git
  • In Antigravity, open the Agent pane → click the menu → MCP Servers → Manage MCP Servers
  • Click View raw config to open mcp_config.json
  • Add the entry below, save, then restart the MCP server from the Manage MCPs pane
{
  "mcpServers": {
    "jcodemunch": {
      "command": "jcodemunch-mcp",
      "env": {
        "GITHUB_TOKEN": "ghp_...",
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}
Both env vars are optional. ANTHROPIC_API_KEY enables AI-generated symbol summaries; GITHUB_TOKEN raises GitHub API rate limits and unlocks private repos.
Actual Data from This Project

These are verbatim results from the jCodeMunch MCP server querying the indexed Bakery Deli Survey codebase.

Query search_symbols("survey submission handler")
Without MCP — Read Each File 141,450 tokens
READ: Authorization/AuthorizationModels.cs
→ 970 tokens consumed
READ: Authorization/AuthorizationService.cs
→ 2,840 tokens consumed
READ: BakeryDeliSurveyApi/Models/SurveyModels.cs
→ 890 tokens consumed
READ: Program.cs
→ 8,972 tokens consumed
... 128 more files ...
TOTAL: 141,450 tokens used
Time to first result: ~4.2 seconds
With jCodeMunch MCP ~480 tokens
→ code_search_symbols({
repo: "local/Bakery_Deli_Survey",
query: "survey submission handler"
})
✓ EditSurvey(surveyID, vendor_no, …) [Survey_Edit.js:52]
✓ DeleteSurvey_Confirm(surveyID) [Survey_Edit.js:43]
✓ Validate_Survey_Delete() [Survey_Edit.js:12]
✓ EditSurveyItems(surveyID) [Survey_Edit.js:17]
Symbol source (EditSurvey):
→ 396 chars, 13 lines retrieved
→ Exact function body, no noise
TOTAL: ~480 tokens used
Time to first result: 0.01 seconds
Token Usage by File
Where the Tokens Go

Each file read floods the context window. jCodeMunch retrieves only the symbol requested.

File                     Lines   Tokens (Traditional)   Tokens (jCodeMunch)   Savings
Program.cs               1,031   8,972                  ~0 (not needed)       100%
SurveyService.cs           520   6,102                  ~340 (one method)     94.4%
AuthorizationService.cs    320   2,840                  ~110 (one helper)     96.1%
Token Costs Add Up

Calculating the actual dollar impact of context-window waste on a 141K token codebase.

Traditional Way
$0.424
per query
141,450 tokens × $3.00/1M
100 queries/day = $42.40/day
Monthly cost = $1,272
Annual cost = $15,264
jCodeMunch MCP
$0.0014
per query
~480 tokens × $3.00/1M
100 queries/day = $0.14/day
Monthly cost = $4.32
Annual cost = $51.84
You Save
$0.4226
per query (99.7%)
At 100 queries/day:
Save $42.26/day
Save $1,267/month
Save $15,212/year
$15,212
Saved per year at 100 AI queries/day — on a single codebase

Scale to multiple projects and more queries per day, and the savings multiply accordingly.
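The cost arithmetic above is easy to reproduce. This sketch assumes $3.00 per 1M input tokens, a 30-day month, and a 360-day year; the page rounds the per-query savings figure before scaling, so its totals differ from a full-precision calculation by a few cents:

```python
# Reproduce the per-query and scaled cost figures from the comparison above.
PRICE_PER_MILLION = 3.00  # assumed input-token price, $ per 1M tokens

def cost_per_query(tokens: int) -> float:
    return tokens * PRICE_PER_MILLION / 1_000_000

old_way = cost_per_query(141_450)   # traditional full-file reads
munch = cost_per_query(480)         # targeted symbol retrieval

per_query_savings = old_way - munch
daily = per_query_savings * 100     # at 100 queries/day
print(f"old: ${old_way:.4f}/query, new: ${munch:.4f}/query, "
      f"save ${daily:.2f}/day, ${daily * 360:,.2f}/year")
```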

Architecture
How jCodeMunch Works

A pre-built symbol index lets the MCP server answer code queries in milliseconds with surgical precision.

1

Index Once

Run index_code_folder(path) — jCodeMunch parses every file, extracts symbols (functions, classes, methods), generates AI summaries, and stores them in a local vector index. Happens once per project.
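As a rough sketch of what an indexing pass records, here is a toy Python-only indexer using the standard `ast` module. The real server handles many languages, generates AI summaries, and builds a vector index; the function name and entry fields here are assumptions for illustration:

```python
import ast
import os

def index_code_folder(path: str) -> dict:
    """Walk a folder, parse each .py file, and record one index entry
    per function/class symbol (a toy version of the 'index once' step)."""
    index = {}
    for root, _dirs, files in os.walk(path):
        for fname in files:
            if not fname.endswith(".py"):
                continue
            fpath = os.path.join(root, fname)
            with open(fpath, encoding="utf-8") as f:
                src = f.read()
            for node in ast.walk(ast.parse(src)):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                    kind = "class" if isinstance(node, ast.ClassDef) else "function"
                    # A real qualified name would include the enclosing class path.
                    symbol_id = f"{fpath}::{node.name}#{kind}"
                    index[symbol_id] = {
                        "kind": kind,
                        "start_line": node.lineno,
                        "end_line": node.end_lineno,
                    }
    return index
```

Storing start/end positions at index time is what makes later retrieval a direct lookup rather than a fresh parse.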

2

AI Queries by Intent

Instead of reading files, the AI calls search_symbols(query) or get_symbol(id). The MCP server performs semantic + keyword search against the index in milliseconds.
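A toy illustration of the keyword half of that hybrid search: score each indexed symbol's summary by word overlap with the query. The production server adds semantic/vector matching on top; all names and the scoring scheme here are illustrative:

```python
def search_symbols(index: dict, query: str, top_k: int = 5) -> list:
    """Rank indexed symbols by keyword overlap between query and summary."""
    query_words = set(query.lower().split())
    scored = []
    for symbol_id, summary in index.items():
        score = len(query_words & set(summary.lower().split()))
        if score:
            scored.append((score, symbol_id))
    scored.sort(reverse=True)  # highest overlap first
    return [symbol_id for _score, symbol_id in scored[:top_k]]
```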

3

Surgical Retrieval

Only the matching symbol's source code and metadata are returned — not the surrounding file, not unrelated classes. A 6,000-token file read is replaced by a 400-token symbol pull.
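The byte-offset seek behind that surgical retrieval can be sketched in a few lines. This assumes the index stored an (offset, length) pair per symbol at indexing time; it is an illustration, not the server's actual code:

```python
def get_symbol_source(file_path: str, byte_offset: int, byte_len: int) -> str:
    """Return exactly one symbol's source by seeking to its stored byte
    offset — O(1) in file size, no full-file read."""
    with open(file_path, "rb") as f:
        f.seek(byte_offset)          # jump straight to the symbol
        return f.read(byte_len).decode("utf-8")
```

Because `seek` jumps directly to the stored offset, retrieval cost stays constant whether the file is 50 lines or 5,000.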