Self-hosted alternative to Context7

Index your documentation from different sources.
Give your AI Agents up-to-date information.

$ npx contextmcp init
Documentation
config.yaml
sources:
  - name: dodo-docs
    type: github
    repository: dodopayments/dodo-docs
    parser: mdx
    skipDirs:
      - .git
      - node_modules
response.json
{
  "score": 0.89,
  "heading": "Quick Start",
  "content": "To install Dodo Payments SDK...",
  "metadata": {
    "sourceUrl": ".../docs/quickstart.mdx"
  }
}
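For agents written in TypeScript, the response above maps onto a small result type. The SearchResult name and the typings below are illustrative; only the field names come from the sample response.

// Shape of a single retrieval hit, mirroring the sample response above.
// The interface name is illustrative; only the fields come from the example.
interface SearchResult {
  score: number;        // similarity score of the matched chunk
  heading: string;      // nearest heading, e.g. "Quick Start"
  content: string;      // chunk text handed to the agent
  metadata: {
    sourceUrl: string;  // link back to the original document
  };
}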
CASE STUDY

Powering Sentra at Dodo Payments.

We built ContextMCP to solve a problem we faced ourselves. Sentra, our AI agent, needed reliable access to documentation spread across multiple repositories.

Context7 could not keep documentation in sync, which led to outdated context and unreliable answers.

ContextMCP indexes everything at set intervals, so Sentra always works with up-to-date information.

Sentra AI Agent interface showing ContextMCP integration

Zero Config

Drop a config.yaml in your repo. ContextMCP handles the parsing, chunking, and indexing automatically.

AST-Aware Chunking

Our AST-based parsers understand code blocks, headers, and semantic boundaries to keep context intact.

Standard RAG
function payment(req) {
  const {id} = req.body;
--- CHUNK BREAK ---
  return stripe.charge(id);
}
ContextMCP
function payment(req) {
  const {id} = req.body;
  return stripe.charge(id);
}
✓ Full Context Preserved
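The idea behind boundary-aware chunking can be sketched in a few lines. ContextMCP's real parsers walk an AST; the simplified, line-oriented version below only illustrates the principle of starting new chunks at headings while never splitting inside a fenced code block.

// Simplified sketch: split markdown at headings, but never inside a code fence.
// ContextMCP's actual parsers are AST-based; this is only an illustration.
function chunkMarkdown(doc: string): string[] {
  const chunks: string[] = [];
  let current: string[] = [];
  let inCodeFence = false;

  for (const line of doc.split("\n")) {
    if (line.trimStart().startsWith("```")) {
      inCodeFence = !inCodeFence; // entering or leaving a fenced code block
    }
    // A heading outside a code fence starts a new chunk.
    if (!inCodeFence && /^#{1,6}\s/.test(line) && current.length > 0) {
      chunks.push(current.join("\n"));
      current = [];
    }
    current.push(line);
  }
  if (current.length > 0) chunks.push(current.join("\n"));
  return chunks;
}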

Edge Native

Served from Cloudflare Workers, ensuring low latency for your AI Agents.
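A Workers deployment can be sketched roughly as follows. The /search route and the response shape are placeholders for illustration, not ContextMCP's actual API.

// Sketch of a search endpoint served at the edge on Cloudflare Workers.
// The /search route and response shape are illustrative placeholders.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/search") {
      const query = url.searchParams.get("q") ?? "";
      // A real handler would query the index; this stub just echoes the query.
      return Response.json({ query, results: [] });
    }
    return new Response("Not found", { status: 404 });
  },
};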

Open Source

Fork it. Self-host it. Own your data.

Frequently Asked Questions

Everything you need to know about the Context Engine.

Context7 often struggles with stale data. Standard RAG blindly chunks text, breaking code logic. ContextMCP solves both: it runs on a schedule to keep context fresh and uses AST-aware chunking to preserve function/class boundaries, ensuring your AI Agent never hallucinates due to missing context.
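On Cloudflare Workers, the scheduled refresh described above would typically hang off a cron trigger. The sketch below assumes a hypothetical reindexAllSources() helper and is not ContextMCP's actual implementation; the interval itself lives in the Worker's cron configuration rather than in code.

// Sketch of scheduled re-indexing on Cloudflare Workers (cron trigger).
// reindexAllSources() is a hypothetical helper for illustration only.
export default {
  async scheduled(_event: unknown, _env: unknown, ctx: { waitUntil(p: Promise<unknown>): void }) {
    // Keep the Worker alive until re-indexing finishes.
    ctx.waitUntil(reindexAllSources());
  },
};

async function reindexAllSources(): Promise<void> {
  // Placeholder: re-crawl each configured source, re-chunk, and update the index.
}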