# How No-Code MCP Works
When you create an MCP server through the wizard, here's what happens behind the scenes.
## The pipeline
Crawl → Index → Search → Deliver
- Crawl — Your sources are visited and content is extracted (web pages, GitHub files, etc.)
- Index — Content is organized and optimized for fast, accurate retrieval
- Search — When an AI agent queries your MCP, hybrid search finds the most relevant results
- Deliver — Results are returned via the MCP protocol to any connected AI client
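The four stages above can be sketched as a toy pipeline. This is an illustrative sketch only, not the actual implementation: the class and method names (`MiniPipeline`, `build_index`, and so on) are hypothetical, and the "index" is a plain token map standing in for real keyword-plus-vector indexing.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

class MiniPipeline:
    """Toy sketch of the crawl -> index -> search -> deliver flow."""

    def __init__(self):
        self.index: dict[str, list[Document]] = {}

    def crawl(self, sources: dict[str, str]) -> list[Document]:
        # In production this would fetch web pages and GitHub files;
        # here each "source" is just an in-memory string.
        return [Document(name, text) for name, text in sources.items()]

    def build_index(self, docs: list[Document]) -> None:
        # Real indexing is hybrid (keyword + semantic); this toy
        # version maps each lowercase token to the docs containing it.
        for doc in docs:
            for token in set(doc.text.lower().split()):
                self.index.setdefault(token, []).append(doc)

    def search(self, query: str) -> list[Document]:
        # Return docs matching any query token, in first-hit order.
        hits: list[Document] = []
        for token in query.lower().split():
            for doc in self.index.get(token, []):
                if doc not in hits:
                    hits.append(doc)
        return hits

    def deliver(self, docs: list[Document]) -> list[dict]:
        # MCP returns results as structured content blocks.
        return [{"type": "text", "text": d.text, "source": d.source} for d in docs]

pipeline = MiniPipeline()
docs = pipeline.crawl({"guide": "deploy your MCP server", "faq": "indexing runs daily"})
pipeline.build_index(docs)
results = pipeline.deliver(pipeline.search("indexing"))
```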
## Indexing
After you deploy, your sources begin indexing automatically. The MCP server is usable while indexing runs: it falls back to live fetching until the index is ready.
### About indexing times
- 1-3 sources: Usually under 1 minute
- Large docs sites (50+ pages): 2-5 minutes
- GitHub repos: Under 30 seconds
- Sources auto-refresh daily at 7:00 AM Central US time
- You can manually refresh any source from the dashboard
## MCP protocol
Your server exposes two transport options:
| Transport | Endpoint | Use case |
|---|---|---|
| SSE | `GET /api/mcp/{slug}/sse` | Most MCP clients (Cursor, Claude Desktop) |
| Streamable HTTP | `POST /api/mcp/{slug}` | JSON-RPC over HTTP |
Both follow the Model Context Protocol specification, so your server works with any compliant client.
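For the Streamable HTTP transport, the first message a client POSTs is a JSON-RPC 2.0 `initialize` request. The sketch below builds that request body; the host, slug, and client info are placeholder values, and the protocol version shown is the MCP revision that introduced Streamable HTTP.

```python
import json

# Hypothetical values for illustration; substitute your own host and slug.
SLUG = "my-docs"
ENDPOINT = f"https://example.com/api/mcp/{SLUG}"

# A minimal JSON-RPC 2.0 "initialize" request, the first message an
# MCP client sends to the server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

body = json.dumps(request)
# POST `body` to ENDPOINT with Content-Type: application/json,
# e.g. with urllib.request or an HTTP client of your choice.
```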