How No-Code MCP Works

When you create an MCP server through the wizard, here's what happens behind the scenes.

The pipeline

  Crawl  →  Index  →  Search  →  Deliver
  1. Crawl — Your sources are visited and content is extracted (web pages, GitHub files, etc.)
  2. Index — Content is organized and optimized for fast, accurate retrieval
  3. Search — When an AI agent queries your MCP, hybrid search finds the most relevant results
  4. Deliver — Results are returned via the MCP protocol to any connected AI client
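The four stages above can be sketched in miniature. This is an illustrative toy, not the real implementation: the naive keyword-overlap scoring stands in for the actual hybrid search, and the source URLs are made up.

```python
def crawl(sources):
    # Stage 1: visit each source and extract its text content.
    return {url: text for url, text in sources.items()}

def index(pages):
    # Stage 2: organize content for retrieval (here, a simple inverted index).
    inverted = {}
    for url, text in pages.items():
        for word in set(text.lower().split()):
            inverted.setdefault(word, set()).add(url)
    return inverted

def search(inverted, query):
    # Stage 3: score pages by query-term overlap
    # (a stand-in for real hybrid keyword + vector search).
    scores = {}
    for word in query.lower().split():
        for url in inverted.get(word, ()):
            scores[url] = scores.get(url, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

def deliver(results, pages, limit=3):
    # Stage 4: package the top results as the MCP response payload.
    return [{"url": u, "content": pages[u]} for u in results[:limit]]

sources = {
    "https://example.com/docs/install": "how to install the cli tool",
    "https://example.com/docs/auth": "configure auth tokens for the api",
}
pages = crawl(sources)
hits = deliver(search(index(pages), "install cli"), pages)
print(hits[0]["url"])  # → https://example.com/docs/install
```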

Indexing

After you deploy, your sources begin indexing automatically. The MCP server is usable during indexing — it uses live fetching as a fallback until indexing completes.
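The fallback behavior amounts to a simple branch: serve from the index when it is ready, otherwise fetch sources live. A minimal sketch, with hypothetical function names for illustration:

```python
def retrieve(query, index_ready, search_index, live_fetch):
    # While indexing is still in progress, answer queries by fetching
    # sources live; once the index is ready, serve from it instead.
    # (search_index and live_fetch are illustrative placeholders.)
    if index_ready:
        return search_index(query)
    return live_fetch(query)

results = retrieve(
    "setup",
    index_ready=False,
    search_index=lambda q: ["indexed result"],
    live_fetch=lambda q: ["live-fetched result"],
)
print(results)  # → ['live-fetched result']
```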

About indexing times
  • 1-3 sources: Usually under 1 minute
  • Large docs sites (50+ pages): 2-5 minutes
  • GitHub repos: Under 30 seconds
  • Sources auto-refresh daily at 7:00 AM Central US time
  • You can manually refresh any source from the dashboard

MCP protocol

Your server exposes two transport options:

  Transport          Endpoint                    Use case
  SSE                GET /api/mcp/{slug}/sse     Most MCP clients (Cursor, Claude Desktop)
  Streamable HTTP    POST /api/mcp/{slug}        JSON-RPC over HTTP

Both follow the Model Context Protocol specification, so your server works with any compliant client.
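For instance, an MCP client talking to the Streamable HTTP endpoint sends JSON-RPC 2.0 messages in the POST body. The sketch below builds such a request for tools/list, a standard MCP method for discovering a server's tools; the host and the "my-docs" slug are placeholders.

```python
import json

# Placeholder endpoint: substitute your actual host and server slug.
endpoint = "https://example.com/api/mcp/my-docs"

# A JSON-RPC 2.0 request asking the server to list its available tools.
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
body = json.dumps(payload)

print(body)  # this string is POSTed to the endpoint with
             # the header Content-Type: application/json
```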