agentic-ai-patterns

# Custom MCP Server

*Build and integrate your own tool servers*

Build a domain-specific MCP (Model Context Protocol) server that exposes tools as a standardised service. A LangGraph ReAct agent connects to it via stdio transport and calls its tools exactly like local @tool functions.

```mermaid
flowchart TD
    S([__start__]) --> A[agent]
    A -->|tool_calls| T[tools / MCP client]
    T -->|stdio| M
    subgraph MCP Server
        M[search_books]
        MD[get_book_details]
        MA[analyze_genre]
        ML[list_genres]
    end
    M --> T
    T --> A
    A -->|no tool calls| E([__end__])
```

The Model Context Protocol (MCP) is an open standard that decouples tool implementation from the models that use them. Instead of decorating functions with `@tool` inside your agent code, you publish them as an MCP server — a standalone process that any MCP-compatible client (Claude Desktop, LangGraph, VS Code) can connect to and discover automatically.

Building an MCP server with FastMCP is straightforward: decorate Python functions with `@mcp.tool()` and type-annotate their arguments. FastMCP generates the full JSON Schema automatically from your type hints, validates inputs, and handles all serialisation over the chosen transport. The server in this lesson exposes four tools backed by the SQLite books database: `search_books`, `get_book_details`, `analyze_genre`, and `list_genres`.

On the agent side, `MultiServerMCPClient` from `langchain-mcp-adapters` spawns the server as a subprocess (stdio transport) and converts its tool schemas into standard LangChain `StructuredTool` objects. The LangGraph ReAct graph receives these tools via `llm.bind_tools()` — no changes to the graph required. Multiple servers (local stdio, remote SSE/HTTP) can be registered simultaneously, and the agent sees their tools as a single merged namespace.
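The client-side wiring might look like the following sketch. The server filename `books_server.py`, the model identifier, and the prompt are assumptions; the imports are deferred into `main()` so the server registry can be inspected without the packages installed:

```python
# agent_client.py — connect a LangGraph ReAct agent to the MCP server.
# The server path, model choice, and prompt are illustrative assumptions.
import asyncio

# Server registry: each entry describes one MCP server. A stdio entry is
# spawned as a subprocess; remote servers could be added alongside it.
server_config = {
    "books": {
        "command": "python",
        "args": ["books_server.py"],  # assumed filename of the FastMCP server
        "transport": "stdio",
    },
}


async def main() -> None:
    from langchain_mcp_adapters.client import MultiServerMCPClient
    from langgraph.prebuilt import create_react_agent

    client = MultiServerMCPClient(server_config)
    # Discovered MCP schemas become standard LangChain StructuredTools.
    tools = await client.get_tools()

    # Any tool-calling chat model works; the ReAct graph itself is unchanged.
    agent = create_react_agent("anthropic:claude-3-5-sonnet-latest", tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Which sci-fi books do we have?"}]}
    )
    print(result["messages"][-1].content)


if __name__ == "__main__":
    asyncio.run(main())
```

Registering a second server is just another key in `server_config`; its tools are merged into the same list returned by `get_tools()`.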