The Model Context Protocol (MCP) is redefining how AI coding assistants interact with the outside world. Created by Anthropic and released as an open standard, MCP provides a universal way for AI tools like Claude Code, Cursor, and Windsurf to connect to external data sources, APIs, and developer tools through a standardized protocol.
If you have ever wished your AI assistant could query your database, check CI/CD status, or search your internal documentation without you having to copy-paste context, MCP is the answer. In this guide, we will build MCP servers from scratch in both TypeScript and Python, explore the protocol’s architecture, and show you how to connect everything to your AI workflow.
How MCP Works
MCP follows a client-server architecture built on top of JSON-RPC 2.0 messaging. The AI assistant (Claude Code, Cursor, etc.) acts as the MCP client, while your custom integrations run as MCP servers. The client discovers what capabilities a server offers, then invokes them as needed during a conversation.
There are two primary transport mechanisms:
- stdio (Standard I/O) — The client spawns the server as a subprocess and communicates over stdin/stdout. This is the most common approach for local development tools.
- Streamable HTTP — The server runs as an HTTP endpoint. The client sends requests via POST and can receive streaming responses via Server-Sent Events (SSE). This is ideal for remote servers and shared team tools.
A typical message exchange looks like this: the client sends an initialize request, the server responds with its capabilities (tools, resources, prompts), and then the client can call specific tools as the user interacts with the AI assistant.
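Concretely, that exchange can be sketched as plain JSON-RPC 2.0 messages. The field values below (protocol version string, capability details, client and server names) are illustrative rather than quoted from the spec:

```python
import json

# Sketch of the JSON-RPC 2.0 messages in a typical MCP handshake.
# Field values are illustrative, not normative.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "1.0.0"},
    },
}

initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "weather-server", "version": "1.0.0"},
    },
}

# After initialization, the client can invoke a tool by name
tools_call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

# Over the stdio transport, each message travels as a single line of JSON
wire_frame = json.dumps(tools_call_request)
print(wire_frame)
```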
MCP servers expose three types of primitives:
- Tools — Functions the AI can call (like API endpoints). The AI decides when to invoke them based on the user’s request.
- Resources — Data the AI can read (like files or database records). Think of these as context the AI can pull in.
- Prompts — Reusable prompt templates that users can invoke. These are predefined workflows the server offers.
Setting Up Your First MCP Server in TypeScript
Let us build a weather lookup tool server using the official @modelcontextprotocol/sdk package. This server will expose a single tool that fetches current weather data for any city.
First, set up the project:
mkdir weather-mcp-server && cd weather-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
npx tsc --init
Now create src/index.ts with the full server implementation:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});

// Define a tool that fetches weather data
server.tool(
  "get_weather",
  "Get current weather for a city. Returns temperature, conditions, and humidity.",
  {
    city: z.string().describe("City name, e.g. San Francisco"),
    units: z.enum(["celsius", "fahrenheit"]).default("celsius")
      .describe("Temperature units"),
  },
  async ({ city, units }) => {
    // Replace with your preferred weather API
    const response = await fetch(
      `https://wttr.in/${encodeURIComponent(city)}?format=j1`
    );
    if (!response.ok) {
      return {
        content: [{ type: "text", text: `Failed to fetch weather for ${city}` }],
        isError: true,
      };
    }
    const data = await response.json();
    const current = data.current_condition[0];
    const temp = units === "celsius"
      ? `${current.temp_C}°C`
      : `${current.temp_F}°F`;
    return {
      content: [{
        type: "text",
        text: `Weather in ${city}: ${temp}, ${current.weatherDesc[0].value}. ` +
          `Humidity: ${current.humidity}%. Wind: ${current.windspeedKmph} km/h.`,
      }],
    };
  }
);

// Start the server with stdio transport
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Weather MCP server running on stdio");
}

main().catch(console.error);
Add a build script to package.json and compile:
// In package.json, add:
// "scripts": { "build": "tsc", "start": "node dist/index.js" }
// "type": "module"
npx tsc
node dist/index.js
That is a fully functional MCP server. When an AI assistant connects to it, it will see the get_weather tool and can invoke it whenever a user asks about weather conditions.
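Before wiring the server into an assistant, you can sanity-check the response parsing offline. This sketch mirrors the tool handler's field access against a trimmed, hand-written sample of the wttr.in j1 shape (the values are made up):

```python
import json

# Trimmed, illustrative sample of wttr.in's ?format=j1 response shape
sample = json.loads("""
{
  "current_condition": [{
    "temp_C": "18",
    "temp_F": "64",
    "humidity": "72",
    "windspeedKmph": "11",
    "weatherDesc": [{"value": "Partly cloudy"}]
  }]
}
""")

# Same field access as the TypeScript tool handler above
current = sample["current_condition"][0]
units = "celsius"
temp = f"{current['temp_C']}°C" if units == "celsius" else f"{current['temp_F']}°F"
summary = (
    f"Weather in Berlin: {temp}, {current['weatherDesc'][0]['value']}. "
    f"Humidity: {current['humidity']}%. Wind: {current['windspeedKmph']} km/h."
)
print(summary)
```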
Building a Python MCP Server
Python developers can use the official mcp package to build servers with equally clean syntax. Let us build a file search tool that helps the AI find files across a project directory.
Install the package:
pip install mcp
Create file_search_server.py:
import os
import fnmatch
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
server = Server("file-search-server")
@server.list_tools()
async def list_tools():
    return [
        Tool(
            name="search_files",
            description=(
                "Search for files matching a glob pattern within a directory. "
                "Returns file paths and sizes. Useful for finding source files, "
                "configs, or any file by name pattern."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "directory": {
                        "type": "string",
                        "description": "Root directory to search in",
                    },
                    "pattern": {
                        "type": "string",
                        "description": "Filename glob pattern, e.g. *.py or test_*.ts (matched against file names, not full paths)",
                    },
                    "max_results": {
                        "type": "integer",
                        "description": "Maximum number of results to return",
                        "default": 20,
                    },
                },
                "required": ["directory", "pattern"],
            },
        ),
        Tool(
            name="read_file_snippet",
            description=(
                "Read the first N lines of a file. Useful for quickly "
                "inspecting file contents without loading everything."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "file_path": {
                        "type": "string",
                        "description": "Path to the file to read",
                    },
                    "lines": {
                        "type": "integer",
                        "description": "Number of lines to read from the start",
                        "default": 50,
                    },
                },
                "required": ["file_path"],
            },
        ),
    ]
@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "search_files":
        directory = arguments["directory"]
        pattern = arguments["pattern"]
        max_results = arguments.get("max_results", 20)
        matches = []
        for root, dirs, files in os.walk(directory):
            # Skip hidden directories and node_modules
            dirs[:] = [d for d in dirs if not d.startswith(".") and d != "node_modules"]
            for filename in files:
                if fnmatch.fnmatch(filename, pattern):
                    full_path = os.path.join(root, filename)
                    size = os.path.getsize(full_path)
                    matches.append(f"{full_path} ({size:,} bytes)")
                    if len(matches) >= max_results:
                        break
            if len(matches) >= max_results:
                # Stop walking entirely; the inner break only exits the file loop
                break
        result = "\n".join(matches) if matches else "No files found matching pattern."
        return [TextContent(type="text", text=result)]
    elif name == "read_file_snippet":
        file_path = arguments["file_path"]
        lines = arguments.get("lines", 50)
        try:
            with open(file_path, "r", encoding="utf-8", errors="replace") as f:
                content = "".join(f.readlines()[:lines])
            return [TextContent(type="text", text=content)]
        except FileNotFoundError:
            return [TextContent(type="text", text=f"File not found: {file_path}")]
    return [TextContent(type="text", text=f"Unknown tool: {name}")]
async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream, server.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
The Python SDK uses decorators to register tool handlers, making the code highly readable. The @server.list_tools() decorator defines what tools exist, while @server.call_tool() handles execution when the AI invokes them.
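Because the search handler is plain standard library code, its core loop can be smoke-tested outside MCP entirely. This sketch rebuilds the walk-and-match logic against a throwaway directory tree:

```python
import fnmatch
import os
import tempfile

# Build a throwaway directory tree to search
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "src"))
os.makedirs(os.path.join(root, "node_modules", "pkg"))
for rel in ["src/app.py", "src/util.py", "readme.md", "node_modules/pkg/x.py"]:
    with open(os.path.join(root, *rel.split("/")), "w") as f:
        f.write("# placeholder\n")

# Same walk-and-match loop the server's search_files tool uses
matches = []
for dirpath, dirs, files in os.walk(root):
    # Prune hidden directories and node_modules in place
    dirs[:] = [d for d in dirs if not d.startswith(".") and d != "node_modules"]
    for filename in files:
        if fnmatch.fnmatch(filename, "*.py"):
            matches.append(os.path.join(dirpath, filename))

print(sorted(os.path.basename(m) for m in matches))  # node_modules/pkg/x.py is skipped
```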
Adding Tools, Resources, and Prompts
We have focused on tools so far, but MCP servers can also expose resources and prompts. Here is how to add all three primitives to a TypeScript server:
// Tool — an action the AI can perform
server.tool(
  "query_database",
  "Run a read-only SQL query against the application database",
  { query: z.string().describe("SQL SELECT query to execute") },
  async ({ query }) => {
    const results = await db.query(query);
    return { content: [{ type: "text", text: JSON.stringify(results, null, 2) }] };
  }
);

// Resource — data the AI can read for context
server.resource(
  "schema",
  "db://schema",
  { description: "Current database schema including all tables and columns" },
  async (uri) => {
    const schema = await db.getSchema();
    return { contents: [{ uri: uri.href, text: schema, mimeType: "text/plain" }] };
  }
);

// Prompt — a reusable template users can invoke
server.prompt(
  "analyze_table",
  "Generate a comprehensive analysis of a database table",
  { table: z.string().describe("Table name to analyze") },
  async ({ table }) => ({
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Analyze the database table "${table}". Describe its schema, ` +
          `row count, common query patterns, and suggest optimizations.`,
      },
    }],
  })
);
Think of tools as actions (do something), resources as data (read something), and prompts as templates (start a workflow). A well-designed MCP server uses all three to give the AI assistant maximum capability.
Connecting to Claude Code
Once your server is built, you need to tell your AI assistant how to find it. For Claude Code, you register project-scoped MCP servers in a .mcp.json file at the project root (safe to commit so your whole team shares them), or add user-scoped servers with the claude mcp add command.
Here is the configuration format, with two stdio servers and one remote Streamable HTTP server:
// .mcp.json (project root)
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/path/to/weather-mcp-server/dist/index.js"]
    },
    "file-search": {
      "command": "python3",
      "args": ["/path/to/file_search_server.py"]
    },
    "remote-tools": {
      "type": "http",
      "url": "https://my-mcp-server.example.com/mcp"
    }
  }
}
For the Claude Desktop app, the configuration lives in claude_desktop_config.json:
// macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
// Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/absolute/path/to/dist/index.js"]
    }
  }
}
For Cursor, you add MCP servers through the Settings UI under Features > MCP Servers, or by editing .cursor/mcp.json in your project root. The format is similar to Claude Code.
After adding the configuration, restart your AI assistant. It will automatically spawn the server process and discover available tools. You will see the tools listed when the assistant starts up.
Real-World MCP Server Ideas
The examples above are intentionally simple. Here are practical MCP servers that can dramatically improve your AI-assisted development workflow:
- Database Query Tool — Connect to PostgreSQL or MySQL. Expose a read-only query tool and a schema resource. The AI can inspect your database structure and run queries to answer questions about your data.
- API Gateway — Wrap your company’s internal APIs as MCP tools. The AI can fetch user data, check order status, or trigger deployments without you leaving the conversation.
- Documentation Search — Index your internal docs, READMEs, and wiki pages. Expose them as searchable resources so the AI has accurate, up-to-date context about your systems.
- CI/CD Status Checker — Connect to GitHub Actions, GitLab CI, or Jenkins. The AI can check build status, read logs from failed jobs, and suggest fixes.
- Slack Integration — Read messages from channels, post updates, and search conversation history. Useful for pulling in context from team discussions.
- Kubernetes Cluster Manager — Expose kubectl operations as tools. The AI can list pods, check logs, describe deployments, and help troubleshoot cluster issues.
The key insight is that MCP servers turn your AI assistant from a code-only tool into a full-stack development partner that can interact with your entire infrastructure.
Testing and Debugging
Debugging MCP servers requires understanding what is happening at the protocol level. Here are the essential tools and techniques:
MCP Inspector is the official debugging tool. Install and run it against your server:
# Install and run MCP Inspector
npx @modelcontextprotocol/inspector node dist/index.js
# For Python servers
npx @modelcontextprotocol/inspector python3 file_search_server.py
The Inspector provides a web UI where you can see all available tools, invoke them manually, and inspect the JSON-RPC messages flowing between client and server. This is invaluable for verifying your tool schemas and response formats.
Common pitfalls to watch for:
- Transport mismatch — If your server uses stdio but the client expects HTTP (or vice versa), you will get connection failures with no helpful error message. Double-check your configuration.
- Schema validation errors — MCP validates tool input against the schema you define. If the AI sends a parameter with the wrong type, the call will fail silently. Use Zod (TypeScript) or explicit JSON Schema (Python) to define strict types.
- stdout contamination — For stdio servers, anything printed to stdout is interpreted as a JSON-RPC message. Use console.error() in Node.js or print(..., file=sys.stderr) in Python for debug logging.
- Missing tool descriptions — The AI relies on tool descriptions to decide when to use them. Vague descriptions like “does stuff” will result in the AI rarely or incorrectly invoking your tool. Write clear, specific descriptions.
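The stdout pitfall is easy to reproduce in isolation. This sketch simulates a stdio stream where a stray debug print lands ahead of a JSON-RPC frame, which is exactly what a client would fail to parse:

```python
import io
import json

# Simulate what a stdio client reads from the server's stdout
buf = io.StringIO()

# WRONG: a debug print on stdout lands in the protocol stream
print("debug: handling request", file=buf)
print(json.dumps({"jsonrpc": "2.0", "id": 1, "result": {}}), file=buf)

# The client parses each line as a JSON-RPC message and chokes on the first
frames = buf.getvalue().splitlines()
parsed, broken = [], []
for frame in frames:
    try:
        parsed.append(json.loads(frame))
    except json.JSONDecodeError:
        broken.append(frame)

print(broken)  # the stray debug line is what breaks the client
```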
Add structured logging to your server for production use:
import sys
import logging
# Configure logging to stderr (stdout is reserved for MCP messages)
logging.basicConfig(
    stream=sys.stderr,
    level=logging.DEBUG,
    format="%(asctime)s [%(levelname)s] %(message)s"
)
logger = logging.getLogger("mcp-server")

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    logger.info(f"Tool called: {name} with args: {arguments}")
    # ... tool implementation
    logger.info(f"Tool {name} completed successfully")
    return result
Best Practices
After building dozens of MCP servers and working with teams that deploy them in production, these practices consistently make the difference between a server that works and one that works well:
Keep tools focused and single-purpose. A tool called do_everything is useless to the AI. Instead of one massive tool, create several focused ones: search_users, get_user_details, update_user_email. The AI can chain them together intelligently.
Write descriptions for the AI, not for humans. Tool descriptions are the primary way the AI decides what to call. Include what the tool does, what it returns, and when to use it. Example: “Search for users by name or email. Returns up to 10 matching user records with ID, name, email, and creation date. Use this when you need to look up a specific user or find users matching criteria.”
Handle errors gracefully. Return structured error messages in your tool responses rather than throwing exceptions. The AI can read error messages and adjust its approach, but an unhandled exception just looks like a broken server.
// Good error handling
server.tool("query_api", "Query the API", { endpoint: z.string() },
  async ({ endpoint }) => {
    try {
      const result = await apiClient.get(endpoint);
      return { content: [{ type: "text", text: JSON.stringify(result) }] };
    } catch (error) {
      return {
        content: [{
          type: "text",
          text: `API request failed: ${error.message}. ` +
            `Status: ${error.status || "unknown"}. ` +
            `Check that the endpoint exists and you have access.`
        }],
        isError: true,
      };
    }
  }
);
Implement security boundaries. MCP servers can do anything your code can do. For database tools, enforce read-only access unless writes are explicitly needed. Validate and sanitize all inputs. Run servers with minimal permissions. Never expose credentials through tool responses.
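For database tools, one concrete boundary is rejecting anything but a single SELECT statement before the query reaches the driver. Here is a rough sketch (the helper name is ours; a read-only database role is the real enforcement, and string checks like this are only defense in depth):

```python
import re

def is_read_only(query: str) -> bool:
    """Rough guard: accept only a single SELECT/WITH statement.
    Real enforcement should come from a read-only database role;
    this check is defense in depth, not a sandbox."""
    stripped = query.strip().rstrip(";").strip()
    if ";" in stripped:  # reject multi-statement payloads
        return False
    return re.match(r"(?i)^(select|with)\b", stripped) is not None

print(is_read_only("SELECT id FROM users"))         # True
print(is_read_only("DROP TABLE users"))             # False
print(is_read_only("SELECT 1; DELETE FROM users"))  # False
```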
Use resources for static context. If you have data that rarely changes (database schemas, API documentation, configuration references), expose it as a resource rather than a tool. Resources are loaded once for context, while tools are called repeatedly.
Version your servers. Include a version number in your server metadata. When you change tool schemas or behavior, bump the version so clients know to re-initialize their connection.
Conclusion
MCP is rapidly becoming the standard interface between AI coding assistants and the developer ecosystem. Major tools have already adopted it: Claude Code ships with MCP support built in, Cursor added MCP in early 2025, and Windsurf, VS Code, and other editors are following.
Building your own MCP server is straightforward with the official SDKs. Start with a simple tool that solves a real problem in your workflow, whether that is querying a database, checking build status, or searching documentation. Once you see how naturally the AI integrates with your custom tools, you will want to build more.
The protocol is still evolving. Anthropic maintains the specification at modelcontextprotocol.io, where you can find the full spec, SDK documentation, and a growing directory of community-built servers. The GitHub organization hosts the official SDKs for TypeScript, Python, Java, Kotlin, and C#.
Related Articles
Continue learning about AI-assisted development with these guides from RunAICode:
- Setting Up Claude Code: Complete Developer Guide — Get Claude Code running and configured for your workflow.
- Claude Code vs Cursor: Which AI Coding Tool Wins in 2026? — Compare the two leading AI coding assistants side by side.
- Best AI Coding Tools for Developers in 2026 — The definitive guide to every major AI coding tool available today.
Join the RunAICode Developer Community
Get help building MCP servers, share your projects, and connect with AI developers.
The future of AI-assisted development is not just smarter models. It is smarter connections between models and the tools developers already use. MCP is the bridge, and now you know how to build on it.