How to Build Your First MCP Server in Python
Before You Start
You need Python 3.10 or later and familiarity with writing Python functions. No prior MCP experience is required. The SDK handles all protocol details, so your code focuses entirely on the tool logic itself.
This guide builds a server with stdio transport, which means it runs as a local subprocess of the AI client. This is the simplest way to start. You can switch to HTTP transport later for remote deployment without changing your tool logic.
Step-by-Step Setup
Create a virtual environment and install the MCP package. The package includes the server framework, transport handlers, and the FastMCP convenience wrapper that simplifies tool registration.
python -m venv mcp-env
source mcp-env/bin/activate
pip install mcp

Create a new Python file for your server. Import FastMCP, create a server instance with a descriptive name, and add the main block to run it. This is the minimal skeleton that every MCP server starts with.
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("my-project-tools")
if __name__ == "__main__":
    mcp.run()

The name you pass to FastMCP is what clients display in their UI. Choose something descriptive that identifies what the server does, like "project-search" or "database-tools" rather than a generic name.
Register tools using the @mcp.tool() decorator. Each decorated function becomes a tool that AI clients can invoke. The function name becomes the tool name, the docstring becomes the description the model reads, and the type hints become the parameter schema.
import json
from pathlib import Path
@mcp.tool()
def search_files(query: str, file_type: str = ".py") -> str:
    """Search project files for content matching a query.

    Args:
        query: Text to search for in file contents
        file_type: File extension to filter by, defaults to .py
    """
    results = []
    project_dir = Path(".")
    for filepath in project_dir.rglob(f"*{file_type}"):
        try:
            content = filepath.read_text()
            if query.lower() in content.lower():
                results.append(str(filepath))
        except (UnicodeDecodeError, PermissionError):
            continue
    if not results:
        return f"No {file_type} files found containing '{query}'"
    return json.dumps(results, indent=2)
@mcp.tool()
def read_file(path: str) -> str:
    """Read the contents of a file.

    Args:
        path: Relative path to the file from the project root
    """
    filepath = Path(path)
    if not filepath.exists():
        return f"File not found: {path}"
    return filepath.read_text()

Resources expose read-only data that the AI can pull into its context. Prompts provide reusable templates for common workflows. Both are optional but useful for servers that have reference data or structured workflows.
@mcp.resource("file://project-structure")
def get_project_structure() -> str:
    """Returns the directory structure of the project."""
    tree = []
    for path in sorted(Path(".").rglob("*")):
        # Skip hidden files and files inside hidden directories
        if path.is_file() and not any(part.startswith(".") for part in path.parts):
            tree.append(str(path))
    return "\n".join(tree)
@mcp.prompt()
def code_review(file_path: str) -> str:
    """Review a file for code quality, security, and performance.

    Args:
        file_path: Path to the file to review
    """
    return f"""Please review the file at {file_path} for:
1. Security vulnerabilities
2. Performance issues
3. Code style and readability
4. Missing error handling
Read the file first, then provide your analysis."""

The MCP Inspector is a debugging tool that connects to your server and lets you browse and test registered capabilities interactively. Run it against your server to verify everything works before connecting an AI client.
npx @modelcontextprotocol/inspector python server.py

The Inspector opens a web interface showing your registered tools, resources, and prompts. Click any tool to see its schema, enter test parameters, and invoke it. Check that the tool descriptions are clear, the parameter schemas match your type hints, and the return values are formatted correctly. Fix any issues before moving to client integration, because debugging is much easier in the Inspector than through an AI client.
Add your server to the client's MCP configuration. For Claude Code, create a .mcp.json file in your project root with the command to run your server. The client starts the server process automatically when you open the project.
{
  "mcpServers": {
    "my-project-tools": {
      "command": "python",
      "args": ["server.py"],
      "cwd": "/path/to/your/project"
    }
  }
}

After restarting the client, your tools appear in the available tools list. The AI model can now invoke them during conversations. Ask it to search files or read a specific file to verify the connection works end to end.
Adding Error Handling
Your tool functions should handle errors gracefully and return descriptive error messages rather than raising exceptions. The MCP SDK catches unhandled exceptions and returns a generic error to the client, but a specific error message helps the AI model understand what went wrong and try a different approach.
@mcp.tool()
def query_database(sql: str) -> str:
    """Run a read-only SQL query against the project database.

    Args:
        sql: The SQL query to execute. Must be a SELECT statement.
    """
    if not sql.strip().upper().startswith("SELECT"):
        return "Error: Only SELECT queries are allowed."
    try:
        results = execute_query(sql)  # your own database helper
        return json.dumps(results, indent=2, default=str)
    except Exception as e:
        return f"Query failed: {e}"

Switching to HTTP Transport
When you are ready to deploy your server remotely or share it across multiple clients, switch from stdio to HTTP transport. The tool logic stays exactly the same. Only the startup code changes.
from mcp.server.fastmcp import FastMCP

# Host and port are server settings, passed to the constructor;
# run() selects the transport.
mcp = FastMCP("my-project-tools", host="0.0.0.0", port=8080)

# ... all the same tool, resource, and prompt definitions ...

if __name__ == "__main__":
    mcp.run(transport="streamable-http")

Clients connect by URL instead of launching a subprocess. The configuration changes from a command to a URL entry with optional authentication headers.
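As a sketch of what that looks like for Claude Code, assuming its remote-server config shape and using placeholder values for the URL and token (the official SDK serves streamable HTTP at the /mcp path by default):

```json
{
  "mcpServers": {
    "my-project-tools": {
      "type": "http",
      "url": "http://localhost:8080/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_TOKEN"
      }
    }
  }
}
```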
What Comes Next
Once your basic server is working, explore adding authentication for shared deployments, persistent state across restarts, and more sophisticated tool logic. For a memory server that stores and retrieves information across sessions, look at how Adaptive Recall implements the full lifecycle through MCP tools: store, recall, update, forget, reflect, graph, and status.
Skip building a memory server from scratch. Adaptive Recall is a production MCP server with cognitive scoring, knowledge graphs, and memory lifecycle built in.
Get Started Free