MCP Architecture: Resources, Tools, and Prompts
The Protocol Flow
Every MCP interaction follows the same lifecycle. The client connects to the server and sends an initialization request. The server responds with its capabilities: the protocol version it supports, and the types of primitives it offers (tools, resources, prompts, or any combination). The client then queries for the specific capabilities in each category, receiving detailed descriptions and schemas. During the conversation, the AI model can invoke any discovered capability, and the client handles routing the request to the appropriate server.
The initialization phase is where the model learns what it can do. The quality of your tool descriptions, resource listings, and prompt templates directly determines how well the model uses your server. A server with excellent tool logic but poor descriptions will be underused because the model does not know when to call it. A server with mediocre tool logic but clear descriptions will be used appropriately because the model understands what each tool does.
Tools: Actions the AI Can Invoke
Tools are functions that perform actions when the AI calls them. Each tool has a name (the identifier the model uses to select it), a description (natural language explaining what it does and when to use it), and an input schema (a JSON schema defining the expected parameters). When the model invokes a tool, the client sends the name and parameters to the server, the server executes the handler function, and the result is returned as one or more content blocks.
Tools are the most frequently used primitive because they provide the most direct value. They map to concrete actions: query a database, store a memory, search files, create a resource, call an external API. The model's decision to use a tool is based on matching the user's request to the tool description, so well-written descriptions are the single most important factor in whether your tools get used correctly.
Writing Good Tool Descriptions
The description is the only thing the model reads to decide whether to use your tool. It should answer three questions: What does this tool do? When should it be used? What does it return? Avoid vague descriptions like "handles data" or "processes input." Be specific: "Search the codebase for files containing a text string. Returns a list of matching file paths. Use when the user asks to find where something is defined or used."
# Setup shared by the examples below (official Python SDK's FastMCP helper)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-server")

# Vague description (model will misuse this tool)
@mcp.tool()
def process(data: str) -> str:
    """Process the data."""
    return data

# Clear description (model uses this correctly)
@mcp.tool()
def search_codebase(query: str, file_extension: str = ".py") -> str:
    """Search source files for lines containing the query string.

    Returns matching file paths and line numbers. Use when the user
    asks to find where a function, variable, or string is defined
    or referenced in the codebase.

    Args:
        query: Text to search for (case-insensitive)
        file_extension: File type filter, defaults to .py
    """
    ...
Return Value Design
Tool handlers return content blocks, typically text. The model reads the returned text as part of the conversation and uses it to formulate its response. Design your return values for the model to read, not for a human to read. Include the relevant data, omit decorative formatting, and structure the output so the model can extract the information it needs.
For tools that return multiple items, use a structured format like JSON or a simple list. For tools that return status information, include both the outcome and any relevant context. For tools that fail, return an error message that helps the model understand what went wrong so it can try a different approach or explain the issue to the user.
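One way to apply these guidelines is to centralize result formatting in small helpers. A sketch, where the helper names and field layout are my own choices rather than anything the protocol requires:

```python
import json

def format_search_results(matches: list[dict]) -> str:
    """Render multiple items as JSON so the model can parse them reliably."""
    if not matches:
        # An explicit empty result beats silence: the model knows the
        # search ran and found nothing, so it won't retry blindly.
        return json.dumps({"matches": [], "note": "no files matched"})
    return json.dumps({"matches": matches})

def format_error(action: str, reason: str, hint: str) -> str:
    """Describe a failure with enough context for the model to recover."""
    return json.dumps({"error": f"{action} failed: {reason}", "hint": hint})

print(format_error("search", "directory not found",
                   "check the path or broaden the query"))
```

The `hint` field is the part that pays off in practice: it turns a dead-end error into an instruction the model can act on, either by retrying differently or by explaining the problem to the user.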
Resources: Data the AI Can Read
Resources are read-only data sources that the AI client can query to build context. Each resource has a URI (a unique identifier), a name (human-readable label), and a MIME type (indicating the content format). When the AI reads a resource, the client sends the URI to the server, and the server returns the content.
Resources differ from tools in that they do not perform actions or have side effects. Reading a resource is always safe; it returns data without changing any state. This makes resources appropriate for exposing reference information: configuration files, database schemas, documentation, system status, project structure, or any other data that the model might need as context.
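The read itself is a single request/response pair. A sketch of a `resources/read` exchange, with illustrative content (the URI mirrors the schema resource example; the SQL text is a placeholder):

```python
# Sketch of a resources/read exchange; field values are illustrative.
read_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "resources/read",
    "params": {"uri": "file://database-schema"},
}

# The server returns the content along with its URI and MIME type,
# so the client knows how to present it to the model.
read_result = {
    "jsonrpc": "2.0",
    "id": 3,
    "result": {
        "contents": [
            {
                "uri": "file://database-schema",
                "mimeType": "text/plain",
                "text": "CREATE TABLE users (id INTEGER PRIMARY KEY);",
            },
        ]
    },
}
```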
import json

@mcp.resource("file://database-schema")
def get_schema() -> str:
    """Returns the database schema for all tables."""
    return read_schema_file()

@mcp.resource("file://env-config")
def get_config() -> str:
    """Returns the current environment configuration."""
    return json.dumps(get_safe_config(), indent=2)

Not all MCP clients use resources the same way. Some clients list resources in a sidebar that the user can browse. Others let the model request resources automatically during a conversation. Design your resources to be useful in both scenarios: descriptive names so users can find them, and complete content so the model gets full context when it reads them.
Prompts: Workflow Templates
Prompts are reusable message templates that structure the AI's approach to specific tasks. Each prompt has a name, a description, optional parameters, and a template that expands into one or more messages. When invoked, the prompt template is expanded with the provided parameters and inserted into the conversation.
Prompts encode institutional knowledge about how to approach specific tasks. A code review prompt ensures the reviewer checks security, performance, and readability in a consistent order. A data analysis prompt structures the exploration of a dataset. A debugging prompt walks through a systematic elimination process. By packaging these workflows as prompts, you ensure consistent quality across sessions and team members.
@mcp.prompt()
def review_pr(file_path: str, focus_areas: str = "security,performance") -> str:
    """Review a file for code quality with specific focus areas.

    Args:
        file_path: Path to the file to review
        focus_areas: Comma-separated areas to focus on
    """
    areas = focus_areas.split(",")
    checklist = "\n".join(f"- {area.strip()}" for area in areas)
    return f"""Review the file at {file_path}. Focus on:
{checklist}

Read the file, then provide:
1. A summary of what the code does
2. Issues found in each focus area
3. Specific suggestions for improvement"""

Prompts are the least commonly used primitive, partly because their behavior varies across clients and partly because many developers achieve similar results through tool descriptions alone. However, for teams that want standardized workflows, prompts provide a clean way to encode and share those workflows through the MCP protocol.
Designing a Server: Choosing Primitives
When designing an MCP server, start by listing the capabilities you want to expose. For each capability, decide which primitive fits best:
- Actions that change state: tools (store data, execute queries, send notifications)
- Actions that compute results: tools (search, analyze, transform)
- Static or slowly changing reference data: resources (schemas, configs, docs)
- Multi-step workflows with consistent structure: prompts (reviews, analyses, debugging)
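The decision rules above can be sketched as a tiny classifier. The function name and flags are my own, invented purely to make the mapping explicit:

```python
def choose_primitive(changes_state: bool, computes_result: bool,
                     is_reference_data: bool, is_workflow: bool) -> str:
    """Map a capability's traits to the MCP primitive that fits best."""
    if changes_state or computes_result:
        return "tool"      # actions, whether mutating or read-only compute
    if is_reference_data:
        return "resource"  # static or slowly changing context
    if is_workflow:
        return "prompt"    # multi-step procedure with consistent structure
    return "tool"          # default: most capabilities end up as tools

print(choose_primitive(changes_state=False, computes_result=False,
                       is_reference_data=True, is_workflow=False))
# prints: resource
```

The ordering of the checks encodes a real preference: if a capability both performs an action and exposes data, make it a tool, since tools are the primitive every client supports well.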
Most servers are tool-only. A server with three to seven well-designed tools covers most use cases. Add resources when you have reference data that multiple tools would otherwise need to include in their return values. Add prompts when you have workflows that benefit from standardization.
Adaptive Recall's MCP server exposes seven tools (store, recall, update, forget, reflect, graph, status), one resource (system status and configuration), and no prompts. The tools cover all memory operations, the resource provides context about the system state, and the tool descriptions are detailed enough that the model uses them correctly without prompt-level guidance.
See MCP architecture in action. Connect to Adaptive Recall and explore how a production server organizes its tools and resources.
Get Started Free