How to Add Tools to an AI Assistant
Before You Start
You need a working assistant with a model that supports function calling. Claude (Anthropic), GPT-4 (OpenAI), and Gemini (Google) all support tool use through their APIs. The exact syntax differs between providers, but the pattern is the same: you send tool definitions alongside your messages, the model returns structured tool calls when it wants to use a tool, and your application executes the call and returns the result.
Step-by-Step Setup
Each tool needs a JSON schema that tells the model what the tool does, what parameters it accepts, and what each parameter means. The quality of your schema directly determines how accurately the model uses the tool. Use clear, descriptive names that match the tool's action. Write parameter descriptions that specify types, constraints, and examples. Keep required parameters minimal and provide sensible defaults for optional ones.
```python
# Example: tool schemas for a database query tool and an email tool
tools = [
    {
        "name": "query_database",
        "description": "Execute a read-only SQL query against the application database. Returns up to 100 rows. Use this when the user asks about data, metrics, or records.",
        "input_schema": {
            "type": "object",
            "properties": {
                "sql": {
                    "type": "string",
                    "description": "A read-only SQL SELECT query. Do not include INSERT, UPDATE, DELETE, or DDL statements."
                },
                "limit": {
                    "type": "integer",
                    "description": "Maximum rows to return. Defaults to 25.",
                    "default": 25
                }
            },
            "required": ["sql"]
        }
    },
    {
        "name": "send_email",
        "description": "Send an email to a specified recipient. Requires explicit user confirmation before sending.",
        "input_schema": {
            "type": "object",
            "properties": {
                "to": {
                    "type": "string",
                    "description": "Recipient email address"
                },
                "subject": {
                    "type": "string",
                    "description": "Email subject line"
                },
                "body": {
                    "type": "string",
                    "description": "Email body in plain text"
                }
            },
            "required": ["to", "subject", "body"]
        }
    }
]
```

Each tool schema needs a corresponding function that actually performs the action. The handler receives the parameters from the model's tool call, executes the operation, and returns a result that is fed back to the model. Keep handlers focused: each should do one thing, return structured results, and handle its own errors.
```python
# Example: tool execution handlers
async def execute_tool(tool_name, tool_input):
    handlers = {
        "query_database": handle_database_query,
        "send_email": handle_send_email,
    }
    handler = handlers.get(tool_name)
    if not handler:
        return {"error": f"Unknown tool: {tool_name}"}
    try:
        result = await handler(tool_input)
        return {"success": True, "result": result}
    except Exception as e:
        return {"error": str(e), "tool": tool_name}

async def handle_database_query(params):
    sql = params["sql"]
    limit = params.get("limit", 25)
    # Validate that the query is read-only
    if any(kw in sql.upper() for kw in ["INSERT", "UPDATE", "DELETE", "DROP"]):
        return {"error": "Only SELECT queries are allowed"}
    # `db` is the application's async database client
    rows = await db.execute(sql, limit=limit)
    return {"rows": rows, "count": len(rows)}
```

The core loop detects when the model wants to use a tool, executes the tool, and continues the conversation with the tool's result. Most model APIs signal tool use through a specific response format: the model returns one or more tool-call objects instead of (or alongside) text content. Your loop needs to detect these, execute each tool, collect the results, and make another model call with the results included.
```python
# Example: tool routing loop with the Anthropic SDK
import json

import anthropic

client = anthropic.AsyncAnthropic()

async def assistant_turn(messages, tools):
    while True:
        response = await client.messages.create(
            model="claude-sonnet-4-6",
            max_tokens=4096,
            system=SYSTEM_PROMPT,
            tools=tools,
            messages=messages,
        )
        # Check if the model wants to use tools
        if response.stop_reason == "tool_use":
            # Execute each tool call and collect the results
            tool_results = []
            for block in response.content:
                if block.type == "tool_use":
                    result = await execute_tool(block.name, block.input)
                    tool_results.append({
                        "type": "tool_result",
                        "tool_use_id": block.id,
                        "content": json.dumps(result),
                    })
            # Add the assistant response and tool results to the conversation
            messages.append({"role": "assistant", "content": response.content})
            messages.append({"role": "user", "content": tool_results})
            # Loop continues; the model processes the tool results
        else:
            # Model produced a final text response, return it
            return response.content
```

Tools interact with external systems that can fail. Database connections drop, APIs return errors, rate limits are hit, and permissions are denied. Your error handling needs to catch these failures, format them into messages the model can understand, and give it enough context to either retry with different parameters or explain the issue to the user.
The key principle is: give the model useful error information. "Tool execution failed" tells the model nothing. "Database query failed: column 'user_name' does not exist. Available columns are: username, email, created_at" gives the model enough context to fix the query and retry. Format errors as structured JSON with an error type, a human-readable message, and any context that would help the model recover.
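This principle can be sketched as a small formatter that turns a raw exception into a structured payload. The `format_db_error` helper, its payload shape, and the `available_columns` context are illustrative, not a required format:

```python
# Sketch: converting a raw exception into a structured error the model can
# recover from. The field names here are illustrative, not a standard.
def format_db_error(exc, available_columns):
    """Build an error payload with a type, a message, and recovery context."""
    return {
        "error": {
            "type": type(exc).__name__,
            "message": str(exc),
            # Context that lets the model fix the query and retry
            "available_columns": available_columns,
            "hint": "Check column names against available_columns and retry.",
        }
    }

payload = format_db_error(
    ValueError("column 'user_name' does not exist"),
    ["username", "email", "created_at"],
)
```

The model sees both what went wrong (the message) and what would fix it (the column list), which is exactly the context it needs to retry on its own.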
When the model emits multiple tool calls in a single response, check whether they are independent (they do not depend on each other's results). Independent calls can run simultaneously using asyncio.gather or Promise.all, reducing total latency from the sum of all calls to the duration of the longest single call. This optimization is particularly impactful for assistants that frequently query multiple data sources to answer complex questions.
```python
# Example: parallel execution of independent tool calls
import asyncio
import json

async def execute_tool_calls(tool_calls):
    # Launch all independent calls at once and wait for the slowest one
    tasks = [execute_tool(call.name, call.input) for call in tool_calls]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    tool_results = []
    for call, result in zip(tool_calls, results):
        if isinstance(result, Exception):
            content = json.dumps({"error": str(result)})
        else:
            content = json.dumps(result)
        tool_results.append({
            "type": "tool_result",
            "tool_use_id": call.id,
            "content": content,
        })
    return tool_results
```

Tool Design Best Practices
Keep tools atomic. Each tool should do one thing well. A tool called "manage_user" that handles creation, deletion, and modification is harder for the model to use correctly than three separate tools: "create_user," "delete_user," and "update_user." The model makes fewer parameter errors when the tool's purpose is unambiguous.
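As a sketch of what the atomic split might look like (these schemas are hypothetical and not part of the example assistant above):

```python
# Sketch: three atomic tools instead of one "manage_user" tool.
# Names and fields are illustrative.
atomic_user_tools = [
    {
        "name": "create_user",
        "description": "Create a new user account.",
        "input_schema": {
            "type": "object",
            "properties": {
                "email": {"type": "string", "description": "New user's email address"},
            },
            "required": ["email"],
        },
    },
    {
        "name": "delete_user",
        "description": "Permanently delete a user account.",
        "input_schema": {
            "type": "object",
            "properties": {
                "user_id": {"type": "string", "description": "ID of the user to delete"},
            },
            "required": ["user_id"],
        },
    },
    {
        "name": "update_user",
        "description": "Update fields on an existing user account.",
        "input_schema": {
            "type": "object",
            "properties": {
                "user_id": {"type": "string", "description": "ID of the user to update"},
                "fields": {"type": "object", "description": "Field names and new values"},
            },
            "required": ["user_id", "fields"],
        },
    },
]
```

Each schema states one unambiguous purpose with only the parameters that action needs, so the model never has to guess which "mode" of a combined tool applies.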
Add safety constraints at the tool level, not the prompt level. If a tool should only execute read operations, enforce that in the handler, not just in the tool description. Models can misinterpret or ignore prompt instructions, but they cannot bypass code-level constraints. This is especially important for tools that modify data, send messages, or interact with production systems.
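One way to make a read-only constraint unbypassable is to enforce it in the connection itself rather than in the query text. A minimal sketch using SQLite's read-only URI mode; the schema, file path, and seed data are illustrative, and a production handler would apply the same idea with its real database driver:

```python
import os
import sqlite3
import tempfile

def open_read_only(path):
    """Open a SQLite database so that writes fail at the driver level.

    A model can misread or ignore a "read-only" note in a tool description,
    but it cannot bypass a connection opened in read-only mode.
    """
    return sqlite3.connect(f"file:{path}?mode=ro", uri=True)

# Demo setup (hypothetical schema): seed a throwaway database with one row.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
with sqlite3.connect(db_path) as seed:
    seed.execute("CREATE TABLE users (username TEXT)")
    seed.execute("INSERT INTO users VALUES ('ada')")

conn = open_read_only(db_path)
rows = conn.execute("SELECT username FROM users").fetchall()

# Any write raises sqlite3.OperationalError, regardless of what the model asks for.
try:
    conn.execute("DELETE FROM users")
    write_blocked = False
except sqlite3.OperationalError:
    write_blocked = True
```

Compared with the keyword filter in the handler example above, this closes loopholes the filter misses (CTEs, `PRAGMA` statements, obfuscated keywords) because the enforcement lives below the SQL text entirely.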
Use consistent return formats across all tools. If every tool returns {"success": true, "result": ...} on success and {"error": "..."} on failure, the model learns the pattern and handles responses more reliably than when every tool uses a different output format.
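This convention can be sketched as two tiny helpers shared by every handler; the `ok`/`err` names and the exact envelope fields are illustrative:

```python
# Sketch: one shared result envelope for every tool handler.
def ok(result):
    """Success envelope: same shape from every tool."""
    return {"success": True, "result": result}

def err(error_type, message, **context):
    """Failure envelope: typed error plus any recovery context."""
    return {"success": False, "error": {"type": error_type, "message": message, **context}}

# Every handler returns one of the two shapes, so the model learns a single pattern:
response = ok({"rows": [], "count": 0})
failure = err(
    "query_error",
    "column 'user_name' does not exist",
    available_columns=["username", "email", "created_at"],
)
```

Routing every handler's return value through these two functions guarantees the consistency the model relies on, instead of hoping each handler author remembers the format.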
Add memory as a tool. Adaptive Recall provides seven tools (store, recall, update, forget, reflect, graph, status) that integrate directly with your assistant's tool layer through MCP or REST.