Is MCP Only for Claude, or Does It Work with GPT?
Anthropic Created MCP, but It Is Open
Anthropic released MCP as an open specification in November 2024. The protocol definition, the reference SDKs for Python and TypeScript, and several example servers are all open source. Any developer can implement an MCP client or server without permission from Anthropic, without paying licensing fees, and without using Anthropic's models.
This is a deliberate design choice. A proprietary protocol that only worked with Claude would have limited adoption and fragmented the ecosystem. An open protocol creates network effects: more servers make the protocol more valuable for clients, and more clients make the protocol more valuable for server developers. Anthropic benefits from this because Claude clients are among the most popular MCP clients, but the protocol's value does not depend on any single client or model.
Which Clients Support MCP
The list of MCP-compatible clients has grown rapidly since the specification was released. Major clients include:
- Claude Code: Anthropic's CLI tool; supports both stdio and HTTP transports
- Claude Desktop: Anthropic's desktop application with MCP support
- Cursor: AI-powered IDE that uses MCP for tool integration in Agent mode
- Windsurf: IDE with built-in MCP client support
- Cline: VS Code extension with MCP integration
- Continue: Open-source IDE extension supporting MCP
- Zed: Editor with native MCP support
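Most of these clients discover servers through a small JSON config file; the filename and location vary by client (Claude Desktop reads `claude_desktop_config.json`, Cursor reads `.cursor/mcp.json`). A typical stdio server entry, with hypothetical server name and path, looks like this:

```json
{
  "mcpServers": {
    "memory": {
      "command": "python",
      "args": ["/path/to/memory_server.py"]
    }
  }
}
```

Note that nothing in the entry mentions which model the client runs; the config only tells the client how to launch and reach the server.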
Each of these clients can use different AI models underneath. Cursor, for example, supports multiple model providers including Claude, GPT, and others. When Cursor connects to an MCP server, the MCP protocol handles the communication regardless of which model is generating the tool calls. The server sees a standard MCP request; it has no way to tell (and does not need to know) which model decided to make the call.
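To make that concrete, here is a sketch of the `tools/call` request shape that MCP's JSON-RPC protocol defines (the tool name and arguments below are hypothetical). The point to notice is that no field identifies the model that decided to make the call:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build an MCP tools/call request (JSON-RPC 2.0).

    Nothing in this message identifies the model behind the call;
    the server sees only the tool name and its arguments.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool and arguments, for illustration only.
request = make_tool_call(1, "search_notes", {"query": "meeting notes"})
print(json.dumps(request, indent=2))
```

Whether Claude, GPT, or another model produced the tool call, the client emits this same message, which is exactly why the server needs no model-specific code.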
Model Independence
MCP operates at the protocol level, between the client application and the tool server. It is independent of the model layer. The model's role is to decide when to call a tool and what parameters to pass. The client's role is to translate between the model's tool call format (which is model-specific) and the MCP protocol (which is universal). The server's role is to execute the tool logic and return results.
This layered architecture means you can swap models without changing your MCP servers. If you start with Claude and later switch to GPT (or use both), your MCP servers continue to work unchanged. The client handles the translation between the model's native tool call format and MCP's JSON-RPC protocol.
It also means that improvements in any model's tool-calling capabilities benefit all MCP servers. When a model gets better at selecting the right tool or generating correct parameters, every MCP server it connects to produces better results, without any changes to the server code.
Building Model-Agnostic Servers
Because MCP servers do not know which model is calling them, they are model-agnostic by default. Your tool descriptions should be written for any competent language model, not optimized for a specific model's quirks. Use clear, direct language in descriptions. Specify exactly what the tool does, what each parameter means, and what the return value contains. This approach works well across all models because good descriptions are universally beneficial.
Some models are better at tool use than others. Claude, GPT-4, and Gemini all handle tool selection and parameter construction well for clearly described tools. Smaller or older models may struggle with complex schemas or ambiguous descriptions. If you need to support a wide range of models, keep your tool interfaces simple: fewer parameters, clearer descriptions, and more forgiving input handling.
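As a sketch of these guidelines (the tool name, schema, and handler below are hypothetical, not tied to any particular SDK), a model-agnostic tool pairs a plainly worded schema with forgiving input handling:

```python
# Hypothetical tool definition in the shape MCP servers advertise via
# tools/list: a clear description, few parameters, explicit return contract.
SEARCH_TOOL = {
    "name": "search_notes",
    "description": (
        "Search saved notes by keyword. Returns up to 'limit' matching "
        "notes, newest first, each with a title and a snippet."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Keywords to search for."},
            "limit": {"type": "integer", "description": "Max results (default 10)."},
        },
        "required": ["query"],
    },
}

def search_notes(query: str, limit=10) -> dict:
    """Forgiving handler: coerces and clamps 'limit' rather than erroring."""
    try:
        limit = int(limit)          # some models send "10" instead of 10
    except (TypeError, ValueError):
        limit = 10
    limit = max(1, min(limit, 50))  # clamp to a sane range
    # Real storage lookup omitted; return the normalized inputs and no hits.
    return {"query": query, "limit": limit, "results": []}
```

Tolerating a stringly-typed `limit` or an out-of-range value costs a few lines and makes the tool usable by weaker models without degrading it for stronger ones.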
GPT and MCP
OpenAI's native tool integration uses its function calling API, which is separate from MCP. However, MCP clients that use GPT models (like Cursor when configured to use GPT) translate between GPT's function calling format and MCP's protocol at the client level. The MCP server does not need any GPT-specific code.
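That client-side translation can be sketched in a few lines. In OpenAI's chat completions format, a tool call carries a function name and a JSON-encoded argument string; the client decodes it into an MCP `tools/call` request (the tool name and arguments here are hypothetical):

```python
import json

def gpt_tool_call_to_mcp(tool_call: dict, request_id: int) -> dict:
    """Sketch of the client's job: turn an OpenAI-style tool call
    (function name plus JSON-encoded arguments) into an MCP
    tools/call request. The server only ever sees the MCP side."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": tool_call["function"]["name"],
            "arguments": json.loads(tool_call["function"]["arguments"]),
        },
    }

# Shape emitted by OpenAI's chat completions API (hypothetical values).
gpt_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {"name": "search_notes", "arguments": '{"query": "roadmap"}'},
}
mcp_request = gpt_tool_call_to_mcp(gpt_call, 7)
```

A client backed by Claude would perform the equivalent mapping from Claude's tool-use format, producing the same MCP request, so the server stays identical either way.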
OpenAI has also announced support for MCP in some of its products, which further validates the protocol as a cross-ecosystem standard. Whether you use OpenAI's native function calling or MCP depends on your architecture: function calling for direct API integration, MCP for tool interoperability across clients.
The Bottom Line
MCP is not a Claude-only technology. It is an open standard with broad adoption across AI clients and model providers. Building an MCP server gives your tools the widest possible reach, not just in Claude-based products, but across the growing ecosystem of MCP-compatible clients. The protocol is model-independent by design, and any server you build today works with any client that adds MCP support tomorrow.
Connect Adaptive Recall to any MCP client. One server, any AI assistant, persistent memory everywhere.
Get Started Free