Tutorial

Integrating MCP with Claude, GPT, and Other LLMs

Learn how to integrate MCP servers with popular Large Language Models including Claude, GPT-4, and open-source alternatives. Compare integration approaches and performance.

14 min read · February 5, 2024 · Intermediate

Integration Overview

This guide covers integrating MCP servers with popular Large Language Models. We'll explore different approaches, compare performance, and provide practical examples for each platform.

Why Integrate MCP with LLMs?

The Model Context Protocol (MCP) enables AI models to access external data sources, tools, and systems through a standardized interface. By integrating MCP servers with LLMs, you can:

  • Extend LLM capabilities with real-time data access
  • Enable AI models to perform actions and interact with systems
  • Provide structured access to databases, APIs, and file systems
  • Create more powerful and context-aware AI applications
  • Build consistent interfaces across different LLM providers

LLM Integration Approaches

1. Direct MCP Integration

Some LLMs have built-in MCP support or official SDKs:

  • Claude (Anthropic): Native MCP support through Claude Desktop and API
  • GPT-4 (OpenAI): Function calling and a plugin system that can be adapted to MCP tools
  • Open Source LLMs: Various implementations and community tools

2. Client-Side Integration

Use MCP clients to bridge LLMs with MCP servers:

  • MCP client libraries for different programming languages
  • Custom integration layers
  • Middleware solutions
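
To make the client-side approach concrete, here is a minimal sketch using the official MCP Python SDK: it launches a hypothetical my_mcp_server over stdio and lists the tools it exposes. The platform-specific examples later in this guide follow the same connection pattern.

Minimal MCP Client (sketch)

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_server_tools():
    # Launch the (hypothetical) server as a subprocess and talk to it over stdio
    params = StdioServerParameters(command="python", args=["-m", "my_mcp_server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(list_server_tools())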

Integrating with Claude

Claude Desktop Integration

Claude Desktop provides native MCP support; servers are declared in its claude_desktop_config.json configuration file:

Claude Desktop Configuration

{
  "mcpServers": {
    "my-database-server": {
      "command": "python",
      "args": ["-m", "my_mcp_server"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost/db"
      }
    },
    "my-api-server": {
      "command": "node",
      "args": ["my-api-server.js"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}

Claude API Integration

For programmatic integration with Claude API:

Python Claude API Integration

import asyncio
from contextlib import AsyncExitStack

import anthropic
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

class ClaudeMCPIntegration:
    def __init__(self, api_key: str):
        self.client = anthropic.Anthropic(api_key=api_key)
        self.mcp_session: ClientSession | None = None
        self.exit_stack = AsyncExitStack()

    async def setup_mcp(self, command: str, args: list[str]):
        """Connect to an MCP server over stdio."""
        params = StdioServerParameters(command=command, args=args)
        read, write = await self.exit_stack.enter_async_context(stdio_client(params))
        self.mcp_session = await self.exit_stack.enter_async_context(
            ClientSession(read, write)
        )
        await self.mcp_session.initialize()

    async def query_with_mcp(self, prompt: str):
        """Query Claude, exposing the MCP server's tools in Anthropic's format."""
        tools = []
        if self.mcp_session:
            tools_result = await self.mcp_session.list_tools()
            # Claude expects name / description / input_schema keys
            tools = [
                {
                    "name": t.name,
                    "description": t.description,
                    "input_schema": t.inputSchema,
                }
                for t in tools_result.tools
            ]

        kwargs = {"tools": tools} if tools else {}
        message = self.client.messages.create(
            model="claude-3-sonnet-20240229",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
            **kwargs,
        )
        return message

# Usage example
async def main():
    integration = ClaudeMCPIntegration("your-api-key")
    await integration.setup_mcp("python", ["-m", "my_mcp_server"])

    response = await integration.query_with_mcp(
        "Query the database for user information"
    )
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
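
The example above only advertises the tools; when Claude actually decides to use one, its response contains a tool_use content block that the client must execute against the MCP server and return as a tool_result. Below is a hedged sketch of that loop, written against the ClaudeMCPIntegration class above (the helper name run_claude_tool_loop is illustrative).

Claude Tool-Use Loop (sketch)

async def run_claude_tool_loop(integration: ClaudeMCPIntegration, prompt: str, tools: list):
    """Let Claude call MCP tools until it produces a final text answer."""
    messages = [{"role": "user", "content": prompt}]
    while True:
        message = integration.client.messages.create(
            model="claude-3-sonnet-20240229",
            max_tokens=1024,
            messages=messages,
            tools=tools,
        )
        if message.stop_reason != "tool_use":
            return message  # no more tool calls, this is the final answer

        # Keep the assistant turn, then execute each requested tool via MCP
        messages.append({"role": "assistant", "content": message.content})
        tool_results = []
        for block in message.content:
            if block.type != "tool_use":
                continue
            result = await integration.mcp_session.call_tool(block.name, block.input)
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                # MCP returns a list of content blocks; forward the text parts
                "content": "\n".join(
                    c.text for c in result.content if getattr(c, "text", None)
                ),
            })
        messages.append({"role": "user", "content": tool_results})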

Integrating with GPT-4

OpenAI Function Calling

GPT-4 supports function calling, which maps naturally onto MCP tool definitions:

GPT-4 MCP Integration

import asyncio
from contextlib import AsyncExitStack

import openai
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

class GPT4MCPIntegration:
    def __init__(self, api_key: str):
        self.client = openai.OpenAI(api_key=api_key)
        self.mcp_session: ClientSession | None = None
        self.exit_stack = AsyncExitStack()

    async def setup_mcp(self, command: str, args: list[str]):
        """Connect to an MCP server over stdio."""
        params = StdioServerParameters(command=command, args=args)
        read, write = await self.exit_stack.enter_async_context(stdio_client(params))
        self.mcp_session = await self.exit_stack.enter_async_context(
            ClientSession(read, write)
        )
        await self.mcp_session.initialize()

    def mcp_tools_to_openai_tools(self, mcp_tools):
        """Convert MCP tool definitions to OpenAI's tool format."""
        return [
            {
                "type": "function",
                "function": {
                    "name": tool.name,
                    "description": tool.description,
                    "parameters": tool.inputSchema,
                },
            }
            for tool in mcp_tools
        ]

    async def query_with_mcp(self, prompt: str):
        """Query GPT-4 with MCP tools exposed via function calling."""
        tools = []
        if self.mcp_session:
            tools_result = await self.mcp_session.list_tools()
            tools = self.mcp_tools_to_openai_tools(tools_result.tools)

        kwargs = {"tools": tools, "tool_choice": "auto"} if tools else {}
        response = self.client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            **kwargs,
        )
        return response

# Usage example
async def main():
    integration = GPT4MCPIntegration("your-api-key")
    await integration.setup_mcp("python", ["-m", "my_mcp_server"])

    response = await integration.query_with_mcp(
        "Analyze the sales data from the database"
    )
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
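
As with Claude, GPT-4 only requests a tool call; the client has to execute it through MCP and send the result back as a tool message before asking for the final answer. Here is a sketch of that loop against the GPT4MCPIntegration class above (run_gpt4_tool_loop is an illustrative helper name).

GPT-4 Tool-Call Loop (sketch)

import json

async def run_gpt4_tool_loop(integration: GPT4MCPIntegration, prompt: str, tools: list):
    """Let GPT-4 call MCP tools until it produces a final text answer."""
    messages = [{"role": "user", "content": prompt}]
    while True:
        response = integration.client.chat.completions.create(
            model="gpt-4",
            messages=messages,
            tools=tools,
            tool_choice="auto",
        )
        choice = response.choices[0].message
        if not choice.tool_calls:
            return choice.content  # final answer

        messages.append(choice)  # keep the assistant turn with its tool_calls
        for call in choice.tool_calls:
            arguments = json.loads(call.function.arguments)
            result = await integration.mcp_session.call_tool(call.function.name, arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                # Forward the text parts of the MCP result back to GPT-4
                "content": "\n".join(
                    c.text for c in result.content if getattr(c, "text", None)
                ),
            })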

OpenAI Plugins

OpenAI's plugin system can sit in front of an MCP server that exposes an OpenAPI-compatible HTTP interface:

OpenAI Plugin Manifest

{
  "schema_version": "v1",
  "name_for_model": "MCP Database Server",
  "name_for_human": "Database Access",
  "description_for_model": "Access to database operations through MCP",
  "description_for_human": "Query and manipulate database data",
  "auth": {
    "type": "none"
  },
  "api": {
    "type": "openapi",
    "url": "https://your-mcp-server.com/openapi.json",
    "is_user_authenticated": false
  },
  "logo_url": "https://your-mcp-server.com/logo.png",
  "contact_email": "support@your-mcp-server.com",
  "legal_info_url": "https://your-mcp-server.com/legal"
}
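
The manifest above points at an openapi.json, so the MCP server needs an HTTP facade. Below is a minimal sketch of such a wrapper using FastAPI; the wrapper itself, the endpoint path, and the my_mcp_server module are illustrative assumptions, not part of MCP. FastAPI generates the /openapi.json the manifest references.

OpenAPI Facade for an MCP Server (sketch)

from contextlib import asynccontextmanager

from fastapi import FastAPI
from pydantic import BaseModel

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

mcp_state = {}

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Run the MCP server as a subprocess for the lifetime of the HTTP app
    params = StdioServerParameters(command="python", args=["-m", "my_mcp_server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            mcp_state["session"] = session
            yield

app = FastAPI(lifespan=lifespan)  # serves /openapi.json automatically

class QueryRequest(BaseModel):
    query: str

@app.post("/query_database", summary="Query the database through MCP")
async def query_database(body: QueryRequest):
    result = await mcp_state["session"].call_tool("query_database", {"query": body.query})
    return {"content": [c.text for c in result.content if getattr(c, "text", None)]}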

Integrating with Open Source LLMs

Local LLM Integration

For local LLMs like Llama, Mistral, or other open-source models:

Local LLM MCP Integration

import asyncio
import json
from contextlib import AsyncExitStack

import requests
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

class LocalLLMMCPIntegration:
    def __init__(self, api_url: str):
        self.api_url = api_url
        self.mcp_session: ClientSession | None = None
        self.exit_stack = AsyncExitStack()

    async def setup_mcp(self, command: str, args: list[str]):
        """Connect to an MCP server over stdio."""
        params = StdioServerParameters(command=command, args=args)
        read, write = await self.exit_stack.enter_async_context(stdio_client(params))
        self.mcp_session = await self.exit_stack.enter_async_context(
            ClientSession(read, write)
        )
        await self.mcp_session.initialize()

    async def query_with_mcp(self, prompt: str):
        """Query a local LLM, injecting MCP tool descriptions into the prompt."""
        if self.mcp_session:
            tools_result = await self.mcp_session.list_tools()
            tools_context = "Available tools: " + json.dumps(
                [t.model_dump() for t in tools_result.tools]
            )
            enhanced_prompt = f"{prompt}\n\n{tools_context}"
        else:
            enhanced_prompt = prompt

        # Query a local OpenAI-compatible endpoint; use whatever model name
        # your runtime registers (e.g. "llama2" for Ollama)
        response = requests.post(
            f"{self.api_url}/v1/chat/completions",
            json={
                "model": "llama-2-7b-chat",
                "messages": [{"role": "user", "content": enhanced_prompt}],
                "temperature": 0.7,
                "max_tokens": 1000,
            },
        )
        return response.json()

# Usage example
async def main():
    integration = LocalLLMMCPIntegration("http://localhost:11434")
    await integration.setup_mcp("python", ["-m", "my_mcp_server"])

    response = await integration.query_with_mcp(
        "What data is available in the database?"
    )
    print(response)

if __name__ == "__main__":
    asyncio.run(main())

Ollama Integration

Ollama makes local model deployment easy; a Modelfile can prime the model to expect MCP-backed tools, while an MCP client bridges the actual tool calls:

Ollama MCP Configuration

# Ollama Modelfile that primes a model for MCP-backed tools
FROM llama2:7b

# Tell the model it will be given MCP-backed tools
SYSTEM You have access to a database through MCP. Use the available tools to query and manipulate data.

# Chat template
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}<|im_end|>"""

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER stop "<|im_end|>"

# Note: Modelfiles have no directive for declaring tools. Tool schemas (such as
# the query_database example below) are supplied at request time by the
# bridging MCP client, as sketched after this block.
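
Because a local model replies with plain text, the bridging client must describe the tools in the prompt and parse the model's reply itself. Below is a hedged sketch of such a bridge: the query_database description mirrors the tool the Modelfile's system prompt alludes to, and a local Ollama instance is assumed at its default port. In a full integration the tool description would come from session.list_tools() and the parsed call would be dispatched with session.call_tool().

Local Model Tool Bridge (sketch)

import json
import requests

# Example tool description, as an MCP server would report it via list_tools()
QUERY_DATABASE_TOOL = {
    "name": "query_database",
    "description": "Query the database",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "SQL query to execute"}
        },
        "required": ["query"],
    },
}

SYSTEM_PROMPT = (
    "You have access to a database through MCP. To call a tool, reply with a "
    'single JSON object of the form {"tool": <name>, "arguments": {...}}.\n'
    f"Available tools: {json.dumps([QUERY_DATABASE_TOOL])}"
)

def ask_ollama(user_prompt: str, model: str = "llama2") -> str:
    """Send one chat turn to a local Ollama instance."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "stream": False,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_prompt},
            ],
        },
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

def maybe_tool_call(reply: str):
    """Return (tool_name, arguments) if the reply looks like a tool call, else None."""
    try:
        parsed = json.loads(reply)
        return parsed["tool"], parsed.get("arguments", {})
    except (json.JSONDecodeError, KeyError, TypeError):
        return None

reply = ask_ollama("List the ten most recent orders.")
call = maybe_tool_call(reply)
if call:
    tool_name, arguments = call
    # Here the MCP client would run: await session.call_tool(tool_name, arguments)
    print(f"Model requested tool {tool_name} with {arguments}")
else:
    print(reply)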

Performance Comparison

Response Time Analysis

Different integration approaches have varying performance characteristics:

Performance Metrics

Integration Method       Avg Response Time   Tool Call Latency   Setup Complexity
Claude Desktop           ~2-3s               ~500ms              Low
GPT-4 Function Calling   ~3-5s               ~1-2s               Medium
Local LLM + MCP          ~5-10s              ~2-3s               High
Custom Integration       ~1-3s               ~200-500ms          Very High

Cost Analysis

Consider the cost implications of different approaches:

  • Claude API: Pay per token, competitive pricing
  • GPT-4 API: Higher cost but excellent performance
  • Local LLMs: One-time hardware cost, no API fees
  • Hybrid Approach: Balance cost and performance

Best Practices for LLM Integration

1. Tool Design

  • Design tools with clear, descriptive names and parameters
  • Provide comprehensive documentation for each tool
  • Use consistent parameter naming conventions
  • Implement proper error handling and validation

2. Performance Optimization

  • Cache frequently accessed data (a caching sketch follows this list)
  • Use connection pooling for database connections
  • Implement request batching where possible
  • Monitor and optimize tool call latency
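
For example, a small time-based cache around read-only MCP tool calls can avoid repeated round trips. This is a sketch under assumed names (cached_call_tool, a 60-second TTL), not a library feature.

Tool Result Caching (sketch)

import json
import time

_cache: dict[str, tuple[float, object]] = {}
CACHE_TTL_SECONDS = 60

async def cached_call_tool(session, name: str, arguments: dict):
    """Serve repeated read-only tool calls from a short-lived in-memory cache."""
    key = f"{name}:{json.dumps(arguments, sort_keys=True)}"
    hit = _cache.get(key)
    if hit and time.monotonic() - hit[0] < CACHE_TTL_SECONDS:
        return hit[1]

    result = await session.call_tool(name, arguments)
    _cache[key] = (time.monotonic(), result)
    return result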

3. Security Considerations

  • Validate all inputs from LLM tool calls
  • Implement proper authentication and authorization
  • Use parameterized queries to prevent injection attacks (see the sketch after this list)
  • Monitor and log all tool usage
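
As an illustration, here is a database-backed tool handler that validates the LLM-supplied arguments and uses a parameterized query instead of string interpolation. The get_customer_info handler, the customers table, and the asyncpg driver are illustrative assumptions.

Input Validation and Parameterized Queries (sketch)

import asyncpg  # assumed Postgres driver

async def handle_get_customer_info(pool: asyncpg.Pool, arguments: dict) -> dict:
    """Validate LLM-supplied arguments before touching the database."""
    customer_id = arguments.get("customer_id")
    if not isinstance(customer_id, str) or not customer_id.isalnum():
        raise ValueError("customer_id must be an alphanumeric string")

    # Parameterized query: the value is never spliced into the SQL text
    row = await pool.fetchrow(
        "SELECT id, name, email FROM customers WHERE id = $1",
        customer_id,
    )
    if row is None:
        raise ValueError(f"no customer with id {customer_id}")
    return dict(row)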

4. Error Handling

  • Provide meaningful error messages to LLMs
  • Implement retry logic for transient failures (see the sketch after this list)
  • Use fallback mechanisms when tools are unavailable
  • Log errors for debugging and monitoring
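
Here is a minimal retry wrapper with exponential backoff that falls back to a readable error message the LLM can act on; the retry counts and delays are illustrative.

Tool Call Retry Logic (sketch)

import asyncio

async def call_tool_with_retry(session, name: str, arguments: dict,
                               retries: int = 3, base_delay: float = 0.5):
    """Retry transient MCP tool failures, returning a readable error on give-up."""
    for attempt in range(retries):
        try:
            return await session.call_tool(name, arguments)
        except Exception as exc:
            if attempt == retries - 1:
                # Fall back to a message the LLM can surface or work around
                return {"error": f"tool '{name}' unavailable after {retries} attempts: {exc}"}
            await asyncio.sleep(base_delay * (2 ** attempt))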

Real-World Examples

Example 1: Data Analysis Workflow

Integrate MCP servers with LLMs for automated data analysis:

Data Analysis Integration

# MCP Server for Data Analysis
from mcp.server import Server
from mcp.types import ListToolsResult, Tool

server = Server("data-analysis")

@server.list_tools()
async def handle_list_tools() -> ListToolsResult:
    return ListToolsResult(
        tools=[
            Tool(
                name="query_sales_data",
                description="Query sales data from database",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "date_range": {"type": "string"},
                        "product_category": {"type": "string"}
                    }
                }
            ),
            Tool(
                name="generate_report",
                description="Generate analysis report",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "data": {"type": "array"},
                        "report_type": {"type": "string"}
                    }
                }
            )
        ]
    )

# LLM Prompt Example
prompt = """
Analyze the sales data for Q1 2024 and generate a comprehensive report.
Include trends, top-performing products, and recommendations for Q2.
"""

Example 2: Customer Service Integration

Use MCP servers to provide customer data to LLMs:

Customer Service Tools

# Customer Service MCP Tools
@server.list_tools()
async def handle_list_tools() -> ListToolsResult:
    return ListToolsResult(
        tools=[
            Tool(
                name="get_customer_info",
                description="Retrieve customer information",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "customer_id": {"type": "string"}
                    },
                    "required": ["customer_id"]
                }
            ),
            Tool(
                name="update_ticket",
                description="Update support ticket",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "ticket_id": {"type": "string"},
                        "status": {"type": "string"},
                        "notes": {"type": "string"}
                    }
                }
            ),
            Tool(
                name="search_knowledge_base",
                description="Search knowledge base articles",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "query": {"type": "string"}
                    }
                }
            )
        ]
    )

# LLM Integration
# Note: call_tool(...) and llm.generate_response(...) are assumed helpers that
# wrap the MCP client session and your chosen LLM client.
async def handle_customer_inquiry(customer_id: str, inquiry: str):
    # Get customer context
    customer_info = await call_tool("get_customer_info", {"customer_id": customer_id})
    
    # Search knowledge base
    kb_results = await call_tool("search_knowledge_base", {"query": inquiry})
    
    # Generate response with context
    prompt = f"""
    Customer: {customer_info}
    Inquiry: {inquiry}
    Knowledge Base: {kb_results}
    
    Provide a helpful response to the customer inquiry.
    """
    
    return await llm.generate_response(prompt)

Monitoring and Debugging

1. Logging and Observability

Implement comprehensive logging for your MCP-LLM integrations:

Logging Configuration

import logging
import json
from datetime import datetime, timezone

class MCPLLMLogger:
    def __init__(self):
        self.logger = logging.getLogger("mcp-llm")
        self.logger.setLevel(logging.INFO)
        
        # Add handlers
        handler = logging.StreamHandler()
        formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        )
        handler.setFormatter(formatter)
        self.logger.addHandler(handler)
    
    def log_tool_call(self, tool_name: str, arguments: dict, result: dict, duration: float):
        self.logger.info(json.dumps({
            "event": "tool_call",
            "tool": tool_name,
            "arguments": arguments,
            "result": result,
            "duration_ms": duration * 1000,
            "timestamp": datetime.utcnow().isoformat()
        }))
    
    def log_llm_query(self, prompt: str, response: str, tools_used: list, duration: float):
        self.logger.info(json.dumps({
            "event": "llm_query",
            "prompt_length": len(prompt),
            "response_length": len(response),
            "tools_used": tools_used,
            "duration_ms": duration * 1000,
            "timestamp": datetime.utcnow().isoformat()
        }))

2. Performance Monitoring

Monitor key metrics for your MCP-LLM integrations:

  • Tool Call Latency: Time taken for MCP tool execution
  • LLM Response Time: Time for LLM to generate responses
  • Tool Usage Patterns: Which tools are used most frequently
  • Error Rates: Frequency of tool call failures
  • Cost Metrics: API usage and associated costs

Future Trends

1. Native MCP Support

More LLM providers are expected to add native MCP support:

  • Direct integration without custom clients
  • Standardized tool calling interfaces
  • Better performance and reliability
  • Simplified development workflows

2. Advanced Tool Orchestration

Future developments in tool orchestration:

  • Multi-step tool execution workflows
  • Conditional tool calling based on context
  • Tool composition and chaining
  • Intelligent tool selection

3. Enhanced Security

Improved security features for MCP-LLM integrations:

  • Fine-grained access control
  • Tool execution sandboxing
  • Audit trails and compliance
  • Encrypted tool communication

Conclusion

Integrating MCP servers with Large Language Models opens up powerful possibilities for AI applications. By following the patterns and best practices outlined in this guide, you can create robust, scalable, and secure integrations that leverage the strengths of both MCP and modern LLMs.

Whether you're building data analysis tools, customer service applications, or creative AI assistants, MCP provides a standardized way to extend LLM capabilities with real-world data and actions.

Ready to Integrate?

Start building powerful MCP-LLM integrations and explore the possibilities of AI-powered applications with real-world data access.
