Integrating MCP Servers with n8n
This guide walks through integrating your MCP servers with n8n, a powerful workflow automation platform, to build AI-powered automated workflows.
What is n8n?
n8n is an open-source workflow automation tool that enables you to connect different services and create automated workflows with a visual editor. It supports hundreds of integrations and can be self-hosted or used as a cloud service.
Integration Methods
There are three primary ways to integrate your MCP server with n8n:
- HTTP Request Nodes - Make API calls to your MCP server endpoints
- SSE (Server-Sent Events) - Handle streaming responses from your MCP server
- Custom n8n Node - Use or create a dedicated MCP server node for n8n
Prerequisites
Before you begin integrating with n8n, you'll need:
- An MCP server deployed via MCP-Cloud.ai
- Your MCP server API key
- n8n instance (self-hosted or cloud)
- Basic understanding of workflow automation concepts
Method 1: Using HTTP Request Nodes
The HTTP Request node is the simplest way to connect n8n to your MCP server. It allows you to make API calls to generate completions, manage contexts, and more.
Step 1: Locate Your API Key
Ensure you have your MCP server API key, which looks like: mcp_sk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Step 2: Create an n8n Workflow
- In n8n, create a new workflow
- Add a trigger node (e.g., Schedule, Webhook, or Manual)
- Add an HTTP Request node
Step 3: Configure the HTTP Request Node
- Set Method to POST
- Enter URL: https://your-mcp-server.mcp-cloud.ai/v1/chat/completions
- Add Headers:
Authorization: Bearer mcp_sk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Content-Type: application/json
- Configure Body (JSON):
{
  "model": "claude-3-5-sonnet",
  "messages": [
    {"role": "user", "content": "{{$node['Input Data'].json.prompt}}"}
  ]
}
- Save and run the workflow to test
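For reference, the request the HTTP Request node sends can be sketched in plain JavaScript. The endpoint path, headers, and body mirror the node configuration above; the `buildChatRequest` helper and the sample prompt are illustrative, not part of any SDK:

```javascript
// Build the same request the HTTP Request node sends, as a plain object.
function buildChatRequest(baseUrl, apiKey, prompt) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'claude-3-5-sonnet',
      messages: [{ role: 'user', content: prompt }]
    })
  };
}

const request = buildChatRequest(
  'https://your-mcp-server.mcp-cloud.ai',
  'mcp_sk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
  'Write a haiku about automation'
);
```

Inside n8n the HTTP Request node performs this call for you; the helper only makes the request shape explicit, which is handy when testing against the server outside n8n.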
Example: Text Generation Workflow
This workflow allows you to generate text using your MCP server from various inputs:
- Schedule Trigger: Runs workflow at specified intervals
- Function Node: Prepares input data
// Transform the input for the MCP server
return [{
  prompt: `Generate a blog post about ${$node['Topic Input'].json.topic}. Use a ${$node['Style Input'].json.tone} tone and include ${$node['Format Input'].json.sections} sections.`
}];
- HTTP Request Node: Sends the prompt to your MCP server
- Function Node: Processes the response
// Extract the generated text
const responseData = $node['MCP Server Request'].json;
const generatedText = responseData.choices[0].message.content;

// Add metadata
return [{
  generated_text: generatedText,
  prompt: $node['Input Data'].json.prompt,
  model: responseData.model,
  timestamp: new Date().toISOString()
}];
- Google Sheets Node: Saves the output to a spreadsheet
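The response-processing step in this workflow can be written as a pure function, which makes it easy to unit-test with a mock response. The response shape (`choices[0].message.content`, `model`) follows the completion responses used throughout this guide; the mock data is invented for illustration:

```javascript
// Extract the generated text and attach metadata, as in the Function node above.
function processCompletion(responseData, prompt, now = new Date()) {
  return {
    generated_text: responseData.choices[0].message.content,
    prompt,
    model: responseData.model,
    timestamp: now.toISOString()
  };
}

// Example with a mock response
const mockResponse = {
  model: 'claude-3-5-sonnet',
  choices: [{ message: { role: 'assistant', content: 'Draft blog post...' } }]
};
const result = processCompletion(mockResponse, 'Generate a blog post about n8n');
```

Keeping the transformation pure (no `$node` references inside the function body) lets you verify it outside n8n before wiring it into the workflow.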
Method 2: Using SSE for Streaming Responses
MCP servers provide Server-Sent Events (SSE) for real-time token streaming. Here's how to use it with n8n:
Step 1: Create a Function Node for SSE Handling
Since n8n doesn't have a native SSE node, you'll use a Function node. Note that the widely used eventsource npm package only issues GET requests and ignores method/body options, so a streaming POST endpoint is easier to consume with the fetch API built into recent Node.js versions:

// In a Function node (requires Node.js 18+ for the global fetch API)
const response = await fetch(
  'https://your-mcp-server.mcp-cloud.ai/v1/chat/completions',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + $env.MCP_SERVER_API_KEY,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'claude-3-5-sonnet',
      messages: [{ role: 'user', content: $input.item.json.prompt }],
      stream: true
    }),
    signal: AbortSignal.timeout(30000) // 30 second timeout
  }
);

if (!response.ok) {
  throw new Error('MCP server returned HTTP ' + response.status);
}

// Collect tokens as SSE events arrive
const collectedTokens = [];
const decoder = new TextDecoder();
let buffer = '';

for await (const chunk of response.body) {
  buffer += decoder.decode(chunk, { stream: true });
  // SSE events are separated by blank lines
  const events = buffer.split('\n\n');
  buffer = events.pop(); // keep any partial event for the next chunk
  for (const event of events) {
    const line = event.trim();
    if (!line.startsWith('data:')) continue;
    const data = line.slice(5).trim();
    if (data === '[DONE]') {
      return [{
        status: 'completed',
        fullResponse: collectedTokens.join(''),
        tokenCount: collectedTokens.length
      }];
    }
    try {
      const parsed = JSON.parse(data);
      const token = parsed.choices[0]?.delta?.content || '';
      if (token) {
        collectedTokens.push(token);
      }
    } catch (error) {
      console.error('Error parsing token:', error);
    }
  }
}

// Stream ended without an explicit [DONE] marker
return [{
  status: 'completed',
  fullResponse: collectedTokens.join(''),
  tokenCount: collectedTokens.length
}];
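The token-parsing logic in this handler can be pulled into a small pure function that consumes one SSE `data:` payload at a time, which is convenient to test without a live connection. The payload shape (`choices[0].delta.content`, terminated by `[DONE]`) matches the streaming format used above; the mock events are invented:

```javascript
// Process a single SSE data payload; returns true when the stream is finished.
function handleSseData(data, collectedTokens) {
  if (data === '[DONE]') return true;
  try {
    const parsed = JSON.parse(data);
    const token = parsed.choices[0]?.delta?.content || '';
    if (token) collectedTokens.push(token);
  } catch (error) {
    // Ignore malformed events rather than aborting the stream
  }
  return false;
}

// Example with mock events
const tokens = [];
const mockEvents = [
  '{"choices":[{"delta":{"content":"Hel"}}]}',
  '{"choices":[{"delta":{"content":"lo"}}]}',
  '[DONE]'
];
let done = false;
for (const e of mockEvents) done = handleSseData(e, tokens);
// tokens.join('') is now "Hello" and done is true
```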
Step 2: Process Streamed Tokens
After collecting streamed tokens, you can process them in subsequent nodes:
// In a subsequent Function node
const fullResponse = $node['SSE Handler'].json.fullResponse;
// Process the response (e.g., extract key information)
return [{
originalPrompt: $input.item.json.prompt,
response: fullResponse,
summary: fullResponse.substring(0, 100) + '...',
tokenCount: $node['SSE Handler'].json.tokenCount
}];
Method 3: Creating Multi-step AI Workflows
You can create sophisticated workflows that involve multiple interactions with your MCP server:
Example: Conversational Workflow with Context Management
- Webhook Trigger: Receives user message
- Function Node 1: Retrieves conversation history from database
- HTTP Request Node 1: Fetches or creates a context
// POST to /v1/contexts
{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, who are you?"},
    {"role": "assistant", "content": "I'm an AI assistant powered by Claude. How can I help you today?"}
  ]
}
- HTTP Request Node 2: Sends user message to MCP server with context
// POST to /v1/chat/completions
{
  "model": "claude-3-5-sonnet",
  "context_id": "{{$node['Create Context'].json.context_id}}",
  "messages": [
    {"role": "user", "content": "{{$node['User Input'].json.message}}"}
  ]
}
- Function Node 2: Processes response and updates database
- Respond to Webhook: Returns AI response to user
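The two HTTP calls in this workflow can be sketched as plain payload builders. The `/v1/contexts` body and the `context_id` field mirror the request bodies shown in the steps above; the helper names and the sample context ID are illustrative:

```javascript
// Build the body for POST /v1/contexts (seed the conversation history)
function buildContextPayload(history) {
  return { messages: history };
}

// Build the body for POST /v1/chat/completions, referencing an existing context
function buildChatWithContext(contextId, userMessage) {
  return {
    model: 'claude-3-5-sonnet',
    context_id: contextId,
    messages: [{ role: 'user', content: userMessage }]
  };
}

const contextBody = buildContextPayload([
  { role: 'system', content: 'You are a helpful assistant.' }
]);
const chatBody = buildChatWithContext('ctx_123', 'What can you do?');
```

In the workflow itself, the context ID comes from the first HTTP Request node's response (`$node['Create Context'].json.context_id`) rather than a hard-coded value.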
Best Practices for MCP Server Integration with n8n
Secure API Key Storage: Use n8n's credentials store for your MCP server API key
// In the workflow, reference the stored credential
const apiKey = $credentials.mcpServerApi.apiKey;
Implement Error Handling: Add error handling for failed API calls
// Error Catcher node
if ($node['MCP Server Request'].error) {
  // Log the error or take an alternate action
  return [{ error: $node['MCP Server Request'].error.message }];
}
Handle Rate Limits: Implement exponential backoff for rate limiting
// In a Function node
let retries = 0;
const maxRetries = 5;

async function callWithRetry() {
  try {
    // Make the API call...
  } catch (error) {
    if (error.statusCode === 429 && retries < maxRetries) {
      retries++;
      const delay = Math.pow(2, retries) * 1000; // Exponential backoff
      await new Promise(r => setTimeout(r, delay));
      return callWithRetry();
    }
    throw error;
  }
}
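The retry pattern above can be made generic and self-contained. A `baseDelayMs` parameter is added here so tests don't have to wait on real backoff delays; the 429 status check matches the rate-limit handling described above, and `flakyCall` is a stand-in for the actual MCP server request:

```javascript
// Generic retry wrapper with exponential backoff on HTTP 429 errors.
async function withRetry(fn, maxRetries = 5, baseDelayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.statusCode === 429 && attempt < maxRetries) {
        const delay = Math.pow(2, attempt + 1) * baseDelayMs; // 2x, 4x, 8x...
        await new Promise(r => setTimeout(r, delay));
        continue;
      }
      throw error; // non-retryable error, or retries exhausted
    }
  }
}

// Example: a call that is rate-limited twice, then succeeds
let calls = 0;
async function flakyCall() {
  calls++;
  if (calls < 3) {
    const err = new Error('Too Many Requests');
    err.statusCode = 429;
    throw err;
  }
  return 'ok';
}
```

Usage: `const result = await withRetry(flakyCall);` — with the defaults this waits 2s, then 4s, before the third attempt succeeds.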
Optimize Token Usage: Construct effective prompts to minimize token usage
// More efficient prompt construction
const prompt = `Summarize the following text in 3 bullet points: ${$node['Input Text'].json.text}`;
Streaming for Long Content: Use SSE streaming for generating long content
// Set the streaming parameter
{
  "model": "claude-3-5-sonnet",
  "messages": [{"role": "user", "content": prompt}],
  "stream": true
}
Real-world Integration Examples
Example 1: Content Generation Pipeline
This workflow:
- Takes content briefs from a Google Sheet
- Generates first drafts using your MCP server
- Processes the drafts through a review step
- Publishes approved content to WordPress
- Logs the status back to the Google Sheet
Example 2: Customer Support Automation
This workflow:
- Receives customer inquiries from multiple channels
- Classifies the inquiry type using your MCP server
- Generates a personalized response draft
- Routes complex cases to human agents
- Sends responses back to customers via the original channel
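The classification step in this workflow can be implemented as a single completion request with an instruction prompt. The category list and prompt wording below are illustrative assumptions to adapt to your own support taxonomy, not a fixed MCP API:

```javascript
// Hypothetical support categories; replace with your own taxonomy.
const CATEGORIES = ['billing', 'technical', 'account', 'other'];

// Build a classification request for an incoming inquiry.
function buildClassificationRequest(inquiry) {
  return {
    model: 'claude-3-5-sonnet',
    messages: [
      {
        role: 'system',
        content: `Classify the customer inquiry into exactly one of: ${CATEGORIES.join(', ')}. Reply with the category name only.`
      },
      { role: 'user', content: inquiry }
    ]
  };
}

const payload = buildClassificationRequest('I was charged twice this month.');
```

Constraining the reply to a bare category name makes the downstream routing step a simple string comparison instead of free-text parsing.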
Troubleshooting
Common Issues and Solutions
| Issue | Solution |
|---|---|
| Authentication Failed | Verify your API key format and that it hasn't expired |
| Timeout Errors | Increase timeout settings for long-running operations |
| SSE Connection Dropped | Implement automatic reconnection logic |
| JSON Parsing Errors | Validate JSON format in request/response bodies |
| Rate Limiting | Implement exponential backoff and limit concurrent requests |
Debugging Tips
- Use the n8n Debug node to inspect data at each step
- Enable verbose logging in your workflow
- Test API calls directly using tools like Postman before implementing in n8n
- For streaming issues, test with smaller prompts first
Conclusion
Integrating your MCP server with n8n enables powerful AI-driven workflows that can automate content generation, customer interactions, data analysis, and more. By combining the AI capabilities of your MCP server with n8n's flexible workflow engine, you can create sophisticated automations that integrate with your existing systems and processes.
For more advanced integrations, consider exploring:
- Custom API Clients - Building dedicated client libraries
- Webhooks Integration - Event-based automation
- Zapier Integration - Alternative workflow automation platform