Integrating MCP Servers with n8n

This guide provides detailed instructions for integrating your MCP servers with n8n, a powerful workflow automation platform, to create AI-powered automated workflows.

What is n8n?

n8n is an open-source workflow automation tool that enables you to connect different services and create automated workflows with a visual editor. It supports hundreds of integrations and can be self-hosted or used as a cloud service.

Integration Methods

There are three primary ways to integrate your MCP server with n8n:

  1. HTTP Request Nodes - Make API calls to your MCP server endpoints
  2. SSE (Server-Sent Events) - Handle streaming responses from your MCP server
  3. Custom n8n Node - Use or create a dedicated MCP server node for n8n

Prerequisites

Before you begin integrating with n8n, you'll need:

  1. An MCP server deployed via MCP-Cloud.ai
  2. Your MCP server API key
  3. An n8n instance (self-hosted or cloud)
  4. Basic understanding of workflow automation concepts

Method 1: Using HTTP Request Nodes

The HTTP Request node is the simplest way to connect n8n to your MCP server. It allows you to make API calls to generate completions, manage contexts, and more.

Step 1: Locate Your API Key

Ensure you have your MCP server API key, which looks like: mcp_sk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
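
If you want to sanity-check a key before wiring up nodes, a quick format check can be done in a Function node. The pattern below simply mirrors the placeholder shown above; it is an assumption for illustration, not a documented guarantee about real key formats:

```javascript
// Rough format check: "mcp_sk_" followed by 32 alphanumeric characters.
// This mirrors the example placeholder only; real keys may differ.
function looksLikeMcpKey(key) {
  return /^mcp_sk_[A-Za-z0-9]{32}$/.test(key);
}

console.log(looksLikeMcpKey('mcp_sk_' + 'a'.repeat(32))); // true
console.log(looksLikeMcpKey('not-a-key'));                // false
```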

Step 2: Create an n8n Workflow

  1. In n8n, create a new workflow
  2. Add a trigger node (e.g., Schedule, Webhook, Manual, etc.)
  3. Add an HTTP Request node

Step 3: Configure the HTTP Request Node

  1. Set Method to POST
  2. Enter URL: https://your-mcp-server.mcp-cloud.ai/v1/chat/completions
  3. Add Headers:
    Authorization: Bearer mcp_sk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    Content-Type: application/json
    
  4. Configure Body (JSON):
    {
      "model": "claude-3-5-sonnet",
      "messages": [
        {"role": "user", "content": "{{$node['Input Data'].json.prompt}}"}
      ]
    }
    
  5. Save and run the workflow to test
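
If the request succeeds, the response follows the chat-completions shape that the later Function-node examples read from (`choices[0].message.content` and `model`). A hypothetical response body, sketched as a JavaScript object:

```javascript
// Hypothetical response body; the field names match what the later
// examples read (responseData.choices[0].message.content, responseData.model).
// The id and content values here are illustrative only.
const responseData = {
  id: 'cmpl-123',
  model: 'claude-3-5-sonnet',
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: 'Here is your answer...' }
    }
  ]
};

const generatedText = responseData.choices[0].message.content;
console.log(generatedText); // Here is your answer...
```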

Example: Text Generation Workflow

(Workflow diagram: n8n MCP Text Generation Example)

This workflow allows you to generate text using your MCP server from various inputs:

  1. Schedule Trigger: Runs workflow at specified intervals
  2. Function Node: Prepares input data
    // Transform the input for the MCP server
    const topic = $node['Topic Input'].json.topic;
    const tone = $node['Style Input'].json.tone;
    const sections = $node['Format Input'].json.sections;

    return [{
      prompt: `Generate a blog post about ${topic}. Use a ${tone} tone and include ${sections} sections.`
    }];
    
  3. HTTP Request Node: Sends the prompt to your MCP server
  4. Function Node: Processes the response
    // Extract the generated text
    const responseData = $node['MCP Server Request'].json;
    const generatedText = responseData.choices[0].message.content;
    
    // Add metadata
    return [{
      generated_text: generatedText,
      prompt: $node['Input Data'].json.prompt,
      model: responseData.model,
      timestamp: new Date().toISOString()
    }];
    
  5. Google Sheets Node: Saves the output to a spreadsheet

Method 2: Using SSE for Streaming Responses

MCP servers provide Server-Sent Events (SSE) for real-time token streaming. Here's how to use it with n8n:

Step 1: Create a Function Node for SSE Handling

Since n8n doesn't have a native SSE node, you can handle the stream in a Function (Code) node. Note that the browser-style EventSource API only supports GET requests, so the example below sends the POST with fetch (available in n8n's Code node on Node.js 18+) and parses the event stream manually:

// In a Function (Code) node
const response = await fetch(
  'https://your-mcp-server.mcp-cloud.ai/v1/chat/completions',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + $env.MCP_SERVER_API_KEY,
      'Content-Type': 'application/json',
      'Accept': 'text/event-stream'
    },
    body: JSON.stringify({
      model: 'claude-3-5-sonnet',
      messages: [{ role: 'user', content: $input.item.json.prompt }],
      stream: true
    }),
    signal: AbortSignal.timeout(30000) // 30 second timeout
  }
);

if (!response.ok) {
  throw new Error(`MCP server returned status ${response.status}`);
}

const collectedTokens = [];
const decoder = new TextDecoder();
let buffer = '';

// Read the body chunk by chunk and parse SSE "data:" lines
for await (const chunk of response.body) {
  buffer += decoder.decode(chunk, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop(); // keep any partial line for the next chunk

  for (const line of lines) {
    if (!line.startsWith('data:')) continue;
    const data = line.slice(5).trim();

    if (data === '[DONE]') {
      return [{
        status: 'completed',
        fullResponse: collectedTokens.join(''),
        tokenCount: collectedTokens.length
      }];
    }

    try {
      const token = JSON.parse(data).choices[0]?.delta?.content || '';
      if (token) {
        collectedTokens.push(token);
      }
    } catch (error) {
      console.error('Error parsing token:', error);
    }
  }
}

// Stream ended without an explicit [DONE] marker
return [{
  status: 'completed',
  fullResponse: collectedTokens.join(''),
  tokenCount: collectedTokens.length
}];

Step 2: Process Streamed Tokens

After collecting streamed tokens, you can process them in subsequent nodes:

// In a subsequent Function node
const fullResponse = $node['SSE Handler'].json.fullResponse;

// Process the response (e.g., extract key information)
return [{
  originalPrompt: $input.item.json.prompt,
  response: fullResponse,
  summary: fullResponse.substring(0, 100) + '...',
  tokenCount: $node['SSE Handler'].json.tokenCount
}];

Method 3: Creating Multi-step AI Workflows

You can create sophisticated workflows that involve multiple interactions with your MCP server:

Example: Conversational Workflow with Context Management

  1. Webhook Trigger: Receives user message
  2. Function Node 1: Retrieves conversation history from database
  3. HTTP Request Node 1: Fetches or creates a context
    // POST to /v1/contexts
    {
      "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, who are you?"},
        {"role": "assistant", "content": "I'm an AI assistant powered by Claude. How can I help you today?"}
      ]
    }
    
  4. HTTP Request Node 2: Sends user message to MCP server with context
    // POST to /v1/chat/completions
    {
      "model": "claude-3-5-sonnet",
      "context_id": "{{$node['Create Context'].json.context_id}}",
      "messages": [
        {"role": "user", "content": "{{$node['User Input'].json.message}}"}
      ]
    }
    
  5. Function Node 2: Processes response and updates database
  6. Respond to Webhook: Returns AI response to user
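
The two MCP calls in steps 3 and 4 can be sketched as plain request builders. The base URL is a placeholder for your deployment; the endpoints and the `context_id` field follow the request bodies shown above:

```javascript
// Request builders for the context-management sequence above.
// BASE_URL is a placeholder; replace it with your deployment's URL.
const BASE_URL = 'https://your-mcp-server.mcp-cloud.ai';

// Step 3: create (or seed) a context from conversation history
function buildCreateContextRequest(history) {
  return {
    url: `${BASE_URL}/v1/contexts`,
    method: 'POST',
    body: { messages: history }
  };
}

// Step 4: send the new user message against that context
function buildCompletionRequest(contextId, userMessage) {
  return {
    url: `${BASE_URL}/v1/chat/completions`,
    method: 'POST',
    body: {
      model: 'claude-3-5-sonnet',
      context_id: contextId,
      messages: [{ role: 'user', content: userMessage }]
    }
  };
}

const example = buildCompletionRequest('ctx_123', 'What can you do?');
console.log(example.url); // https://your-mcp-server.mcp-cloud.ai/v1/chat/completions
```

Each builder's output maps directly onto an HTTP Request node's URL, method, and JSON body fields.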

Best Practices for MCP Server Integration with n8n

  1. Secure API Key Storage: Use n8n's credentials store for your MCP server API key

    // In workflow, reference credential
    const apiKey = $credentials.mcpServerApi.apiKey;
    
  2. Implement Error Handling: Add error handling for failed API calls

    // Error Catcher node
    if ($node['MCP Server Request'].error) {
      // Log error or take alternate action
      return [{ error: $node['MCP Server Request'].error.message }];
    }
    
  3. Handle Rate Limits: Implement exponential backoff for rate limiting

    // In a Function node
    let retries = 0;
    const maxRetries = 5;
    
    async function callWithRetry() {
      try {
        // Make the API call...
      } catch (error) {
        if (error.statusCode === 429 && retries < maxRetries) {
          retries++;
          const delay = Math.pow(2, retries) * 1000; // Exponential backoff
          await new Promise(r => setTimeout(r, delay));
          return callWithRetry();
        }
        throw error;
      }
    }

    return await callWithRetry();
    
  4. Optimize Token Usage: Construct effective prompts to minimize token usage

    // More efficient prompt construction
    const prompt = `Summarize the following text in 3 bullet points:
    ${$node['Input Text'].json.text}`;
    
  5. Streaming for Long Content: Use SSE streaming for generating long content

    // Set streaming parameter
    {
      "model": "claude-3-5-sonnet",
      "messages": [{"role": "user", "content": prompt}],
      "stream": true
    }
    

Real-world Integration Examples

Example 1: Content Generation Pipeline

(Workflow diagram: Content Generation Pipeline)

This workflow:

  1. Takes content briefs from a Google Sheet
  2. Generates first drafts using your MCP server
  3. Processes the drafts through a review step
  4. Publishes approved content to WordPress
  5. Logs the status back to the Google Sheet

Example 2: Customer Support Automation

(Workflow diagram: Customer Support Workflow)

This workflow:

  1. Receives customer inquiries from multiple channels
  2. Classifies the inquiry type using your MCP server
  3. Generates a personalized response draft
  4. Routes complex cases to human agents
  5. Sends responses back to customers via the original channel

Troubleshooting

Common Issues and Solutions

  Authentication failed: verify your API key format and that the key hasn't expired
  Timeout errors: increase timeout settings for long-running operations
  SSE connection dropped: implement automatic reconnection logic
  JSON parsing errors: validate the JSON format of request and response bodies
  Rate limiting: implement exponential backoff and limit concurrent requests
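
For dropped SSE connections, the reconnection logic mentioned above can be a small wrapper around the streaming handler from Method 2. The helper below is a generic sketch; the function name and retry limits are illustrative, not part of the MCP API:

```javascript
// Generic reconnection helper: retries a streaming attempt with a
// growing delay between attempts. `openStream` is any async function
// that performs one attempt and throws if the connection drops.
async function withReconnect(openStream, maxAttempts = 3, baseDelayMs = 1000) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await openStream();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        // Wait a little longer before each reconnect attempt
        await new Promise(r => setTimeout(r, attempt * baseDelayMs));
      }
    }
  }
  throw lastError;
}
```

In the SSE workflow, the entire Function-node body from Method 2 would be passed in as `openStream`.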

Debugging Tips

  1. Use the n8n Debug node to inspect data at each step
  2. Enable verbose logging in your workflow
  3. Test API calls directly using tools like Postman before implementing in n8n
  4. For streaming issues, test with smaller prompts first

Conclusion

Integrating your MCP server with n8n enables powerful AI-driven workflows that can automate content generation, customer interactions, data analysis, and more. By combining the AI capabilities of your MCP server with n8n's flexible workflow engine, you can create sophisticated automations that integrate with your existing systems and processes.

For more advanced integrations, consider exploring: