n8n + MCP: How to Use Model Context Protocol in Your Workflows
Model Context Protocol (MCP) is quickly becoming the standard way AI agents communicate with external tools — and n8n's native MCP support means you can wire up your AI workflows to virtually any MCP-compatible service without writing custom integration code. If you're building AI-powered automations, this changes how you think about connecting agents to the outside world.
What Is Model Context Protocol (MCP)?
MCP is an open protocol originally developed by Anthropic that standardizes how AI models interact with external tools and data sources. Think of it as a universal adapter: instead of every AI application building its own bespoke connections to databases, APIs, and file systems, MCP defines a common interface that any tool provider can implement.
The protocol uses a client-server architecture:
- MCP Clients — the AI agent or application that needs to use external tools (in this case, n8n's AI Agent node)
- MCP Servers — services that expose their capabilities through the MCP standard (databases, search engines, code interpreters, CRMs, and more)
An MCP server describes its available tools, their parameters, and what they return — all in a format the AI agent can understand and use autonomously. The agent reads the tool descriptions, decides which tools to call based on the user's request, and executes them through the standardized protocol.
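To make that concrete, here is a sketch of what a server's tool listing looks like on the wire. The field names (`name`, `description`, `inputSchema`) follow the MCP specification; the `search_orders` tool itself is a made-up example.

```python
import json

# A minimal sketch of an MCP "tools/list" result: each tool carries a
# name, a human-readable description, and a JSON Schema describing its
# parameters. The agent reads exactly this to decide what it can call.
tools_list_response = {
    "tools": [
        {
            "name": "search_orders",  # hypothetical example tool
            "description": "Search orders by customer email. Returns up to 20 matches.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "email": {"type": "string", "description": "Customer email address"},
                },
                "required": ["email"],
            },
        }
    ]
}

print(json.dumps(tools_list_response, indent=2))
```

The `inputSchema` is ordinary JSON Schema, which is why any model that can produce structured output can drive any MCP tool.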
Why MCP Matters for n8n Workflows
n8n already has 400+ built-in integrations, so why does MCP matter? Because MCP solves a different problem. Traditional n8n nodes are deterministic — you wire them together in a fixed sequence. MCP tools are agent-driven — the AI decides which tools to call and in what order, based on the context of the conversation or task.
Here's what MCP unlocks in your n8n workflows:
- Dynamic tool selection — your AI agent picks the right tool at runtime instead of following a hardcoded path
- Rapid expansion — hundreds of MCP servers already exist for services like GitHub, Slack, PostgreSQL, Google Drive, Jira, and more. Adding a new tool to your agent means pointing it at a server, not building a new workflow branch
- Interoperability — the same MCP server that works with Claude Desktop or Cursor also works with your n8n agent. Build once, use everywhere
- Community ecosystem — the MCP server ecosystem is growing fast, with community-built servers covering niche tools and internal services
For teams running n8n on n8nautomation.cloud, MCP is especially valuable because your dedicated instance has the stability and uptime needed for always-on AI agents that rely on persistent MCP connections.
Setting Up MCP in n8n: Step-by-Step
n8n supports MCP through its AI Agent node using the MCP Client Tool. Here's how to connect your agent to an MCP server:
- Create a new workflow and add an AI Agent node. Select your preferred LLM (the OpenAI or Anthropic sub-nodes work well here).
- Add a tool to the agent by clicking the "+" under Tools and selecting MCP Client Tool.
- Configure the MCP connection. You'll need to specify the transport type:
- Stdio — runs the MCP server as a local subprocess. Best for local development or self-hosted n8n instances where you can install server binaries.
- SSE (Server-Sent Events) — connects to a remote MCP server over HTTP. This is the typical choice for managed hosting environments.
- Streamable HTTP — the newer transport that's replacing SSE, offering better reliability and reconnection handling.
- Enter the server URL or command. For SSE/HTTP, paste the server endpoint URL. For Stdio, provide the command to launch the server (e.g., npx @modelcontextprotocol/server-github).
- Test the connection. Once configured, n8n fetches the list of available tools from the MCP server and shows them in the node configuration.
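Under the hood, what happens after you hit "test" is a short JSON-RPC 2.0 exchange: an initialize handshake, then a tools/list request. The method names and the protocol version string follow the MCP spec; the id counter and helper function below are illustrative.

```python
import itertools
import json

# Sketch of the JSON-RPC 2.0 frames an MCP client (like n8n's MCP
# Client Tool) sends after connecting. Over the Stdio transport each
# message is a single line of JSON on the subprocess's stdin.
_ids = itertools.count(1)

def jsonrpc_request(method, params=None):
    """Frame one JSON-RPC request as a line of JSON."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

handshake = jsonrpc_request("initialize", {"protocolVersion": "2024-11-05"})
list_tools = jsonrpc_request("tools/list")
```

If the handshake succeeds but no tools appear, the server is reachable and the problem is usually in the server's own configuration (missing credentials, wrong working directory).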
Tip: For managed n8n instances on n8nautomation.cloud, use SSE or Streamable HTTP transport to connect to remote MCP servers. This avoids the need to install server binaries on your instance and keeps your setup clean.
Practical MCP Server Examples for n8n
Here are real MCP servers you can connect to your n8n AI Agent today, along with what they enable:
PostgreSQL / Database Access
Connect your agent to a PostgreSQL MCP server and it can query your database conversationally. Ask the agent "How many orders came in last week?" and it writes and executes the SQL query, then returns a formatted answer. Use the @modelcontextprotocol/server-postgres package with your database connection string.
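For illustration, here is roughly what the agent's tool call could look like on the wire when answering that question. The `query` tool name and `sql` argument are assumptions about the Postgres server's interface; the `tools/call` envelope follows the MCP spec.

```python
# A hypothetical "tools/call" request the agent might emit for
# "How many orders came in last week?". The agent, not you, writes
# the SQL; the orders table and column names are invented here.
call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "query",
        "arguments": {
            "sql": "SELECT COUNT(*) FROM orders "
                   "WHERE created_at >= NOW() - INTERVAL '7 days';"
        },
    },
}
```

Because the model generates the SQL itself, connect read-only database credentials unless you have a reason not to.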
GitHub Repository Management
The GitHub MCP server lets your agent create issues, search code, review pull requests, and manage repositories. Combine this with a chat trigger in n8n and you have a Slack bot that can triage GitHub issues on demand.
File System and Google Drive
Give your agent access to read and write files. The filesystem MCP server is useful for document processing workflows — your agent can read uploaded files, extract information, and write summaries. The Google Drive MCP server extends this to cloud storage.
Web Search and Browsing
MCP servers like Brave Search or Tavily give your agent the ability to search the web in real time. This is powerful for research workflows: trigger the agent with a topic, let it search, synthesize findings, and output a structured report through n8n's downstream nodes.
Custom Internal Tools
You can build your own MCP server for internal APIs. Wrap your company's inventory system, billing API, or customer database in an MCP server, and any n8n AI agent can interact with it using natural language.
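A stripped-down sketch of what such a server's request dispatcher looks like, using only the standard library. A real server should use an official MCP SDK; this just shows the shape: advertise tools on `tools/list`, run them on `tools/call`. The inventory data and `check_stock` tool are hypothetical.

```python
import json
import sys

# Hypothetical internal data this server wraps.
INVENTORY = {"SKU-123": 42, "SKU-456": 0}

TOOLS = [{
    "name": "check_stock",
    "description": "Return the units on hand for a SKU.",
    "inputSchema": {
        "type": "object",
        "properties": {"sku": {"type": "string"}},
        "required": ["sku"],
    },
}]

def handle(request):
    """Dispatch one JSON-RPC request dict and return the response dict."""
    rid, method = request.get("id"), request.get("method")
    if method == "tools/list":
        return {"jsonrpc": "2.0", "id": rid, "result": {"tools": TOOLS}}
    if method == "tools/call":
        sku = request["params"]["arguments"]["sku"]
        text = f"{INVENTORY.get(sku, 0)} units of {sku} on hand"
        return {"jsonrpc": "2.0", "id": rid,
                "result": {"content": [{"type": "text", "text": text}]}}
    return {"jsonrpc": "2.0", "id": rid,
            "error": {"code": -32601, "message": f"Unknown method: {method}"}}

def main():
    # Stdio transport: one JSON-RPC message per line on stdin/stdout.
    # A real server would call main() on startup; it is not invoked here.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

The same `handle` function works behind an SSE or Streamable HTTP endpoint; only the transport loop changes.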
Using n8n As an MCP Server
n8n doesn't just consume MCP — it can also act as an MCP server. This means you can expose your n8n workflows as tools that other AI applications (Claude Desktop, Cursor, custom agents) can call.
To set this up, use the MCP Server Trigger node:
- Add an MCP Server Trigger as your workflow's trigger node.
- Define the tool name and description that will be exposed to MCP clients. Write clear descriptions — the AI agent on the other end uses these to decide when to call your tool.
- Define input parameters with types and descriptions.
- Build your workflow logic downstream — this is where n8n shines. Use any of n8n's 400+ nodes to process the request.
- End with a Respond to MCP node to return results to the calling agent.
This turns n8n into a powerful tool-building platform for AI agents. Your complex multi-step workflow becomes a single tool call from the agent's perspective. For example, you could build an n8n workflow that checks inventory across three warehouses, calculates shipping costs, and returns availability — all exposed as a single "check_product_availability" MCP tool.
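The availability example above, sketched as plain Python to show the shape of logic the workflow would implement with HTTP Request and aggregation nodes. The warehouse data and the flat-rate shipping figure are invented for illustration.

```python
# Hypothetical stock levels the workflow would fetch from three
# warehouse APIs before aggregating.
WAREHOUSES = {
    "east": {"SKU-123": 5},
    "west": {"SKU-123": 0},
    "central": {"SKU-123": 12},
}

def check_product_availability(sku, units):
    """Return which warehouses can fill the order, with a shipping estimate."""
    options = [
        {"warehouse": name, "stock": stock.get(sku, 0), "shipping_usd": 8.50}
        for name, stock in WAREHOUSES.items()
        if stock.get(sku, 0) >= units  # only warehouses that can fill it
    ]
    return {"sku": sku, "available": bool(options), "options": options}
```

From the calling agent's side, all of this is one `check_product_availability` tool call with two parameters, which keeps the agent's context small and its behavior predictable.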
Human-in-the-Loop: Keeping MCP Tools Safe
Giving AI agents access to external tools is powerful but comes with risk. You probably don't want your agent autonomously deleting database records or sending emails without oversight. n8n addresses this with its Human-in-the-Loop (HITL) feature for AI tool calls.
When you enable HITL on an MCP tool, the workflow pauses before the tool executes and waits for a human to approve or reject the action. Here's how it works in practice:
- The AI agent decides it needs to call a tool (e.g., "delete_user_record")
- n8n intercepts the call and sends a notification (via email, Slack, or the n8n UI) with the tool name and parameters
- A human reviews the request and approves or denies it
- The workflow resumes or aborts based on the decision
You can apply HITL selectively — let read-only tools like database queries execute freely, but gate write operations behind approval. This gives you the speed of AI automation with the safety of human oversight where it counts.
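The selective-gating policy can be sketched in a few lines. In n8n you configure this per tool in the UI rather than in code; the tool names and the allowlist below are hypothetical.

```python
# Read-only tools that may execute without approval (hypothetical names).
READ_ONLY_TOOLS = {"query", "search_issues", "list_files"}

def requires_approval(tool_name):
    """Gate anything not explicitly read-only behind a human."""
    return tool_name not in READ_ONLY_TOOLS

def dispatch(tool_name, execute, ask_human):
    """Run the tool, pausing for approval when the policy demands it."""
    if requires_approval(tool_name) and not ask_human(tool_name):
        return {"status": "rejected", "tool": tool_name}
    return {"status": "executed", "result": execute()}
```

Note the default direction: unknown tools require approval. An allowlist of safe tools fails closed, whereas a blocklist of dangerous ones fails open the moment someone adds a new write tool and forgets to list it.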
Best Practices for MCP with n8n
After working with MCP in n8n workflows, here are the patterns that work best:
Keep tool descriptions precise
The AI agent reads your MCP tool descriptions to decide when to use them. Vague descriptions lead to wrong tool calls. Instead of "Manages users," write "Creates a new user account with the given email address and role. Returns the user ID on success."
Limit the number of tools per agent
Giving an agent 50 tools sounds powerful, but it increases latency (more tokens in the system prompt) and error rates (the model gets confused). Start with 5-10 focused tools per agent. If you need more, split into specialized agents that hand off to each other.
Use environment variables for MCP credentials
Never hardcode API keys or database connection strings in your MCP server configuration. Use n8n's credential management or environment variables. This keeps secrets out of your workflow JSON and makes it easy to rotate keys.
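For a self-hosted MCP server, the pattern is simply to read the secret from the environment at startup and fail loudly if it is absent. The variable name below is an arbitrary convention, not anything n8n or the Postgres server requires.

```python
import os

def postgres_url():
    """Read the database connection string from the environment.

    Failing at startup beats a confusing tool error mid-conversation.
    """
    url = os.environ.get("MCP_POSTGRES_URL")  # hypothetical variable name
    if not url:
        raise RuntimeError("Set MCP_POSTGRES_URL before starting the MCP server")
    return url
```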
Test tools individually before connecting to agents
Before wiring an MCP server into your AI Agent node, test each tool manually. Use a simple workflow with an HTTP Request node or the MCP Client Tool in isolation to verify the server responds correctly. Debugging tool failures inside an agent loop is much harder.
Monitor token usage
Every MCP tool description, every tool call, and every response consumes tokens. If your agent is calling tools in a loop or receiving large responses, costs can add up. Use n8n's execution log to monitor how many tool calls each workflow run triggers, and set reasonable iteration limits on your AI Agent node.
Version your MCP servers
If you build custom MCP servers for internal tools, version them. A breaking change to a tool's parameters will silently break your n8n agent — it'll keep calling the tool with the old parameter format and get errors. Pin your n8n workflows to specific server versions and upgrade intentionally.
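One cheap enforcement mechanism: have the client check the server's self-reported version during the handshake. The `serverInfo` field comes from the MCP initialize response; the exact-match pin is illustrative, and you may prefer a looser compatible-range check.

```python
# Pinned version of our hypothetical internal MCP server.
PINNED_VERSION = "1.4.2"

def assert_server_version(initialize_result):
    """Fail loudly at connect time instead of silently calling changed tools."""
    got = initialize_result["serverInfo"]["version"]
    if got != PINNED_VERSION:
        raise RuntimeError(f"MCP server version {got} != pinned {PINNED_VERSION}")
```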
MCP is still a fast-moving standard, but n8n's native support puts it in a strong position as the protocol matures. Whether you're connecting your agents to existing MCP servers or exposing your n8n workflows as tools for other AI applications, the combination opens up workflow patterns that weren't practical before. And with a managed instance on n8nautomation.cloud, you get the always-on reliability that MCP-powered AI agents need to run in production.