n8n + OpenAI Integration: 5 Powerful Workflows You Can Build Today
The n8n OpenAI integration is one of the most versatile nodes available in the platform. Instead of writing custom scripts to call the OpenAI API, you can visually wire GPT models into any workflow — feeding them data from databases, emails, webhooks, or any of n8n's 400+ integrations, then routing the output wherever it needs to go. Here are five workflows that demonstrate what's possible.
Why Combine OpenAI with n8n
Calling the OpenAI API directly works fine for one-off tasks. But real business automation requires more: you need to pull data from upstream systems, transform it, pass it to GPT, parse the response, and push results downstream. That's where n8n comes in.
With n8n, the OpenAI node sits inside a larger workflow. A webhook receives a form submission, a Schedule Trigger fires every morning, or a Gmail Trigger catches a new email — and the OpenAI node processes the data in context. You get retry logic, error handling, conditional branching, and logging without writing infrastructure code.
The OpenAI node in n8n supports the Chat Completions API (GPT-4o, GPT-4o-mini, and other models), the Assistants API, and text-to-image generation via DALL·E. You can also use it as a sub-node inside n8n's AI Agent for tool-calling workflows.
Setting Up Your OpenAI Credentials in n8n
Before building workflows, you need to connect your OpenAI account to n8n. Here's how:
- Go to Settings → Credentials in your n8n instance.
- Click Add Credential and search for OpenAI.
- Paste your API key from the OpenAI Platform dashboard (under API Keys).
- Optionally set an Organization ID if your API key belongs to a specific org.
- Click Save. n8n will test the connection automatically.
That's it. The credential is now available to any OpenAI node in your workflows. If you're running your instance on n8nautomation.cloud, your credentials are encrypted at rest on your dedicated instance — no shared infrastructure.
Tip: Set a spending limit on your OpenAI account before connecting it to automated workflows. A misconfigured loop can burn through your API budget fast.
Workflow 1: Automated Blog Draft Generation
This workflow generates first-draft blog posts from a list of topics stored in Google Sheets.
Nodes used: Schedule Trigger → Google Sheets (Read) → IF → OpenAI (Chat Completion) → Google Docs (Create)
How it works:
- A Schedule Trigger fires every Monday at 9 AM.
- The Google Sheets node reads rows where the "Status" column is "pending."
- An IF node checks that there's at least one pending topic.
- The OpenAI node receives the topic and a system prompt:
  "You are a blog writer. Write a 600-word draft about the following topic. Use subheadings, keep the tone professional but approachable."
  Set the model to gpt-4o and temperature to 0.7.
- A Google Docs node creates a new document with the generated draft.
- A second Google Sheets node updates the row status to "drafted."
The result: every Monday, your content calendar advances automatically with drafts ready for human review.
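If you prefer to assemble the chat messages in a Code node before the OpenAI call, a minimal sketch might look like this (the function name and the "Topic"/"Notes" column names are illustrative assumptions about your sheet layout, not part of the workflow above):

```javascript
// Hypothetical helper for an n8n Code node: build the chat messages
// for one pending topic row. Column names are assumptions.
function buildDraftPrompt(row) {
  const system =
    "You are a blog writer. Write a 600-word draft about the following topic. " +
    "Use subheadings, keep the tone professional but approachable.";
  const user = row.Notes
    ? `Topic: ${row.Topic}\nNotes: ${row.Notes}`
    : `Topic: ${row.Topic}`;
  return [
    { role: "system", content: system },
    { role: "user", content: user },
  ];
}

// In a Code node you would typically return one item per sheet row:
// return items.map(item => ({ json: { messages: buildDraftPrompt(item.json) } }));
```

This keeps the prompt logic in one place, so editing the writing instructions doesn't require touching the OpenAI node's configuration.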
Workflow 2: Intelligent Email Triage and Routing
Use OpenAI to classify incoming emails and route them to the right team member or channel.
Nodes used: Gmail Trigger → OpenAI (Chat Completion) → Switch → Slack (Send Message)
How it works:
- A Gmail Trigger watches your shared inbox for new messages.
- The OpenAI node receives the email subject and body with this system prompt:
  "Classify this email into exactly one category: sales, support, billing, partnership, spam. Respond with only the category name in lowercase."
  Use gpt-4o-mini to keep costs low and set temperature to 0 for consistent output.
- A Switch node branches based on the OpenAI response.
- Each branch sends a formatted Slack message to the appropriate channel (#sales-leads, #support-queue, #billing, #partnerships). Spam gets silently archived.
This workflow handles hundreds of emails daily and costs pennies in API usage with gpt-4o-mini.
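Even at temperature 0, models occasionally add punctuation or capitalization to a classification reply. A small Code node between the OpenAI node and the Switch can normalize the output so the branches always match; this is a defensive sketch (the fallback-to-"support" behavior is a design choice, not something n8n requires):

```javascript
// Hypothetical guard for a Code node: normalize the model's category
// reply before the Switch node. Unknown output falls back to "support"
// so no email is ever dropped silently.
const CATEGORIES = ["sales", "support", "billing", "partnership", "spam"];

function normalizeCategory(reply) {
  const cleaned = String(reply).trim().toLowerCase().replace(/[."']/g, "");
  return CATEGORIES.includes(cleaned) ? cleaned : "support";
}
```

Routing misfires then degrade gracefully: a mangled reply lands in the support queue for a human to re-route instead of vanishing into an unmatched Switch branch.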
Workflow 3: Structured Data Extraction from Documents
Extract structured JSON from unstructured text — invoices, resumes, contracts, or any document.
Nodes used: Webhook → OpenAI (Chat Completion) → Code → Airtable (Create Record)
How it works:
- A Webhook receives document text (from a form, email parser, or OCR pipeline).
- The OpenAI node gets the text with a system prompt specifying the exact JSON schema you want back:
  "Extract the following fields from the document and return valid JSON: {vendor_name, invoice_number, date, line_items: [{description, quantity, unit_price}], total_amount}. If a field is not found, use null."
  Set the response format to JSON mode in the node's options.
- A Code node parses the JSON response and validates required fields.
- An Airtable node creates a record with the extracted data.
This replaces hours of manual data entry. The JSON mode setting in the OpenAI node ensures you always get parseable output, which eliminates the most common failure point in LLM-based extraction workflows.
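The validation step in the Code node could look like the sketch below. Which fields count as required is an assumption for illustration; adjust the list to match what your Airtable base actually needs:

```javascript
// Hypothetical validation for the Code node: parse the model's JSON
// reply and fail loudly when required invoice fields are missing,
// so bad extractions never reach Airtable.
function parseInvoice(raw) {
  const data = JSON.parse(raw); // JSON mode makes this safe to call
  const required = ["vendor_name", "invoice_number", "total_amount"];
  const missing = required.filter(
    (f) => data[f] === null || data[f] === undefined
  );
  if (missing.length > 0) {
    throw new Error(`Missing required fields: ${missing.join(", ")}`);
  }
  return data;
}
```

Throwing an error here is deliberate: it lets n8n's error handling (retry settings or an Error Trigger workflow) catch the failed extraction instead of writing a half-empty record.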
Workflow 4: Customer Support Auto-Responder
Draft personalized support replies based on your knowledge base, with a human review step before sending.
Nodes used: Webhook (from helpdesk) → HTTP Request (fetch KB article) → OpenAI (Chat Completion) → Wait → Gmail (Send)
How it works:
- Your helpdesk (Freshdesk, Zendesk, or a custom form) sends a Webhook when a new ticket arrives.
- An HTTP Request node searches your knowledge base API for relevant articles using the ticket's keywords.
- The OpenAI node receives the ticket text plus the KB context:
  "You are a helpful support agent. Using the provided knowledge base article, draft a reply to this customer question. Be specific, reference the article steps, and keep it under 150 words. If the KB doesn't cover the question, say: 'I'll escalate this to our team.'"
- A Wait node pauses the workflow and sends the draft to a reviewer via Slack or email. When the reviewer approves (clicks a webhook link), the workflow resumes.
- The Gmail node sends the approved reply to the customer.
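Combining the ticket with the fetched KB article is a good job for a Code node before the OpenAI call. One sketch, assuming the article arrives as plain text (the truncation limit is an arbitrary token-control choice):

```javascript
// Hypothetical prompt assembly: combine the ticket with the KB article,
// truncating long articles to keep token usage predictable.
function buildSupportPrompt(ticket, article, maxArticleChars = 4000) {
  const kb = article ? article.slice(0, maxArticleChars) : "(no article found)";
  const system =
    "You are a helpful support agent. Using the provided knowledge base article, " +
    "draft a reply to this customer question. Be specific, reference the article steps, " +
    "and keep it under 150 words. If the KB doesn't cover the question, say: " +
    "\"I'll escalate this to our team.\"";
  return [
    { role: "system", content: system },
    { role: "user", content: `Knowledge base article:\n${kb}\n\nCustomer question:\n${ticket}` },
  ];
}
```

Truncating the article rather than sending it whole keeps each execution's cost bounded even when the KB search returns a very long page.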
Workflow 5: Social Media Content Pipeline
Turn one piece of content into platform-specific social media posts.
Nodes used: RSS Feed Trigger → OpenAI (Chat Completion) × 3 → Twitter (Post) + LinkedIn (Post) + Slack (Send Message)
How it works:
- An RSS Feed Trigger detects a new post on your company blog.
- Three OpenAI nodes run in parallel (use n8n's split-branch pattern), each with a platform-specific system prompt:
- Twitter:
  "Write a tweet (under 280 chars) about this blog post. Include one relevant emoji and a hook."
- LinkedIn:
  "Write a LinkedIn post (3-4 short paragraphs) summarizing key takeaways from this article. Professional tone, include a question to drive engagement."
- Internal Slack:
  "Write a short team announcement about this new blog post. Casual tone, 2 sentences max."
- Each branch posts to the respective platform via the Twitter, LinkedIn, and Slack nodes.
One blog post, three platforms, zero manual copywriting. Running this on a dedicated n8n instance on n8nautomation.cloud ensures the parallel API calls don't hit memory limits.
Tips for Running OpenAI Workflows in Production
Once you move past testing, a few practices will save you from common pitfalls:
- Use gpt-4o-mini for classification tasks. It's significantly cheaper and fast enough for routing, labeling, and yes/no decisions. Save gpt-4o for generation tasks where quality matters.
- Set temperature to 0 for deterministic tasks. Classification, extraction, and formatting workflows should produce consistent output. Temperature 0 gets you there.
- Always enable JSON mode for extraction. The OpenAI node's response format option eliminates malformed output that breaks downstream nodes.
- Add error handling on the OpenAI node. API rate limits and timeouts happen. Use n8n's Error Trigger or the node's built-in retry settings (under Settings → Retry on Fail) to handle transient failures gracefully.
- Monitor your token usage. Add a Code node after the OpenAI node to log $json.usage.total_tokens to a Google Sheet or database. This gives you cost visibility per workflow execution.
- Keep system prompts short and specific. Vague prompts produce vague output. Tell the model exactly what format, length, and constraints to follow.
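The token-logging Code node can be a few lines. A sketch, with the price per million tokens as an explicit assumption (rates change; check OpenAI's current pricing page):

```javascript
// Hypothetical cost-logging snippet for a Code node placed after the
// OpenAI node. The per-million-token price is an assumption, not a
// guaranteed rate.
function logUsage(response, pricePerMTokens = 0.15) {
  const tokens = response.usage?.total_tokens ?? 0;
  return {
    total_tokens: tokens,
    estimated_cost_usd: (tokens / 1_000_000) * pricePerMTokens,
    logged_at: new Date().toISOString(),
  };
}
```

Feed the returned object into a Google Sheets append or a database insert node, and you get a running per-execution cost ledger with no extra tooling.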
Tip: If you're processing sensitive data through OpenAI, check their data usage policies. OpenAI's API does not use your inputs for training by default, but hosting your n8n instance on n8nautomation.cloud means your workflow data, credentials, and execution logs stay on your dedicated server — not shared infrastructure.
These five workflows cover the most common OpenAI use cases in business automation: generating content, classifying input, extracting structured data, drafting responses, and repurposing content across channels. Each one can be built in under 30 minutes using n8n's visual editor, and once deployed, they run hands-free on a schedule or trigger. Start with the one closest to a pain point you're dealing with today, and expand from there.