Ready-to-Use n8n Workflow Templates
Browse 307+ free automation templates. Import directly into your n8n instance.
Text to Speech (OpenAI)
Converts text into natural-sounding speech using OpenAI's Text-to-Speech API. It sends your input text to OpenAI and receives an audio file in return. This is useful for creating audio versions of articles, generating voiceovers for videos, or providing accessibility features for web content. Quickly transform written content into engaging audio.
Auto-create TikTok videos with VEED.io AI avatars, ElevenLabs & GPT-4
Automate the creation and distribution of trending TikTok videos using AI avatars. This workflow connects Telegram, Perplexity, OpenAI, ElevenLabs, VEED.io, and BLOTATO to generate scripts, synthesize voice, create video, and publish across multiple social platforms. Content creators and marketers can rapidly produce engaging short-form video content without manual editing.
Chatbot model
Automate a chatbot that responds to user queries using a knowledge base and external APIs. It connects OpenAI for AI processing, MySQL and PostgreSQL for data retrieval and chat memory, and external APIs for additional information. This workflow is ideal for customer support teams needing automated responses, developers building interactive AI agents, or businesses wanting to provide instant information to users.
LangChain - Example - Code Node Example
Explore a basic LangChain agent that answers questions using a custom tool. This workflow connects n8n's AI nodes and custom code nodes to OpenAI for language model interactions. It's useful for developers building custom AI assistants or researchers experimenting with agentic workflows. This saves development time by providing a ready-to-use example of a LangChain agent.
AI-Powered Candidate Shortlisting Automation for ERPNext
Automate AI-powered candidate shortlisting for ERPNext job applications. This workflow connects ERPNext, Google Gemini, WhatsApp, and Outlook to process resumes, evaluate candidates, and communicate outcomes. Recruiters and HR departments can use this to efficiently screen applicants, automatically reject unqualified candidates, and send acceptance notifications. It significantly reduces manual review time and streamlines the hiring process.
AI Customer feedback sentiment analysis
Automate the analysis of customer feedback by leveraging AI to understand sentiment and efficiently organize insights. This powerful workflow connects a Form Trigger to capture incoming customer feedback, then sends that feedback to OpenAI for classification, determining whether the sentiment is positive, negative, or neutral. The classified sentiment is then merged with the original feedback content and automatically added as a new row to a designated Google Sheet, creating a centralized, AI-powered repository of customer opinions. Businesses can use this to quickly identify customer pain points, track product satisfaction over time, or prioritize feature development based on user sentiment, eliminating the manual effort of reading and categorizing every piece of feedback. This saves significant time and resources for customer service teams, product managers, and marketing departments, allowing them to focus on strategic initiatives rather than repetitive data entry and analysis.
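The classify-then-merge step can be sketched as two small functions: one that normalizes the model's free-text reply onto the three expected labels, and one that merges the label with the original feedback into a row ready for Google Sheets. This is an illustrative sketch, not the template's actual node code; the function and field names are assumptions.

```javascript
// Sketch of the step between the OpenAI classifier and the Google Sheets
// append: map the model's free-text reply onto one of the three labels
// the sheet expects. Names here are illustrative.
function normalizeSentiment(modelReply) {
  const text = String(modelReply).toLowerCase();
  if (text.includes('positive')) return 'positive';
  if (text.includes('negative')) return 'negative';
  if (text.includes('neutral')) return 'neutral';
  return 'unclassified'; // fall back rather than guessing
}

// Merge the label with the original feedback, ready for a sheet row.
function toSheetRow(feedback, modelReply) {
  return {
    feedback,
    sentiment: normalizeSentiment(modelReply),
    receivedAt: new Date().toISOString(),
  };
}
```

Normalizing the reply before writing the row keeps the sheet clean even when the model answers with a full sentence instead of a single word.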
Telegram AI multi-format chatbot
Automate a multi-format AI chatbot on Telegram, allowing users to interact via text or voice. This workflow connects Telegram for user input and replies with OpenAI for AI processing and voice-to-text conversion. It's ideal for customer support, content generation, or interactive learning platforms. This saves significant time and effort in managing diverse user interactions.
Organise Your Local File Directories With AI
Organize your local file directories effortlessly with this AI-powered workflow. This powerful n8n automation connects to your local file system using the Local File Trigger and Get Files and Folders nodes, then leverages the intelligence of Mistral Cloud Chat Model and the AI File Manager to suggest and execute file organization strategies. It parses structured output from the AI to identify optimal file placements, then uses the Move Files into Folders node to automatically relocate files based on these AI-driven suggestions. This workflow is ideal for developers, content creators, or anyone with a cluttered local drive who needs an intelligent assistant to categorize documents, images, or project files, saving significant time and effort previously spent on manual sorting and reducing the frustration of searching for misplaced items.
Generating Image Embeddings via Textual Summarisation
Generates image embeddings by combining visual and textual analysis. Images from Google Drive are resized, their color information is extracted, and OpenAI generates descriptive keywords; the combined textual summary is then embedded with OpenAI. This is useful for content creators organizing large image libraries or e-commerce platforms improving product search. It streamlines image categorization and retrieval.
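The "textual summarisation" step boils down to combining the extracted color information and the AI-generated keywords into one text blob that an embedding endpoint can consume. A minimal sketch, assuming an illustrative summary format (the template's exact wording may differ):

```javascript
// Sketch: build the text that gets sent to the embeddings endpoint.
// The input shape and summary format are illustrative assumptions.
function buildImageSummary({ fileName, dominantColors, keywords }) {
  const colors = dominantColors.join(', ');
  const tags = keywords.join(', ');
  return `Image "${fileName}". Dominant colors: ${colors}. Keywords: ${tags}.`;
}
```

The resulting string, rather than the raw pixels, is what gets embedded, which is why a text embedding model can serve image search here.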
Create children's AI story videos from drawings and auto-publish to YouTube with Blotato
💥 From Drawing to Story: Auto-Publish AI Video to YouTube with Blotato

Overview

Transform a hand-drawn character sketch into a fully animated, narrated video story, automatically. This 3-part pipeline uses Claude AI, image generation, and video synthesis to go from a simple drawing to a publish-ready video, with no manual editing required.

Perfect for: indie creators, educators, storytellers, and anyone who wants to bring hand-drawn characters to life at scale.

How It Works

Part 1 — From Drawing to Story: Bringing Characters to Life
- A form submission triggers the workflow with an uploaded drawing
- The image is analyzed by Claude AI to extract characters and traits
- Character images are generated via Nano Banana (image generation API)
- A full story is written by Claude AI, split into scenes, and passed to Part 2

Part 2 — From Characters to Scenes: Rendering the Visual Story
- Character images are downloaded and converted to Base64 references
- Scene images are generated using Nano Banana with character consistency
- Scene image URLs are mapped and the video pipeline is triggered

Part 3 — From Scenes to Screen: Video, Narration & Final Render
- Video prompts and narration context are generated by Claude AI
- Videos are generated via AtlasCloud (Kling Pro 3.0) with a polling loop
- Narration audio is created with ElevenLabs and uploaded
- Shotstack assembles the final video with audio sync
- The final video is published to YouTube (and optionally TikTok)

> ⚠️ Important — Workflow Structure
>
> This template is split into 3 separate workflows.
> Each part must be imported and deployed in its own workflow in n8n.
> 📺 Watch the step-by-step tutorial to set everything up correctly: @youtube

Requirements

Credentials needed:
- Blotato API credentials (YouTube/TikTok publishing)
- AtlasCloud API (Kling Pro 3.0 video generation)
- Anthropic API key (Claude AI for story & prompts)
- ElevenLabs API key (narration audio)
- Shotstack API key (video assembly)
- Nano Banana API key (image generation)

Setup steps:
1. Configure credentials for each service above in n8n
2. Set up a form trigger with a file upload field for the drawing
3. Deploy the 3 workflows in order and connect them via webhooks
4. Run a test submission with a simple sketch to validate the full pipeline

🎥 Watch This Tutorial

👋 Need help or want to customize this? 📩 Contact: LinkedIn · 📺 YouTube: @DRFIRASS · 🚀 Workshops: Mes Ateliers n8n
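Part 3 waits on AtlasCloud's render with a polling loop. A minimal sketch of such a loop, where `checkStatus` stands in for the real HTTP status call and the `state` values are illustrative assumptions, not AtlasCloud's documented API:

```javascript
// Generic polling loop of the kind Part 3 uses while waiting for the
// video render to finish. checkStatus is a placeholder for the real
// HTTP call; the "state" field and its values are hypothetical.
async function pollUntilDone(checkStatus, { intervalMs = 5000, maxAttempts = 60 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status.state === 'completed') return status; // render finished
    if (status.state === 'failed') throw new Error('render failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('polling timed out');
}
```

Capping the attempts matters in n8n: without a timeout, a stuck render would hold the workflow execution open indefinitely.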
Use skills in the n8n Agent node
This template gives you a framework to use skills in any n8n agent. You can use it as a starting point and add any other tools or patterns needed for your use case.

What are "skills"?

Skills are a context management standard created by Anthropic for use in Claude Code. Instead of one huge system prompt, skills split the instructions into lots of small, structured files that each tell an agent how to do a specific kind of task. Rather than stuffing a massive prompt full of rules, the agent:
- finds the relevant skill file
- reads it and follows the steps inside

It's a simple pattern that makes managing system prompts for general-purpose agents much more straightforward. See an example of a skills repo here.

What this workflow does
- Responds to messages in n8n Chat (or Chat Hub)
- Builds an "available skills" index from one or more GitHub repos
- Lets the agent browse folders and fetch skill files (Markdown) as needed
- Uses the skill content to guide how it completes tasks

How it (roughly) works
1. A chat message comes in.
2. The workflow lists directories in the configured skills repos (root + if present), filters out noise, and merges everything into one directory map.
3. That directory map gets injected into the agent's system prompt so it knows what skill files exist.
4. When it needs instructions, it uses tools:
   - List Files by Path Name to explore folders
   - Get a File From GitHub to pull the skill file as raw text (no Base64)

Same "skills" pattern as the CLI tools: the flow is find a skill → open it → follow the steps, just as it works in the CLI tools, but running inside n8n, so you don't need to download or install anything locally.
How to set it up
- (Required) Add your GitHub credentials to each node that needs them
- (Required) Add your OpenRouter credentials to the chat node, or replace it with the provider of your choosing
- (Optional) Add more repos (any skills GitHub repo works as long as your credentials have access to it, such as any public repo)
- (Optional) Add more tools and turn it into whatever agent you actually need
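Step 2 above, which filters out noise and merges everything into one directory map, could look like this sketch. The item shape follows GitHub's contents API (`name`, `type`, `path`); the noise-filtering rules and function names are illustrative assumptions, not the template's actual code:

```javascript
// Sketch of building the "available skills" index: take directory
// listings from one or more GitHub repos, drop non-directories and
// dotfolders, and merge everything into one map for the system prompt.
// Item shape assumed from GitHub's contents API: { name, type, path }.
function buildSkillsIndex(repoListings) {
  const index = {};
  for (const { repo, items } of repoListings) {
    index[repo] = items
      .filter((item) => item.type === 'dir' && !item.name.startsWith('.'))
      .map((item) => item.path);
  }
  return index;
}

// Render the map as the text injected into the agent's system prompt.
function renderIndex(index) {
  return Object.entries(index)
    .map(([repo, paths]) => `${repo}:\n${paths.map((p) => `  - ${p}`).join('\n')}`)
    .join('\n');
}
```

The agent only ever sees this compact map up front; the full Markdown of a skill is fetched on demand, which is the whole point of the pattern.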
Scrape, search and browse the web with a Firecrawl AI agent webhook
Turn any prompt into structured web data. Send a POST request with a natural language prompt and an optional JSON schema, and get back clean, structured results scraped from the web by an AI agent powered by Firecrawl.

Use Cases
- Data Enrichment: feed company names or URLs from your CRM and get back structured firmographic data (industry, funding, team size, tech stack).
- Lead Generation: ask the agent to find pricing, contact pages, or product details for a list of competitors.
- Market Research: extract structured pricing plans, feature comparisons, or product catalogs from any website.
- Content Aggregation: pull structured news, events, or job postings from across the web on a schedule.
- Sales Intelligence: enrich prospect lists with company info, recent news, or tech stack details before outreach.

How It Works
1. Receive Scrape Request receives a POST request with a prompt and an optional output schema.
2. Validate Output Schema checks the schema. If none is provided, it falls back to a permissive default. If the schema is malformed, it returns a clear error via Return Schema Error.
3. Research & Extract Web Data takes the prompt and uses the full Firecrawl toolkit to research the web:
   - Search: finds relevant pages and sources across the web.
   - Scrape: extracts clean, structured content from any URL.
   - Interact (interactContext, interact, interactStop): lets the agent interact with scraped pages in a live session. After scraping a page, the agent can click buttons, fill forms, navigate dynamic content, and extract data that static scraping cannot reach, all without managing sessions manually.
   This combination gives the AI agent complete web navigation capabilities: it can discover sources, read pages, and interact with dynamic content autonomously.
4. Format Response to Schema (Structured Output Parser) formats the agent's response to match the provided (or default) schema.
5. Return Structured Results sends the structured JSON back to the caller.
Setup Requirements
- Firecrawl API Key: sign up at firecrawl.dev and grab your API key, then connect it in the Firecrawl credential nodes.
- LLM Provider: configure your Primary Chat Model and Fallback Chat Model nodes (e.g., OpenRouter, OpenAI, Anthropic). The template uses two model nodes for reliability, plus a separate Parser Chat Model for the output parser.
- n8n Instance: self-hosted or cloud. Make sure the webhook node is set to accept POST requests.

API Reference

Request body:

| Field | Type | Required | Description |
|--------|--------|----------|-------------|
| prompt | string | Yes | Natural language instruction for the agent |
| schema | object | No | JSON Schema defining the desired output structure |

Response: a JSON object matching the provided schema, or a flexible object if no schema was given.

Testing Examples
- Basic Request (No Schema): the agent decides the output structure on its own. Expected output: a JSON object with whatever structure the agent finds most appropriate for the data; since no schema was provided, the internal permissive default is used.
- Request With a Custom Schema: you define exactly the shape of data you want back.
- Invalid Schema (String Instead of Object), Invalid Schema (Array Instead of Object), Invalid Schema (Missing Property), Invalid Schema (Invalid Value): each of these returns the same schema error response.

Workflow Architecture

Schema Validation Logic: the Validate Output Schema node runs this validation before passing data to the agent:
- If the schema is missing or null, the default permissive schema is used.
- If present, it must be a JSON object (not a string, array, or primitive).
- It must contain the required property set to one of the allowed values.
- If validation fails, the workflow returns an error response with a helpful message and an example schema.
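The validation rules above can be sketched as a single function. The permissive default and the list of allowed `type` values are assumptions (the standard JSON Schema types); the template's exact values may differ:

```javascript
// Sketch of the Validate Output Schema logic. DEFAULT_SCHEMA and
// ALLOWED_TYPES are assumptions, not copied from the template.
const DEFAULT_SCHEMA = { type: 'object', additionalProperties: true };
const ALLOWED_TYPES = ['object', 'array', 'string', 'number', 'boolean'];

function validateOutputSchema(schema) {
  if (schema === undefined || schema === null) {
    return { ok: true, schema: DEFAULT_SCHEMA }; // fall back to permissive default
  }
  if (typeof schema !== 'object' || Array.isArray(schema)) {
    return { ok: false, error: 'output schema must be a JSON object' };
  }
  if (!ALLOWED_TYPES.includes(schema.type)) {
    return { ok: false, error: `schema "type" must be one of: ${ALLOWED_TYPES.join(', ')}` };
  }
  return { ok: true, schema };
}
```

Validating before the agent runs is the cheap place to fail: a malformed schema is rejected in milliseconds instead of after a multi-page scrape.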
Notes
- The Format Response to Schema node (Structured Output Parser) requires the schema to be passed as a JSON string; an expression handles this conversion.
- The agent has access to Firecrawl's full toolkit: search, scrape, and interact. With all three connected, the agent has complete web navigation powers: it can discover sources via search, extract content via scrape, and interact with dynamic JavaScript-heavy pages via interact.
- The interact tools let the agent scrape a page first and then continue working with it in a live session, clicking buttons, filling forms, and navigating deeper, all without manual session management.
- The agent autonomously decides which tools to use based on the prompt.
- Response times vary depending on the complexity of the prompt and how many pages the agent needs to visit. Simple lookups take a few seconds; deep research can take longer.
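A caller might build the POST request like this sketch. The webhook URL placeholder and the body field names (`prompt`, `output_schema`) are hypothetical; check your deployed workflow for the actual values:

```javascript
// Hypothetical caller for the scrape webhook. The URL and the field
// names "prompt" / "output_schema" are illustrative assumptions.
function buildScrapeRequest(prompt, outputSchema) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, output_schema: outputSchema }),
  };
}

// Example: ask for structured pricing data.
const request = buildScrapeRequest('Find the pricing tiers on example.com', {
  type: 'object',
  properties: { tiers: { type: 'array', items: { type: 'string' } } },
});
// The actual call would look like:
// await fetch('https://<your-n8n-host>/webhook/<path>', request)
```

Omit the second argument entirely to exercise the no-schema path and let the agent choose the output structure.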
Showing 12 of 307 templates