Enhance Customer Chat by Buffering Messages with Twilio and Redis

Elevate your customer chat experience by intelligently buffering messages and generating AI-powered responses with this n8n workflow. The automation connects Twilio for inbound messages, Redis for temporary message storage, and OpenAI for the AI agent, creating a more responsive and helpful communication channel. When a customer sends a message via Twilio, the workflow immediately pushes it onto a Redis message stack and then waits a few seconds so that any follow-up messages from the customer are buffered together. After the buffer period, the workflow retrieves the latest messages from Redis, consults an OpenAI-backed AI Agent with window buffer memory for context, and sends a single, concise AI-generated reply back to the customer via Twilio. This workflow is ideal for businesses looking to improve customer support, sales, or informational chatbots: it produces more coherent, context-aware responses, reduces the number of fragmented individual replies, and saves agents time by handling common queries efficiently.
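The buffer-then-check logic described above can be sketched outside n8n. Below is a minimal, illustrative Python sketch (not part of the workflow) that substitutes an in-memory dict for Redis; the function name and key format are hypothetical, chosen to mirror the workflow's node and key names:

```python
import time
from collections import defaultdict

# In-memory stand-in for the Redis lists the workflow uses
# (keys mirror the workflow's "chat-buffer:{From}" naming).
buffers = defaultdict(list)


def on_inbound_message(sender: str, body: str, wait_seconds: float = 5.0):
    """Debounce pattern: buffer a message, wait, and reply only if no
    follow-up message arrived during the wait window."""
    key = f"chat-buffer:{sender}"
    buffers[key].append(body)        # "Add to Messages Stack"
    time.sleep(wait_seconds)         # "Wait 5 seconds"
    # "Should Continue?": if a newer message was pushed while we slept,
    # abort and let the later execution reply for the whole buffer.
    if buffers[key][-1] != body:
        return None
    # Hand the buffered messages to the agent as one combined prompt.
    return "\n".join(buffers[key])
```

In a real deployment each Twilio webhook delivery would run this concurrently, which is exactly why the last-message comparison works as an abort check: only the execution holding the newest message survives the wait and replies.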

18 nodes · manual trigger · 194 views · 0 copies · Other
Integrations: Twilio, Redis, OpenAI

Workflow JSON

{"meta": {"instanceId": "26ba763460b97c249b82942b23b6384876dfeb9327513332e743c5f6219c2b8e"}, "nodes": [{"id": "d61d8ff3-532a-4b0d-a5a7-e02d2e79ddce", "name": "OpenAI Chat Model", "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", "position": [2660, 480], "parameters": {"options": {}}, "credentials": {"openAiApi": {"id": "", "name": "[Your openAiApi]"}}, "typeVersion": 1}, {"id": "b6d5c1cf-b4a1-4901-b001-0c375747ee63", "name": "No Operation, do nothing", "type": "n8n-nodes-base.noOp", "position": [1660, 520], "parameters": {}, "typeVersion": 1}, {"id": "f4e08e32-bb96-4b5d-852e-26ad6fec3c8c", "name": "Add to Messages Stack", "type": "n8n-nodes-base.redis", "position": [1340, 200], "parameters": {"list": "=chat-buffer:{{ $json.From }}", "tail": true, "operation": "push", "messageData": "={{ $json.Body }}"}, "credentials": {"redis": {"id": "", "name": "[Your redis]"}}, "typeVersion": 1}, {"id": "181ae99e-ebe7-4e99-b5a5-999acc249621", "name": "Should Continue?", "type": "n8n-nodes-base.if", "position": [1660, 360], "parameters": {"options": {}, "conditions": {"options": {"leftValue": "", "caseSensitive": true, "typeValidation": "strict"}, "combinator": "and", "conditions": [{"id": "ec39573f-f92a-4fe4-a832-0a137de8e7d0", "operator": {"type": "string", "operation": "equals"}, "leftValue": "={{ $('Get Latest Message Stack').item.json.messages.last() }}", "rightValue": "={{ $('Twilio Trigger').item.json.Body }}"}]}}, "typeVersion": 2}, {"id": "640c63ca-2798-48a9-8484-b834c1a36301", "name": "Window Buffer Memory", "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow", "position": [2780, 480], "parameters": {"sessionKey": "=chat-debouncer:{{ $('Twilio Trigger').item.json.From }}", "sessionIdType": "customKey"}, "typeVersion": 1.2}, {"id": "123c35c5-f7b2-4b4d-b220-0e5273e25115", "name": "Twilio Trigger", "type": "n8n-nodes-base.twilioTrigger", "position": [940, 360], "webhookId": "0ca3da0e-e4e1-4e94-8380-06207bf9b429", "parameters": {"updates": ["com.twilio.messaging.inbound-message.received"]}, "credentials": {"twilioApi": {"id": "", "name": "[Your twilioApi]"}}, "typeVersion": 1}, {"id": "f4e86455-7f4d-4401-8f61-a859be1433a9", "name": "Get Latest Message Stack", "type": "n8n-nodes-base.redis", "position": [1500, 360], "parameters": {"key": "=chat-buffer:{{ $json.From }}", "keyType": "list", "options": {}, "operation": "get", "propertyName": "messages"}, "credentials": {"redis": {"id": "", "name": "[Your redis]"}}, "typeVersion": 1, "alwaysOutputData": false}, {"id": "02f8e7f5-12b4-4a5a-9ce9-5f0558e447aa", "name": "Sticky Note", "type": "n8n-nodes-base.stickyNote", "position": [1232.162872321277, -50.203627749982275], "parameters": {"color": 7, "width": 632.8309394802918, "height": 766.7069233634998, "content": "## Step 2. Buffer Incoming Messages\n[Learn more about using Redis](https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.redis)\n\n* New messages are captured into a list.\n* After X seconds, we get a fresh copy of this list.\n* If the last message on the list is the same as the incoming message, then we know no new follow-on messages were sent within the last 5 seconds. Hence the user should be waiting and it is safe to reply.\n* But if the reverse is true, then we will abort the execution here."}, "typeVersion": 1}, {"id": "311c0d69-a735-4435-91b6-e80bf7d4c012", "name": "Send Reply", "type": "n8n-nodes-base.twilio", "position": [3000, 320], "parameters": {"to": "={{ $('Twilio Trigger').item.json.From }}", "from": "={{ $('Twilio Trigger').item.json.To }}", "message": "={{ $json.output }}", "options": {}}, "credentials": {"twilioApi": {"id": "", "name": "[Your twilioApi]"}}, "typeVersion": 1}, {"id": "c0e0cd08-66e3-4ca3-9441-8436c0d9e664", "name": "Wait 5 seconds", "type": "n8n-nodes-base.wait", "position": [1340, 360], "webhookId": "d486979c-8074-4ecb-958e-fcb24455086b", "parameters": {}, "typeVersion": 1.1}, {"id": "c7959fa2-69a5-46b4-8e67-1ef824860f4e", "name": "Get Chat History", "type": "@n8n/n8n-nodes-langchain.memoryManager", "position": [2000, 280], "parameters": {"options": {"groupMessages": true}}, "typeVersion": 1.1}, {"id": "55933c54-5546-4770-8b36-a31496163528", "name": "Window Buffer Memory1", "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow", "position": [2000, 420], "parameters": {"sessionKey": "=chat-debouncer:{{ $('Twilio Trigger').item.json.From }}", "sessionIdType": "customKey"}, "typeVersion": 1.2}, {"id": "459c0181-d239-4eec-88b6-c9603868d518", "name": "Sticky Note1", "type": "n8n-nodes-base.stickyNote", "position": [774.3250485705519, 198.07493876489747], "parameters": {"color": 7, "width": 431.1629802181097, "height": 357.49804533541777, "content": "## Step 1. Listen for Twilio Messages\n[Read more about Twilio Trigger](https://docs.n8n.io/integrations/builtin/trigger-nodes/n8n-nodes-base.twiliotrigger)\n\nIn this example, we'll use the sender's phone number as the session ID. This will be important in retrieving chat history."}, "typeVersion": 1}, {"id": "e06313a9-066a-4387-a36c-a6c6ff57d6f9", "name": "Sticky Note2", "type": "n8n-nodes-base.stickyNote", "position": [1900, 80], "parameters": {"color": 7, "width": 618.970917763344, "height": 501.77420646931444, "content": "## Step 3. Get Messages Since Last Reply\n[Read more about using Chat Memory](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorymanager)\n\nOnce conditions are met and we allow the agent to reply, we'll need to find the bot's last reply and work out the buffer of user messages since then. We can do this by using chat memory and comparing this to the latest message in our Redis message stack."}, "typeVersion": 1}, {"id": "601a71f6-c6f8-4b73-98c7-cfa11b1facaa", "name": "Get Messages Buffer", "type": "n8n-nodes-base.set", "position": [2320, 280], "parameters": {"options": {}, "assignments": {"assignments": [{"id": "01434acb-c224-46d2-99b0-7a81a2bb50c5", "name": "messages", "type": "string", "value": "={{\n$('Get Latest Message Stack').item.json.messages\n .slice(\n $('Get Latest Message Stack').item.json.messages.lastIndexOf(\n $('Get Chat History').item.json.messages.last().human\n || $('Twilio Trigger').item.json.chatInput\n ),\n $('Get Latest Message Stack').item.json.messages.length\n )\n .join('\\n')\n}}"}]}}, "typeVersion": 3.4}, {"id": "9e49f2de-89e6-4152-8e9c-ed47c5fc4654", "name": "Sticky Note3", "type": "n8n-nodes-base.stickyNote", "position": [2549, 120], "parameters": {"color": 7, "width": 670.2274698011594, "height": 522.5993538768389, "content": "## Step 4. Send Single Agent Reply For Many Messages\n[Learn more about using AI Agents](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent)\n\nFinally, our buffered messages are sent to the AI Agent that can formulate a single response for all. This could potentially improve the conversation experience if the chat interaction is naturally more rapid and spontaneous. A drawback however is that responses could feel much slower - tweak the wait threshold to suit your needs!"}, "typeVersion": 1}, {"id": "be13c74a-467c-4ab1-acca-44878c68dba4", "name": "Sticky Note4", "type": "n8n-nodes-base.stickyNote", "position": [380, 80], "parameters": {"width": 375.55385425077225, "height": 486.69228315530853, "content": "## Try It Out!\n### This workflow demonstrates a simple approach to stagger an AI Agent's reply if users often send in a sequence of partial messages and in short bursts.\n\n* Twilio webhook receives user's messages which are recorded in a message stack powered by Redis.\n* The execution is immediately paused for 5 seconds and then another check is done against the message stack for the latest message.\n* This check lets us know if the user is sending more messages or if they are waiting for a reply.\n* The execution is aborted if the latest message on the stack differs from the incoming message and continues if they are the same.\n* For the latter, the agent receives buffered messages and is able to respond to all in a single reply."}, "typeVersion": 1}, {"id": "334d38e1-ec16-46f2-a57d-bf531adb8d3d", "name": "AI Agent", "type": "@n8n/n8n-nodes-langchain.agent", "position": [2660, 320], "parameters": {"text": "={{ $json.messages }}", "agent": "conversationalAgent", "options": {}, "promptType": "define"}, "typeVersion": 1.6}], "pinData": {}, "connections": {"AI Agent": {"main": [[{"node": "Send Reply", "type": "main", "index": 0}]]}, "Twilio Trigger": {"main": [[{"node": "Add to Messages Stack", "type": "main", "index": 0}, {"node": "Wait 5 seconds", "type": "main", "index": 0}]]}, "Wait 5 seconds": {"main": [[{"node": "Get Latest Message Stack", "type": "main", "index": 0}]]}, "Get Chat History": {"main": [[{"node": "Get Messages Buffer", "type": "main", "index": 0}]]}, "Should Continue?": {"main": [[{"node": "Get Chat History", "type": "main", "index": 0}], [{"node": "No Operation, do nothing", "type": "main", "index": 0}]]}, "OpenAI Chat Model": {"ai_languageModel": [[{"node": "AI Agent", "type": "ai_languageModel", "index": 0}]]}, "Get Messages Buffer": {"main": [[{"node": "AI Agent", "type": "main", "index": 0}]]}, "Window Buffer Memory": {"ai_memory": [[{"node": "AI Agent", "type": "ai_memory", "index": 0}]]}, "Window Buffer Memory1": {"ai_memory": [[{"node": "Get Chat History", "type": "ai_memory", "index": 0}]]}, "Get Latest Message Stack": {"main": [[{"node": "Should Continue?", "type": "main", "index": 0}]]}}}
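The "Get Messages Buffer" Set node contains the subtlest expression in the JSON above: it slices the Redis message stack from the last human message already recorded in chat memory (i.e., the last one the bot replied to) through to the end, then joins the slice into a single prompt. Here is a hedged Python rendering of that JavaScript expression; the function names are illustrative, not part of the workflow:

```python
def last_index_of(items, value):
    """Python equivalent of JavaScript's Array.prototype.lastIndexOf:
    index of the last occurrence of value, or -1 if absent."""
    for i in range(len(items) - 1, -1, -1):
        if items[i] == value:
            return i
    return -1


def messages_since_last_reply(stack, last_answered):
    """Mirror of the node's expression: everything on the stack from the
    last-answered message onward, joined with newlines. As with the
    JavaScript slice(-1), a missing match falls back to just the final
    stack entry."""
    return "\n".join(stack[last_index_of(stack, last_answered):])
```

For example, with a stack of `["hi", "are you there?", "I need help", "with my order"]` and `"I need help"` as the last-answered message, the agent would receive the last two messages as one newline-joined prompt.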

How to Import This Workflow

  1. Copy the workflow JSON above using the Copy Workflow JSON button.
  2. Open your n8n instance and go to Workflows.
  3. Click Import from JSON and paste the copied workflow.

Don't have an n8n instance? Start your free trial at n8nautomation.cloud

Related Templates

Visualize your SQL Agent queries with OpenAI and Quickchart.io

Visualize your SQL Agent queries with OpenAI and Quickchart.io instantly transforms complex SQL Agent query results into insightful charts and graphs through a simple chat interface. The workflow uses an OpenAI Chat Model to interpret chat messages, a Text Classifier to decide whether a chart is needed, and an HTTP Request node to generate a structured chart definition for Quickchart.io. It automates the entire process from receiving a chat message to extracting the user's question, passing it to an AI Agent, and conditionally generating and displaying a chart. This saves significant time for data analysts, developers, and business intelligence professionals who frequently need to visualize SQL data, eliminating the manual steps of data extraction, chart selection, and configuration so users can gain visual insights without switching between tools or learning advanced charting skills.

19 nodes

Integrating AI with Open-Meteo API for Enhanced Weather Forecasting

Enhance your weather forecasting capabilities by combining artificial intelligence with real-time meteorological data. This workflow automates detailed weather forecasts from user queries, using OpenAI to interpret requests and the Open-Meteo API to retrieve accurate weather information. A chat interface, triggered by a "When chat message received" node, connects directly to a Generic AI Tool Agent. The agent uses two custom tools: one converts a city name into geographical coordinates via an HTTP request, and the other fetches the forecast from Open-Meteo using those coordinates, while the Chat Memory Buffer maintains conversational context. This setup suits developers building intelligent assistants, businesses needing dynamic weather insights for logistics or event planning, and anyone who wants a conversational interface for weather information instead of manually looking up forecasts, reducing the time and effort needed to access and understand weather patterns.

12 nodes

Qualify replies from Pipedrive persons with AI

Automate the qualification of inbound email replies from Pipedrive contacts using artificial intelligence. This workflow connects Gmail and OpenAI to your Pipedrive CRM to streamline lead nurturing. When a new email arrives in either of your specified Gmail inboxes (Email box 1 or Email box 2), the workflow looks up the sender as a person in Pipedrive, retrieves their full person details, and sends the email content to OpenAI for an AI-powered assessment of interest (Is interested?). If OpenAI deems the person interested, a new deal is automatically created in Pipedrive, ensuring hot leads are acted upon immediately. It is ideal for sales teams, marketers, and business development professionals who receive a high volume of email replies and need to quickly identify and prioritize genuinely interested prospects, saving significant manual review time and accelerating sales cycles.

11 nodes

Ready to automate with n8n?

Get affordable managed n8n hosting with 24/7 support.