Translate audio using AI
Translate audio from one language to another and generate spoken output in the target language with this AI workflow. The automation chains OpenAI's Whisper transcription and chat models with ElevenLabs' multilingual text-to-speech: spoken audio is transcribed to text, the text is translated, and the translation is synthesized back into speech. The included example first turns sample French text into audio, then runs that audio through the transcribe-translate-synthesize pipeline to produce English speech, so you can execute it end to end without supplying your own recording. Typical uses include a global business that needs to understand customer feedback in many languages, or content creators localizing their audio for international audiences. By automating transcription, translation, and speech synthesis in one pass, the workflow removes language barriers from audio content and saves significant time compared with manual translation and voiceover work.
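Outside of n8n, the same text-to-speech step can be sketched in plain Python. The payload below mirrors the request body and headers the "Generate French Audio" node in the workflow JSON sends to ElevenLabs; the `synthesize` function is an illustrative, untested network call, and the voice ID is whatever you copied from your ElevenLabs voice lab.

```python
import json
import urllib.request

# Endpoint used by the workflow's HTTP Request nodes
ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"


def build_tts_payload(text: str) -> dict:
    """Request body matching the workflow's 'Generate French Audio' node."""
    return {
        "text": text,
        "model_id": "eleven_multilingual_v2",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.5},
    }


def synthesize(voice_id: str, text: str, xi_api_key: str) -> bytes:
    """POST to ElevenLabs and return MP3 bytes (network call; sketch only)."""
    url = ELEVENLABS_TTS_URL.format(voice_id=voice_id) + "?optimize_streaming_latency=1"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_tts_payload(text)).encode(),
        headers={
            "xi-api-key": xi_api_key,      # same header-auth credential the workflow uses
            "accept": "audio/mpeg",
            "content-type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The `stability` and `similarity_boost` values of 0.5 come straight from the workflow JSON; tune them per voice.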
Workflow JSON
{"meta": {"instanceId": "cb484ba7b742928a2048bf8829668bed5b5ad9787579adea888f05980292a4a7"}, "nodes": [{"id": "aa0c62d1-2a5e-4336-8783-a8a21cb23374", "name": "OpenAI Chat Model", "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", "position": [1180, 760], "parameters": {"options": {"temperature": 0}}, "credentials": {"openAiApi": {"id": "", "name": "[Your openAiApi]"}}, "typeVersion": 1}, {"id": "0c7d21e6-5bf6-4927-ad23-008b22e2ffde", "name": "When clicking \"Execute Workflow\"", "type": "n8n-nodes-base.manualTrigger", "position": [280, 560], "parameters": {}, "typeVersion": 1}, {"id": "352de912-3a36-4bf2-b013-b46e0ace38e9", "name": "Generate French Audio", "type": "n8n-nodes-base.httpRequest", "position": [720, 560], "parameters": {"url": "=https://api.elevenlabs.io/v1/text-to-speech/{{ $json.voice_id }}", "method": "POST", "options": {}, "jsonBody": "={\"text\":\"{{ $json.text }}\",\"model_id\":\"eleven_multilingual_v2\",\"voice_settings\":{\"stability\":0.5,\"similarity_boost\":0.5}}", "sendBody": true, "sendQuery": true, "sendHeaders": true, "specifyBody": "json", "authentication": "genericCredentialType", "genericAuthType": "httpHeaderAuth", "queryParameters": {"parameters": [{"name": "optimize_streaming_latency", "value": "1"}]}, "headerParameters": {"parameters": [{"name": "accept", "value": "audio/mpeg"}]}}, "credentials": {"httpHeaderAuth": {"id": "", "name": "[Your httpHeaderAuth]"}}, "typeVersion": 4.1}, {"id": "0cde2e89-0669-41b4-8fe1-1a6aff14792f", "name": "Set ElevenLabs voice ID and text", "type": "n8n-nodes-base.set", "position": [500, 560], "parameters": {"fields": {"values": [{"name": "voice_id", "stringValue": "wl7sZxfTOitHVachQiUm"}, {"name": "text", "stringValue": "=Apr\u00e8s, on a fait la sieste, Camille a travaill\u00e9 pour French Today et j\u2019ai \u00e9tudi\u00e9 un peu, et puis Camille a propos\u00e9 de suivre une visite guid\u00e9e de l\u2019Abbaye de Beauport qui commen\u00e7ait \u00e0 17 heures. 
On a march\u00e9 environ vingt minutes, et je m\u2019arr\u00eatais souvent pour prendre des photos : la baie de Paimpol est si jolie ! Mais Camille m\u2019a dit : \u00ab D\u00e9p\u00eache-toi Sunny\u202f! La visite guid\u00e9e commence dans cinq minutes. \u00bb Donc, j\u2019ai boug\u00e9 mes fesses et on est arriv\u00e9es \u00e0 l\u2019abbaye"}]}, "options": {}}, "typeVersion": 3.2}, {"id": "38aa323e-a899-4018-afb9-4d4682ac8ff1", "name": "Translate Text to English", "type": "@n8n/n8n-nodes-langchain.chainLlm", "position": [1180, 560], "parameters": {"prompt": "=Translate to English:\n{{ $json.text }}"}, "typeVersion": 1.2}, {"id": "f0b7adad-fa0b-4764-96e0-0883bbcc02d6", "name": "Translate English text to speech", "type": "n8n-nodes-base.httpRequest", "position": [1540, 560], "parameters": {"url": "=https://api.elevenlabs.io/v1/text-to-speech/{{ $('Set ElevenLabs voice ID and text').item.json.voice_id }}", "method": "POST", "options": {}, "jsonBody": "={\"text\":\"{{ $json[\"text\"].replaceAll('\"', '\\\\\"').trim() }}\",\"model_id\":\"eleven_multilingual_v2\",\"voice_settings\":{\"stability\":0.5,\"similarity_boost\":0.5}}", "sendBody": true, "sendQuery": true, "sendHeaders": true, "specifyBody": "json", "authentication": "genericCredentialType", "genericAuthType": "httpHeaderAuth", "queryParameters": {"parameters": [{"name": "optimize_streaming_latency", "value": "1"}]}, "headerParameters": {"parameters": [{"name": "accept", "value": "audio/mpeg"}]}}, "credentials": {"httpHeaderAuth": {"id": "", "name": "[Your httpHeaderAuth]"}}, "typeVersion": 4.1}, {"id": "f8700266-5491-4ca7-b29a-3f5ec1e9b66f", "name": "Transcribe Audio", "type": "n8n-nodes-base.httpRequest", "position": [960, 560], "parameters": {"url": "https://api.openai.com/v1/audio/transcriptions", "method": "POST", "options": {}, "sendBody": true, "contentType": "multipart-form-data", "authentication": "predefinedCredentialType", "bodyParameters": {"parameters": [{"name": "file", "parameterType": 
"formBinaryData", "inputDataFieldName": "data"}, {"name": "model", "value": "whisper-1"}]}, "nodeCredentialType": "openAiApi"}, "credentials": {"openAiApi": {"id": "", "name": "[Your openAiApi]"}}, "typeVersion": 4.1}, {"id": "25630b45-3827-4ee0-a77e-c30cadefe999", "name": "Sticky Note2", "type": "n8n-nodes-base.stickyNote", "position": [449.2637232176971, 319.7947500318393], "parameters": {"color": 7, "width": 199.37543798209555, "height": 420.623805972039, "content": "1] In ElevenLabs, add a voice to your [voice lab](https://elevenlabs.io/voice-lab) and copy its ID. Open this node and add the ID there"}, "typeVersion": 1}, {"id": "a41d2622-4476-44c2-bac6-212be237aa4b", "name": "Sticky Note3", "type": "n8n-nodes-base.stickyNote", "position": [680, 320], "parameters": {"color": 7, "width": 192.21792012722693, "height": 418.3754668433847, "content": "2] Get your ElevenLabs API key (click your name in the bottom-left of [ElevenLabs](https://elevenlabs.io/voice-lab) and choose \u2018profile\u2019)\n\nIn this node, create a new header auth cred. 
Set the name to `xi-api-key` and the value to your API key"}, "typeVersion": 1}, {"id": "58143bb1-816f-4ff6-9cac-9ce7765e02be", "name": "Sticky Note4", "type": "n8n-nodes-base.stickyNote", "position": [920, 320], "parameters": {"color": 7, "width": 192.21792012722693, "height": 414.59045768149747, "content": "3] In the 'credential' field of this node, create a new OpenAI cred with your [OpenAI API key](https://platform.openai.com/api-keys)"}, "typeVersion": 1}, {"id": "bd2ef5d2-c27d-45e4-a66e-a73168f94087", "name": "Sticky Note", "type": "n8n-nodes-base.stickyNote", "position": [160, 273.1221160672591], "parameters": {"color": 7, "width": 230.39134868652621, "height": 233.3354221029769, "content": "### About\nThis workflow takes some French text, and translates it into spoken audio.\n\nIt then transcribes that audio back into text, translates it into English and generates an audio file of the English text"}, "typeVersion": 1}, {"id": "a1f207d4-dbed-4dfa-aad5-2b2f6e4e6271", "name": "Sticky Note1", "type": "n8n-nodes-base.stickyNote", "position": [440, 272.42998167622557], "parameters": {"color": 7, "width": 685.8541178336201, "height": 478.0993479050163, "content": "### Setup steps"}, "typeVersion": 1}], "pinData": {}, "connections": {"Transcribe Audio": {"main": [[{"node": "Translate Text to English", "type": "main", "index": 0}]]}, "OpenAI Chat Model": {"ai_languageModel": [[{"node": "Translate Text to English", "type": "ai_languageModel", "index": 0}]]}, "Generate French Audio": {"main": [[{"node": "Transcribe Audio", "type": "main", "index": 0}]]}, "Translate Text to English": {"main": [[{"node": "Translate English text to speech", "type": "main", "index": 0}]]}, "Set ElevenLabs voice ID and text": {"main": [[{"node": "Generate French Audio", "type": "main", "index": 0}]]}, "When clicking \"Execute Workflow\"": {"main": [[{"node": "Set ElevenLabs voice ID and text", "type": "main", "index": 0}]]}}}How to Import This Workflow
1. Copy the workflow JSON above using the Copy Workflow JSON button.
2. Open your n8n instance and go to Workflows.
3. Click Import from JSON and paste the copied workflow.
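You can also import programmatically through n8n's public REST API. The sketch below assumes an API key created in your instance's settings and that the create-workflow endpoint accepts only the `name`, `nodes`, `connections`, and `settings` fields, so extra keys such as `meta` and `pinData` from the exported JSON are stripped first; `import_workflow` is an illustrative network call.

```python
import json
import urllib.request

# Fields the public create-workflow endpoint accepts (assumption; extras like
# "meta" and "pinData" from an export are typically rejected)
ALLOWED_KEYS = {"name", "nodes", "connections", "settings"}


def prepare_for_import(raw: dict, name: str = "Translate audio using AI") -> dict:
    """Strip export-only fields and fill in required defaults."""
    wf = {k: v for k, v in raw.items() if k in ALLOWED_KEYS}
    wf.setdefault("name", name)
    wf.setdefault("settings", {})
    return wf


def import_workflow(base_url: str, api_key: str, raw: dict) -> dict:
    """POST the cleaned workflow to the n8n public API (network call; sketch)."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/workflows",
        data=json.dumps(prepare_for_import(raw)).encode(),
        headers={"X-N8N-API-KEY": api_key, "content-type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For a one-off import, pasting into the editor as described above is simpler; the API route is mainly useful for provisioning many instances.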
Don't have an n8n instance? Start your free trial at n8nautomation.cloud
Related Templates
Text to Speech (OpenAI)
Converts text into natural-sounding speech using OpenAI's Text-to-Speech API. It sends your input text to OpenAI and receives an audio file in return. This is useful for creating audio versions of articles, generating voiceovers for videos, or providing accessibility features for web content. Quickly transform written content into engaging audio.
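As a rough illustration of that template's core call, the sketch below targets OpenAI's speech endpoint with `urllib`; the `tts-1` model and `alloy` voice are example values, and `text_to_speech` is an untested network call.

```python
import json
import urllib.request

OPENAI_TTS_URL = "https://api.openai.com/v1/audio/speech"


def build_speech_request(text: str, voice: str = "alloy", model: str = "tts-1") -> dict:
    """Body for OpenAI's speech endpoint; voice and model are example values."""
    return {"model": model, "voice": voice, "input": text}


def text_to_speech(api_key: str, text: str) -> bytes:
    """Return audio bytes for the given text (network call; sketch only)."""
    req = urllib.request.Request(
        OPENAI_TTS_URL,
        data=json.dumps(build_speech_request(text)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "content-type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```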
LangChain - Example - Code Node Example
Explore a basic LangChain agent that answers questions using a custom tool. This workflow connects n8n's AI nodes and custom code nodes to OpenAI for language model interactions. It's useful for developers building custom AI assistants or researchers experimenting with agentic workflows. This saves development time by providing a ready-to-use example of a LangChain agent.
AI-Powered Candidate Shortlisting Automation for ERPNext
Automate AI-powered candidate shortlisting for ERPNext job applications. This workflow connects ERPNext, Google Gemini, WhatsApp, and Outlook to process resumes, evaluate candidates, and communicate outcomes. Recruiters and HR departments can use this to efficiently screen applicants, automatically reject unqualified candidates, and send acceptance notifications. It significantly reduces manual review time and streamlines the hiring process.