ETL pipeline

Automate your data extraction, transformation, and loading with this ETL pipeline, designed to process and analyze social data from end to end. The workflow starts on a schedule (a Cron trigger set to 6 AM in the sample JSON), fetches tweets from Twitter/X matching a configured search term (#OnThisDay in the sample), and stores them in MongoDB. Each tweet's text is then sent to Google Cloud Natural Language for sentiment analysis, and a Set node shapes the resulting score and magnitude before they are stored in PostgreSQL. Finally, an IF node checks the sentiment score and, when it passes the threshold, posts an alert to a Slack channel, ensuring timely notifications for notable tweets. This automation is ideal for marketing teams monitoring brand sentiment, researchers analyzing public opinion, or businesses tracking competitor activity. By automating ingestion, enrichment, and storage, it cuts the time spent on data preparation, letting teams focus on analysis and strategic decisions while keeping data consistent and accessible.
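
Outside of n8n, the same control flow can be sketched in a few lines of Python. Every helper below is a hypothetical stub standing in for one configured node (Twitter/X, MongoDB, Google Cloud Natural Language, PostgreSQL, Slack), not a real SDK; only the branching and data handoffs mirror the template. Note that the workflow's IF node omits an explicit comparison value, which n8n treats as 0, so the alert fires only for positive sentiment.

# Minimal sketch of the node chain; all helpers are hypothetical stubs.

def fetch_tweets(query: str, limit: int) -> list[dict]:
    """Stub for the Twitter node (search operation)."""
    return [{"text": f"sample tweet about {query}"} for _ in range(limit)]

def analyze_sentiment(text: str) -> dict:
    """Stub for the Google Cloud Natural Language node."""
    return {"score": 0.4, "magnitude": 0.8}  # placeholder values

def mongo_insert(collection: str, doc: dict) -> None: ...
def postgres_insert(table: str, row: dict) -> None: ...
def slack_post(channel: str, message: str) -> None: ...

def run_pipeline() -> None:
    for tweet in fetch_tweets("#OnThisDay", limit=3):   # Cron -> Twitter
        mongo_insert("tweets", {"text": tweet["text"]})  # MongoDB
        s = analyze_sentiment(tweet["text"])             # Google Cloud NL
        row = {"text": tweet["text"],                    # Set node
               "score": s["score"], "magnitude": s["magnitude"]}
        postgres_insert("tweets", row)                   # Postgres
        if row["score"] > 0:                             # IF node (value2 defaults to 0)
            slack_post("tweets", f"NEW TWEET score={row['score']} "
                                 f"magnitude={row['magnitude']}\n{row['text']}")

if __name__ == "__main__":
    run_pipeline()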

9 nodes · Schedule trigger · Category: Data
Integrations: PostgreSQL, MongoDB, Twitter/X, Slack

Workflow JSON

{"id": "6", "name": "ETL pipeline", "nodes": [{"name": "Twitter", "type": "n8n-nodes-base.twitter", "position": [300, 300], "parameters": {"limit": 3, "operation": "search", "searchText": "=#OnThisDay", "additionalFields": {}}, "credentials": {"twitterOAuth1Api": "twitter_api"}, "typeVersion": 1}, {"name": "Postgres", "type": "n8n-nodes-base.postgres", "position": [1100, 300], "parameters": {"table": "tweets", "columns": "text, score, magnitude", "returnFields": "=*"}, "credentials": {"postgres": "postgres"}, "typeVersion": 1}, {"name": "MongoDB", "type": "n8n-nodes-base.mongoDb", "position": [500, 300], "parameters": {"fields": "text", "options": {}, "operation": "insert", "collection": "tweets"}, "credentials": {"mongoDb": "mongodb"}, "typeVersion": 1}, {"name": "Slack", "type": "n8n-nodes-base.slack", "position": [1500, 200], "parameters": {"text": "=\ud83d\udc26 NEW TWEET with sentiment score {{$json[\"score\"]}} and magnitude {{$json[\"magnitude\"]}} \u2b07\ufe0f\n{{$json[\"text\"]}}", "channel": "tweets", "attachments": [], "otherOptions": {}}, "credentials": {"slackApi": "slack"}, "typeVersion": 1}, {"name": "IF", "type": "n8n-nodes-base.if", "position": [1300, 300], "parameters": {"conditions": {"number": [{"value1": "={{$json[\"score\"]}}", "operation": "larger"}]}}, "typeVersion": 1}, {"name": "NoOp", "type": "n8n-nodes-base.noOp", "position": [1500, 400], "parameters": {}, "typeVersion": 1}, {"name": "Google Cloud Natural Language", "type": "n8n-nodes-base.googleCloudNaturalLanguage", "position": [700, 300], "parameters": {"content": "={{$node[\"MongoDB\"].json[\"text\"]}}", "options": {}}, "credentials": {"googleCloudNaturalLanguageOAuth2Api": "google_nlp"}, "typeVersion": 1}, {"name": "Set", "type": "n8n-nodes-base.set", "position": [900, 300], "parameters": {"values": {"number": [{"name": "score", "value": "={{$json[\"documentSentiment\"][\"score\"]}}"}, {"name": "magnitude", "value": "={{$json[\"documentSentiment\"][\"magnitude\"]}}"}], "string": [{"name": "text", "value": "={{$node[\"Twitter\"].json[\"text\"]}}"}]}, "options": {}}, "typeVersion": 1}, {"name": "Cron", "type": "n8n-nodes-base.cron", "position": [100, 300], "parameters": {"triggerTimes": {"item": [{"hour": 6}]}}, "typeVersion": 1}], "active": false, "settings": {}, "connections": {"IF": {"main": [[{"node": "Slack", "type": "main", "index": 0}], [{"node": "NoOp", "type": "main", "index": 0}]]}, "Set": {"main": [[{"node": "Postgres", "type": "main", "index": 0}]]}, "Cron": {"main": [[{"node": "Twitter", "type": "main", "index": 0}]]}, "MongoDB": {"main": [[{"node": "Google Cloud Natural Language", "type": "main", "index": 0}]]}, "Twitter": {"main": [[{"node": "MongoDB", "type": "main", "index": 0}]]}, "Postgres": {"main": [[{"node": "IF", "type": "main", "index": 0}]]}, "Google Cloud Natural Language": {"main": [[{"node": "Set", "type": "main", "index": 0}]]}}}

How to Import This Workflow

  1. Copy the workflow JSON above using the Copy Workflow JSON button.
  2. Open your n8n instance and go to Workflows.
  3. Click Import from JSON and paste the copied workflow.

Don't have an n8n instance? Start your free trial at n8nautomation.cloud

Related Templates

Ask questions about a PDF using AI

Effortlessly transform your Google Drive PDFs into an interactive knowledge base with this powerful AI workflow. This n8n automation connects your Google Drive files, processes them with OpenAI embeddings, and stores them in a Pinecone vector database, allowing you to ask questions and receive intelligent answers directly from your document content. When a new PDF is uploaded to Google Drive, the workflow automatically extracts its text, splits it into manageable chunks using the Recursive Character Text Splitter, generates embeddings via OpenAI, and then inserts this structured data into Pinecone for efficient retrieval. Later, by clicking the 'Chat' button, you can engage in a natural language conversation with your document, powered by the OpenAI Chat Model and the Question and Answer Chain, which retrieves relevant information from Pinecone. This is ideal for researchers needing to quickly extract insights from large reports, legal professionals analyzing contracts, or businesses creating searchable knowledge bases from their documentation, saving countless hours of manual review and information searching.
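
The chunk-and-embed step this template describes can be approximated as follows. This is a hedged sketch: the splitter parameters and the embedding model name are assumptions, not values taken from the workflow, and the pdf_text placeholder stands in for the text the workflow extracts from Google Drive.

from langchain_text_splitters import RecursiveCharacterTextSplitter
from openai import OpenAI

pdf_text = "...full text extracted from the Google Drive PDF..."  # placeholder

# Split the document into overlapping chunks for retrieval.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(pdf_text)

# Embed each chunk; model name is an assumption, not from the workflow.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.embeddings.create(model="text-embedding-3-small", input=chunks)
vectors = [item.embedding for item in response.data]  # ready for Pinecone upsert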

16 nodes

Supabase Insertion & Upsertion & Retrieval

Efficiently manage and query your data with the Supabase Insertion & Upsertion & Retrieval workflow, a powerful solution for integrating document management with intelligent data processing. This 21-node workflow, triggered manually, connects Google Drive, Supabase, and OpenAI to automate the ingestion, updating, and retrieval of information. It allows you to upload documents from Google Drive, which are then processed by a Recursive Character Text Splitter and embedded using OpenAI Embeddings for insertion or upsertion into your Supabase vector store via the Insert Documents and Update Documents nodes. When a chat message is received, the workflow leverages OpenAI's Chat Model and a Question and Answer Chain to retrieve relevant information from Supabase using the Retrieve by Query node, providing intelligent responses based on your stored documents. This workflow is ideal for businesses and individuals who need to maintain an up-to-date knowledge base, power AI-driven chatbots with proprietary information, or automate the synchronization of document content with a searchable database, significantly reducing manual data entry and improving information accessibility.
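
For the upsert side, a minimal sketch with the supabase-py client might look like the snippet below. The table name and column layout ("documents" with content and embedding columns) are assumptions about a typical vector store schema, not details confirmed by the workflow; match them to the schema your Supabase project actually uses.

from supabase import create_client

supabase = create_client("https://YOUR-PROJECT.supabase.co", "YOUR_SERVICE_ROLE_KEY")

# Upsert one chunk: insert if the id is new, update the row otherwise.
supabase.table("documents").upsert(
    {"id": 1, "content": "chunk text", "embedding": [0.1, 0.2, 0.3]}
).execute()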

21 nodes

Chat with PostgreSQL Database

Empower your users to interact with your PostgreSQL database using natural language by automating the process of querying and retrieving information. This workflow connects a chat interface, triggered by a new message, to an AI Agent that leverages OpenAI's powerful language model to understand user requests. The AI Agent intelligently utilizes a suite of PostgreSQL tools, including "Get Table Definition," "Execute SQL Query," and "Get DB Schema and Tables List," to dynamically fetch database schema, generate appropriate SQL queries, and execute them against your database. Chat history is maintained using an AI memory buffer, allowing for contextual conversations. This solution is ideal for support teams needing quick data lookups, business analysts exploring data without writing SQL, or developers building interactive data dashboards. It eliminates the need for manual SQL query writing, speeds up data access, and reduces the training burden for non-technical users, saving significant time and resources while improving data accessibility.
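
One way a tool like "Get DB Schema and Tables List" could be realized is a plain information_schema query, sketched here with psycopg2. The SQL is standard PostgreSQL; whether the template issues exactly this statement is an assumption, and the connection string is a placeholder.

import psycopg2

conn = psycopg2.connect("dbname=app user=readonly host=localhost")
with conn, conn.cursor() as cur:
    # List user tables, skipping PostgreSQL's own catalogs.
    cur.execute("""
        SELECT table_schema, table_name
        FROM information_schema.tables
        WHERE table_type = 'BASE TABLE'
          AND table_schema NOT IN ('pg_catalog', 'information_schema')
        ORDER BY table_schema, table_name
    """)
    for schema, table in cur.fetchall():
        print(f"{schema}.{table}")
conn.close()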

11 nodes

Ready to automate with n8n?

Get affordable managed n8n hosting with 24/7 support.