Chat with GitHub OpenAPI Specification using RAG (Pinecone and OpenAI)

Chat with the GitHub OpenAPI Specification using this n8n workflow. It combines Pinecone and OpenAI into a Retrieval Augmented Generation (RAG) system, letting you query and understand complex API documentation conversationally instead of reading it end to end.

The indexing branch starts with an HTTP Request node that fetches the GitHub OpenAPI Specification, splits it with a Recursive Character Text Splitter, loads the chunks through a Default Data Loader, and stores their embeddings in a Pinecone Vector Store. When a chat message arrives at the Chat Trigger node, an AI Agent orchestrates the interaction, using an OpenAI Chat Model for natural language understanding and a Window Buffer Memory to maintain conversational context. To answer a query, the agent calls a Vector Store Tool that embeds the user question with OpenAI and retrieves the most relevant chunks from the Pinecone Vector Store (Querying) node.

The workflow is ideal for developers, technical writers, and API consumers who need quick, accurate answers about GitHub's API without manually sifting through extensive documentation, significantly reducing research time and effort.

17 nodes · Manual trigger · 203 views · 0 copies · AI · Pinecone · OpenAI
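To make the indexing branch described above concrete, here is a minimal standalone sketch of the same pattern in Python, outside n8n. It assumes the requests, openai, and pinecone packages, the OPENAI_API_KEY and PINECONE_API_KEY environment variables, and the "n8n-demo" index used by the template; the embedding model and chunk sizes are assumptions, and the fixed-size chunker is a simplification of the Recursive Character Text Splitter node.

# Illustrative sketch of the indexing branch: fetch the spec, chunk it,
# embed the chunks with OpenAI, and upsert them into a Pinecone index.
import os
import requests
from openai import OpenAI
from pinecone import Pinecone

SPEC_URL = ("https://raw.githubusercontent.com/github/rest-api-description/"
            "refs/heads/main/descriptions/api.github.com/api.github.com.json")

def chunk(text: str, size: int = 2000, overlap: int = 200):
    # Naive fixed-size chunking with overlap; the workflow's Recursive
    # Character Text Splitter is smarter, this keeps the example self-contained.
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

openai_client = OpenAI()  # reads OPENAI_API_KEY
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("n8n-demo")  # index name used by the template

spec_text = requests.get(SPEC_URL, timeout=60).text
chunks = chunk(spec_text)

# Embed and upsert in small batches to stay under request-size limits.
for start in range(0, len(chunks), 50):
    batch = chunks[start:start + 50]
    emb = openai_client.embeddings.create(
        model="text-embedding-3-small",  # assumption; use the model your index was built for
        input=batch,
    )
    index.upsert(vectors=[
        {"id": f"github-openapi-{start + i}",
         "values": e.embedding,
         "metadata": {"text": batch[i]}}
        for i, e in enumerate(emb.data)
    ])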

Workflow JSON

{"id": "FD0bHNaehP3LzCNN", "meta": {"instanceId": "69133932b9ba8e1ef14816d0b63297bb44feb97c19f759b5d153ff6b0c59e18d"}, "name": "Chat with GitHub OpenAPI Specification using RAG (Pinecone and OpenAI)", "tags": [], "nodes": [{"id": "362cb773-7540-4753-a401-e585cdf4af8a", "name": "When clicking \u2018Test workflow\u2019", "type": "n8n-nodes-base.manualTrigger", "position": [0, 0], "parameters": {}, "typeVersion": 1}, {"id": "45470036-cae6-48d0-ac66-addc8999e776", "name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "position": [300, 0], "parameters": {"url": "https://raw.githubusercontent.com/github/rest-api-description/refs/heads/main/descriptions/api.github.com/api.github.com.json", "options": {}}, "typeVersion": 4.2}, {"id": "a9e65897-52c9-4941-bf49-e1a659e442ef", "name": "Pinecone Vector Store", "type": "@n8n/n8n-nodes-langchain.vectorStorePinecone", "position": [520, 0], "parameters": {"mode": "insert", "options": {}, "pineconeIndex": {"__rl": true, "mode": "list", "value": "n8n-demo", "cachedResultName": "n8n-demo"}}, "credentials": {"pineconeApi": {"id": "", "name": "[Your pineconeApi]"}}, "typeVersion": 1}, {"id": "c2a2354b-5457-4ceb-abfc-9a58e8593b81", "name": "Default Data Loader", "type": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader", "position": [660, 180], "parameters": {"options": {}}, "typeVersion": 1}, {"id": "7338d9ea-ae8f-46eb-807f-a15dc7639fc9", "name": "Recursive Character Text Splitter", "type": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter", "position": [740, 360], "parameters": {"options": {}}, "typeVersion": 1}, {"id": "44fd7a59-f208-4d5d-a22d-e9f8ca9badf1", "name": "When chat message received", "type": "@n8n/n8n-nodes-langchain.chatTrigger", "position": [-20, 760], "webhookId": "089e38ab-4eee-4c34-aa5d-54cf4a8f53b7", "parameters": {"options": {}}, "typeVersion": 1.1}, {"id": "51d819d6-70ff-428d-aa56-1d7e06490dee", "name": "AI Agent", "type": "@n8n/n8n-nodes-langchain.agent", "position": [320, 760], "parameters": {"options": {"systemMessage": "You are a helpful assistant providing information about the GitHub API and how to use it based on the OpenAPI V3 specifications."}}, "typeVersion": 1.7}, {"id": "aed548bf-7083-44ad-a3e0-163dee7423ef", "name": "OpenAI Chat Model", "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", "position": [220, 980], "parameters": {"options": {}}, "credentials": {"openAiApi": {"id": "", "name": "[Your openAiApi]"}}, "typeVersion": 1.1}, {"id": "dfe9f356-2225-4f4b-86c7-e56a230b4193", "name": "Window Buffer Memory", "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow", "position": [420, 1020], "parameters": {}, "typeVersion": 1.3}, {"id": "4cf672ee-13b8-4355-b8e0-c2e7381671bc", "name": "Vector Store Tool", "type": "@n8n/n8n-nodes-langchain.toolVectorStore", "position": [580, 980], "parameters": {"name": "GitHub_OpenAPI_Specification", "description": "Use this tool to get information about the GitHub API. 
This database contains OpenAPI v3 specifications."}, "typeVersion": 1}, {"id": "1df7fb85-9d4a-4db5-9bed-41d28e2e4643", "name": "OpenAI Chat Model1", "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", "position": [840, 1160], "parameters": {"options": {}}, "credentials": {"openAiApi": {"id": "", "name": "[Your openAiApi]"}}, "typeVersion": 1.1}, {"id": "7b52ef7a-5935-451e-8747-efe16ce288af", "name": "Sticky Note", "type": "n8n-nodes-base.stickyNote", "position": [-40, -260], "parameters": {"width": 640, "height": 200, "content": "## Indexing content in the vector database\nThis part of the workflow is responsible for extracting content, generating embeddings and sending them to the Pinecone vector store.\n\nIt requests the OpenAPI specifications from GitHub using a HTTP request. Then, it splits the file in chunks, generating embeddings for each chunk using OpenAI, and saving them in Pinecone vector DB."}, "typeVersion": 1}, {"id": "3508d602-56d4-4818-84eb-ca75cdeec1d0", "name": "Sticky Note1", "type": "n8n-nodes-base.stickyNote", "position": [-20, 560], "parameters": {"width": 580, "content": "## Querying and response generation \n\nThis part of the workflow is responsible for the chat interface, querying the vector store and generating relevant responses.\n\nIt uses OpenAI GPT 4o-mini to generate responses."}, "typeVersion": 1}, {"id": "5a9808ef-4edd-4ec9-ba01-2fe50b2dbf4b", "name": "Generate User Query Embedding", "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi", "position": [480, 1400], "parameters": {"options": {}}, "credentials": {"openAiApi": {"id": "", "name": "[Your openAiApi]"}}, "typeVersion": 1.2}, {"id": "f703dc8e-9d4b-45e3-8994-789b3dfe8631", "name": "Pinecone Vector Store (Querying)", "type": "@n8n/n8n-nodes-langchain.vectorStorePinecone", "position": [440, 1220], "parameters": {"options": {}, "pineconeIndex": {"__rl": true, "mode": "list", "value": "n8n-demo", "cachedResultName": "n8n-demo"}}, "credentials": {"pineconeApi": {"id": "", "name": "[Your pineconeApi]"}}, "typeVersion": 1}, {"id": "ea64a7a5-1fa5-4938-83a9-271929733a8e", "name": "Generate Embeddings", "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi", "position": [480, 220], "parameters": {"options": {}}, "credentials": {"openAiApi": {"id": "", "name": "[Your openAiApi]"}}, "typeVersion": 1.2}, {"id": "65cbd4e3-91f6-441a-9ef1-528c3019e238", "name": "Sticky Note2", "type": "n8n-nodes-base.stickyNote", "position": [-820, -260], "parameters": {"width": 620, "height": 320, "content": "## RAG workflow in n8n\n\nThis is an example of how to use RAG techniques to create a chatbot with n8n. It is an API documentation chatbot that can answer questions about the GitHub API. 
It uses OpenAI for generating embeddings, the gpt-4o-mini LLM for generating responses and Pinecone as a vector database.\n\n### Before using this template\n* create OpenAI and Pinecone accounts\n* obtain API keys OpenAI and Pinecone \n* configure credentials in n8n for both\n* ensure you have a Pinecone index named \"n8n-demo\" or adjust the workflow accordingly."}, "typeVersion": 1}], "active": false, "pinData": {}, "settings": {"executionOrder": "v1"}, "versionId": "2908105f-c20c-4183-bb9d-26e3559b9911", "connections": {"HTTP Request": {"main": [[{"node": "Pinecone Vector Store", "type": "main", "index": 0}]]}, "OpenAI Chat Model": {"ai_languageModel": [[{"node": "AI Agent", "type": "ai_languageModel", "index": 0}]]}, "Vector Store Tool": {"ai_tool": [[{"node": "AI Agent", "type": "ai_tool", "index": 0}]]}, "OpenAI Chat Model1": {"ai_languageModel": [[{"node": "Vector Store Tool", "type": "ai_languageModel", "index": 0}]]}, "Default Data Loader": {"ai_document": [[{"node": "Pinecone Vector Store", "type": "ai_document", "index": 0}]]}, "Generate Embeddings": {"ai_embedding": [[{"node": "Pinecone Vector Store", "type": "ai_embedding", "index": 0}]]}, "Window Buffer Memory": {"ai_memory": [[{"node": "AI Agent", "type": "ai_memory", "index": 0}]]}, "When chat message received": {"main": [[{"node": "AI Agent", "type": "main", "index": 0}]]}, "Generate User Query Embedding": {"ai_embedding": [[{"node": "Pinecone Vector Store (Querying)", "type": "ai_embedding", "index": 0}]]}, "Pinecone Vector Store (Querying)": {"ai_vectorStore": [[{"node": "Vector Store Tool", "type": "ai_vectorStore", "index": 0}]]}, "Recursive Character Text Splitter": {"ai_textSplitter": [[{"node": "Default Data Loader", "type": "ai_textSplitter", "index": 0}]]}, "When clicking \u2018Test workflow\u2019": {"main": [[{"node": "HTTP Request", "type": "main", "index": 0}]]}}}
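The querying side of the workflow (Chat Trigger, AI Agent, Vector Store Tool, Window Buffer Memory) can likewise be approximated with a short standalone sketch. This is not the n8n implementation, only an illustration of the same RAG pattern: the embedding model, top_k, and the plain Python list standing in for the Window Buffer Memory are assumptions.

# Illustrative sketch of the querying branch: embed the user question,
# retrieve the closest chunks from Pinecone, and let gpt-4o-mini answer
# using those chunks as context.
import os
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("n8n-demo")
history = []  # rolling chat memory, analogous to the Window Buffer Memory node

def ask(question: str, top_k: int = 5) -> str:
    # 1. Embed the user query (the "Generate User Query Embedding" step).
    query_vec = openai_client.embeddings.create(
        model="text-embedding-3-small",  # assumption; match the model used at indexing time
        input=question,
    ).data[0].embedding

    # 2. Retrieve the most relevant spec chunks from Pinecone.
    result = index.query(vector=query_vec, top_k=top_k, include_metadata=True)
    context = "\n\n".join(m.metadata["text"] for m in result.matches)

    # 3. Generate the answer with the retrieved context plus chat history.
    messages = (
        [{"role": "system",
          "content": "You are a helpful assistant providing information about the "
                     "GitHub API based on the OpenAPI v3 specification excerpts below.\n\n"
                     + context}]
        + history
        + [{"role": "user", "content": question}]
    )
    answer = openai_client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    ).choices[0].message.content
    history.extend([{"role": "user", "content": question},
                    {"role": "assistant", "content": answer}])
    return answer

print(ask("Which endpoint lists the open issues of a repository?"))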

How to Import This Workflow

  1. Copy the workflow JSON above using the Copy Workflow JSON button.
  2. Open your n8n instance and go to Workflows.
  3. Click Import from JSON and paste the copied workflow.
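As an alternative to the manual steps above, you can import the workflow programmatically through the n8n Public API. This is a hedged sketch: it assumes the Public API is enabled on your instance, that you generated an API key under Settings, and that the JSON above is saved locally; the file name and instance URL are placeholders, and the endpoint path, header name, and accepted fields should be verified against your n8n version's API documentation.

# Hypothetical programmatic import via the n8n Public API.
import json
import requests

N8N_BASE_URL = "https://your-n8n-instance.example.com"  # placeholder
N8N_API_KEY = "your-api-key"                             # placeholder

with open("chat-with-github-openapi.json") as f:         # the workflow JSON above, saved locally
    template = json.load(f)

# The create-workflow endpoint typically accepts only a subset of fields,
# so pass just the essentials from the template.
payload = {
    "name": template["name"],
    "nodes": template["nodes"],
    "connections": template["connections"],
    "settings": template["settings"],
}

resp = requests.post(
    f"{N8N_BASE_URL}/api/v1/workflows",
    headers={"X-N8N-API-KEY": N8N_API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Imported workflow id:", resp.json()["id"])

After importing, remember to attach your own OpenAI and Pinecone credentials to the relevant nodes before running the workflow.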

Don't have an n8n instance? Start your free trial at n8nautomation.cloud

Ready to automate with n8n?

Get affordable managed n8n hosting with 24/7 support.