🔧 How to Set Up Ollama with n8n
Integrating Ollama, a tool for running large language models (LLMs) locally, with n8n allows you to automate intelligent workflows using natural language prompts. Below is a step-by-step setup guide:
✅ Prerequisites
- Ollama installed and running locally (downloads are on the official Ollama site)
- n8n installed, self-hosted or via n8n Cloud (see n8n's installation docs)
- Basic familiarity with creating workflows in n8n
🧠 Step 1: Run a Model in Ollama
After installing Ollama, start a language model (e.g., LLaMA 3) by running the following in your terminal (the model is downloaded automatically on first use):
ollama run llama3
This opens an interactive chat session. Alongside it, Ollama serves a local HTTP API, by default at:
http://localhost:11434
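To confirm the API is reachable before wiring it into n8n, you can list the installed models from a terminal (a quick sanity check, assuming curl is available):
curl http://localhost:11434/api/tags
A JSON list of models in the reply means the server is up and n8n will be able to reach it.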
🔁 Step 2: Create a New Workflow in n8n
- Open your n8n dashboard.
- Create a new workflow.
- Add a Trigger node (e.g., Manual Trigger or Webhook).
📡 Step 3: Send a Prompt to Ollama via HTTP Request
Add a new node: HTTP Request, then configure it as follows:
- HTTP Method: POST
- URL: http://localhost:11434/api/generate
- Headers:
{
  "Content-Type": "application/json"
}
Body (JSON):
{
  "model": "llama3",
  "prompt": "Hello, can you explain what Ollama is?",
  "stream": false
}
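If the node returns an error, it helps to test the same request outside n8n first. This command-line sketch mirrors the node configuration above (curl switches to POST automatically when -d is used):
curl http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "prompt": "Hello, can you explain what Ollama is?", "stream": false}'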
🔄 Step 4: Process the Response
Because "stream" is set to false, Ollama returns the full result as a single JSON object, for example:
{
  "response": "Ollama is a platform for running local LLMs...",
  ...
}
You can use Set, Code (formerly Function), or IF/Switch nodes in n8n to work with this response. The generated text is in the response field, which n8n expressions can reference as {{ $json.response }}; a command-line version of the same extraction is sketched below.
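Outside n8n, the same extraction can be tested by piping the Step 3 request through jq (assuming jq is installed; the -r flag prints the raw string without JSON quotes):
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello, can you explain what Ollama is?", "stream": false}' \
  | jq -r '.response'
This prints only the model's answer, which is typically what you pass on to downstream nodes.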
📁 Optional: Full Sample Workflow JSON
Here’s a minimal working example of an n8n workflow that calls Ollama. Note that the HTTP Request node's parameter names vary across n8n versions; this example uses an older node schema (typeVersion 2), so the import may look slightly different on recent releases:
{
  "nodes": [
    {
      "parameters": {},
      "id": "Manual Trigger",
      "name": "Manual Trigger",
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [250, 250]
    },
    {
      "parameters": {
        "requestMethod": "POST",
        "url": "http://localhost:11434/api/generate",
        "jsonParameters": true,
        "options": {},
        "bodyParametersJson": "{\"model\":\"llama3\",\"prompt\":\"Tell me a fun fact about AI\",\"stream\":false}"
      },
      "id": "HTTP Request",
      "name": "Query Ollama",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 2,
      "position": [450, 250]
    }
  ],
  "connections": {
    "Manual Trigger": {
      "main": [
        [
          {
            "node": "Query Ollama",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}
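To try it, paste this JSON directly onto the n8n canvas or use the workflow menu's import option (the exact menu naming varies by n8n version), then execute the Manual Trigger node and inspect the Query Ollama output.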
📌 Notes & Best Practices
- Ensure Ollama is running whenever the workflow executes; otherwise the HTTP Request node will fail with a connection error.
- Ollama's API has no built-in authentication, so use a reverse proxy with authentication if you expose it beyond localhost or run it in production.
- Limit the request rate to avoid overloading your local machine.
💬 Final Thoughts
This integration enables you to use powerful open-source LLMs in your automation flows—great for content creation, summarization, classification, and much more.