Introduction
This tutorial will guide you through connecting your cloud-hosted n8n instance with Ollama running locally on your PC. We’ll cover multiple methods, from quick testing solutions to more permanent setups.
What is Ollama?
Ollama is a free, open-source application that lets you run large language models (LLMs) locally on your own computer. Think of it as having ChatGPT running on your PC instead of in the cloud.
Prerequisites
- n8n instance hosted in the cloud (e.g., n8n.cloud, self-hosted VPS)
- Ollama installed and running on your local PC
- Basic command line knowledge
- Administrator access to your PC
Understanding the Problem
Your cloud n8n instance cannot directly access your local Ollama because:
- Your local PC is behind a router/firewall
- It doesn’t have a public IP address accessible from the internet
- Cloud services can only connect to publicly accessible endpoints
Solution: Create a secure tunnel that exposes your local Ollama to the internet.
Method 1: Using ngrok (Best for Testing)
Why ngrok?
- Quick setup (5 minutes)
- Free tier available
- Automatic HTTPS
- Perfect for development and testing
Step 1: Install ngrok
- Visit ngrok.com and create a free account
- Download ngrok for your operating system
- Extract the executable to a folder (e.g., C:\ngrok)
Step 2: Configure ngrok
- Copy your authtoken from the ngrok dashboard
- Open Command Prompt or PowerShell and run:
ngrok config add-authtoken YOUR_AUTH_TOKEN
Step 3: Start Ollama
- Make sure Ollama is running on your PC
- By default, Ollama runs on port 11434
- Test it locally by opening: http://localhost:11434
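If you prefer the command line, a quick check like this (assuming curl is installed) confirms Ollama is answering before you expose it:

# Should print "Ollama is running"
curl http://localhost:11434

# Lists the models you have pulled locally
curl http://localhost:11434/api/tags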
Step 4: Create the Tunnel
- Open a new terminal window
- Run: ngrok http 11434
- You’ll see output like:
Forwarding https://abc123-456.ngrok.io -> http://localhost:11434
- Copy this URL – this is your public Ollama endpoint!
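Before moving to n8n, it's worth confirming the tunnel actually reaches Ollama. A quick check, using the example ngrok URL above (substitute your own):

# Same checks as before, but through the public ngrok URL
curl https://abc123-456.ngrok.io
curl https://abc123-456.ngrok.io/api/tags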
Step 5: Configure n8n
- Log into your n8n instance
- Create a new workflow
- Add an HTTP Request node
- Configure it:
- Method: POST
- URL: https://abc123-456.ngrok.io/api/generate
- Authentication: None
- Body: JSON
- JSON Body:
{ "model": "llama2", "prompt": "Why is the sky blue?", "stream": false }
- Execute the node to test
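If the node fails, the same request can be reproduced from a bash-style terminal to rule out n8n configuration issues. This is a rough curl equivalent of the node settings above:

curl https://abc123-456.ngrok.io/api/generate \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{ "model": "llama2", "prompt": "Why is the sky blue?", "stream": false }'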
Step 6: Parse the Response
- Add a Set node after the HTTP Request
- Extract the response:
- Name: ollama_response
- Value: {{ $json.response }}
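For reference, the /api/generate reply is a JSON object and the Set node expression above simply pulls out its response field. A rough sketch of the same extraction from a shell, assuming jq is installed (exact response fields can vary by Ollama version):

# The reply looks roughly like:
# { "model": "llama2", "created_at": "...", "response": "The sky appears blue because...", "done": true, ... }
curl -s https://abc123-456.ngrok.io/api/generate \
  -d '{ "model": "llama2", "prompt": "Why is the sky blue?", "stream": false }' | jq -r '.response'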
Important Notes for ngrok
- Free tier: Your URL changes every time you restart ngrok
- Keep terminal open: The tunnel stays active only while ngrok is running
- Paid tier: Get a permanent URL that doesn’t change
Method 2: Using Cloudflare Tunnel (Best for Permanent Setup)
Why Cloudflare Tunnel?
- Free forever
- More reliable than ngrok free tier
- Can create named tunnels
- Better for long-term projects
Step 1: Install Cloudflare Tunnel
Windows (using winget):
winget install --id Cloudflare.cloudflared
Windows (manual):
- Download from Cloudflare’s website
- Extract to a folder like C:\cloudflared
- Add to PATH or use full path in commands
Linux/Mac:
# Linux
wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
sudo dpkg -i cloudflared-linux-amd64.deb
# Mac
brew install cloudflare/cloudflare/cloudflared
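Whichever install route you used, confirm the binary is on your PATH before continuing:

cloudflared --version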
Step 2: Authenticate Cloudflare
- Run: cloudflared tunnel login
- Your browser will open – log in with your Cloudflare account
- Select a domain you have added to your Cloudflare account (the free plan is sufficient)
- Authorize the tunnel
Step 3: Quick Tunnel (Temporary)
For quick testing:
cloudflared tunnel --url http://localhost:11434
This gives you a temporary URL like: https://random-words.trycloudflare.com
Step 4: Named Tunnel (Permanent)
For a permanent setup:
- Create a tunnel:
cloudflared tunnel create ollama-tunnel
Note the tunnel ID that appears.
- Create a config file (~/.cloudflared/config.yml or C:\Users\YourName\.cloudflared\config.yml):
tunnel: TUNNEL_ID_HERE
credentials-file: C:\Users\YourName\.cloudflared\TUNNEL_ID.json
ingress:
  - hostname: ollama.yourdomain.com
    service: http://localhost:11434
  - service: http_status:404
- Create a DNS record:
cloudflared tunnel route dns ollama-tunnel ollama.yourdomain.com
- Run the tunnel:
cloudflared tunnel run ollama-tunnel
- Run as a service (optional, so it starts automatically):
# Windows (PowerShell as Admin)
cloudflared service install
# Linux
sudo cloudflared service install
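Once the tunnel is up, a couple of quick checks confirm it is registered and connected (these subcommands exist in current cloudflared releases; output format may vary):

# List tunnels registered to your account
cloudflared tunnel list

# Show connection details for this tunnel
cloudflared tunnel info ollama-tunnel

# If installed as a service on Linux, check it is running
sudo systemctl status cloudflared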
Step 5: Use in n8n
Use your tunnel URL: https://ollama.yourdomain.com/api/generate
Method 3: Router Port Forwarding (Advanced)
Why Port Forwarding?
- No third-party service needed
- Full control
- Best performance
When NOT to use this:
- You don’t have a static IP
- Your ISP blocks incoming connections
- You’re not comfortable with network security
Steps:
1. Find your local IP:
   - Windows: Run ipconfig in CMD
   - Look for “IPv4 Address” (e.g., 192.168.1.100)
2. Access your router:
   - Usually 192.168.1.1 or 192.168.0.1
   - Log in with admin credentials
3. Create a port forwarding rule:
   - External Port: 11434
   - Internal IP: Your PC’s IP (e.g., 192.168.1.100)
   - Internal Port: 11434
   - Protocol: TCP
4. Configure Windows Firewall:
# Run as Administrator
New-NetFirewallRule -DisplayName "Ollama" -Direction Inbound -LocalPort 11434 -Protocol TCP -Action Allow
5. Set up Dynamic DNS (if no static IP):
   - Use services like DuckDNS, No-IP, or Dynu
   - Install their client on your PC
   - Get a free domain like yourname.duckdns.org
6. Use in n8n:
   - URL: http://YOUR_PUBLIC_IP:11434/api/generate
   - Or: http://yourname.duckdns.org:11434/api/generate
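Two things to verify with this method: Ollama listens only on 127.0.0.1 by default, so for port forwarding it generally needs to bind to all interfaces via the OLLAMA_HOST environment variable, and the forwarded port must actually be reachable from outside your network. A rough check, reusing the example DuckDNS name above:

# On the PC running Ollama (Windows example; restart Ollama afterwards)
setx OLLAMA_HOST "0.0.0.0"

# From a machine OUTSIDE your network (e.g., phone hotspot or a VPS)
curl http://yourname.duckdns.org:11434/api/tags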
Building a Complete n8n Workflow
Here’s a practical example workflow that uses your local Ollama:
Workflow: AI-Powered Email Responder
- Trigger Node: Manual Trigger or Webhook
- HTTP Request Node (Call Ollama):
- Method: POST
- URL: YOUR_TUNNEL_URL/api/generate
- Body:
{ "model": "llama2", "prompt": "Write a professional email response to: {{ $json.email_content }}", "stream": false }
- Code Node (Parse Response):
const response = $input.item.json.response;
return [{ json: { ai_response: response } }];
- Send Email Node or Slack Node:
- Use {{ $json.ai_response }} as the message
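If you pick the Webhook trigger, the whole workflow can be kicked off with a single POST. A sketch using a placeholder test URL (take the real one from your Webhook node; "ai-email" is just an example path):

curl -X POST https://YOUR_N8N_INSTANCE/webhook-test/ai-email \
  -H "Content-Type: application/json" \
  -d '{ "email_content": "Hi, can we move our meeting to Thursday?" }'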
Security Best Practices
1. Add Basic Authentication
Wrap your Ollama endpoint with a reverse proxy (nginx, Caddy) that requires authentication.
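As an illustration, if the proxy in front of Ollama enforces Basic Auth (the username and password below are placeholders), n8n can send the credentials through its Basic Auth credential type; the equivalent manual test looks like this:

# With credentials: should return the model list
curl -u ollama_user:CHANGE_ME https://ollama.yourdomain.com/api/tags

# Without credentials: the proxy should reject the request (e.g., HTTP 401)
curl -i https://ollama.yourdomain.com/api/tags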
2. Use HTTPS Only
- ngrok: Provides HTTPS automatically
- Cloudflare: HTTPS by default
- Port forwarding: Set up Let’s Encrypt
3. IP Whitelisting
If your n8n instance has a static IP, configure your firewall to only accept connections from that IP.
4. Monitor Access
- Check ngrok/Cloudflare dashboards for unusual traffic
- Set up alerts for excessive requests
5. Use Environment Variables in n8n
Store your tunnel URL as an environment variable instead of hardcoding it:
- URL: {{ $env.OLLAMA_URL }}/api/generate
Troubleshooting
“Connection Refused” Error
Check:
- Is Ollama running? Test: curl http://localhost:11434
- Is your tunnel active? Check the terminal window
- Is the URL correct in n8n?
“Timeout” Error
Solutions:
- Increase timeout in n8n HTTP Request settings
- Use smaller models or shorter prompts
- Check your internet connection speed
ngrok URL Changed
Solution:
- Upgrade to ngrok paid plan for static URLs
- Or switch to Cloudflare Tunnel (free permanent URLs)
Ollama Not Responding
Check:
- Restart Ollama service
- Check if port 11434 is already in use:
# Windows
netstat -ano | findstr :11434
- Check Ollama logs for errors
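On Linux or macOS, the equivalent of the netstat check above (assuming lsof is available):

# Linux/Mac
lsof -i :11434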
Firewall Blocking Connection
Windows:
# Allow Ollama through firewall
New-NetFirewallRule -DisplayName "Ollama" -Direction Inbound -Program "C:\Program Files\Ollama\ollama.exe" -Action Allow
Performance Tips
- Use "stream": false in your requests for easier n8n handling
- Keep models small for faster responses (e.g., llama2:7b instead of llama2:70b)
- Cache responses in n8n if you’re asking the same questions repeatedly
- Set appropriate timeouts – AI generation can take 30-60 seconds for longer responses
Cost Comparison
| Method | Free Tier | Paid Tier | Best For |
|---|---|---|---|
| ngrok | URL changes on restart | $8/month for static URL | Testing |
| Cloudflare | Free forever | Free (or paid plans available) | Production |
| Port Forward | Free | Free | Advanced users |
Next Steps
- Test the connection with simple prompts
- Build a workflow that uses your local LLM
- Secure your setup using the security practices above
- Scale up by running multiple Ollama models
- Automate the tunnel startup using system services
Conclusion
You now have multiple ways to connect your cloud n8n with local Ollama:
- Quick testing: Use ngrok
- Long-term projects: Use Cloudflare Tunnel
- Maximum control: Use port forwarding
Start with ngrok to test everything works, then move to Cloudflare Tunnel for a permanent setup. Happy automating with local LLMs!
