Text-to-video AI tools have revolutionized content creation, allowing anyone to generate video content from simple text descriptions. This tutorial will guide you through the best affordable options and how to get started.
Text-to-video AI uses machine learning models to interpret your text prompts and generate corresponding video content. These tools can create anything from realistic scenes to animated sequences, and even talking head presentations.
[Subject] + [Action] + [Setting] + [Style/Mood] + [Camera Work]
Poor Prompt: “A cat”
Good Prompt: “A fluffy orange cat walking through a sun-drenched garden, flowers swaying in the breeze, cinematic lighting, slow-motion, camera tracking shot”
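As a sketch of the prompt formula above, here is a small Python helper that assembles the five components into a single comma-separated prompt. The function name and arguments are illustrative, not part of any tool's API:

```python
# Sketch: assemble a text-to-video prompt from the formula's components:
# [Subject] + [Action] + [Setting] + [Style/Mood] + [Camera Work].
def build_prompt(subject, action, setting, style_mood, camera_work):
    """Join the five prompt components, skipping any left empty."""
    parts = [subject, action, setting, style_mood, camera_work]
    return ", ".join(p.strip() for p in parts if p and p.strip())

prompt = build_prompt(
    subject="A fluffy orange cat",
    action="walking through a sun-drenched garden",
    setting="flowers swaying in the breeze",
    style_mood="cinematic lighting, warm mood",
    camera_work="slow-motion tracking shot",
)
```

Filling each slot deliberately, rather than writing one long sentence, makes it easy to vary a single component (say, the camera work) between generations.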
A steaming cup of coffee on a wooden table, morning sunlight streaming through window, steam rising slowly, warm cozy atmosphere, close-up shot

| Platform | Best For | Cost | Output Length | Quality |
|---|---|---|---|---|
| Runway | Realistic clips | $0-12/mo | 3-5 sec | ★★★★★ |
| Pika | Creative effects | $0-10/mo | 3-5 sec | ★★★★ |
| Luma | Cinematic shots | $0-30/mo | 5 sec | ★★★★★ |
| HeyGen | Talking heads | $24/mo | Minutes | ★★★★ |
| Invideo | Long-form | $20/mo | Minutes | ★★★ |
Text-to-video AI has become remarkably accessible and affordable. Start with free tiers to learn the tools, focus on improving your prompting skills, and upgrade only when you need more generation capacity. The technology improves monthly, so what seems impossible today might be standard tomorrow.
Happy creating!
Bookmark icons (also called favicons) help you quickly identify your saved sites. This tutorial shows you multiple ways to change or update these icons in Brave browser.
Difficulty: Easy
Best for: When the website has changed its icon or the icon isn’t displaying correctly
Why this works: Brave caches favicons, and visiting the site forces it to refresh the cached icon.
Difficulty: Easy to Moderate
Best for: Adding custom icons to bookmarks
Tip: Most extensions support PNG, JPG, and ICO file formats. For best results, use square images (16×16 or 32×32 pixels).
Difficulty: Advanced
Best for: Power users comfortable with JSON editing
⚠️ Warning: Always back up the Bookmarks file before editing!
Windows:
%LOCALAPPDATA%\BraveSoftware\Brave-Browser\User Data\Default\
macOS:
~/Library/Application Support/BraveSoftware/Brave-Browser/Default/
Linux:
~/.config/BraveSoftware/Brave-Browser/Default/
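The three locations above can be resolved programmatically. A minimal Python sketch, assuming the default profile folder (`Default`); the function name is illustrative:

```python
import os
import sys

# Sketch: return the Brave profile folder that contains the Bookmarks file,
# following the default paths listed above for each operating system.
def brave_profile_dir(platform=None):
    platform = platform or sys.platform
    if platform.startswith("win"):
        # %LOCALAPPDATA%\BraveSoftware\Brave-Browser\User Data\Default
        return os.path.join(os.environ.get("LOCALAPPDATA", ""),
                            "BraveSoftware", "Brave-Browser",
                            "User Data", "Default")
    if platform == "darwin":
        return os.path.expanduser(
            "~/Library/Application Support/BraveSoftware/Brave-Browser/Default")
    # Linux and other Unix-likes
    return os.path.expanduser("~/.config/BraveSoftware/Brave-Browser/Default")
```

If you use a second browser profile, substitute its folder name (e.g. `Profile 1`) for `Default`.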
1. In the profile folder, locate the file named `Bookmarks` (no extension) and make a copy named `Bookmarks.backup`.
2. Open the `Bookmarks` file in a text editor (Notepad++, VS Code, or similar).
3. Find the `"favicon"` field – it looks like this: `"favicon": "data:image/png;base64,iVBORw0KG..."`
4. To use an external image, replace the value with a URL: `"favicon": "https://example.com/icon.png"`

If you want to embed a custom icon, encode the image as Base64 and keep the `data:image/png;base64,` prefix.

Changing bookmark icons in Brave is straightforward once you know the right method:
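For the embedding route, the Base64 data URI can be generated with a short Python sketch. The `favicon_data_uri` helper is illustrative and assumes a PNG icon:

```python
import base64

# Sketch: turn a PNG file's bytes into a data URI suitable for the
# "favicon" field in Brave's Bookmarks JSON file.
def favicon_data_uri(png_bytes):
    encoded = base64.b64encode(png_bytes).decode("ascii")
    return "data:image/png;base64," + encoded

# Usage with a real file:
# with open("icon.png", "rb") as f:
#     uri = favicon_data_uri(f.read())

# Stand-in bytes (the PNG magic number) so the sketch runs on its own:
uri = favicon_data_uri(b"\x89PNG\r\n\x1a\n")
```

Paste the resulting string as the `"favicon"` value; keep the image small (16×16 or 32×32) so the JSON file stays manageable.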
Choose the method that best fits your technical comfort level and needs!
This tutorial will guide you through connecting your cloud-hosted n8n instance with Ollama running locally on your PC. We’ll cover multiple methods, from quick testing solutions to more permanent setups.
Ollama is a free, open-source application that lets you run large language models (LLMs) locally on your own computer. Think of it as having ChatGPT running on your PC instead of in the cloud.
Your cloud n8n instance cannot directly access your local Ollama because:
- By default, Ollama listens only on `localhost:11434`, so it accepts no outside connections.
- Your PC sits behind your router's NAT and has no stable public address.
- Your firewall blocks unsolicited inbound traffic.
Solution: Create a secure tunnel that exposes your local Ollama to the internet.
1. Download ngrok and extract it (e.g. to `C:\ngrok`).
2. Authenticate: `ngrok config add-authtoken YOUR_AUTH_TOKEN`
3. Make sure Ollama is running on port `11434` (`http://localhost:11434`).
4. Start the tunnel: `ngrok http 11434`
5. Copy the forwarding URL from the output, e.g. `Forwarding https://abc123-456.ngrok.io -> http://localhost:11434`
6. In n8n, point an HTTP Request node at `https://abc123-456.ngrok.io/api/generate` with this JSON body: `{ "model": "llama2", "prompt": "Why is the sky blue?", "stream": false }`
7. Read the answer from the response, e.g. store `{{ $json.response }}` in an `ollama_response` field.

Windows (using winget):
winget install --id Cloudflare.cloudflared
Windows (manual):
Download `cloudflared.exe` and place it in `C:\cloudflared`.

Linux/Mac:
# Linux
wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
sudo dpkg -i cloudflared-linux-amd64.deb
# Mac
brew install cloudflare/cloudflare/cloudflared
Authenticate with Cloudflare: `cloudflared tunnel login`

For quick testing:
cloudflared tunnel --url http://localhost:11434
This gives you a temporary URL like: https://random-words.trycloudflare.com
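Once a tunnel URL is live (ngrok or Cloudflare), the same `/api/generate` call that n8n makes can be sketched in Python. The tunnel URL is a placeholder, `build_payload` and `ollama_generate` are illustrative helpers, and the request only succeeds while Ollama and the tunnel are actually running:

```python
import json
import urllib.request

def build_payload(model, prompt):
    """Body for POST /api/generate; stream=False returns one JSON object."""
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(base_url, model, prompt):
    """Send a generate request through the tunnel and return the answer text."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Usage (requires a running tunnel; URL below is a placeholder):
# answer = ollama_generate("https://random-words.trycloudflare.com",
#                          "llama2", "Why is the sky blue?")
```

This mirrors what the HTTP Request node does, which makes it handy for debugging the tunnel before wiring up the workflow.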
For a permanent setup:
1. Create a named tunnel with `cloudflared tunnel create ollama-tunnel` and note the tunnel ID that appears.
2. Create a config file (`~/.cloudflared/config.yml` or `C:\Users\YourName\.cloudflared\config.yml`):

```
tunnel: TUNNEL_ID_HERE
credentials-file: C:\Users\YourName\.cloudflared\TUNNEL_ID.json
ingress:
  - hostname: ollama.yourdomain.com
    service: http://localhost:11434
  - service: http_status:404
```

3. Route DNS to the tunnel: `cloudflared tunnel route dns ollama-tunnel ollama.yourdomain.com`
4. Run the tunnel: `cloudflared tunnel run ollama-tunnel`
5. To keep it running as a service:

```
# Windows (PowerShell as Admin)
cloudflared service install

# Linux
sudo cloudflared service install
```

6. Use your tunnel URL: `https://ollama.yourdomain.com/api/generate`
1. Find your PC's local IP address: run `ipconfig` in CMD (e.g. `192.168.1.100`).
2. Log in to your router's admin page (usually `192.168.1.1` or `192.168.0.1`).
3. Add a port-forwarding rule: external port `11434` to your PC's IP (e.g. `192.168.1.100`), internal port `11434`.
4. Allow the port through Windows Firewall:

```
# Run as Administrator
New-NetFirewallRule -DisplayName "Ollama" -Direction Inbound -LocalPort 11434 -Protocol TCP -Action Allow
```

5. (Optional) Set up a dynamic DNS hostname such as `yourname.duckdns.org`.
6. In n8n, use `http://YOUR_PUBLIC_IP:11434/api/generate` or `http://yourname.duckdns.org:11434/api/generate`.

Here’s a practical example workflow that uses your local Ollama:
1. HTTP Request node: POST to `YOUR_TUNNEL_URL/api/generate` with this JSON body:

```
{
  "model": "llama2",
  "prompt": "Write a professional email response to: {{ $json.email_content }}",
  "stream": false
}
```

2. Code node to extract the reply:

```
const response = $input.item.json.response;
return [{ json: { ai_response: response } }];
```

3. Send `{{ $json.ai_response }}` as the message.

Wrap your Ollama endpoint with a reverse proxy (nginx, Caddy) that requires authentication.
If your n8n instance has a static IP, configure your firewall to only accept connections from that IP.
Store your tunnel URL as an environment variable instead of hardcoding it:
Use `{{ $env.OLLAMA_URL }}/api/generate` in your HTTP Request nodes.

Check:
curl http://localhost:11434

Solutions:
Solution:
Check:
# Windows
netstat -ano | findstr :11434

Windows:
# Allow Ollama through firewall
New-NetFirewallRule -DisplayName "Ollama" -Direction Inbound -Program "C:\Program Files\Ollama\ollama.exe" -Action Allow
"stream": false in your requests for easier n8n handling| Method | Free Tier | Paid Tier | Best For |
|---|---|---|---|
| ngrok | URL changes on restart | $8/month for static URL | Testing |
| Cloudflare | Free forever | Free (or paid plans available) | Production |
| Port Forward | Free | Free | Advanced users |
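One reason behind the `"stream": false` tip: with `"stream": true`, Ollama sends newline-delimited JSON fragments that you must reassemble yourself. A Python sketch of that extra work (the sample lines below imitate Ollama's streaming format, they are not a captured response):

```python
import json

# Two imitation stream lines: each carries a fragment of the answer in
# "response", and the final line has "done": true.
streamed = "\n".join([
    '{"response": "The sky ", "done": false}',
    '{"response": "is blue.", "done": true}',
])

def join_stream(ndjson_text):
    """Concatenate the "response" fragments from each NDJSON line."""
    return "".join(json.loads(line)["response"]
                   for line in ndjson_text.splitlines() if line.strip())

full = join_stream(streamed)
```

With `"stream": false` the API returns the fully assembled text in a single JSON object, so an n8n workflow can read `{{ $json.response }}` directly.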
You now have multiple ways to connect your cloud n8n with local Ollama:
Start with ngrok to test everything works, then move to Cloudflare Tunnel for a permanent setup. Happy automating with local LLMs!