Run Ollama AI models locally with remote access via Cloudflare tunnels.
- `ollama`: AI model server with GPU support (port 11434)
- `ollama-tunnel`: Secure Cloudflare tunnel for remote access
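The project's actual `docker-compose.yml` isn't reproduced here; as a minimal sketch, assuming the official `ollama/ollama` and `cloudflare/cloudflared` images and an NVIDIA GPU, the two services could be wired up roughly like this:

```yaml
# Illustrative sketch only — service names match the list above, but image
# tags, volumes, and GPU settings are assumptions, not the project's file.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"             # Ollama HTTP API
    volumes:
      - ollama_data:/root/.ollama # persist downloaded models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  ollama-tunnel:
    image: cloudflare/cloudflared
    command: tunnel --config /etc/cloudflared/config.yml run
    volumes:
      - ./cloudflared:/etc/cloudflared:ro
    depends_on:
      - ollama

volumes:
  ollama_data:
```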
- Edit `cloudflared/config.yml` with your tunnel UUID and hostname (see the sketch below)
- Run `docker compose up -d`
- Access locally at `http://localhost:11434` or remotely via Cloudflare
- Check tunnel status: `docker compose logs ollama-tunnel`
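For the first step, a typical `cloudflared/config.yml` for this kind of setup looks like the sketch below; the tunnel UUID, credentials path, and hostname are placeholders to replace with your own values:

```yaml
# cloudflared/config.yml — illustrative values only
tunnel: <YOUR-TUNNEL-UUID>
credentials-file: /etc/cloudflared/<YOUR-TUNNEL-UUID>.json

ingress:
  - hostname: ollama.example.com
    service: http://ollama:11434   # the ollama service on the compose network
  - service: http_status:404       # required catch-all rule
```

Once the stack is up, you can sanity-check the local endpoint by listing installed models over the API:

```bash
curl http://localhost:11434/api/tags
```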
```bash
# Start containers, then run:
bash ollama/models.sh
```

- LLMs: llama3.2:1b, gemma3:1b, deepseek-r1:1.5b
- Embeddings: nomic-embed-text, mxbai-embed-large
- Vision: granite3.2-vision:2b
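After a model from the list above has been pulled, it can be exercised directly over the local API; for example, assuming llama3.2:1b is installed:

```bash
# Simple non-streaming generation request against the local Ollama API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```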
- Edit `ollama/models.sh` to add or remove models (a rough sketch of such a script follows this list)
- Uncomment models in the script to enable them
- View installed models by uncommenting `ollama list` in the script
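The script itself isn't shown here; a minimal sketch of an `ollama/models.sh` with this comment-based layout, assuming it pulls models through the running container, might look like:

```bash
#!/usr/bin/env bash
# Sketch only — the real script may pull models differently.
# Uncomment a line to enable that model; comment it out to skip it.
set -euo pipefail

# LLMs
docker compose exec ollama ollama pull llama3.2:1b
# docker compose exec ollama ollama pull gemma3:1b
# docker compose exec ollama ollama pull deepseek-r1:1.5b

# Embeddings
docker compose exec ollama ollama pull nomic-embed-text
# docker compose exec ollama ollama pull mxbai-embed-large

# Vision
# docker compose exec ollama ollama pull granite3.2-vision:2b

# Uncomment to list installed models:
# docker compose exec ollama ollama list
```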
This project is licensed under the MIT License.