A modern web interface for chatting with your local LLMs through Ollama
- 🖥️ Clean, modern interface for interacting with Ollama models
- 💾 Local chat history using IndexedDB
- 📝 Full Markdown support in messages
- 🌙 Dark mode support
- 🚀 Fast and responsive
- 🔒 Privacy-focused: All processing happens locally
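Chat history lives entirely in the browser via IndexedDB, so conversations never leave your machine. As a rough illustration of what that involves (the database, store, and field names below are hypothetical, not the project's actual schema):

```typescript
// Hypothetical sketch: persist a chat message with the browser's IndexedDB API.
// Names ("chat-history", "messages") are illustrative, not the app's real schema.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
  createdAt: number;
}

function saveMessage(message: ChatMessage): Promise<void> {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open("chat-history", 1);

    // Create the object store the first time the database is opened.
    open.onupgradeneeded = () => {
      open.result.createObjectStore("messages", { autoIncrement: true });
    };

    open.onsuccess = () => {
      const db = open.result;
      const tx = db.transaction("messages", "readwrite");
      tx.objectStore("messages").add(message);
      tx.oncomplete = () => {
        db.close();
        resolve();
      };
      tx.onerror = () => reject(tx.error);
    };

    open.onerror = () => reject(open.error);
  });
}
```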
```shell
# Start the Ollama server with your preferred model
ollama pull mistral   # or any other model
ollama serve
```

```shell
# Clone and run the GUI
git clone https://github.com/HelgeSverre/ollama-gui.git
cd ollama-gui
yarn install
yarn dev
```
To use the hosted version, run Ollama with:
```shell
OLLAMA_ORIGINS=https://ollama-gui.vercel.app ollama serve
```
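This is needed because the hosted page runs in your browser but still talks to the Ollama server on your own machine, so Ollama must allow that origin via CORS. Conceptually, the requests look roughly like the sketch below, which follows Ollama's documented `/api/chat` endpoint; the GUI's actual client code may differ:

```typescript
// Rough sketch of a chat request against Ollama's /api/chat endpoint.
// The GUI's real implementation and options may differ.
async function askOllama(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral",
      messages: [{ role: "user", content: prompt }],
      stream: false, // request a single JSON response instead of a token stream
    }),
  });

  if (!response.ok) {
    throw new Error(`Ollama request failed: ${response.status}`);
  }

  const data = await response.json();
  return data.message.content;
}
```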
No need to install anything other than Docker.
If you have a GPU, uncomment the following lines in `compose.yml`:
```yaml
# deploy:
#   resources:
#     reservations:
#       devices:
#         - driver: nvidia
#           count: all
#           capabilities: [gpu]
```
```shell
docker compose up -d
# Access the GUI at http://localhost:8080

docker compose down
```
```shell
# Enter the ollama container
docker exec -it ollama bash

# Inside the container
ollama pull <model_name>

# Example
ollama pull deepseek-r1:7b
```
Restart the containers using `docker compose restart`.
Models are downloaded to the `./ollama_data` folder in the repository. You can change this location in `compose.yml`.
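To confirm a pulled model is actually available, you can also query Ollama's model list directly. The sketch below uses Ollama's documented `GET /api/tags` endpoint and assumes the default port mapping of 11434; response fields other than `name` vary by version:

```typescript
// List installed models via GET /api/tags (assumes the default 11434 port mapping).
async function listModels(host = "http://localhost:11434"): Promise<string[]> {
  const response = await fetch(`${host}/api/tags`);
  if (!response.ok) {
    throw new Error(`Failed to list models: ${response.status}`);
  }
  const data = await response.json();
  // Each entry carries the model name, e.g. "deepseek-r1:7b".
  return data.models.map((m: { name: string }) => m.name);
}
```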
- Chat history with IndexedDB
- Markdown message formatting
- Code cleanup and organization
- Model library browser and installer
- Mobile-responsive design
- File uploads with OCR support
- Vue.js - Frontend framework
- Vite - Build tool
- Tailwind CSS - Styling
- VueUse - Vue Composition Utilities
- @tabler/icons-vue - Icons
- Design inspired by LangUI
- Hosted on Vercel
Released under the MIT License.