Requirements
- Docker
- Ollama
Setup
- Grab the embedding model: `ollama pull mxbai-embed-large`
- Grab the LLM: `ollama pull llama3.1` (a quick way to verify both models respond is sketched below)
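If you want to sanity-check the pulls before starting the app, Ollama's local REST API (default port 11434) can be called directly. This is a minimal sketch, not part of the application itself, using Ollama's documented `/api/embeddings` and `/api/generate` endpoints:

```ts
// Sanity check: request an embedding and a completion from Ollama.
// Assumes Ollama is running locally on its default port (11434).

async function checkModels(): Promise<void> {
  // Embed a test sentence with mxbai-embed-large.
  const embedRes = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mxbai-embed-large", prompt: "hello world" }),
  });
  const { embedding } = await embedRes.json();
  console.log(`embedding length: ${embedding.length}`); // mxbai-embed-large produces 1024 dimensions

  // Generate a short completion with llama3.1.
  const genRes = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      prompt: "Say hello in five words or fewer.",
      stream: false,
    }),
  });
  const { response } = await genRes.json();
  console.log(response);
}

checkModels().catch(console.error);
```

If both calls print output, the models are pulled and being served.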
Startup
`docker compose up`
The app logs in the console will tell you which port the application is running on, typically http://localhost:3000/
The application is divided into sections that demonstrate each stage of the pipeline: the data store, the embedding process, the vector store, and finally a full RAG chatbot.
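For orientation, the chatbot stage ties those pieces together in the standard RAG pattern: embed the question with the same model used for indexing, retrieve the nearest chunks from the vector store, and pass them to llama3.1 as context. The sketch below illustrates the pattern rather than the app's actual code; the in-memory `store` and the `searchVectorStore` helper are hypothetical stand-ins for whichever vector store the app ships with:

```ts
// Hypothetical in-memory stand-in for the app's vector store:
// (chunk text, precomputed embedding) pairs, searched by cosine similarity.
type Entry = { text: string; embedding: number[] };
const store: Entry[] = []; // would be filled during the embedding step

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function searchVectorStore(query: number[], topK: number): string[] {
  // Rank stored chunks by similarity to the query embedding, keep the top K.
  return [...store]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, topK)
    .map((e) => e.text);
}

async function answerQuestion(question: string): Promise<string> {
  // 1. Embed the question with the same model used to index the documents.
  const embedRes = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mxbai-embed-large", prompt: question }),
  });
  const { embedding } = await embedRes.json();

  // 2. Retrieve the most similar chunks from the vector store.
  const chunks = searchVectorStore(embedding, 4);

  // 3. Ask llama3.1 to answer, grounded in the retrieved context.
  const chatRes = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      stream: false,
      messages: [
        { role: "system", content: `Answer using only this context:\n${chunks.join("\n---\n")}` },
        { role: "user", content: question },
      ],
    }),
  });
  const { message } = await chatRes.json();
  return message.content;
}
```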