Completely local RAG with chat UI
Check out RagBase on Streamlit Cloud. It runs with the Groq API.
Clone the repo:

```sh
git clone git@github.com:curiousily/ragbase.git
cd ragbase
```
Install the dependencies (requires Poetry):

```sh
poetry install
```
Fetch your LLM (gemma2:9b by default):

```sh
ollama pull gemma2:9b
```
Run the Ollama server:

```sh
ollama serve
```
Start RagBase:

```sh
poetry run streamlit run app.py
```
Extracts text from PDF documents and creates chunks (using semantic and character splitters) that are stored in a vector database.
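A minimal sketch of such an ingestion step with LangChain is shown below; the file path, chunk sizes, and embedding model name are illustrative assumptions, not RagBase's exact configuration.

```python
# Ingestion sketch (illustrative values, not RagBase's exact config)
from langchain_community.document_loaders import PyPDFium2Loader
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings
from langchain_community.vectorstores import Qdrant
from langchain_experimental.text_splitter import SemanticChunker
from langchain_text_splitters import RecursiveCharacterTextSplitter

embeddings = FastEmbedEmbeddings(model_name="BAAI/bge-small-en-v1.5")

# Load the PDF with PDFium and split it semantically first...
docs = PyPDFium2Loader("docs/example.pdf").load()
semantic_chunks = SemanticChunker(embeddings).split_documents(docs)

# ...then cap chunk size with a character splitter.
chunks = RecursiveCharacterTextSplitter(
    chunk_size=2048, chunk_overlap=128
).split_documents(semantic_chunks)

# Embed the chunks and persist them in a local Qdrant collection.
vector_store = Qdrant.from_documents(
    chunks, embeddings, path="./qdrant-db", collection_name="documents"
)
```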
Given a query, searches for similar documents, reranks the results, and applies an LLM chain filter before returning the response.
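A sketch of such a retrieval pipeline, reusing `vector_store` from the ingestion sketch above; the `k`, `top_n`, and model values are illustrative assumptions.

```python
# Retrieval sketch: vector search -> FlashRank rerank -> LLM chain filter
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import (
    DocumentCompressorPipeline,
    FlashrankRerank,
    LLMChainFilter,
)
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="gemma2:9b")

compressor = DocumentCompressorPipeline(
    transformers=[
        FlashrankRerank(top_n=3),        # rerank the raw vector hits
        LLMChainFilter.from_llm(llm),    # drop chunks the LLM deems irrelevant
    ]
)

retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=vector_store.as_retriever(search_kwargs={"k": 10}),
)

relevant_docs = retriever.invoke("What does the report conclude?")
```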
Combines the LLM with the retriever to answer a given user question.
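A minimal sketch of wiring the retriever and the LLM into a QA chain, reusing `retriever` and `llm` from the sketches above; the prompt wording is an assumption, not RagBase's actual prompt.

```python
# QA-chain sketch: retrieve context, stuff it into a prompt, ask the LLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

qa_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

answer = qa_chain.invoke("Summarize the uploaded document.")
```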
- Ollama - run local LLM
- Groq API - fast inference for multiple LLMs
- LangChain - build LLM-powered apps
- Qdrant - vector search/database
- FlashRank - fast reranking
- FastEmbed - lightweight and fast embedding generation
- Streamlit - build UI for data apps
- PDFium - PDF processing and text extraction
You can also use the Groq API instead of the local LLM. For that, you'll need a .env file with your Groq API key:

```
GROQ_API_KEY=<YOUR_API_KEY>
```
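For example, a minimal sketch of loading that key and building a Groq-backed chat model; the model name below is an assumption, pick any model Groq serves.

```python
# Groq sketch: read GROQ_API_KEY from .env and create a Groq chat model
from dotenv import load_dotenv
from langchain_groq import ChatGroq

load_dotenv()  # loads GROQ_API_KEY from the .env file into the environment
llm = ChatGroq(model="llama-3.1-8b-instant", temperature=0)

print(llm.invoke("Hello!").content)
```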