This guide covers the complete setup process for SaralPolicy, including environment configuration, dependency installation, and local AI setup.
- OS: Windows, Linux, or macOS
- Python: 3.9 or higher
- RAM: Minimum 8GB (16GB recommended for optimal performance)
- Disk Space: ~10GB (for models and virtual environment)
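The Python version requirement above can be checked from the interpreter itself; a minimal sketch:

```python
import sys

# SaralPolicy requires Python 3.9 or higher (per the prerequisites above)
if sys.version_info < (3, 9):
    raise SystemExit("Python 3.9+ required; found "
                     + ".".join(map(str, sys.version_info[:3])))
print("Python version OK:", ".".join(map(str, sys.version_info[:3])))
```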
```
git clone https://github.com/VIKAS9793/SaralPolicy.git
cd SaralPolicy
```
Create and activate a virtual environment to keep dependencies isolated.
Windows:
```
python -m venv venv
venv\Scripts\activate
```
Linux/Mac:
```
python3 -m venv venv
source venv/bin/activate
```
Then install the Python dependencies:
```
cd backend
pip install -r requirements.txt
```
SaralPolicy uses Ollama for privacy-first, local AI processing.
Detailed Guide: See Ollama Setup Guide for full instructions.
Summary:
- Download Ollama
- Pull the required model: `ollama pull gemma2:2b`
- Pull the embedding model: `ollama pull nomic-embed-text`
- Start the server (keep it running in the background): `ollama serve`
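Once `ollama serve` is running, Ollama exposes a local HTTP API whose `/api/tags` endpoint lists installed models as JSON. A small sketch of parsing that response to confirm the two models above are present (the sample payload is abbreviated and illustrative):

```python
import json

def installed_models(tags_json: str) -> list[str]:
    # Ollama's /api/tags response has the shape {"models": [{"name": ...}, ...]}
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]

# Abbreviated sample of what a local server might return:
sample = '{"models": [{"name": "gemma2:2b"}, {"name": "nomic-embed-text:latest"}]}'
print(installed_models(sample))  # → ['gemma2:2b', 'nomic-embed-text:latest']
```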
For high-quality Hindi TTS using Indic Parler-TTS:
```
# Copy environment template
copy .env.example .env    # Windows
# cp .env.example .env    # Linux/Mac
```
Edit `backend/.env` and add your HuggingFace token:
```
HF_TOKEN=hf_your_token_here
```
Get a token from: https://huggingface.co/settings/tokens
Note: This is optional. Without the token, TTS falls back to gTTS (instant but lower quality).
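The optional-token behaviour above amounts to a simple environment check; a sketch (the function name and return labels are illustrative, not SaralPolicy's actual API):

```python
import os

def pick_tts_backend(env=os.environ) -> str:
    # With HF_TOKEN set, high-quality Indic Parler-TTS can be used;
    # without it, the app falls back to gTTS (instant but lower quality).
    return "indic-parler-tts" if env.get("HF_TOKEN") else "gtts"

print(pick_tts_backend({"HF_TOKEN": "hf_example"}))  # → indic-parler-tts
print(pick_tts_backend({}))                          # → gtts
```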
Index the IRDAI regulatory documents into the local vector database.
```
# Ensure you are in the 'backend' directory
python scripts/index_irdai_knowledge.py
```
Then start the application:
```
python main.py
```
Visit http://localhost:8000 in your browser.
SaralPolicy supports Indic Parler-TTS for high-quality Hindi speech synthesis.
- HuggingFace token (free)
- ~4GB RAM during inference
- ~2GB disk for model cache
- Get a token from https://huggingface.co/settings/tokens
- Add to `backend/.env`: `HF_TOKEN=hf_your_token_here`
- CPU: 2-5 minutes per generation (expected)
- GPU (CUDA): 5-15 seconds per generation
- Falls back to gTTS automatically if unavailable
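The automatic fallback in the last bullet can be sketched as a try/except wrapper (`synthesize` and its arguments are hypothetical, for illustration only):

```python
def synthesize(text: str, parler_fn=None):
    # Try the high-quality engine first; on any failure (missing token,
    # model not cached, out of memory) fall back to gTTS-style output.
    try:
        if parler_fn is None:
            raise RuntimeError("Indic Parler-TTS unavailable")
        return ("parler", parler_fn(text))
    except Exception:
        return ("gtts", f"fallback audio for {text!r}")

engine, _ = synthesize("नमस्ते")  # no parler_fn supplied → falls back
print(engine)  # → gtts
```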
To ensure everything is working correctly, you can run the integration tests:
```
python -m pytest tests/test_rag_citations.py
python -m pytest tests/test_translation_offline.py
```
Configuration is managed via environment variables and `main.py` constants.
- Port: Defaults to `8000`; change in `main.py`.
- Allowed Origins: Configured in `app/dependencies.py`.
- Ollama Host: Defaults to `localhost:11434`.
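A minimal sketch of reading these settings with the defaults listed above (the environment-variable names here are assumptions; the actual values live in `main.py` and `app/dependencies.py`):

```python
import os

def load_config(env=os.environ) -> dict:
    # Defaults mirror the guide: port 8000, Ollama at localhost:11434.
    return {
        "port": int(env.get("PORT", "8000")),
        "ollama_host": env.get("OLLAMA_HOST", "localhost:11434"),
    }

print(load_config({}))  # → {'port': 8000, 'ollama_host': 'localhost:11434'}
```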
See System Architecture for deeper technical details.