Local-RAG: A Retrieval-Augmented Generation App Using Local Resources

Local-RAG is a Retrieval-Augmented Generation (RAG) application designed to run entirely on local resources: Qdrant for vector storage and Ollama for LLM inference, with no external API calls.

🚀 Key Components

Planned Enhancements

  1. Contextual conversations
  2. History tracking using SQLite
  3. Advanced RAG features (reranking, query processing, handling complex queries, etc.)
  4. Configurable setup
  5. Better error handling and logging
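
As an illustration of item 2, history tracking in SQLite can be as simple as one table of messages keyed by session. The schema and function names below are a hypothetical sketch of how it might look, not code from this repository:

```python
import sqlite3

# Illustrative schema for the planned SQLite history tracking;
# table and column names are assumptions, not from the repo.
def init_history_db(path: str = "history.db") -> sqlite3.Connection:
    """Create (if needed) and return a connection to the history database."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS messages (
               id         INTEGER PRIMARY KEY AUTOINCREMENT,
               session_id TEXT NOT NULL,
               role       TEXT NOT NULL CHECK (role IN ('user', 'assistant')),
               content    TEXT NOT NULL,
               created_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    conn.commit()
    return conn

def log_message(conn: sqlite3.Connection, session_id: str,
                role: str, content: str) -> None:
    """Append one chat turn to the history table."""
    conn.execute(
        "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content),
    )
    conn.commit()
```

Keying every row by `session_id` makes it easy to reload a single conversation later for the contextual-conversation feature (item 1).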

🛠️ Installation

Step-by-Step Setup

  1. Clone the repository

    git clone https://github.com/pmgautam/local-rag.git
    cd local-rag
  2. Create virtual environment

    python -m venv venv
    source venv/bin/activate  # Linux/Mac
    # or
    .\venv\Scripts\activate  # Windows
  3. Install dependencies

    pip install -r requirements.txt
  4. Install and start Qdrant

    # Start Qdrant with persistent storage and default ports
    docker run -d \
      -p 6333:6333 \
      -p 6334:6334 \
      -v $(pwd)/qdrant_storage:/qdrant/storage:z \
      qdrant/qdrant
  5. Install Ollama and download model

    # Install Ollama from ollama.com
    # Pull the Llama 3.1 model
    ollama pull llama3.1
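
Once both services are running, a quick sanity check can confirm they respond before you index anything. The snippet below is not part of the repo; it assumes the default ports (6333 for Qdrant's REST API, 11434 for Ollama):

```python
import urllib.request
import urllib.error

def service_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP service answers at `url` with status 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    # Default local endpoints for Qdrant's REST API and Ollama.
    checks = {
        "Qdrant": "http://localhost:6333/collections",
        "Ollama": "http://localhost:11434/api/tags",
    }
    for name, url in checks.items():
        print(f"{name}: {'up' if service_up(url) else 'not reachable'}")
```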

📚 Usage

Indexing Documents

# Index documents with default settings
python -m app.indexer --folder /path/to/documents --collection my_docs
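
The indexer's internals aren't shown here, but indexing a folder typically means splitting each document into overlapping chunks before embedding them into the Qdrant collection. A minimal, illustrative chunker (an assumption, not the repo's actual implementation) might look like:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size character windows.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from both neighboring chunks.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

Real indexers often split on sentence or paragraph boundaries instead of raw character counts, but the window-plus-overlap idea is the same.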

Starting the Application

# Start with default configuration
streamlit run app/chat_app.py

# Start with custom configuration
streamlit run app/chat_app.py -- --config path/to/config.yaml
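
The `--config` flag points at a YAML file. The keys below are purely illustrative guesses at what such a file might contain; the actual schema is defined by the app's code:

```yaml
# Hypothetical config.yaml — key names are illustrative only;
# check the repository's source for the real schema.
qdrant:
  host: localhost
  port: 6333
  collection: my_docs
ollama:
  model: llama3.1
retrieval:
  top_k: 5
```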

🤝 Contributing

Contributions are welcome! Please open a pull request with any improvements you'd like to make.
