💬 Chain-Field Chatbot (Solana + EVM)

A simple chatbot with two selectable chain fields (Solana and EVM). You pick a field and chat; answers are grounded in that field's knowledge base via retrieval-augmented generation (RAG).



📸 Preview

The app shows a clean UI:

  • 🔽 Chain field – Dropdown to select Solana or EVM
  • 💬 Chat – Type your question (e.g. "Hello, what is solana"); the bot replies using that chain's docs

Screenshot: Chain-Field Chatbot UI


✅ Requirements

  • Python 3.10+ (including 3.14). The app uses FAISS for the vector store (no ChromaDB).

🚀 Setup

  1. Clone or open the project and go to the project root.

  2. Create a virtual environment (recommended):

    python -m venv .venv
    .venv\Scripts\activate   # Windows
    # source .venv/bin/activate  # macOS/Linux
  3. Install dependencies:

    pip install -r requirements.txt
  4. Configure environment:

    • Copy .env.example to .env
    • Chat (LLM): set OPENAI_API_KEY (required for replies).
    • Ingest (embeddings): the default provider is the free Hugging Face Inference API. Set HF_TOKEN (create one at huggingface.co/settings/tokens). If you hit an OpenAI quota error (HTTP 429), set EMBEDDING_PROVIDER=huggingface and HF_TOKEN=hf_... so ingestion runs without OpenAI.

    Example .env:

    OPENAI_API_KEY=sk-...
    LLM_MODEL=gpt-4o-mini
    EMBEDDING_PROVIDER=huggingface
    HF_TOKEN=hf_...
    
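For reference, the app's config layer presumably reads these variables at startup. A minimal sketch using only the standard library — the variable names match the .env above, but the function name and defaults are illustrative assumptions, not the project's actual config API:

```python
import os

def load_settings() -> dict:
    """Read chatbot settings from the environment, with illustrative defaults."""
    return {
        "openai_api_key": os.environ.get("OPENAI_API_KEY", ""),
        "llm_model": os.environ.get("LLM_MODEL", "gpt-4o-mini"),
        "embedding_provider": os.environ.get("EMBEDDING_PROVIDER", "huggingface"),
        "hf_token": os.environ.get("HF_TOKEN", ""),
    }

settings = load_settings()
```

Keeping secrets in the environment (rather than hard-coded) is what lets .env stay out of version control.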

📚 Ingest documents (optional but recommended)

  • Put Solana-related docs (.md, .txt, .pdf) in data/solana/
  • Put EVM-related docs in data/evm/
  • See data/README.md for sources (e.g. Solana Cookbook, Ethereum docs)

Then run:

python -m app.rag.ingest

This builds FAISS indexes under vector_store/solana and vector_store/evm. If you skip this, the app will error on first chat until you run ingest.
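Under the hood, ingestion typically splits each document into overlapping chunks before embedding them into FAISS. A minimal sketch of that chunking step — the function name and sizes are illustrative, not the actual API of app/rag/ingest.py:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# A long document becomes several overlapping chunks
pieces = chunk_text("solana " * 200, chunk_size=300, overlap=30)
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side.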


▶️ Run the app

  1. Start the backend (from project root):

    uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
  2. Start the frontend (in another terminal):

    streamlit run frontend/streamlit_app.py
  3. Open the URL shown by Streamlit (e.g. http://localhost:8501). Choose Solana or EVM, then type your message.
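You can also exercise the backend without the Streamlit UI. A hedged sketch using only the standard library, assuming the FastAPI app exposes a JSON chat endpoint at /chat taking field and message — check app/main.py for the actual route and payload shape:

```python
import json
import urllib.request

def build_chat_request(field: str, message: str,
                       base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a POST request to the (assumed) /chat endpoint."""
    payload = json.dumps({"field": field, "message": message}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("solana", "Hello, what is solana")
# Send with urllib.request.urlopen(req) once the backend is running
```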


🔧 Optional: backend URL

If the API runs on another host/port, set:

set CHATBOT_BACKEND_URL=http://localhost:8000      # Windows
# export CHATBOT_BACKEND_URL=http://localhost:8000  # macOS/Linux
streamlit run frontend/streamlit_app.py
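The Streamlit frontend presumably resolves the backend address the same way; a one-line sketch of that lookup, with the default shown above (the trailing-slash trim is an illustrative detail):

```python
import os

# Fall back to the local backend when CHATBOT_BACKEND_URL is unset
BACKEND_URL = os.environ.get("CHATBOT_BACKEND_URL", "http://localhost:8000").rstrip("/")
```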

📁 Project layout

  • app/ – FastAPI app, config, RAG (ingest + chains), Pydantic models
  • app/rag/ingest.py – Ingest script: python -m app.rag.ingest
  • app/rag/chains.py – RAG retrieval and get_answer(field, message)
  • frontend/streamlit_app.py – Streamlit UI
  • data/solana/, data/evm/ – Documents to ingest per field
  • vector_store/ – FAISS index per field (created at runtime; in .gitignore)
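Since each field gets its own FAISS index directory, get_answer(field, message) presumably starts by resolving that directory. A minimal sketch of the per-field lookup — the helper name and constants are assumptions, not the actual contents of app/rag/chains.py:

```python
from pathlib import Path

VECTOR_STORE_ROOT = Path("vector_store")
FIELDS = ("solana", "evm")

def index_path(field: str) -> Path:
    """Resolve the FAISS index directory for a chain field."""
    if field not in FIELDS:
        raise ValueError(f"unknown field: {field!r}")
    return VECTOR_STORE_ROOT / field
```

Validating the field up front gives a clear error instead of a missing-index failure deep in retrieval.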
