Completely Local/Offline LLM Chatbot on your System

Client-Server Architecture LLM Chatbot with optional Streamlit UI

Lets users interact with a local LLM via Ollama through a client-server architecture, using FastAPI as the server-side framework and Streamlit for the user interface.


Project Demo on YouTube

Understand the Role of: Client, Server

How to Setup: Server, Client (without Streamlit UI), Client (with Streamlit UI)

How to Run: Server, Client (without Streamlit UI), Client (with Streamlit UI)

Prerequisites for Client and Server:

  • STEP 1 : Clone this Repo
  • STEP 2 : [Optional but Recommended] Download and Install Miniconda to create and manage conda environments

What does the Client do:

  • STEP 1 : Request Query from User
  • STEP 2 : Send Query to Server
  • STEP 3 : Receive Response from Server
  • STEP 4 : Display Response to User (see the minimal sketch after this list)
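
For illustration, a minimal sketch of this loop, assuming the server exposes a POST endpoint and the .env file holds the server address. The endpoint path, payload keys, .env variable names, and the python-dotenv dependency are assumptions, not taken from this repo; the actual logic lives in client.py:

    # Illustrative client sketch -- endpoint path, payload keys, and .env
    # variable names are assumptions; see client.py for the real implementation.
    import os
    import requests
    from dotenv import load_dotenv  # python-dotenv, assumed here for reading .env

    load_dotenv()
    base_url = f"http://{os.getenv('SERVER_IP', '127.0.0.1')}:{os.getenv('SERVER_PORT', '8000')}"

    query = input("You: ")                                           # STEP 1: request query from user
    resp = requests.post(f"{base_url}/chat", json={"query": query})  # STEP 2: send query to server
    print("LLM:", resp.json()["response"])                           # STEPS 3-4: receive and display response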

What does the Server do:

  • STEP 1 : Receive Query from Client
  • STEP 2 : Send Query to Ollama
  • STEP 3 : Receive Response from Ollama
  • STEP 4 : Send Response to Client (see the minimal sketch after this list)
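
A minimal FastAPI sketch of these four steps, assuming Ollama's default local HTTP API on port 11434 and the llama3.1 model. The endpoint path and payload shape are assumptions; the actual implementation is in server.py:

    # Illustrative server sketch -- endpoint path and payload shape are
    # assumptions; see server.py for the real implementation.
    import requests
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local API

    class Query(BaseModel):
        query: str

    @app.post("/chat")
    def chat(q: Query):                       # STEP 1: receive query from client
        r = requests.post(OLLAMA_URL, json={  # STEP 2: send query to Ollama
            "model": "llama3.1",
            "prompt": q.query,
            "stream": False,
        })
        return {"response": r.json()["response"]}  # STEPS 3-4: relay Ollama's response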

Setup for Server:

  • STEP 1 : Download and Install Ollama
  • STEP 2 : Download desired model from Ollama
      NOTE : To download Meta-Llama-3.1-8B, Run command: ollama pull llama3.1
  • STEP 3 : [Optional but Recommended] Create a Conda Environment, Run command : conda create -n "env_server" python=3.11 -y
      NOTE: This is a one-time setup
  • STEP 4 : [Optional but Recommended] Activate the created Conda Environment with conda activate env_server
      NOTE: Activate conda environment with each new instance of Terminal
  • STEP 5 : Install Dependencies: pip install -r requirements.txt
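
Before wiring up the client, you can sanity-check that Ollama is serving the model. A quick check against Ollama's local HTTP API (default port 11434), assuming you pulled llama3.1 in STEP 2:

    # Sanity check: ask Ollama for a short completion (assumes llama3.1 is pulled).
    import requests

    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.1", "prompt": "Say hello.", "stream": False},
    )
    print(r.json()["response"])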

Setup for Client without Streamlit UI:

  • STEP 1 : [Optional but Recommended] Create a Conda Environment, Run command : conda create -n "env_client" python=3.11 -y
      NOTE: This is a one-time setup
  • STEP 2 : [Optional but Recommended] Activate the created Conda Environment with conda activate env_client
      NOTE: Activate conda environment with each new instance of Terminal
  • STEP 3 : Install Dependencies: pip install -r requirements.txt
  • STEP 4 : Change Server IP and Port Number in the .env file.
      NOTE: If you are not using a separate device as the Server, do not change the contents of the .env file
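
For reference, the .env file only needs to point the client at the server. A sketch of its contents with illustrative variable names (use whatever names the repo's .env actually defines):

    SERVER_IP=127.0.0.1
    SERVER_PORT=8000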

Setup for Client with Streamlit UI:

  • STEP 1 : [Optional but Recommended] Create a Conda Environment, Run command : conda create -n "env_client" python=3.11 -y
  • STEP 2 : [Optional but Recommended] Activate the created Conda Environment, Run command : conda activate env_client
  • STEP 3 : Install Dependencies: pip install -r requirements_ui.txt
  • STEP 4 : Change Server IP and Port Number in the .env file.
      NOTE: If you are not using a separate device as the Server, do not change the contents of the .env file

Run Server component:

  • STEP 1 : [Optional but Recommended] Activate the created Conda Environment, Run command : conda activate env_server
      NOTE: Activate conda environment with each new instance of Terminal
  • STEP 2 : Run command : uvicorn server:app --host 0.0.0.0 --port 8000
      NOTE: You can change the Port Number; make sure to update it in the .env file on the client
  • IMP : Do not close the Terminal, or the Server will stop
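
Once the server is up, you can confirm it is reachable by opening FastAPI's auto-generated interactive docs in a browser (served at /docs by default, unless disabled in server.py):

    http://<server-ip>:8000/docs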

Run Client component without Streamlit UI:

  • STEP 1 : [Optional but Recommended] Activate the created Conda Environment, Run command : conda activate env_client
      NOTE: Activate conda environment with each new instance of Terminal
  • STEP 2 : Verify Server IP and Port Number in the .env file.
      NOTE: If you are not using a separate device as the Server, do not change the contents of the .env file
  • STEP 3 : Run command : python client.py

Run Client component with Streamlit UI:

  • STEP 1 : [Optional but Recommended] Activate the created Conda Environment, Run command : conda activate env_client
      NOTE: Activate conda environment with each new instance of Terminal
  • STEP 2 : Verify Server IP and Port Number in the .env file.
      NOTE: If you are not using a separate device as the Server, do not change the contents of the .env file
  • STEP 3 : Run command : streamlit run client_ui.py

Tested On:

  • Client : MacBook Pro 14"
  • Server : RTX 3060 12GB | Ubuntu Server 24.04 LTS
  • LLM    : Meta-Llama-3.1-8B
