LocalGPT

Steps to run the app

  1. Create and activate a virtual environment:
     python3 -m venv .venv && source .venv/bin/activate
  2. Install the requirements:
     pip install -r requirements.txt
  3. Create the model with Ollama from the provided Modelfile:
     ollama create <your-model-name> -f Modelfile
  4. Check the created model and reference it in the app.py file (see the sketch after these steps):
     • Confirm the model was created successfully:
       ollama list
     • Use the model in app.py:
       model = Ollama(model="<your-model-name>")
  5. Run the app with Chainlit:
     chainlit run app.py
  6. Optional (requires an internet connection): you can log user inputs and chat responses to a LangSmith project named "LocalGPT" on LangChain's tracing service. Rename the file example.env to .env and add the following environment variables:
     LANGCHAIN_TRACING_V2=true
     LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
     LANGCHAIN_API_KEY="<your-api-key>"
     LANGCHAIN_PROJECT="LocalGPT"
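For reference, a minimal app.py might look like the sketch below. It is illustrative rather than the project's exact file: it assumes the langchain-community and python-dotenv packages are installed alongside Chainlit, and the import path for Ollama can differ between LangChain versions, so adjust it to match your installation.

```python
# Minimal app.py sketch (illustrative, not the project's exact file).
# Assumes: chainlit, langchain-community, and python-dotenv are installed,
# and a model named "<your-model-name>" was created with `ollama create`.
import chainlit as cl
from dotenv import load_dotenv                # optional: only needed for the LangSmith step
from langchain_community.llms import Ollama   # import path may differ by LangChain version

load_dotenv()  # picks up the LANGCHAIN_* variables from .env, if present

model = Ollama(model="<your-model-name>")


@cl.on_message
async def on_message(message: cl.Message):
    # Forward the user's message to the local Ollama model and send back the reply.
    reply = await model.ainvoke(message.content)
    await cl.Message(content=reply).send()
```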
