
Adamwgoh/llm-chatbot-workshop

 
 


Fully-featured & beautiful web interface for Ollama LLMs. An LLM chatbot workshop based on [nextjs-ollama-llm-ui](https://github.com/jakobhoeg/nextjs-ollama-llm-ui).


Get up and running with Large Language Models quickly, locally and even offline. This project aims to be the easiest way for you to get started with LLMs. No tedious and annoying setup required!

Features ✨

  • In this series, you will go through the following:

    • Setting up various LLM APIs to work with your Chatbot UI
    • Learning about embeddings and vector DBs
    • Learning various ways of performing LLM inference (see the example after this list)
    • How to update the Chatbot UI with more informative UI features
    • How to update the Chatbot UI to support multi-modal input
    • Deploying the Chatbot UI to a public endpoint
  • The UI is based on a template chatbot UI
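As a small taste of the inference topic, here is a minimal sketch of calling Ollama's HTTP generation API directly, assuming Ollama is running locally on the default port 11434 and a model such as llama3 has already been pulled (the model name is illustrative):

# ask the local Ollama server for a single, non-streamed completion
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

This is also a quick way to confirm that Ollama is reachable before wiring it up to the chatbot UI.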

Requisites ⚙️

To use the web interface, these requisites must be met:

  1. Download Ollama and have it running, or run it in a Docker container (see the Docker example after this list). Check the docs for instructions.
  2. Node.js (18+) and npm are required. Download
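If you prefer the Docker route, a minimal sketch based on the official ollama/ollama image (CPU-only; the volume and container names below are just illustrative) looks like this:

# start the Ollama server in a container, exposing the default port
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# pull a model inside the running container
docker exec -it ollama ollama pull llama3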

Deployment

This project uses Fly.io as part of the deployment. Please note that the GitHub Actions CI/CD pipeline only deploys the web app to Fly.io as a Next.js app.
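If you ever need to deploy outside of the CI/CD pipeline, a minimal sketch using the flyctl CLI (assuming a fly.toml already exists in the repository and you have a Fly.io account) is:

# authenticate once, then deploy the app described by fly.toml
fly auth login
fly deploy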

If you intend to deploy your own LLM, do the following:

Deploy your own to Vercel or Netlify in one click ✨

[Deploy with Vercel button] [Deploy to Netlify button]

You'll need to set the OLLAMA_ORIGINS environment variable on the machine that is running Ollama:

OLLAMA_ORIGINS="https://your-app.vercel.app/"

Installation 📖


Use a pre-built package from one of the supported package managers to run a local environment of the web interface. Alternatively, you can install from source with the instructions below.

Note

If your frontend runs on something other than http://localhost or http://127.0.0.1, you'll need to set OLLAMA_ORIGINS to your frontend URL.

This is also stated in the documentation:

Ollama allows cross-origin requests from 127.0.0.1 and 0.0.0.0 by default. Additional origins can be configured with OLLAMA_ORIGINS
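For example, if the frontend were served from http://192.168.1.20:3000 (an illustrative address), Ollama could be started with that origin allowed:

# allow the custom frontend origin for this server session
OLLAMA_ORIGINS="http://192.168.1.20:3000" ollama serve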

Install from source

1. Clone the repository to a directory on your PC via the command prompt:

git clone https://github.com/jakobhoeg/nextjs-ollama-llm-ui

2. Open the folder:

cd nextjs-ollama-llm-ui

3. Rename the .example.env to .env:

mv .example.env .env

4. If your instance of Ollama is NOT running on the default IP address and port, change the variable in the .env file to fit your use case:

NEXT_PUBLIC_OLLAMA_URL="http://localhost:11434"
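For instance, if Ollama runs on another machine on your local network, the value might look like this (the address below is purely illustrative):

NEXT_PUBLIC_OLLAMA_URL="http://192.168.1.50:11434"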

5. Install dependencies:

npm install

6. Start the development server:

npm run dev

7. Go to localhost:3000 and start chatting with your favourite model!
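When you want a production build instead of the dev server, a standard Next.js build can be sketched as follows (assuming the template keeps the default build and start scripts in package.json):

npm run build
npm run start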

Upcoming features

This is a to-do list consisting of upcoming features.

  • ✅ Voice input support
  • ✅ Code syntax highlighting
  • ✅ Ability to send an image in the prompt to utilize vision language models.
  • ✅ Ability to regenerate responses
  • ⬜️ Import and export chats

Tech stack

NextJS - React Framework for the Web

TailwindCSS - Utility-first CSS framework

shadcn-ui - UI components built using Radix UI and Tailwind CSS

shadcn-chat - Chat components for NextJS/React projects

Framer Motion - Motion/animation library for React

Lucide Icons - Icon library

Helpful links

Medium Article - How to launch your own ChatGPT clone for free on Google Colab. By Bartek Lewicz.

Lobehub mention - Five Excellent Free Ollama WebUI Client Recommendations
