This is a LlamaIndex project using Next.js with Vercel AI RSC.

## Getting Started

First, install the dependencies:

```
npm install
```

Second, generate the embeddings of the documents in the `./data` directory (if this folder exists; otherwise, skip this step):

```
npm run generate
```
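
The generate script needs credentials for the embedding model. As a minimal sketch, assuming the default OpenAI-backed setup, a `.env` file in the project root might look like this (the variable name is an assumption; check your project's configuration for the exact keys it reads):

```
OPENAI_API_KEY=<your_openai_api_key>
```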

Third, run the development server:

```
npm run dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
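
For orientation, an App Router page such as `app/page.tsx` is just a React server component; this stripped-down sketch (the markup is illustrative, not the project's actual page) renders on every visit to `/`:

```tsx
// A minimal app/page.tsx (illustrative markup, not the project's actual page)
export default function Home() {
  return (
    <main>
      <h1>Hello, LlamaIndex</h1>
    </main>
  );
}
```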

This project uses `next/font` to automatically optimize and load Inter, a custom Google Font.
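
With `next/font`, the font is typically loaded once in the root layout and applied via a generated class name; a minimal sketch (assuming the standard `next/font/google` import) looks like this:

```tsx
// app/layout.tsx - loading Inter with next/font (sketch)
import { Inter } from "next/font/google";

// next/font downloads Inter at build time and self-hosts it,
// so no runtime request goes to Google.
const inter = Inter({ subsets: ["latin"] });

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
```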

## Using Docker

1. Build an image for the Next.js app:

```
docker build -t <your_app_image_name> .
```
2. Generate embeddings:

Parse the data and generate the vector embeddings if the `./data` folder exists (otherwise, skip this step). The mounts let the container read the environment variables and configuration from your file system and store the vector database in your local `./cache` directory:

```
docker run \
  --rm \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/config:/app/config \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/cache:/app/cache \
  <your_app_image_name> \
  npm run generate
```
3. Start the app:

As before, mount `.env` and `config` from your file system and `./cache` for the vector database:

```
docker run \
  --rm \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/config:/app/config \
  -v $(pwd)/cache:/app/cache \
  -p 3000:3000 \
  <your_app_image_name>
```
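
If you prefer not to retype the mounts, the same run configuration can be captured in a Compose file; this is a minimal sketch (the service name and file are illustrative, not shipped with the project):

```yaml
# docker-compose.yml (sketch) - mirrors the docker run flags above
services:
  app:
    image: <your_app_image_name>
    volumes:
      - ./.env:/app/.env      # environment variables and configuration
      - ./config:/app/config
      - ./cache:/app/cache    # vector database storage
    ports:
      - "3000:3000"
```

Then start it with `docker compose up`.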

## Learn More

To learn more about LlamaIndex, take a look at the following resources:

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).

You can check out [the LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!