This is a [LlamaIndex](https://www.llamaindex.ai/) project using [Next.js](https://nextjs.org/), bootstrapped with `create-llama`.
This demo showcases the Llama 3 model running on [Replicate](https://replicate.com/). To get started, you'll need a `REPLICATE_API_TOKEN` from https://replicate.com/account/api-tokens.
The OpenAI embedding models are used to calculate embeddings. To use them, retrieve an `OPENAI_API_KEY` from https://platform.openai.com/api-keys.
After retrieving these tokens, set them both as environment variables or add them to the `.env` file. Then you're ready to start!
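For example, a minimal `.env` file might look like the following (the token values shown are placeholders; substitute the ones you retrieved above):

```
# Placeholder values - replace with your own tokens
REPLICATE_API_TOKEN=r8_xxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxx
```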
First, install the dependencies:

```
npm install
```

Second, generate the embeddings of the documents in the `./data` directory (if this folder exists; otherwise, skip this step):

```
npm run generate
```

Third, run the development server:

```
npm run dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load Inter, a custom Google Font.
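A typical `next/font` setup, as found in a generated `app/layout.tsx`, looks roughly like this (a sketch — the exact layout file in this project may differ):

```typescript
// app/layout.tsx (sketch): load Inter via next/font and apply it globally.
import { Inter } from "next/font/google";

// next/font downloads the font at build time and self-hosts it,
// so no runtime request is made to Google Fonts.
const inter = Inter({ subsets: ["latin"] });

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
```

Because the font is resolved at build time, swapping Inter for another Google Font only requires changing the import and the constructor call.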
To learn more about LlamaIndex, take a look at the following resources:

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).

You can check out [the LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!