This is a LlamaIndex project using Next.js, bootstrapped with `create-llama`.
It uses the Edge runtime from Next.js and Pinecone as the vector database.
As the default PDF reader from LlamaIndexTS does not support the Edge runtime, we're using LlamaParse to parse the PDF documents.
To use the application, you have to add the Pinecone, OpenAI, and LlamaParse configurations to the `.env` file. We provide a `.env.template` file to help you get started: copy `.env.template` to `.env` and fill in the values that you will retrieve in the following sections.
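Once filled in, the `.env` file will contain one line per variable described in the sections below. The values shown here are placeholders, not real keys:

```
PINECONE_INDEX_NAME=<your Pinecone index name>
PINECONE_ENVIRONMENT=<your Pinecone environment>
PINECONE_API_KEY=<your Pinecone API key>
OPENAI_API_KEY=<your OpenAI API key>
LLAMA_CLOUD_API_KEY=<your LlamaParse API key>
```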
First, you'll need a Pinecone account; go to https://www.pinecone.io to sign up for free. Then proceed as follows:
1. Create an index with 1536 dimensions (the vector size of the default OpenAI embeddings model `text-embedding-ada-002`).
2. Retrieve the `PINECONE_INDEX_NAME` and `PINECONE_ENVIRONMENT` values and set them in the `.env` file.
3. Retrieve the API key and set it as `PINECONE_API_KEY` in the `.env` file.
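With the three Pinecone variables in place, a quick sanity check can catch typos before you start the app. The helper below is a hypothetical snippet, not part of the generated project; the variable names are the ones this project's `.env` file uses:

```typescript
// Hypothetical helper (not part of the generated project): returns the
// names of the given environment variables that are not set.
function checkEnv(names: string[]): string[] {
  return names.filter((name) => !process.env[name]);
}

// The Pinecone variable names expected in this project's .env file.
const missing = checkEnv([
  "PINECONE_INDEX_NAME",
  "PINECONE_ENVIRONMENT",
  "PINECONE_API_KEY",
]);
if (missing.length > 0) {
  console.warn(`Missing environment variables: ${missing.join(", ")}`);
}
```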
Create an OpenAI API key from https://platform.openai.com/api-keys and set it as `OPENAI_API_KEY` in the `.env` file.
LlamaParse is an API created by LlamaIndex to parse files efficiently; for example, it's excellent at converting PDF tables into markdown. To use it, get an API key from https://cloud.llamaindex.ai and store it in the `.env` file under the `LLAMA_CLOUD_API_KEY` variable.
First, install the dependencies:

```shell
npm install
```
Second, generate the embeddings of the documents in the `./data` directory (if this folder exists; otherwise, skip this step):

```shell
npm run generate
```
Third, run the development server:

```shell
npm run dev
```
Open http://localhost:3000 with your browser to see the result.
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
This project uses `next/font` to automatically optimize and load Inter, a custom Google Font.
To learn more about LlamaIndex, take a look at the following resources:
- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).
You can check out the [LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!