This is a [LlamaIndex](https://www.llamaindex.ai/) project using [Next.js](https://nextjs.org/), bootstrapped with `create-llama`.
This example allows you to chat using the GPT-4 Vision model from OpenAI. You can upload files and ask the model to describe them.
To keep the example simple, we are not using a database or any other kind of storage for the images. Instead, they are sent to the model in base64 encoding. This is not very efficient and only works for small images like the ones in the `./data` folder.
We recommend implementing a server upload and sending just the URL of the image instead. A straightforward way is to use [Vercel Blob](https://vercel.com/storage/blob), a file storage service that is easy to integrate with Next.js.
First, install the dependencies:

```
npm install
```
Second, run the development server:

```
npm run dev
```
Open http://localhost:3000 with your browser to see the result.
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
This project uses `next/font` to automatically optimize and load Inter, a custom Google Font.
To learn more about LlamaIndex, take a look at the following resources:
- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).
You can check out the [LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!