Conversion Issues from Node.js to Edge Runtime #715
-
Hi there, first off thank you @marcusschiesser and @himself65 for implementing the Edge runtime in LlamaIndexTS.

I am currently facing challenges transitioning my project from the Node.js runtime to the Edge runtime. Initially, I uninstalled the llamaindex package, replaced it with @llamaindex/edge, and updated all imports as outlined in the "Using the Edge Runtime" section of the README. Unfortunately, that did not work. To narrow things down, I then created a new project with the latest version of create-llama on Node.js, hoping to convert it to the Edge runtime afterwards. That attempt also failed.

I would prefer not to use the nextjs-edge-llamaparse example, as it requires integration with Llama Cloud or Pinecone, which I want to avoid; I want to keep everything local. My primary concern is retaining the behavior from the Node.js runtime, where the vector store is generated in the cache folder, while still migrating to the Edge runtime.

Here is the link to my repository, which currently throws errors when trying to generate the vector indexes: https://github.com/nikolailehbrink/llamaindex-edge-test

Can anyone provide insights or guidance on how to keep the local cache functionality with the Edge runtime without using Llama Cloud or Pinecone? Thank you for your help!
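For context, this is roughly what the generate step looks like on the Node.js runtime. It is a minimal sketch assuming the usual llamaindex exports (`SimpleDirectoryReader`, `storageContextFromDefaults`, `VectorStoreIndex`); the `DATA_DIR` and `CACHE_DIR` paths are illustrative:

```ts
import {
  SimpleDirectoryReader,
  storageContextFromDefaults,
  VectorStoreIndex,
} from "llamaindex";

// Illustrative paths; adjust to your project layout.
const DATA_DIR = "./data";
const CACHE_DIR = "./cache";

async function generateDatasource() {
  // Read the source documents from the local data folder.
  const documents = await new SimpleDirectoryReader().loadData({
    directoryPath: DATA_DIR,
  });

  // Persist the vector index into the local cache folder --
  // this is the filesystem access that the Edge runtime does not provide.
  const storageContext = await storageContextFromDefaults({
    persistDir: CACHE_DIR,
  });
  await VectorStoreIndex.fromDocuments(documents, { storageContext });
}

generateDatasource().catch(console.error);
```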
-
@nikolailehbrink I'm afraid this isn't possible. The issue is that the Edge runtime doesn't support local filesystem access (for fun, I even tried local URLs with `fetch` - it doesn't work). So, you'll need some kind of network storage to store your data. That's why I added Pinecone to the https://github.com/run-llama/create_llama_projects/tree/main/nextjs-edge-llamaparse example and that's why `create-llama` doesn't generate an Edge runtime example.
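In case it helps others landing here, a minimal sketch of the network-storage approach the nextjs-edge-llamaparse example takes, assuming your version of @llamaindex/edge exposes `PineconeVectorStore` and `VectorStoreIndex.fromVectorStore` the same way the Node package does (check the exports and configuration options of your installed version; the route shape is a standard Next.js App Router handler):

```ts
import { PineconeVectorStore, VectorStoreIndex } from "@llamaindex/edge";

// Opt this Next.js API route into the Edge runtime.
export const runtime = "edge";

export async function POST(request: Request) {
  const { query } = await request.json();

  // PineconeVectorStore is typically configured via environment variables
  // (e.g. PINECONE_API_KEY); the exact options depend on your package version.
  const vectorStore = new PineconeVectorStore();

  // Build the index on top of the remote store instead of a local persistDir,
  // so no filesystem access is needed at request time.
  const index = await VectorStoreIndex.fromVectorStore(vectorStore);
  const queryEngine = index.asQueryEngine();
  const { response } = await queryEngine.query({ query });

  return Response.json({ response });
}
```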