Replies: 4 comments
-
Answer generated by a 🤖

I understand that you're encountering an AuthenticationError when the OpenAI API is called to generate embeddings. In your code, you've set the OpenAI API key, but the error indicates that the key is not being picked up when the embedding request is made.

To address this issue, I suggest setting the key directly on the openai module:

openai.api_key = 'xxx'

Please replace 'xxx' with your actual OpenAI API key. If you're using custom embeddings, ensure that the model is correctly initialized and the embeddings are correctly generated.

In case you're using Azure OpenAI, you need to set the following configuration:

openai.api_type = "azure"
openai.api_base = "https://<foo-bar>.openai.azure.com"
openai.api_version = "2022-12-01"
os.environ["OPENAI_API_KEY"] = "youcannottellanyone"
openai.api_key = os.getenv("OPENAI_API_KEY")

And initialize the LLM like this:

llm = AzureOpenAI(
    engine="<foo-bar-deployment>",
    temperature=0,
    openai_api_version=openai.api_version,
    model_kwargs={
        "api_key": openai.api_key,
        "api_base": openai.api_base,
        "api_type": openai.api_type,
        "api_version": openai.api_version,
    },
)

Please replace the <foo-bar> placeholders and the API key with the values from your own Azure OpenAI resource. This solution should help resolve the AuthenticationError.
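As an illustrative aside (not from the original answer), a fail-fast check makes this kind of failure easier to diagnose: read the key from the environment at startup and abort early if it is missing, instead of letting the embedding call fail inside the retry loop. A minimal sketch:

import os
import openai

# Assumes the key was exported beforehand, e.g. `export OPENAI_API_KEY=...` in your shell.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set; embedding calls will fail with an AuthenticationError")
openai.api_key = api_key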
This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
-
The way llama-index is implemented forces you to set the environment variable before importing VectorStoreIndex. So, put os.environ["OPENAI_API_KEY"] = "xxx" at the very top of the file, before any llama_index imports.
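A minimal sketch of that ordering (the data directory and key value are placeholders):

import os

# Set the key BEFORE any llama_index import; otherwise, as described above, it may not be picked up.
os.environ["OPENAI_API_KEY"] = "xxx"

from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)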
-
If this is the case, then the documentation needs to be updated here.
-
To resolve the AuthenticationError, set the OpenAI API key as an environment variable at the very top of your script, before importing anything from llama_index:
import os
os.environ["OPENAI_API_KEY"] = 'xxx'  # set the key before any llama_index import

from flask import Flask, render_template, request, jsonify
from llama_index import SimpleDirectoryReader, VectorStoreIndex, StorageContext
from llama_index.vector_stores import PineconeVectorStore
import pinecone
import re
import openai
import time

# Connect to the existing Pinecone index and wrap it as a LlamaIndex vector store
pinecone.init(api_key="xxxx", environment="xxx")
index = pinecone.Index("index-name")
vector_store = PineconeVectorStore(pinecone_index=index)
loaded_index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

query_engine = loaded_index.as_query_engine(similarity_top_k=3)
response = query_engine.query("walking,standing")
In your current script, the key is only assigned at the bottom of the file:

import openai
openai.api_key = 'xxx'

Replace that with the os.environ assignment at the top, as shown above, and replace 'xxx' with your actual OpenAI API key.

If you prefer to use a local model instead of OpenAI for embeddings, you can use a Hugging Face model. Here's an example:

from langchain.embeddings import HuggingFaceEmbeddings
from llama_index import GPTSimpleVectorIndex, LangchainEmbedding

# Wrap the local Hugging Face model so LlamaIndex can use it as the embedding model
embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name="your-local-model"))

# Create the index with the local embeddings model (documents loaded elsewhere)
index = GPTSimpleVectorIndex(
    documents, embed_model=embed_model
)

Replace "your-local-model" with the Hugging Face model you want to use. For using local models, you may need to install additional dependencies:

pip install llama-index-embeddings-huggingface
pip install transformers optimum[exporters]
pip install llama-index-embeddings-huggingface-optimum

Example code for using a Hugging Face model:

from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core import Settings

Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5"
)

Example code for using Hugging Face Optimum ONNX embeddings:

from llama_index.embeddings.huggingface_optimum import OptimumEmbedding
OptimumEmbedding.create_and_save_optimum_model(
"BAAI/bge-small-en-v1.5", "./bge_onnx"
)
Settings.embed_model = OptimumEmbedding(folder_name="./bge_onnx")

For more details on embedding options, refer to the LlamaIndex documentation [2][3][4].
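An illustrative follow-up (not part of the original answer): once Settings.embed_model points at a local model, the index embeddings are computed without calling OpenAI. A minimal sketch assuming the llama_index.core layout used above and a ./data folder of documents; note that the query-time LLM (Settings.llm) is configured separately and still defaults to OpenAI:

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Embeddings come from the local model configured via Settings.embed_model above.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Retrieval alone does not call an LLM, so no OpenAI key is needed for this step.
retriever = index.as_retriever(similarity_top_k=3)
for result in retriever.retrieve("walking,standing"):
    print(result.score, result.node.get_content()[:80])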
-
Question Validation
Question
I'm encountering an AuthenticationError when trying to generate embeddings using the OpenAI API in my application that uses the llama_index library. Here's a snippet of the code where the issue occurs:
And the error message I get is:
The error is raised when trying to create an embedding using OpenAI's service, but it fails due to an authentication issue. It seems like the tenacity library is retrying the operation, but it fails every time with the same error.
Any help on how to resolve this would be much appreciated.