When I run the code below, I get this error:
Traceback (most recent call last):
File "/Users/navdeepparmar/opt/anaconda3/envs/dev/lib/python3.10/site-packages/llama_index/core/embeddings/utils.py", line 59, in resolve_embed_model
validate_openai_api_key(embed_model.api_key)
File "/Users/navdeepparmar/opt/anaconda3/envs/dev/lib/python3.10/site-packages/llama_index/embeddings/openai/utils.py", line 104, in validate_openai_api_key
raise ValueError(MISSING_API_KEY_ERROR_MESSAGE)
ValueError: No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/navdeepparmar/Documents/python/Llama/Llamaindex.py", line 22, in <module>
index = VectorStoreIndex.from_documents(documents)
File "/Users/navdeepparmar/opt/anaconda3/envs/dev/lib/python3.10/site-packages/llama_index/core/indices/base.py", line 145, in from_documents
return cls(
File "/Users/navdeepparmar/opt/anaconda3/envs/dev/lib/python3.10/site-packages/llama_index/core/indices/vector_store/base.py", line 71, in __init__
else embed_model_from_settings_or_context(Settings, service_context)
File "/Users/navdeepparmar/opt/anaconda3/envs/dev/lib/python3.10/site-packages/llama_index/core/settings.py", line 274, in embed_model_from_settings_or_context
return settings.embed_model
File "/Users/navdeepparmar/opt/anaconda3/envs/dev/lib/python3.10/site-packages/llama_index/core/settings.py", line 67, in embed_model
self._embed_model = resolve_embed_model("default")
File "/Users/navdeepparmar/opt/anaconda3/envs/dev/lib/python3.10/site-packages/llama_index/core/embeddings/utils.py", line 66, in resolve_embed_model
raise ValueError(
ValueError:
Could not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
Consider using embed_model='local'.
Visit our documentation for more embedding options: https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#modules
This is my code:
import os
GOOGLE_API_KEY = "abcd" # add your GOOGLE API key here
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
from llama_index.llms.gemini import Gemini
Settings.llm = Gemini(model="models/gemini-pro")
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("which fruit is Green in color?")
print(response)
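From the traceback, it looks like VectorStoreIndex.from_documents falls back to the default OpenAI embedding model, because only Settings.llm is set and no embed_model is configured. A sketch of what I suspect is needed, assuming the llama-index-embeddings-gemini package is installed (the model name models/embedding-001 is my assumption, not from the original code):

```python
import os
os.environ["GOOGLE_API_KEY"] = "abcd"  # placeholder key, as in the code above

from llama_index.core import Settings
from llama_index.embeddings.gemini import GeminiEmbedding

# Explicitly set a Gemini embedding model so llama-index does not
# fall back to the default OpenAI embeddings (which needs OPENAI_API_KEY).
Settings.embed_model = GeminiEmbedding(model_name="models/embedding-001")
```

Alternatively, the error message itself suggests embed_model='local', which would use a local HuggingFace embedding model instead of any hosted API. Is one of these the right way to do it?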