It looks like llama-index is defaulting to OpenAI.
I am using this tutorial: https://www.youtube.com/watch?v=i8n2Se8PAXg&list=PLBSCvBlTOLa-vUt7mCECzaJjJEjos6K-l&index=7
I also found this related post: Why does llama-index still require an OpenAI key when using Hugging Face local embedding model?
Still, I am not able to solve my problem.
As far as I can tell, the model name is already defined in llm. I thought I needed to use a ServiceContext, but without success; it still asks me to provide the model name (roughly what I tried is sketched after the error below).
from llama_index.core import SimpleDirectoryReader

# load the CSV into llama-index documents
reader = SimpleDirectoryReader(input_files=["acquisition2.csv"])
documents = reader.load_data()
documents
import os
from huggingface_hub import login

# authenticate to the Hugging Face Hub
HF_TOKEN = os.environ["HF_TOKEN"] = "XXXXXXX"
login(token=HF_TOKEN)
# create the LLM via the Hugging Face Inference API
from llama_index.llms.huggingface import HuggingFaceInferenceAPI

llm = HuggingFaceInferenceAPI(
    model_name="mistralai/Mixtral-8x7B-Instruct-v0.1", token=HF_TOKEN
)
llm
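The llm object itself is created without complaints. For reference, a quick sanity check like the following sketch (not part of my original notebook, and assuming the Inference API actually serves this model) would call it directly:

# hypothetical sanity check: call the LLM directly, outside of any index
print(llm.complete("Say hello"))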
from llama_index.core import VectorStoreIndex

# build the index with a local (Hugging Face) embedding model
index = VectorStoreIndex.from_documents(documents, embed_model="local")
ValueError: The model_name argument must be provided.
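This is roughly the ServiceContext variant I mentioned above (ServiceContext is the legacy configuration object and is deprecated in recent llama-index releases, so I am not sure it is even the right tool in my version):

# roughly my ServiceContext attempt; "local" is supposed to resolve
# to a default Hugging Face embedding model
from llama_index.core import ServiceContext, VectorStoreIndex

service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")
index = VectorStoreIndex.from_documents(documents, service_context=service_context)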
Passing model_name explicitly as a parameter still throws an error; my attempt is sketched below.
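This is roughly how I passed the model name explicitly (a sketch of my attempt; it assumes llama-index 0.10+ with the llama-index-embeddings-huggingface package installed, and BAAI/bge-small-en-v1.5 is just the embedding model I picked):

# sketch: configure the LLM and an explicit embedding model globally,
# so nothing should fall back to OpenAI
# requires: pip install llama-index-embeddings-huggingface
from llama_index.core import Settings, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.llm = llm
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

index = VectorStoreIndex.from_documents(documents)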