Does anyone know how to resolve this issue, or whether it's even possible to work around it? I was working in a notebook where I used llm = OpenAI(model="gpt-4"), then I tried llama3-70b-8192 from Groq and ran into this problem.
Here’s the code snippet:
from llama_index.llms.groq import Groq
from llama_parse import LlamaParse
from llama_index.core.node_parser import MarkdownElementNodeParser

llm = Groq(api_key=GROQ_API_KEY, model="llama3-70b-8192")

# Parse the PDF into markdown with LlamaParse
pdf_file_name = '/kaggle/input/pro3-u2-teo-programacionladocliente/TEO_U2_ProgramacionDelLadoDelCliente_clean.pdf'
documents = LlamaParse(api_key=LLAMA_CLOUD_API_KEY, result_type="markdown").load_data(pdf_file_name)

# This is the call that raises the error
node_parser = MarkdownElementNodeParser(llm=llm, num_workers=8)
nodes = node_parser.get_nodes_from_documents(documents)
And this is the error message:
ValueError: The model name 'llama3-70b-8192' does not support function calling API.
Thanks in advance!
I tried to use Groq with MarkdownElementNodeParser and encountered this error. I also tried Gemini, which worked.
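For context, the error is raised because MarkdownElementNodeParser uses the LLM's function-calling API to extract table elements, and llama3-70b-8192 does not support it. A possible workaround, if LLM-based table extraction is not strictly needed, is the plain MarkdownNodeParser, which splits the parsed markdown by headers and requires no LLM at all (a sketch, assuming `documents` is the LlamaParse output from above):

```python
from llama_index.core.node_parser import MarkdownNodeParser

# MarkdownNodeParser chunks markdown by header structure and never
# calls an LLM, so the function-calling check is not triggered.
node_parser = MarkdownNodeParser()
nodes = node_parser.get_nodes_from_documents(documents)
```

This loses the element-level table extraction that MarkdownElementNodeParser provides, but it avoids the function-calling requirement entirely.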