Is there any way I can make the LLM actually execute a function, instead of simply returning JSON with the function name and parameters?
I want the LLM to select and actually execute its tools, then feed the output from those tools into other tools to arrive at a final answer. Is this possible? I want to use Ollama, Llama3, and LangGraph.
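To make concrete what I mean by "actually execute": here is a rough plain-Python sketch (the tool names and JSON shape are made up, not from any specific library) of the dispatch step I want the framework to handle for me, including chaining one tool's output into the next call:

```python
import json

# Placeholder tools -- stand-ins for whatever the agent would call.
def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

TOOLS = {"add": add, "multiply": multiply}

def execute_tool_call(call_json):
    """Parse the JSON the model emits and actually run the named function."""
    call = json.loads(call_json)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# Today the model just returns something like this JSON and stops:
first = execute_tool_call('{"name": "add", "arguments": {"a": 2, "b": 3}}')

# What I want: the result (5) is fed into the next tool call automatically.
second = execute_tool_call(
    json.dumps({"name": "multiply", "arguments": {"a": first, "b": 4}})
)
print(second)  # 20
```

Right now I'm writing this dispatch loop by hand; I'm asking whether LangGraph can run it for me with a local Llama3 model served by Ollama.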