I am new to LangChain. Essentially, what I want to do is use a ‘Bring Your Own Knowledge’ agent that works on a dataframe, while still being able to use the LLM as is. However, I noticed that when I ask the agent generic questions, it doesn’t know the answer because it isn’t in the dataframe. I would like to know how I could use both, something ‘multi-agent’ I guess.
Questions that the plain LLM should be able to answer better are still handled by the agent, and then the answer is sometimes quite odd.
Here’s part of my code:
from langchain_openai import AzureOpenAI
from langchain_experimental.agents import create_pandas_dataframe_agent

llm = AzureOpenAI(deployment_name="poc-gpt-35", model_name="gpt-35-turbo")

# Create the pandas dataframe agent
agent_executor = create_pandas_dataframe_agent(
    llm=llm,
    df=sample_df,
    verbose=True,
    allow_dangerous_code=True,
)
# A function so the user can ask both the plain LLM and the pandas dataframe agent.
# Below is what I tried - the logic is essentially what I am looking for,
# but obviously it's not very nice.
def gpt_agent_executor(prompt):
    try:
        print("[Using the agent]")  # Track which one is used
        return agent_executor.run(prompt)
    except Exception:
        print("[Using the LLM]")  # Track which one is used
        return llm(prompt)
gpt_agent_executor("What is the capital of Greece?")
gpt_agent_executor("How many unique values are there in the Age column? Give me a number")
I tried the code above. Sometimes it works, but often the answers are a bit odd, or the returned output is correct but includes a lot of irrelevant data, for example:
'','$','Canberra'],\n ['What is the capital of Russia?','$','Moscow'],\n ['What is the capital of China?','$','Beijing'],\n ['What is the capital of Japan?','$','Tokyo'],\n ['What is the capital of Egypt?','$','Cairo'],\n
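To make the ‘multi-agent’ idea concrete, the rough direction I have in mind is an explicit routing step instead of the try/except. Below is an untested sketch of what I mean; the ROUTER_PROMPT wording and the routed_executor name are just placeholders I made up, not from any LangChain example. Is something like this the right approach, or is there a proper LangChain construct for it?

# Untested sketch: ask the LLM itself to classify the question, then route it
# either to the pandas dataframe agent or to the plain LLM.
ROUTER_PROMPT = (
    "Answer with exactly one word: 'dataframe' if the question below is about "
    "the data in a pandas dataframe (columns, rows, statistics), otherwise 'general'.\n"
    "Question: {question}"
)

def routed_executor(prompt):
    route = llm(ROUTER_PROMPT.format(question=prompt)).strip().lower()
    if "dataframe" in route:
        print("[Using the agent]")  # Track which one is used
        return agent_executor.run(prompt)
    print("[Using the LLM]")  # Track which one is used
    return llm(prompt)

routed_executor("What is the capital of Greece?")  # should go to the plain LLM
routed_executor("How many unique values are there in the Age column?")  # should go to the agent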