I’m building a document QA application using the LangChain framework and Chainlit for the UI. My application uses the create_csv_agent agent to process CSV files and generate responses. A critical requirement is maintaining a consistent memory state (chat history/context) across multiple interactions within a single session. However, I’ve encountered an issue where the memory state does not persist as expected between calls to the agent.
Below is a simplified excerpt from my code where I initialize and invoke the create_csv_agent:
memory: ConversationBufferMemory = cl.user_session.get("memory")

agent_executor = create_csv_agent(
    self.model,
    file,
    memory=memory,  # tried passing memory at construction time...
    handle_parsing_errors=True,
    verbose=True,
)

response = agent_executor.invoke(
    user_message.content,
    memory=memory,  # ...and at invocation time
)
ai_response.content = response.get("output", "")
I retrieve a ConversationBufferMemory instance from the user session and pass it to create_csv_agent. Despite this, the memory state does not appear to carry over on subsequent calls to get_message: each new message is answered as if there were no prior conversation.
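For what it's worth, here is a framework-free sketch (plain Python, no LangChain or Chainlit; all names are my own stand-ins) of the behaviour I suspect is happening: if the buffer the agent actually consults is rebuilt on every message instead of being the single object stored in the session, history is silently lost even though a memory object is being "passed in" each time.

```python
# Toy stand-in: "memory" is a list of (user, ai) turns kept in a session dict.
session = {}  # plays the role of cl.user_session


def get_memory(session):
    # Reuse one buffer per session, analogous to cl.user_session.get("memory")
    return session.setdefault("memory", [])


def run_agent(memory, user_message):
    # A toy agent that can only "see" the history it is actually handed
    reply = f"seen {len(memory)} previous turn(s)"
    memory.append((user_message, reply))
    return reply


# Reusing the session-scoped buffer: history accumulates across calls
first = run_agent(get_memory(session), "hello")   # seen 0 previous turn(s)
second = run_agent(get_memory(session), "again")  # seen 1 previous turn(s)

# Rebuilding the buffer on each call (what I fear my code effectively does):
# the agent starts from an empty history every time
third = run_agent([], "hello")                    # seen 0 previous turn(s)
```

My suspicion is that something equivalent to the last call is happening inside my app, since I construct a fresh agent with create_csv_agent on every message.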
Questions:
Is there a specific reason why create_csv_agent does not retain the memory state between calls as I anticipate?
Are there any recommended approaches or modifications I can apply to ensure that the memory state persists across multiple invocations of the agent within the same session?
I appreciate any insights or suggestions on managing memory persistence with create_csv_agent in a LangChain + Chainlit application. Understanding how to correctly initialize or reuse the agent and its associated memory buffer would be invaluable for achieving the desired functionality in my app.