I had an LLMChain that I now want to migrate to LCEL to get proper streaming support.
My original chain was:
llm_chain = LLMChain(
    llm=llm_chat,
    prompt=prompt,
    output_key="answer",
    verbose=False,
)
My updated chain is:
retrieved_chat_history = ChatMessageHistory(messages=[])  # example
conversation_memory = ConversationBufferMemory(
    input_key="question",
    output_key="answer",
    memory_key="chat_history",
    return_messages=True,
    chat_memory=retrieved_chat_history,
)
lcel_chain = (
    RunnablePassthrough.assign(
        chat_history=lambda x: conversation_memory.load_memory_variables(x)["chat_history"]
    )
    | prompt
    | llm_chat
    | {"answer": RunnablePassthrough()}
)
The problem with the new LCEL chain is that my ConversationBufferMemory history object is not updated after each invocation. How can I fix this?
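To make "not updated" concrete: the chain only ever reads from memory via load_memory_variables; nothing in the pipeline writes back after the LLM answers. A dependency-free sketch of what I mean (MockBufferMemory is a toy stand-in for ConversationBufferMemory, not the real LangChain class):

```python
class MockBufferMemory:
    """Toy stand-in for ConversationBufferMemory (not the real API)."""

    def __init__(self):
        self.messages = []

    def load_memory_variables(self, inputs):
        # Read-only: returns the current history, never mutates it.
        return {"chat_history": list(self.messages)}

    def save_context(self, inputs, outputs):
        # This is the write step that something has to call after each turn.
        self.messages.append(("human", inputs["question"]))
        self.messages.append(("ai", outputs["answer"]))


memory = MockBufferMemory()

# The LCEL chain only does the equivalent of this read:
_ = memory.load_memory_variables({"question": "hi"})["chat_history"]
print(len(memory.messages))  # 0 -- nothing was saved, which is my problem

# The old LLMChain-with-memory called save_context for me; in LCEL
# nothing does, unless I wire it in explicitly:
memory.save_context({"question": "hi"}, {"answer": "hello!"})
print(len(memory.messages))  # 2
```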
Apart from the snippet above, I also tried the following, but it does not work either:
from langchain_core.runnables.history import RunnableWithMessageHistory

base_chain = prompt | llm_chat

# Create a function to get an empty message history
def get_session_history(session_id):
    return ChatMessageHistory()

# Wrap the base chain with RunnableWithMessageHistory
chain_with_memory = RunnableWithMessageHistory(
    base_chain,
    get_session_history,
    input_messages_key="question",
    history_messages_key="chat_history",
    output_messages_key="answer",
)

session_id = "user_123"  # Use a unique identifier for each conversation
result = chain_with_memory.invoke(
    {"question": "Hello, my name is XYZ"},
    config={"configurable": {"session_id": session_id}},
)
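My suspicion is that get_session_history above returns a brand-new, empty ChatMessageHistory on every call, so every invocation starts from scratch. A plain-Python sketch of that failure mode, with a list standing in for the history object (the _store caching pattern is my guess at a fix, not something I have verified against LangChain):

```python
# Broken: a fresh empty history is created on every lookup, so anything
# appended to it is discarded as soon as the next lookup happens.
def get_session_history_broken(session_id):
    return []

# Guessed fix: keep one history object per session_id in a module-level
# store and always hand back the same object.
_store = {}

def get_session_history_cached(session_id):
    if session_id not in _store:
        _store[session_id] = []
    return _store[session_id]

# Simulate two invocations, each appending one message to the history.
for turn in ("Hello, my name is XYZ", "What is my name?"):
    get_session_history_broken("user_123").append(turn)
    get_session_history_cached("user_123").append(turn)

print(len(get_session_history_broken("user_123")))  # 0 -- history lost
print(len(get_session_history_cached("user_123")))  # 2 -- history kept
```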