from operator import itemgetter

from langchain_core.messages import HumanMessage
from langchain_core.runnables import RunnablePassthrough

# trimmer, prompt, model, and the messages list are defined earlier in the tutorial
chain = (
    RunnablePassthrough.assign(messages=itemgetter("messages") | trimmer)
    | prompt
    | model
)

response = chain.invoke(
    {
        "messages": messages + [HumanMessage(content="what's my name?")],
        "language": "English",
    }
)
The above code is from the LangChain documentation for creating a chatbot with memory. They use trim_messages to cut the conversation down so it fits the model's context window, which is understandable.
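For reference, the pieces the snippet relies on (trimmer, prompt, model, and the initial messages list) come from earlier in the same tutorial. A minimal sketch of what they might look like; the model choice, token limit, and example messages here are my assumptions, not the exact tutorial values:

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, trim_messages
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")  # assumption: any chat model works here

# Keep only the most recent messages that fit under the token budget.
trimmer = trim_messages(
    max_tokens=65,
    strategy="last",
    token_counter=model,
    include_system=True,
    allow_partial=False,
    start_on="human",
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant. Answer in {language}."),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

# The running conversation the tutorial builds up before the snippet above.
messages = [
    SystemMessage(content="you're a good assistant"),
    HumanMessage(content="hi! I'm bob"),
    AIMessage(content="hi!"),
]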
But then they wrap this chain with RunnableWithMessageHistory, which is the part that doesn't make sense to me:
from langchain_core.runnables.history import RunnableWithMessageHistory

with_message_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="messages",
)

config = {"configurable": {"session_id": "abc20"}}

response = with_message_history.invoke(
    {
        "messages": messages + [HumanMessage(content="whats my name?")],
        "language": "English",
    },
    config=config,
)

response.content
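For context, get_session_history in the tutorial is just a lookup into a per-session chat history store. A sketch of the usual pattern, assuming a plain in-memory dict keyed by session_id:

from langchain_core.chat_history import BaseChatMessageHistory, InMemoryChatMessageHistory

store = {}

def get_session_history(session_id: str) -> BaseChatMessageHistory:
    # One chat history object per session_id, kept in memory.
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]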
Given that RunnableWithMessageHistory is supposed to automatically log and fetch previously sent messages, why does the dictionary passed to the invoke method still have a "messages" key? I'm very confused about what's going on here. Please explain how the second block of code works. If it's wrong, how can I integrate trim_messages so that messages are fetched from history automatically?
I'm just confused and don't even know what to try.