I’m working with LangChain to generate responses from user inputs. My initial prompt takes several variables, which I use to generate a response. The problem arises when I ask a follow-up question based on that response: the chain raises an error because the prompt template expects the initial variables again.
How can I manage the context so that follow-up questions can be asked without needing to re-input the initial variables?
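Concretely, with the chain defined in the code below, this is the shape of the failure (the variable names here are placeholders, not my real ones):

```python
# Hypothetical variable names standing in for my real template variables.
first = chain.invoke({"product": "...", "audience": "..."})

# Follow-up with only the new question: the chain re-renders the full
# prompt template, so LangChain raises something like
# KeyError: "Input to ChatPromptTemplate is missing variables {'product', 'audience'}"
followup = chain.invoke({"question": "Can you expand on that?"})
```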
Here is my code:
```python
import os

from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI


class Chat:
    def __init__(self):
        self.llm = ChatOpenAI(
            api_key=os.getenv('OPENAI_API_KEY'),
            model='gpt-4o',
            temperature=0.3,
        )
        # TargetingType is my Pydantic model for the structured output.
        self.parser = JsonOutputParser(pydantic_object=TargetingType)
        self.prompt_template = ChatPromptTemplate.from_messages([
            ("system", SYSTEM_PROMPT_ONE),
            ("human", USER_PROMPT),
        ])

    def get_chain(self):
        # Prompt -> model -> JSON parser, composed with LCEL.
        return self.prompt_template | self.llm | self.parser
```
And in my main file:
```python
chain = self.chat.get_chain()
response = chain.invoke({
    # input variables
})
# do some work on the response
# After everything is done, I want to send a new request to the model
# that builds on this work but is different from the initial request.
```
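For reference, this is roughly the shape I imagine the follow-up taking: a second prompt that carries the first exchange as chat history instead of re-declaring the original variables. This is only a sketch of what I think I need (`MessagesPlaceholder` and the message classes are from `langchain_core`; `rendered_first_request` is a placeholder for however the first human message was rendered), not something I have working:

```python
import json

from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# A second prompt whose only template variable is the follow-up question;
# the first exchange travels as pre-built messages, so the initial
# variables are never declared again.
followup_prompt = ChatPromptTemplate.from_messages([
    ("system", SYSTEM_PROMPT_ONE),
    MessagesPlaceholder("history"),
    ("human", "{question}"),
])
followup_chain = followup_prompt | self.chat.llm

followup = followup_chain.invoke({
    "history": [
        HumanMessage(content=rendered_first_request),  # placeholder: the first request as sent
        AIMessage(content=json.dumps(response)),       # the parsed first response, re-serialized
    ],
    "question": "my follow-up based on the processed response",
})
```

Is this the idiomatic way to handle it, or does LangChain have a built-in mechanism for carrying context like this?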