Below is the code I use to get a response from a RAG LLM:
def get_res(prompt, text1, text2, int1, int2):
    # Append the optional text fields to the prompt
    if len(text1) > 1:
        prompt = prompt + "\n text1: " + text1
    if len(text2) > 1:
        prompt = prompt + "\n text2: " + text2
    # Build the QA chain; it performs retrieval internally, so no separate
    # retriever.get_relevant_documents() call is needed.
    # return_source_documents=True keeps the retrieved context in the output.
    qa_chain = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=retriever,
        return_source_documents=True,
    )
    llm_response = qa_chain(prompt)
    return {'response': llm_response['result'], 'int1': int1, 'int2': int2}
For every call made to the retrieval endpoint, I want to capture the prompt, the response, and the retrieved context documents.
Could you suggest an approach (with an example) for extracting these values?
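One possible sketch: since the chain is built with return_source_documents=True, its output dict should already contain both the answer (under "result") and the retrieved chunks (under "source_documents"), so a small logging helper can pull all three values out of each call. The helper below (capture_call is a hypothetical name, not part of LangChain) assumes that output shape; the mock object only stands in for a real chain response so the example is self-contained.

```python
import json
from types import SimpleNamespace

def capture_call(prompt, llm_response, log_path=None):
    """Extract prompt, answer, and retrieved context from a RetrievalQA-style
    response dict (shape assumed from return_source_documents=True)."""
    record = {
        "prompt": prompt,
        "response": llm_response["result"],
        # each source document carries the retrieved chunk and its metadata
        "context": [
            {"content": d.page_content, "metadata": d.metadata}
            for d in llm_response["source_documents"]
        ],
    }
    # optionally append each record as one JSON line for later analysis
    if log_path:
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
    return record

# mock of a chain output, for demonstration only
mock = {
    "result": "answer text",
    "source_documents": [
        SimpleNamespace(page_content="chunk A", metadata={"source": "doc1"})
    ],
}
record = capture_call("my prompt", mock)
```

Inside get_res you would call this right after llm_response = qa_chain(prompt), before returning.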