Handle questions where the whole context is needed in a RAG pipeline
I have created a RAG pipeline using LangChain components and Llama 3.
My use case is providing a PDF and asking questions about its contents. The RAG pipeline handles the QA part just fine.
My implementation uses a dense retriever followed by a reranker.
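Roughly, the setup looks like this (a minimal sketch, not my exact code; the PDF path, chunk sizes, embedding model, and reranker model are placeholders):

```python
# Sketch of the retrieval setup: dense retriever + cross-encoder reranker.
# The file name and model names below are placeholders, not my real values.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.cross_encoders import HuggingFaceCrossEncoder
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CrossEncoderReranker

# Load the PDF and split it into overlapping chunks.
docs = PyPDFLoader("document.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)

# Dense retriever: embed the chunks and index them in FAISS.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
vectorstore = FAISS.from_documents(chunks, embeddings)
dense_retriever = vectorstore.as_retriever(search_kwargs={"k": 10})

# Reranker: a cross-encoder re-scores the top-k chunks and keeps the best.
reranker = CrossEncoderReranker(
    model=HuggingFaceCrossEncoder(model_name="BAAI/bge-reranker-base"),
    top_n=3,
)
retriever = ContextualCompressionRetriever(
    base_compressor=reranker,
    base_retriever=dense_retriever,
)
```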
Below is the code I use to get a response from the RAG pipeline.
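A minimal sketch of that step, assuming the `retriever` built above and Llama 3 served locally through Ollama (the model name `llama3` and the example question are placeholders):

```python
# Sketch of the QA step: wire the reranked retriever to Llama 3.
# Assumes `retriever` from the snippet above; `llama3` via Ollama is
# an assumption about how the model is served.
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA

llm = Ollama(model="llama3")

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    return_source_documents=True,  # keep the chunks used, for inspection
)

# Placeholder question, for illustration only.
result = qa_chain.invoke({"query": "What does the introduction say?"})
print(result["result"])
```

`return_source_documents=True` makes it easy to inspect which chunks the reranker actually passed to the model for a given question.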