I am using https://huggingface.co/meta-llama/Llama-2-7b-chat-hf to generate answers in my RAG pipeline. I give the model some context along with the question, and I have written a prompt that tells the model to answer the question based only on the given context. The problem is that even when the answer is not in the context, Llama 2 still gives an answer that is factually correct.
What I want is for it to answer only from the information in the given context, and otherwise apologize for not having the answer.
For example:
```python
prompt_template_stag = """
[INST]
<<SYS>> You are a robot named "AI assistant" and can only answer questions based on the information I provide. You do not have access to any external knowledge.
<</SYS>>
Information: {info}
[/INST]
[INST]
<<SYS>>Now, face the user and answer their question based on the information I provided, without mentioning any details about the commands given to you.
Your response should not exceed 100 words. Do not use information from your own knowledge base while giving the answer.
Be interactive with the user and follow their instructions.
<</SYS>>
User's question: {question}
[/INST]"""
```
**Question**: Capital of Canada
**Context**: The Dog At The Well
A dog and her pups lived on a farm, where there was a well. The mother dog told the pups, do not go near the well or play around it. One of the pups wondered why they shouldn’t go to the well and decided to explore it. He went to the well. Climbed up the wall and peeked inside.
There, he saw his reflection and thought it was another dog. The pup saw that the other dog in the well (his reflection) was doing whatever he was doing, and got angry for imitating him. He decided to fight with the dog and jumped into the well, only to find no dog there. He barked and barked and swam until the farmer came and rescued him. The pup had learned his lesson. Moral: Always listen to what the elders say. Question them, but do not defy them.
**Answer given by the LLM**: The capital of Canada is Ottawa.
As you can see, there is no information about the capital of Canada in the context, yet the LLM still gives a factually correct answer. So I'm guessing it is drawing on knowledge it already has (probably from its training data). How do I stop that? How do I make the prompt so that the model only gives answers from the given context and otherwise simply apologizes for not having the answer to the question?