I am trying to build a chatbot using LangChain. The chatbot supports several backends:
- Ollama
- Hugging Face
- llama.cpp
- OpenAI
In a YAML file I can configure the backend (aka provider) and the model. For Ollama I use the `Ollama` class from the `langchain_community.llms` package; for llama.cpp I use the `Llama` class from the `llama_cpp` package (llama-cpp-python).
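A minimal sketch of my setup (the YAML keys `provider` and `model` are my own naming, and the file contents are just an example):

```python
import yaml
from langchain_community.llms import Ollama
from llama_cpp import Llama

# config.yaml (hypothetical), e.g.:
#   provider: ollama
#   model: llama2
with open("config.yaml") as f:
    config = yaml.safe_load(f)

if config["provider"] == "ollama":
    # Ollama talks to a locally running Ollama server
    llm = Ollama(model=config["model"])
elif config["provider"] == "llama.cpp":
    # llama-cpp-python loads a local GGUF file directly
    llm = Llama(model_path=config["model"])
```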
What I don't understand: my impression is that if I use a Llama 2 chat model, I should give it the conversation in this format:
<s>[INST] <<SYS>>
system message
<</SYS>>

user message [/INST] assistant message </s>
However, if I use the Ollama backend, this does not seem to be required. Can anyone help me understand when the prompt template above has to be supplied? And if I use llama.cpp, do I need a different prompt template for each model?
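To make this concrete, here is a stripped-down sketch of both call paths as I currently use them (the model name and GGUF path are placeholders):

```python
from langchain_community.llms import Ollama
from llama_cpp import Llama

# With Ollama I just pass the raw question and it seems to work:
ollama_llm = Ollama(model="llama2")
print(ollama_llm.invoke("What is the capital of France?"))

# With llama-cpp-python I format the prompt by hand -- is this required?
cpp_llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf")
prompt = (
    "<s>[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\n\n"
    "What is the capital of France? [/INST]"
)
out = cpp_llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```

I have also seen that llama-cpp-python exposes a `create_chat_completion` method that takes OpenAI-style message dicts, but I am unsure whether that applies the correct template for the loaded model automatically.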