I’m making a chatbot using LangChain with Ollama for inference, but after some research I see several different module imports, such as:
from langchain.llms.ollama import Ollama
model = Ollama(model="llama3")
another example:
from langchain_community.llms import Ollama
ollama = Ollama(model="llama3")
another example:
from langchain_ollama import ChatOllama
llm = ChatOllama(model="llama3")
another example:
from langchain.llms import Ollama
llm = Ollama(model="llama3")
and maybe there are other modules that I missed.
So what is the difference between them, and which one should I use?
From my understanding, the difference is the model’s objective. For example:
The following is specific to chat completion:
> https://python.langchain.com/v0.2/docs/integrations/chat/ollama/
And this is the list of models for text completion:
> https://python.langchain.com/v0.2/docs/integrations/llms/ollama/
For more details, check this > https://python.langchain.com/v0.2/docs/concepts/#llms, which explains:
> Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as chat completion models, even for non-chat use cases.
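The practical difference shows up in the call signature: chat models take a list of messages and return a message object, while text-completion LLMs take and return plain strings. Here is a minimal sketch, assuming the current `langchain-ollama` partner package (which exports both `ChatOllama` and `OllamaLLM`) is installed and a local Ollama server has the `llama3` model pulled:

```python
# Assumes: `pip install langchain-ollama` and a running Ollama server
# with `ollama pull llama3` already done.
from langchain_ollama import ChatOllama, OllamaLLM
from langchain_core.messages import HumanMessage, SystemMessage

# Chat model: messages in, an AIMessage out.
chat = ChatOllama(model="llama3")
reply = chat.invoke([
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Hello!"),
])
print(reply.content)  # the assistant's text lives on .content

# Text-completion LLM: plain string in, plain string out.
llm = OllamaLLM(model="llama3")
text = llm.invoke("Complete this sentence: The sky is")
print(text)
```

For a chatbot you would normally pick the chat interface. The `from langchain.llms ...` and `from langchain_community.llms ...` import paths in the question are, as far as I can tell, older locations of the same text-completion class; the dedicated `langchain_ollama` package is the one the current docs point to.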