Prompts for LLaVA running on Ollama
I am just starting to learn how to use LLMs. To begin, I have installed Ollama on a PC and pulled some models, one of them being LLaVA.
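For reference, here is a minimal sketch of prompting LLaVA through Ollama's REST API. It assumes Ollama is listening on its default port 11434, that a model tagged "llava" has already been pulled, and that a local image file "photo.jpg" exists (the filename is just an example):

```python
import base64
import json
import urllib.request

# Assumes the Ollama server is running locally on its default port
# and that `ollama pull llava` has already been run.
OLLAMA_URL = "http://localhost:11434/api/generate"

# LLaVA is multimodal: images are passed as base64-encoded strings.
with open("photo.jpg", "rb") as f:  # hypothetical local image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "llava",
    "prompt": "Describe what is in this image.",
    "images": [image_b64],
    "stream": False,  # return one JSON object instead of a stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```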
How to print input requests and output responses in the Ollama server?
I’m working with the LangChain and CrewAI libraries to gain an in-depth understanding of system prompting. Currently, I’m running the Ollama server manually (ollama serve) and trying to intercept the messages flowing through it with a proxy server I’ve created.
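As a sketch of that proxy approach: a small pass-through server that sits between the client and Ollama, printing each request and response body before forwarding it. This assumes the real server stays on 11434 and that LangChain/CrewAI are pointed at the proxy's port instead (both port numbers here are arbitrary choices):

```python
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

OLLAMA_HOST = "localhost"
OLLAMA_PORT = 11434   # where `ollama serve` is listening
PROXY_PORT = 11435    # point your client libraries at this port instead

class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and print the incoming request body (the prompt/messages).
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"--> POST {self.path}\n{body.decode('utf-8', 'replace')}")

        # Forward the request unchanged to the real Ollama server.
        conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT)
        conn.request("POST", self.path, body,
                     {"Content-Type": "application/json"})
        resp = conn.getresponse()
        data = resp.read()
        conn.close()
        print(f"<-- {resp.status}\n{data.decode('utf-8', 'replace')}")

        # Relay the response back to the original client.
        self.send_response(resp.status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("localhost", PROXY_PORT), LoggingProxy).serve_forever()
```

Note that this sketch reads the whole response before relaying it, so it only handles non-streaming calls cleanly; requests with "stream": true would need to be relayed chunk by chunk.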