When I try to access the ollama container from another (Node) service in my docker compose setup, I get the following error: <code>ResponseError: model 'llama3' not found, try pulling it first</code>. I want the container setup to be fully automatic, so I don't want to connect to the containers and pull the models by hand.
Is there a way to load the model of my choice automatically when the ollama docker container is created?
Here is the relevant part of my docker-compose.yml:
<code>ollama:
  image: ollama/ollama:latest
  ports:
    - 11434:11434
  volumes:
    - ./ollama/ollama:/root/.ollama
  container_name: ollama
  pull_policy: always
  tty: true
  restart: always
</code>
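One direction I've been considering is overriding the container's entrypoint so that it starts the server and then pulls the model itself. This is only a rough, untested sketch: the model name <code>llama3</code> is hard-coded, and the fixed <code>sleep</code> to wait for the server to come up is an assumption, not a guaranteed-reliable readiness check:

<code>ollama:
  image: ollama/ollama:latest
  ports:
    - 11434:11434
  volumes:
    - ./ollama/ollama:/root/.ollama
  container_name: ollama
  tty: true
  restart: always
  # Override the default entrypoint so we can run a small shell script:
  # start the server in the background, give it a moment to come up,
  # pull the model, then wait on the server process.
  entrypoint: ["/bin/sh", "-c"]
  command: >
    "ollama serve &
    sleep 5 &&
    ollama pull llama3 &&
    wait"
</code>

Would something along these lines be the recommended approach, or is there a built-in way to preload models?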