How do you configure a GitHub Actions YAML file to serve Ollama and load it with a base model (like llama3)? I tried to use a Dockerfile:
# Use an official Python runtime as a parent image
FROM python:3.11-slim
# Set the working directory in the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install Ollama
RUN curl -fsSL https://ollama.com/install.sh | sh
# Pull the Llama3 model
RUN nohup ollama serve > ollama.log 2>&1 & sleep 10 && ollama pull llama3
# Expose port 11434 to the outside world
EXPOSE 11434
# Start Ollama service and keep the container running
CMD nohup ollama serve > ollama.log 2>&1 & tail -f /dev/null
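To check that the model really gets baked into the image, I think a quick local test along these lines should work (the ollama_test container name is just for illustration):
# build the image and run it locally
docker build -t username/ollama_container:latest .
docker run -d -p 11434:11434 --name ollama_test username/ollama_container:latest
# give the server a moment to start, then list the models it knows about
sleep 5
docker exec ollama_test ollama list
# or query the HTTP API directly
curl http://localhost:11434/api/tags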
I then use the resulting image in a workflow like this:
name: Ollama
on:
  pull_request:
    types: [opened, synchronize, reopened]
  workflow_dispatch:
jobs:
  auxiliary:
    runs-on: ubuntu-latest
    container:
      image: username/ollama_container:latest
      ports:
        - 11434:11434
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2
      - name: Check for dockerenv file
        run: (ls /.dockerenv && echo Found dockerenv) || (echo No dockerenv)
      - name: Configure Git Safe Directory
        run: git config --global --add safe.directory /__w/ai-integration/ai-integration
      - name: Start Ollama Service
        run: |
          nohup ollama serve > ollama.log 2>&1 &
          sleep 5
          curl -i http://localhost:11434 || (cat ollama.log && exit 1)
          tail -f /dev/null &
But if I don't pull the llama3 model inside the workflow, it isn't found, and the API returns this error:
{"error":"model "llama3" not found, try pulling it first"}