
Commit

samchenghowing committed Aug 12, 2024
1 parent b41de0f commit 437ad2a
Showing 3 changed files with 14 additions and 14 deletions.
2 changes: 1 addition & 1 deletion .env
@@ -31,7 +31,7 @@ MONGODB_URI=mongodb://mongo:27017
#*****************************************************************
# Ollama
#*****************************************************************
# OLLAMA_BASE_URL=http://llm-gpu:11434
OLLAMA_BASE_URL=http://llm:11434

#*****************************************************************
# OpenAI
2 changes: 1 addition & 1 deletion backend/pull_model.Dockerfile
@@ -26,7 +26,7 @@ COPY <<EOF pull_model.clj
(async/go-loop [n 0]
(let [[v _] (async/alts! [done (async/timeout 5000)])]
(if (= :stop v) :stopped (do (println (format "... pulling model (%ss) - will take several minutes" (* n 10))) (recur (inc n))))))
(process/shell {:env {"OLLAMA_HOST" url} :out :inherit :err :inherit} (format "bash -c './bin/ollama show %s --modelfile > /dev/null || ./bin/ollama pull %s'" llm llm))
(process/shell {:env {"OLLAMA_HOST" url "HOME" (System/getProperty "user.home")} :out :inherit :err :inherit} (format "bash -c './bin/ollama show %s --modelfile > /dev/null || ./bin/ollama pull %s'" llm llm))
(async/>!! done :stop))
(println "OLLAMA model only pulled if both LLM and OLLAMA_BASE_URL are set and the LLM model is not gpt")))
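The changed line above only alters the environment map handed to `process/shell`: it now forwards `HOME` so that `ollama` can locate its state under `~/.ollama`. The bash fallback itself is untouched. A minimal sketch of that fallback pattern, with stub functions standing in for the real `./bin/ollama` calls (the stub names are illustrative, not from the repo):

```shell
# Stubs standing in for `./bin/ollama show "$LLM" --modelfile`
# and `./bin/ollama pull "$LLM"`.
model_present() { return 1; }             # pretend the model is not cached yet
pull_model()    { echo "pulling model"; }

# The fallback used in pull_model.clj: pull only when `show` fails.
model_present > /dev/null 2>&1 || pull_model
```

Because `ollama show` exits non-zero for an unknown model, the `||` branch triggers only while the model is missing; on later runs `show` succeeds and the pull is skipped.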
24 changes: 12 additions & 12 deletions docker-compose.yml
@@ -17,16 +17,16 @@ services:
count: all
capabilities: [ gpu ]

# pull-model:
# build:
# context: backend
# dockerfile: pull_model.Dockerfile
# environment:
# - OLLAMA_BASE_URL=${OLLAMA_BASE_URL-http://host.docker.internal:11434}
# - LLM=${LLM-llama2}
# networks:
# - net
# tty: true
pull-model:
build:
context: backend
dockerfile: pull_model.Dockerfile
environment:
- OLLAMA_BASE_URL=${OLLAMA_BASE_URL-http://host.docker.internal:11434}
- LLM=${LLM-llama2}
networks:
- net
tty: true

neo4j-database:
user: neo4j:neo4j
@@ -97,8 +97,8 @@ services:
depends_on:
neo4j-database:
condition: service_healthy
# pull-model:
# condition: service_completed_successfully
pull-model:
condition: service_completed_successfully
develop:
watch:
- action: rebuild

2 comments on commit 437ad2a

@gwijsman

In my opinion, only the change in backend/pull_model.Dockerfile is needed.
Tested with a local llama2 LLM and --profile linux.

@samchenghowing (owner, author)

Glad that you found it! I modified the other components because I wanted to revert my previous changes.
