
[Bug]: Source git branch doesn't update #2895

Open
lorenzomigliorero opened this issue Jul 20, 2024 · 26 comments
Labels
🐛 Bug: Reported issues that need to be reproduced by the team. 🐞 Confirmed Bug: Verified issues that have been reproduced by the team.

Comments

@lorenzomigliorero
Contributor

lorenzomigliorero commented Jul 20, 2024

Description

The source git branch doesn't update for a public repository resource using the docker-compose build pack.
I didn't test other build packs/resource types.

Minimal Reproduction

https://www.loom.com/share/46359b1ba1864ac3a8cad54c64e04de0

  • Add a new resource
  • Select public repository
  • Use https://github.com/verdaccio/verdaccio as repo URL
  • Select docker-compose as Build Pack
  • Paste /docker-examples/v4/docker-local-storage-volume as base directory
  • Go to source and update git branch to master
  • Go to general and click Reload Compose File

Exception or Error

No response

Version

317

Cloud?

  • Yes
  • No
@SapphSky

I've encountered this issue as well just now on the cloud version.

@SapphSky

SapphSky commented Jul 22, 2024

Hey, following up after doing a bit of trial and error.
I got my scenario to work by pasting the branch tree URL as the repo URL when creating the resource. In your case, it would be https://github.com/verdaccio/verdaccio/tree/master. The tooltip on the Repository URL field explains how the branch is selected this way. Hope this helps!

@replete

replete commented Jul 24, 2024

Since 4 beta .317 added Preserve Repository During Deployment, it works once and then never seems to update the git repo, so the webhook rebuilds are running an old version of the app. Presumably it is not clearing the mounted volume. Giving up on volumes for a bit until this works as expected.

@Nneji123

Nneji123 commented Oct 8, 2024

Has this been fixed @replete ? Or what did you do to work around this issue? I've encountered the same issue with a multi-service deployment, and only one of the services is updated. The other two services do not get updated.

Here's my compose file as an example:

services:
  web:
    build:
      context: .
      dockerfile: docker/Dockerfile.dev
    container_name: trendibble-api
    command: scripts/start_server.sh
    ports:
      - ${WEB_PORT:-8000}:8000
    environment:
      ENVIRONMENT: ${ENVIRONMENT}
      RABBITMQ_USER: ${RABBITMQ_USER:-user}
      RABBITMQ_PASS: ${RABBITMQ_PASS:-password}
      RABBITMQ_HOST: ${RABBITMQ_HOST:-rabbitmq}
      SERVICE_TYPE: "web"
    env_file: .env
    restart: on-failure
    volumes:
      - ./:/app
      - ./logs:/app/logs

  redis:
    image: redis:7.2.4-alpine3.19
    ports:
      - "${REDIS_PORT:-6379}:6379"
    restart: always
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "${RABBITMQ_PORT:-5672}:5672"
      - "${RABBITMQ_MANAGEMENT_PORT:-15672}:15672"
    environment:
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER:-user}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS:-password}
    env_file:
      - .env
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "status"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 40s

  scheduler:
    container_name: trendibble-api-scheduler
    build:
      context: .
      dockerfile: docker/Dockerfile.dev
    command: scripts/start_celery_beat.sh
    volumes:
      - .:/app
    depends_on:
      - rabbitmq
    environment:
      ENVIRONMENT: ${ENVIRONMENT}
      RABBITMQ_USER: ${RABBITMQ_USER:-user}
      RABBITMQ_PASS: ${RABBITMQ_PASS:-password}
      RABBITMQ_HOST: ${RABBITMQ_HOST:-rabbitmq}
      SERVICE_TYPE: "scheduler"
    restart: on-failure

  worker:
    container_name: trendibble-api-worker
    build:
      context: .
      dockerfile: docker/Dockerfile.dev
    command: scripts/start_celery_worker.sh
    volumes:
      - .:/app
    depends_on:
      - rabbitmq
      - mjml-server
    environment:
      ENVIRONMENT: ${ENVIRONMENT}
      RABBITMQ_USER: ${RABBITMQ_USER:-user}
      RABBITMQ_PASS: ${RABBITMQ_PASS:-password}
      RABBITMQ_HOST: ${RABBITMQ_HOST:-rabbitmq}
      SERVICE_TYPE: "worker"
    env_file: .env
    restart: on-failure

  mjml-server:
    image: danihodovic/mjml-server:4.15.3
    ports:
      - "${MJML_SERVER_PORT:-15500}:15500"
    restart: unless-stopped

volumes:
  data:

@replete

replete commented Oct 8, 2024

@Nneji123 no idea, I moved on because it was not reliable and will revisit in a year or so. Shame because this workflow is the best feature IMO

@peaklabs-dev added the 🐛 Bug and 🐞 Confirmed Bug labels Oct 16, 2024
@peaklabs-dev added this to the v4.0.0 Stable Release milestone Oct 16, 2024
@ejscheepers
Contributor

I am also experiencing this issue: Docker Compose with 1 service that is not updating. If I copy my master branch to a "test" branch and use that one to deploy, it works (at least the first time).

@VaarunSinha

Is there a workaround for this? I am using it via the GitHub application.

@renanmoretto

I'm also getting this bug with the GitHub app and deploy keys.

Any updates on this?

Also, how do I manually force a git pull on a project? I think this would be the simplest/easiest fix for now.

@sdezza

sdezza commented Nov 12, 2024

Same issue here. Trying to find a solution.

@simonjcarr

I have the same issue. I'm going to see if deleting the app and recreating it works. Not brilliant, but everything else is so good that I don't want to throw the baby out with the bathwater, so I will wait for the developers to fix it.
As a matter of interest, my issue is with a project running on a second server; it would be interesting to know whether this is common to anyone else having this issue.

@renanmoretto

Same issue here. Trying to find a solution.

I tried everything and gave up.

Mine was deploying a private repository with a docker-compose file.

I tried with the GitHub app and with deploy keys, and neither worked. Basically, Coolify doesn't fetch new commits/update the branch on the VPS, no matter what. It was random, because some redeploys/deploys fetched new updates, but then it got stuck.

I talked about it on Discord; a few people tried to help, but nothing worked, so I gave up on it.

If I had a button to manually force a branch update / git pull I would be fine, but I don't see any option to do that.

@andrasbacsai
Sorry for tagging, but is this being looked into? This bug has been around for months and yet there are no signs of it being fixed. I think it's a pretty important bug because it makes Coolify basically unusable.

@djsisson
Contributor

A lot of the time when I see this, there is a volume mount, which overwrites what gets built into the container.

So, first question: does yours have a volume mount?
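
For context, the kind of bind mount in question typically looks like the sketch below; the service name and paths here are only illustrative, not taken from anyone's file. The host checkout gets mounted over the directory the image was built into, so whatever the build produced is hidden at runtime:

services:
  app:
    build:
      context: .
    volumes:
      - .:/app   # bind-mounts the host checkout over the built /app directory, masking the image contents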

@ejscheepers
Contributor

When I had this issue a few weeks ago, it ended up being a container running on the server that had been built from the same branch, i.e. via Docker Compose. It was using the same ports etc., so when you deploy, your changes are pushed, but the container being "served" is the old one, so you don't see any changes.

I fixed it by running docker ps in the terminal, finding the duplicate containers, and deleting them.

Mine was caused by restoring a backup and Coolify not "knowing" that the old container still existed when redeploying.

So yeah, maybe check whether you have any "unmanaged resources" on your server.

@sdezza

sdezza commented Nov 13, 2024

After checking the logs, the GitHub commit ID was correct, so the problem didn't come from there.

The solution: delete the Coolify resource and recreate it. Before deleting, remember to save your environment variables. Then recreate the resource and deploy. This forced Coolify to take the latest code into account and create a new image without using old caches (which I had deleted...).

@djsisson
Contributor

@sdezza did you have a volume mount like .:./app ?

@sdezza

sdezza commented Nov 13, 2024

yes:

services:
  django:
    build:
      context: .
    command: ["/usr/src/app/entrypoint.sh"]
    volumes:
      - .:/usr/src/app
    ports:
      - "8000:8000"

  celery-worker:
    build:
      context: .
    command: celery -A core worker --loglevel=info
    depends_on:
      - django

  celery-beat:
    build:
      context: .
    command: celery -A core beat --loglevel=info
    depends_on:
      - django

  flower:
    image: mher/flower:latest
    ports:
      - "5555:5555"
    depends_on:
      - celery-worker
    environment:
      - FLOWER_BASIC_AUTH=${FLOWER_BASIC_AUTH}
    volumes:
      - flower_data:/usr/src/app

volumes:
  flower_data:

I deleted all the volumes (UI and docker command), same issue.

@djsisson
Contributor

@sdezza you can't have this:

.:/usr/src/app

It just overwrites what's built.

@sdezza

sdezza commented Nov 13, 2024

@djsisson makes sense! Should it be like the flower mount?

@djsisson
Contributor

@sdezza no, you do not need any mounts here. If you need some files from your repo, just copy them in inside your Dockerfile; otherwise, all you're doing is overwriting what has been built.

If there are some static files you need, then you can mount to a directory within /usr/src/app/staticdir.

This kind of mount is usually used during dev, where it uses your local repo; for prod you should not have such mounts.
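
As a rough sketch of that advice applied to the compose file above (the static volume name and path are assumptions, not something from the original file): the app code comes from the image build, and only a narrow directory is mounted for anything that must persist:

services:
  django:
    build:
      context: .
    command: ["/usr/src/app/entrypoint.sh"]
    ports:
      - "8000:8000"
    volumes:
      # no ".:/usr/src/app" mount; the code baked into the image is what runs
      - static_data:/usr/src/app/staticfiles   # hypothetical: persist only generated assets or uploads

volumes:
  static_data: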

@douglasg14b

douglasg14b commented Dec 18, 2024

Well, that explains why I can't deploy anything.

I did have a volume like .:./app. However, all volumes and storage have been removed, including from the docker-compose spec, and yet we still can't get builds to use the code that is in the repo...

Force deploy with no cache, disabling cache, etc. doesn't seem to work. The current commit is pulled, but the filesystem in the built container does not match.

I feel like I'm losing my mind; we can't update apps at all. It's definitely not workable to delete and recreate the app for each deployment once it stops working.

@douglasg14b

douglasg14b commented Dec 18, 2024

After removing the app and recreating it, a new build reflected the changes.

However, new builds just don't, again. It appears to be essentially broken, which significantly reduces the utility of Coolify ;/

Edit: Maybe this is the issue?

2024-Dec-18 07:05:13.129159 docker rm -f k4gcskks00wgcckso00ooksg
2024-Dec-18 07:05:13.129159 Error response from daemon: No such container: k4gcskks00wgcckso00ooksg

It seems that on every redeploy, the docker rm command fails.

@douglasg14b

Actually, it looks like Coolify serves up an old image: no matter how many times you try to rebuild, the old image is never removed and the first image is the only one that shows up.

(screenshot attached)

@douglasg14b

How to work around it

  1. Stop the container and just click through the prompts (leave "Run docker cleanup" checked).
  2. Deploy again.

This consistently avoids the problem for me by pruning unused images each time.

@fguillen

fguillen commented Jan 8, 2025

I am having this issue with v4.0.0-beta.380.

With docker-compose:

services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/DockerFile
    volumes:
      - .:/var/www/app
    ports:
      - 3000
    restart: always
    depends_on:
      - db
      - redis
      - sidekiq

  sidekiq:
    build:
      context: .
      dockerfile: ./docker/app/DockerFile
    volumes:
      - .:/var/www/app
    depends_on:
      - redis
    command: bundle exec sidekiq

  db:
    image: pgvector/pgvector:pg16
    volumes:
      - ./_data/myapp:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=myapp
      - POSTGRES_USER=myapp
      - POSTGRES_DB=myapp

  redis:
    image: "redis:7-alpine"
    ports:
      - 6379
    volumes:
      - ./_data/redis:/var/lib/redis/data

This is the deployment log:

Coolify.log

The commit ID is correct. The application just remains in its old state :/


Nothing said above works for me: not stopping the container, cleaning up, and deploying again; not Preserve Repository During Deployment; and not both together.

@djsisson
Contributor

djsisson commented Jan 8, 2025

@fguillen docker compose is a plugin that automates running docker commands.
Those commands are based on what is in the compose file.
The default behaviour is to only pull images that are missing.
So, since your build always produces the same tag, it will not switch to the newest image.
To change the default behaviour you need to add

pull_policy: always

The above will not be an issue in your instance, but I will leave it here for others, just in case.

For your issue, you must remove the volume mounts. The first time your app is run they are created and populated with what is in the image; every time after that they override what is inside the container.

Only use mounts for things like config files and uploads; do not use them for files that you include when you build.
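
For reference, a minimal sketch of where pull_policy would go; the service shown here reuses the flower image from the compose file earlier in the thread only as an example, and the option matters only for services that pull a prebuilt image rather than build one locally:

services:
  flower:
    image: mher/flower:latest   # example of a pulled (not locally built) image
    pull_policy: always         # pull the tag on every deploy instead of reusing the cached copy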

@fguillen

fguillen commented Jan 8, 2025

@djsisson it works, thanks! This is my docker-compose.yml now:

services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/DockerFile
    ports:
      - 3000
    restart: always
    depends_on:
      - db
      - redis
      - sidekiq

  sidekiq:
    build:
      context: .
      dockerfile: ./docker/app/DockerFile
    depends_on:
      - redis
    command: bundle exec sidekiq

  db:
    image: pgvector/pgvector:pg16
    volumes:
      - ./_data/dbchatbot:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=dbchatbot
      - POSTGRES_USER=dbchatbot
      - POSTGRES_DB=dbchatbot

  redis:
    image: "redis:7-alpine"
    ports:
      - 6379
    volumes:
      - ./_data/redis:/var/lib/redis/data

I have removed the volumes from the 2 app sections but kept them in the db and redis sections, since those need to be persistent between deploys.
