Locally, we should always depend on the latest docker containers. So, in docker-compose.override.yml:

version: "3.7"
services:
  lpdc:
    image: lblod/frontend-lpdc:latest
  lpdc-management:
    image: lblod/lpdc-management-service:latest
  lpdc-publish:
    image: lblod/lpdc-publish-service:latest
The dev environment is configured to run the latest of the development branch, and to use the 'latest' tags of the dependent containers (frontend, management, publish) as well.
The following steps can be used if you want to manually deploy a new version on the dev environment; however, see Continuous Integration below.
ssh [email protected]
cd /data/app-lpdc-digitaal-loket-dev # or `cd /data/app-lpdc-digitaal-loket-test` for Test
git pull
drc down --remove-orphans
drc pull
drc up -d
drc logs --follow --timestamps --since 1m
Continuous integration (CI) is the practice of merging all developers' working copies into a shared mainline several times a day.
However, we agreed to use trunk-based development: developers commit directly on the development branch (of each of the projects).
So we need a continuous integration build that verifies all commits (whether on app-lpdc-digitaal-loket, frontend-lpdc, lpdc-management-service, or lpdc-publish), because a commit on lpdc-publish can also break something in app-lpdc-digitaal-loket.
So we created a ci pipeline that verifies everything when committing on the development branches: overview of Continuous Integration setup (private link).
In more detail: we have an automated build pipeline in woodpecker ci that:
- builds frontend-lpdc, runs its component and unit tests and deploys a new latest docker container on commit of development branch
- builds lpdc-management-service, runs its unit tests and deploys a new latest docker container on commit of development branch
- builds lpdc-publish and deploys a new latest docker container on commit of development branch
When the woodpecker ci build has created a new latest container of frontend-lpdc, lpdc-management-service, or lpdc-publish, or on a new commit on the development branch of app-lpdc-digitaal-loket, the 'run-latest test suite and deploy dev' pipeline is run for the app-lpdc-digitaal-loket project. When this build succeeds, a new latest build is automatically deployed on the development environment as well.
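To make this concrete, a rough sketch of what a woodpecker pipeline step for one of these projects can look like (step names, images, and commands here are illustrative assumptions, not the project's actual configuration):

```yaml
# .woodpecker.yml (illustrative sketch only)
steps:
  test:
    image: node:18
    commands:
      - npm ci
      - npm test
    when:
      branch: development
  publish-latest:
    image: woodpeckerci/plugin-docker-buildx
    settings:
      repo: lblod/frontend-lpdc
      tags: latest
    when:
      branch: development
```

The actual pipeline definitions live in the respective repositories and differ per project.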
- We created a private/public key pair on the woodpecker ci server. The private key was exposed as ssh_key on the woodpecker ci; the public key was added to the authorized keys on the lpdc dev machine.
- We noticed ssh connection problems from the woodpecker ci machine to the lpdc dev machine (the ufw rate limit was hit every time), so we added an extra rule to allow traffic from the woodpecker ci machine to the lpdc dev machine:
sudo ufw insert 1 allow from <<the ip address from the woodpecker ci machine>>
This might cause future problems if the ip address of the woodpecker ci machine ever changes.
- We did a checkout in the folder /data/app-lpdc-digitaal-loket-ci on the lpdc dev machine, on the development branch.
- For the frontend-lpdc, lpdc-management-service, and lpdc-publish projects in woodpecker, a secret named woodpecker_token was added, containing a 'Personal Access Token' of a user that has access to the app-lpdc-digitaal-loket project.
Woodpecker ci can unfortunately not be configured to show the playwright html test results or traces directly; only textual output can be viewed in woodpecker ci.
If you want to view the html report (and traces), you have to copy the results from the lpdc-dev machine to your local machine manually.
How? The build automatically copies the playwright results to a build-specific folder on the lpdc-dev machine.
You can view the results locally by first installing playwright globally (only needed the first time):
npm install --global @playwright/test
And then executing for a specific build:
cd /tmp
scp -p -r [email protected]:/data/woodpecker-ci-app-lpdc-digitaal-loket-ci-build-results/build-<your build number here> /tmp
cd /tmp/build-<your build number here>/all-reports
npx playwright show-report playwright-report-api # for the api tests; browser should open automatically
npx playwright show-report playwright-report-e2e # for the e2e tests
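The fetch-and-view steps above can be bundled into a small helper function (illustrative only; show_build_report is not an existing script — the host and path are taken verbatim from the commands above):

```shell
# Illustrative helper: fetch a build's playwright results and open a report.
# The scp/cd/npx lines are commented out so the function can be dry-run;
# uncomment them to perform the real fetch and open the report.
show_build_report() {
  build="$1"   # the build number, e.g. 42
  suite="$2"   # "api" or "e2e"
  src="[email protected]:/data/woodpecker-ci-app-lpdc-digitaal-loket-ci-build-results/build-${build}"
  echo "Fetching ${src}"
  # scp -p -r "${src}" /tmp
  # cd "/tmp/build-${build}/all-reports"
  # npx playwright show-report "playwright-report-${suite}"
}
```

For example, `show_build_report 42 api` would fetch build 42 and open the api-test report.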
app-lpdc-digitaal-loket uses three other docker containers that we also develop ourselves:
- lblod/frontend-lpdc
- lblod/lpdc-management-service
- lblod/lpdc-publish-service
After the demo at the end of each sprint, we want to make a release of app-lpdc-digitaal-loket. For this we should first verify whether we made any changes to any of the three other docker containers (frontend, management, publish). If so, we first make a new release of those containers (instructions can be found in their repos). Then we update the versions in the docker-compose file and make a release of app-lpdc-digitaal-loket by creating a release version in git (which also tags).
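As a sketch of the tagging step (next_minor_tag is a hypothetical helper, assuming v&lt;major&gt;.&lt;minor&gt;.&lt;patch&gt; tags such as the v0.2.0 example used later in this document):

```shell
# Suggest the next minor release tag based on the latest existing tag.
# next_minor_tag is an illustrative helper, not part of any repo.
next_minor_tag() {
  major=$(echo "$1" | cut -d. -f1)   # e.g. "v0"
  minor=$(echo "$1" | cut -d. -f2)   # e.g. "2"
  echo "${major}.$((minor + 1)).0"
}

latest=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0.0")
next=$(next_minor_tag "$latest")
echo "Next release would be: $next"
# To actually release (run manually, after review):
#   git tag -a "$next" -m "Release $next"
#   git push origin "$next"
```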
On acc we always deploy a released version.
Infrastructure notes: acceptance currently has special configs we want to remove over time.
Deployment instructions: similar to prod.
On prod we always deploy a released version.
Infrastructure notes: production currently has special configs we want to remove over time.
Mention on rocket chat that we will perform a new release, so the operations team is warned.
ssh [email protected]
# bring the app-http-logger down
cd /data/app-http-logger
drc down
cd /data/app-lpdc-digitaal-loket
#verify that ldes consumers and its processing in lpdc-management have finished (via logs)
drc logs --timestamps --since 10m | grep ldes-consumer
drc logs --timestamps --since 10m | grep lpdc-management-1
# Remove all user sessions to avoid that users keep working on a cached version:
# DELETE WHERE {
#   GRAPH <http://mu.semte.ch/graphs/sessions> {
#     ?s ?p ?o .
#   }
# }
# before stopping virtuoso, make sure all db changes are saved to disk
docker exec -it my-virtuoso bash
isql-v -U dba -P $DBA_PASSWORD
SQL> checkpoint;
SQL> exit;
exit
# stop all containers
drc stop
# take a backup of the existing logs
drc logs --timestamps > /backups/prod-logs-backups/log-<your date - and followletter here>.txt
# as an example: drc logs --timestamps > /backups/prod-logs-backups/log-2024-03-26-a.txt
# zip the backup of the logs
tar -zcvf /backups/prod-logs-backups/log-2024-03-26-a.txt.tar.gz /backups/prod-logs-backups/log-2024-03-26-a.txt
# remove the uncompressed file
rm /backups/prod-logs-backups/log-2024-03-26-a.txt
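The log-&lt;date&gt;-&lt;letter&gt; naming convention above can be sketched as a small helper that picks the next free letter for the day (illustrative only; logname is not an existing script, and BACKUP_DIR is parameterised so the sketch can be tried anywhere):

```shell
# Build the next free log-<date>-<letter>.txt name for today's backups.
# BACKUP_DIR defaults to the directory used in the commands above.
BACKUP_DIR="${BACKUP_DIR:-/backups/prod-logs-backups}"

logname() {
  date_part=$(date +%F)   # e.g. 2024-03-26
  for letter in a b c d e f g h; do
    candidate="log-${date_part}-${letter}.txt"
    # a letter is taken once its zipped archive exists
    if [ ! -e "${BACKUP_DIR}/${candidate}.tar.gz" ]; then
      echo "$candidate"
      return
    fi
  done
}

name=$(logname)
echo "Would write: ${BACKUP_DIR}/${name}"
# drc logs --timestamps > "${BACKUP_DIR}/${name}"
# tar -zcvf "${BACKUP_DIR}/${name}.tar.gz" "${BACKUP_DIR}/${name}"
# rm "${BACKUP_DIR}/${name}"
```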
# bring the app-lpdc-digitaal-loket down
drc down --remove-orphans
cd /data
# take a backup of everything
tar -zcvf app-lpdc-digitaal-loket-prod.tar.gz app-lpdc-digitaal-loket/
cd /data/app-lpdc-digitaal-loket
git fetch --all --tags
# some configs are only for prod, so stash them for now
git stash -u
git checkout tags/<my version>
# e.g. of a version: v0.2.0
# get back those configs
git stash apply
# manually merge and verify the unstashed configs (sometimes new configs have been added and need manual additions/corrections)
# (possibly non-exhaustive) list of manual changes:
# - copy /config/dispatcher/dispatcher.ex (without the commented /mock/sessions) to /config/controle-dispatcher/dispatcher.ex
# - merge the /mock/sessions change of /config/dispatcher/dispatcher.ex with the latest version of this file
# - in docker-compose.override.yml, manually update the frontend version (controle container), the identifier version (controle-identifier container) and the dispatcher version (controle-dispatcher container) to the ones of this release
# enable the maintenance frontend in docker-compose.override.yml when migrations need to be executed:
#   lpdc:
#     image: lblod/frontend-generic-maintenance
#     environment:
#       EMBER_MAINTENANCE_MESSAGE: "We geven de Lokale Producten- en Dienstencatalogus (LPDC) momenteel een update. Binnen enkele uren kan je gebruikmaken van een verbeterde versie van LPDC voor een nog vlottere gebruikerservaring."
#       EMBER_MAINTENANCE_APP_TITLE: "Lokale Producten- en Dienstencatalogus"
#       EMBER_MAINTENANCE_APP_URL: "lpdc.lokaalbestuur.vlaanderen.be"
drc pull
drc up -d
drc logs --follow --timestamps --since 1m
git stash clear
cd /data
# take a backup of everything
tar -zcvf app-lpdc-digitaal-loket-prod-2.tar.gz app-lpdc-digitaal-loket/
# move backups to the /backups/prod-data-backups/<releasename> folder (e.g. 2023-09)
# bring the app-http-logger back up
cd /data/app-http-logger
drc up -d
# clean up stopped containers and unused docker images
docker system prune -a
Mention on rocket chat that a new release was performed, operations monitoring can continue.