diff --git a/README.md b/README.md
index 2d0d4813..583a1d3a 100644
--- a/README.md
+++ b/README.md
@@ -28,6 +28,8 @@ database.
 They now each have their own, separate databases. **If you are upgrading an existing single-database installation, you must follow [these instructions](https://community.kobotoolbox.org/t/upgrading-to-separate-databases-for-kpi-and-kobocat/7202)** to migrate the KPI tables to a new database and adjust your configuration appropriately. This assumes your last upgrade was **more recent** than March 4, 2019. If not, you must [upgrade your databases](#important-notice-when-upgrading-from-commit-5c2ef02-march-4-2019-or-earlier) before proceeding.
+If you do not want to upgrade at this time, please use the [`shared-database-obsolete`](https://github.com/kobotoolbox/kobo-docker/tree/shared-database-obsolete) branch instead.
+
 ## Important notice when upgrading from commit [`5c2ef02` (March 4, 2019)](https://github.com/kobotoolbox/kobo-docker/commit/5c2ef0273339bee5c374830f72e52945947042a8) or earlier
@@ -42,9 +44,10 @@ Below is a diagram (made with [Lucidchart](https://www.lucidchart.com)) of the c
 ![Diagram of Docker Containers](./doc/container-diagram.svg)
-### Secure your installation!
-kobo-docker **opens ports on all interfaces** to let the `frontend` containers communicate with `backend` containers.
-A firewall is **HIGHLY recommended**. You **MUST** block PostgreSQL, Redis and MongoDB ports when the server is exposed publicly.
+### Secure your installation
+`kobo-docker` does **not** expose backend container ports.
+If you want to use `kobo-docker` with separated servers (one for frontend containers, one for backend containers),
+you will need to expose the ports yourself.
A firewall is **HIGHLY recommended**; it should grant the `frontend` containers access only to the `PostgreSQL`, `Redis` and `MongoDB` ports.

## Setup procedure
@@ -68,7 +71,7 @@ Already have an existing installation? Please see below.
- `kobo-deployments/envfiles/kobocat.txt`
    ```diff
    - KOBOCAT_BROKER_URL=amqp://kobocat: kobocat@rabbit.[internal domain name]:5672/kobocat
    - + KOBOCAT_BROKER_URL =redis://redis-main.[internal domain name]:6389/2`
    + + KOBOCAT_BROKER_URL=redis://redis-main.[internal domain name]:6389/2
    ```
2. **Load balancing and redundancy**
@@ -88,34 +91,38 @@ Already have an existing installation? Please see below.
    - Redis

    Docker-compose for `frontend` can be started on its own server, same thing for `backend`. Users can start as many `frontend` servers as they want. A load balancer can spread the traffic between `frontend` servers.
-   kobo-docker uses (private) domain names between `frontend` and `backend`.
+   `kobo-docker` uses (private) domain names between `frontend` and `backend`.
    It's fully customizable in configuration files. Once again, [kobo-install](https://github.com/kobotoolbox/kobo-install) does simplify the job by creating the configuration files for you.

2. Redundancy

    `Backend` containers are not redundant yet. Only `PostgreSQL` can be configured in `Master/Slave` mode where `Slave` is a real-time read-only replica.
-   This is a diagram that shows how kobo-docker can be used for a load-balanced/(almost) redundant solution.
+   This is a diagram that shows how `kobo-docker` can be used for a load-balanced/(almost) redundant solution.
    _NB: The diagram is based on AWS infrastructure, but it's not required to host your environment there._
    ![aws diagram](./doc/aws-diagram.svg)

## Usage
-It's recommended to create `*.override.yml` docker-compose files to customize your environment. It makes easier to update.
+It's recommended to create `*.override.yml` docker-compose files to customize your environment. It makes updating easier.
+Samples are provided.
Remove the `.sample` extension and update them to match your environment.

- `docker-compose.frontend.override.yml`
- `docker-compose.backend.master.override.yml`
- `docker-compose.backend.slave.override.yml` (if a postgres replica is used)

-1. **Start/start containers**
+1. **Start/stop containers**
+
    ```
-    $kobo-docker> docker-compose -f docker-compose.frontend.yml [-f docker-compose.frontend.override.yml] up -d
-    $kobo-docker> docker-compose -f docker-compose.backend.master.yml [-f docker-compose.backend.master.override.yml] up -d
-    $kobo-docker> docker-compose -f docker-compose.frontend.yml [-f docker-compose.frontend.override.yml] stop
-    $kobo-docker> docker-compose -f docker-compose.backend.master.yml [-f docker-compose.backend.master.override.yml] stop
+    # Start
+    $kobo-docker> docker-compose -f docker-compose.frontend.yml -f docker-compose.frontend.override.yml up -d
+    $kobo-docker> docker-compose -f docker-compose.backend.master.yml -f docker-compose.backend.master.override.yml up -d
+
+    # Stop
+    $kobo-docker> docker-compose -f docker-compose.frontend.yml -f docker-compose.frontend.override.yml stop
+    $kobo-docker> docker-compose -f docker-compose.backend.master.yml -f docker-compose.backend.master.override.yml stop
    ```
-
2. **Backups**

    Automatic, periodic backups of KoBoCAT media, MongoDB, PostgreSQL and Redis can be individually enabled by uncommenting (and optionally customizing) the `*_BACKUP_SCHEDULE` variables in your envfiles.
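The `*_BACKUP_SCHEDULE` values are standard five-field cron expressions evaluated in UTC. As a sketch of what an envfile can look like (the MongoDB and Redis values below are the defaults mentioned in `deployments/envfiles/databases.txt`; the PostgreSQL line is an illustrative value, not a shipped default):

```
# deployments/envfiles/databases.txt -- uncomment to enable
# field order: minute hour day-of-month month day-of-week
MONGO_BACKUP_SCHEDULE=0 1 * * 0      # Sundays at 01:00 UTC (default)
REDIS_BACKUP_SCHEDULE=0 3 * * 0      # Sundays at 03:00 UTC (default)
POSTGRES_BACKUP_SCHEDULE=0 2 * * 0   # e.g. Sundays at 02:00 UTC (illustrative)
```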
@@ -130,10 +137,10 @@ It's recommended to create `*.override.yml` docker-compose files to customize yo
    Backups **on disk** can also be manually triggered when kobo-docker is running by executing the following commands:

    ```
-    $kobo-docker> docker-compose -f docker-compose.frontend.yml [-f docker-compose.frontend.override.yml] exec kobocat /srv/src/kobocat/docker/backup_media.bash
-    $kobo-docker> docker-compose -f docker-compose.backend.master.yml [-f docker-compose.backend.master.override.yml] exec mongo /bin/bash /kobo-docker-scripts/backup-to-disk.bash
-    $kobo-docker> docker-compose -f docker-compose.backend.master.yml [-f docker-compose.backend.master.override.yml] exec -e PGUSER=kobo postgres /bin/bash /kobo-docker-scripts/backup-to-disk.bash
-    $kobo-docker> docker-compose -f docker-compose.backend.master.yml [-f docker-compose.backend.master.override.yml] exec redis_main /bin/bash /kobo-docker-scripts/backup-to-disk.bash
+    $kobo-docker> docker-compose -f docker-compose.frontend.yml -f docker-compose.frontend.override.yml exec kobocat /srv/src/kobocat/docker/backup_media.bash
+    $kobo-docker> docker-compose -f docker-compose.backend.master.yml -f docker-compose.backend.master.override.yml exec mongo /bin/bash /kobo-docker-scripts/backup-to-disk.bash
+    $kobo-docker> docker-compose -f docker-compose.backend.master.yml -f docker-compose.backend.master.override.yml exec -e PGUSER=kobo postgres /bin/bash /kobo-docker-scripts/backup-to-disk.bash
+    $kobo-docker> docker-compose -f docker-compose.backend.master.yml -f docker-compose.backend.master.override.yml exec redis_main /bin/bash /kobo-docker-scripts/backup-to-disk.bash
    ```
2.
**Restore backups**
@@ -153,7 +160,7 @@ It's recommended to create `*.override.yml` docker-compose files to customize yo
    **Start**

    ```
-    docker-compose -f docker-compose.frontend.yml [-f docker-compose.frontend.override.yml] stop nginx
+    docker-compose -f docker-compose.frontend.yml -f docker-compose.frontend.override.yml stop nginx
    docker-compose -f docker-compose.maintenance.yml up -d
    ```
@@ -161,7 +168,7 @@ It's recommended to create `*.override.yml` docker-compose files to customize yo
    ```
    docker-compose -f docker-compose.maintenance.yml down
-    docker-compose -f docker-compose.frontend.yml [-f docker-compose.frontend.override.yml] up -d nginx
+    docker-compose -f docker-compose.frontend.yml -f docker-compose.frontend.override.yml up -d nginx
    ```

    There are 4 variables that can be customized in `docker-compose.maintenance.yml`
@@ -177,11 +184,11 @@ It's recommended to create `*.override.yml` docker-compose files to customize yo
You can confirm that your containers are running with `docker ps`. To inspect the log output from:

-  - the frontend containers, execute `docker-compose -f docker-compose.frontend.yml [-f docker-compose.frontend.override.yml] logs -f`
-  - the master backend containers, execute `docker-compose -f docker-compose.backend.master.yml [-f docker-compose.backend.master.override.yml] logs -f`
-  - the slaved backend container, execute `docker-compose -f docker-compose.backend.slave.yml [-f docker-compose.backend.slave.override.yml] logs -f`
+  - the frontend containers, execute `docker-compose -f docker-compose.frontend.yml -f docker-compose.frontend.override.yml logs -f`
+  - the master backend containers, execute `docker-compose -f docker-compose.backend.master.yml -f docker-compose.backend.master.override.yml logs -f`
+  - the slave backend container, execute `docker-compose -f docker-compose.backend.slave.yml -f docker-compose.backend.slave.override.yml logs -f`

-  For a specific container use e.g.
`docker-compose -f docker-compose.backend.master.yml [-f docker-compose.backend.master.override.yml] logs -f redis_main`.
+  For a specific container use e.g. `docker-compose -f docker-compose.backend.master.yml -f docker-compose.backend.master.override.yml logs -f redis_main`.

The documentation for Docker can be found at https://docs.docker.com.
diff --git a/deployments/envfiles/databases.txt b/deployments/envfiles/databases.txt
index 105e7cac..b9714545 100644
--- a/deployments/envfiles/databases.txt
+++ b/deployments/envfiles/databases.txt
@@ -6,6 +6,11 @@
 # Please see kobocat.txt to set container variables
 KOBO_MONGO_PORT=27017
 KOBO_MONGO_HOST=mongo.domain.name
+MONGO_INITDB_ROOT_USERNAME=root
+MONGO_INITDB_ROOT_PASSWORD=kobo
+MONGO_INITDB_DATABASE=formhub
+KOBO_MONGO_USERNAME=kobo
+KOBO_MONGO_PASSWORD=kobo
 # Default MongoDB backup schedule is weekly at 01:00 AM UTC on Sunday.
 #MONGO_BACKUP_SCHEDULE=0 1 * * 0
@@ -20,12 +25,14 @@ KOBO_MONGO_HOST=mongo.domain.name
 # `DATABASE_URL` environment variable.
 POSTGRES_PORT=5432
 POSTGRES_HOST=postgres.domain.name
-POSTGRES_DB=kobotoolbox
 POSTGRES_USER=kobo
 POSTGRES_PASSWORD=kobo
+KC_POSTGRES_DB=kobocat
+KPI_POSTGRES_DB=koboform

 # Postgres database used by kpi and kobocat Django apps
-DATABASE_URL=postgis://kobo:kobo@postgres.domain.name:5432/kobotoolbox
+KC_DATABASE_URL=postgis://kobo:kobo@postgres.domain.name:5432/kobocat
+KPI_DATABASE_URL=postgis://kobo:kobo@postgres.domain.name:5432/koboform

 # Replication.
Password is mandatory KOBO_POSTGRES_REPLICATION_USER=kobo_replication @@ -41,3 +48,6 @@ KOBO_POSTGRES_MASTER_ENDPOINT=primary.postgres.domain.name #-------------------------------------------------------------------------------- #REDIS_BACKUP_SCHEDULE=0 3 * * 0 + +REDIS_SESSION_URL=redis://:kobo@redis-cache.kobo.private:6390/2 +REDIS_PASSWORD=kobo diff --git a/deployments/envfiles/kobocat.txt b/deployments/envfiles/kobocat.txt index 1a6df21f..aeda9f64 100644 --- a/deployments/envfiles/kobocat.txt +++ b/deployments/envfiles/kobocat.txt @@ -5,7 +5,7 @@ USE_X_FORWARDED_HOST=False DJANGO_SETTINGS_MODULE=onadata.settings.kc_environ ENKETO_VERSION=Express -KOBOCAT_BROKER_URL=redis://redis-main.domain.name:6379/2 +KOBOCAT_BROKER_URL=redis://:kobo@redis-main.domain.name:6379/2 KOBOCAT_CELERY_LOG_FILE=/srv/logs/celery.log #ENKETO_OFFLINE_SURVEYS=True diff --git a/deployments/envfiles/kpi.txt b/deployments/envfiles/kpi.txt index 234b3fbd..1f2a8dda 100644 --- a/deployments/envfiles/kpi.txt +++ b/deployments/envfiles/kpi.txt @@ -5,7 +5,7 @@ USE_X_FORWARDED_HOST=False ENKETO_VERSION=Express KPI_PREFIX=/ -KPI_BROKER_URL=redis://redis-main.domain.name:6379/1 +KPI_BROKER_URL=redis://:kobo@redis-main.domain.name:6379/1 KPI_MONGO_HOST=mongo.domain.name diff --git a/docker-compose.backend.master.override.yml.sample b/docker-compose.backend.master.override.yml.sample new file mode 100644 index 00000000..4c2d5a3f --- /dev/null +++ b/docker-compose.backend.master.override.yml.sample @@ -0,0 +1,46 @@ +# For public, HTTPS servers. +version: '2.2' + +services: + + postgres: + #environment: + # - POSTGRES_BACKUP_FROM_SLAVE=True + # Uncomment `ports` section if you want to expose ports (e.g. use as separated servers) + #ports: + # - 5432:5432 + # Comment out `networks` section below if you want to expose ports + networks: + kobo-be-network: + aliases: + - postgres.kobo.private + + mongo: + # Uncomment `ports` section if you want to expose ports (e.g. 
use as separated servers) + #ports: + # - 27017:27017 + # Comment out section below if you want to expose ports + networks: + kobo-be-network: + aliases: + - mongo.kobo.private + + redis_main: + # Uncomment `ports` section if you want to expose ports (e.g. use as separated servers) + #ports: + # - 6379:6379 + # Comment out `networks` section below if you want to expose ports + networks: + kobo-be-network: + aliases: + - redis-main.kobo.private + + redis_cache: + # Uncomment `ports` section if you want to expose ports (e.g. use as separated servers) + #ports: + # - 6380:6380 + # Comment out `networks` section below if you want to expose ports + networks: + kobo-be-network: + aliases: + - redis-cache.kobo.private diff --git a/docker-compose.backend.master.yml b/docker-compose.backend.master.yml index 62a10e79..5a18d3fb 100644 --- a/docker-compose.backend.master.yml +++ b/docker-compose.backend.master.yml @@ -27,3 +27,7 @@ services: service: redis_cache environment: - KOBO_REDIS_SERVER_ROLE=cache + +networks: + kobo-be-network: + driver: bridge diff --git a/docker-compose.backend.slave.yml b/docker-compose.backend.slave.yml index f973729b..9efa4452 100644 --- a/docker-compose.backend.slave.yml +++ b/docker-compose.backend.slave.yml @@ -9,19 +9,6 @@ services: environment: - KOBO_POSTGRES_DB_SERVER_ROLE=slave -# mongo: -# extends: -# file: docker-compose.primary.backend.server.yml -# service: mongo -# -# # Adapted from https://github.com/kobotoolbox/enketo-express/blob/docker/docker-compose.yml. -# redis_main: -# extends: -# file: docker-compose.primary.backend.server.yml -# service: redis_main -# -# # Adapted from https://github.com/kobotoolbox/enketo-express/blob/docker/docker-compose.yml. 
-# redis_cache: -# extends: -# file: docker-compose.primary.backend.server.yml -# service: redis_cache +networks: + kobo-be-network: + driver: bridge diff --git a/docker-compose.backend.template.yml b/docker-compose.backend.template.yml index 3848a1b3..afa818a2 100644 --- a/docker-compose.backend.template.yml +++ b/docker-compose.backend.template.yml @@ -16,6 +16,7 @@ services: - ./postgres:/kobo-docker-scripts command: "/bin/bash /kobo-docker-scripts/entrypoint.sh" restart: on-failure + stop_grace_period: 5m # Ports should be declared in `docker-compose.backend.master.override.yml` and docker-compose.backend.slave.override.yml` #ports: # - 5432:5432 @@ -36,14 +37,13 @@ services: - ./log/mongo:/srv/logs restart: on-failure command: "/bin/bash /kobo-docker-scripts/entrypoint.sh" + stop_grace_period: 5m # Ports should be declared in `docker-compose.backend.master.override.yml` and docker-compose.backend.slave.override.yml` #ports: # - 27017:27017 - # Adapted from https://github.com/kobotoolbox/enketo-express/blob/docker/docker-compose.yml. redis_main: image: redis:3.2 - # Map our "main" Redis config into the container. env_file: - ../kobo-deployments/envfile.txt - ../kobo-deployments/envfiles/databases.txt @@ -56,6 +56,7 @@ services: - ./redis/entrypoint.sh:/tmp/redis/entrypoint.sh:ro - ./log/redis_main:/var/log/redis restart: on-failure + stop_grace_period: 2m30s # Ports should be declared in `docker-compose.backend.master.override.yml` and docker-compose.backend.slave.override.yml` #ports: # - 6379:6379 @@ -63,16 +64,17 @@ services: - net.core.somaxconn=2048 command: "/bin/bash /tmp/redis/entrypoint.sh" - # Adapted from https://github.com/kobotoolbox/enketo-express/blob/docker/docker-compose.yml. redis_cache: image: redis:3.2 - # Map our "cache" Redis config into the container. 
+    env_file:
+      - ../kobo-deployments/envfiles/databases.txt
     volumes:
       - ./.vols/redis_cache_data/:/data/
       - ./redis/redis-enketo-cache.conf.tmpl:/etc/redis/redis.conf.tmpl:ro
       - ./redis/entrypoint.sh:/tmp/redis/entrypoint.sh:ro
       - ./log/redis_cache:/var/log/redis
     restart: on-failure
+    stop_grace_period: 2m30s
     # Ports should be declared in `docker-compose.backend.master.override.yml` and `docker-compose.backend.slave.override.yml`
     #ports:
     #  - 6380:6380
diff --git a/docker-compose.frontend.override.yml.sample b/docker-compose.frontend.override.yml.sample
new file mode 100644
index 00000000..4c156cfd
--- /dev/null
+++ b/docker-compose.frontend.override.yml.sample
@@ -0,0 +1,67 @@
+# For public, HTTPS servers.
+version: '3'
+
+services:
+  kobocat:
+    environment:
+      # change `ENKETO_PROTOCOL` to http if HTTPS is not used
+      - ENKETO_PROTOCOL=https
+      # `NGINX_PUBLIC_PORT` is the port used to access KoBoToolbox (e.g. `https://kc.kobotoolbox.org:`)
+      - NGINX_PUBLIC_PORT=80
+      # Uncomment the lines below to tweak uWSGI
+      #- KC_UWSGI_WORKERS_COUNT=2
+      #- KC_UWSGI_CHEAPER_WORKERS_COUNT=1
+      #- KC_UWSGI_MAX_REQUESTS=512
+      #- KC_UWSGI_CHEAPER_RSS_LIMIT_SOFT=134217728
+    networks:
+      kobo-be-network:
+        aliases:
+          - kobocat
+          - kobocat.docker.container
+
+  kpi:
+    environment:
+      # `NGINX_PUBLIC_PORT` is the port used to access KoBoToolbox (e.g. `https://kf.kobotoolbox.org:`)
+      - NGINX_PUBLIC_PORT=80
+      # Uncomment the lines below to tweak uWSGI
+      #- KPI_UWSGI_WORKERS_COUNT=2
+      #- KPI_UWSGI_CHEAPER_WORKERS_COUNT=1
+      #- KPI_UWSGI_MAX_REQUESTS=512
+      #- KPI_UWSGI_CHEAPER_RSS_LIMIT_SOFT=134217728
+      # Comment out the line below if HTTPS is not used
+      - SECURE_PROXY_SSL_HEADER=HTTP_X_FORWARDED_PROTO, https
+    networks:
+      kobo-be-network:
+        aliases:
+          - kpi
+          - kpi.docker.container
+
+  nginx:
+    environment:
+      # `NGINX_PUBLIC_PORT` is the port used to access KoBoToolbox (e.g. `https://kf.kobotoolbox.org:`)
+      - NGINX_PUBLIC_PORT=80
+    ports:
+      # <proxy_port>:80.
+      # If no proxies, `proxy_port` should be the same as `NGINX_PUBLIC_PORT`
+      - 80:80
+    networks:
+      kobo-fe-network:
+        aliases:
+          - nginx
+          # These aliases must match the concatenation of `*_PUBLIC_SUBDOMAIN` and `INTERNAL_DOMAIN_NAME`
+          # found in `../kobo-deployments/envfile.txt`
+          - kf.docker.internal
+          - kc.docker.internal
+          - ee.docker.internal
+
+  enketo_express:
+    networks:
+      kobo-be-network:
+        aliases:
+          - enketo_express
+
+networks:
+  kobo-be-network:
+    external:
+      # name: `<prefix>_kobo-be-network`, where `<prefix>` is usually the parent
+      # folder name
+      name: kobodocker_kobo-be-network
diff --git a/docker-compose.frontend.yml b/docker-compose.frontend.yml
index 05a84202..3b39f9b9 100644
--- a/docker-compose.frontend.yml
+++ b/docker-compose.frontend.yml
@@ -1,12 +1,11 @@
 # NOTE: Generate `../kobo-deployments/` environment files using
 # https://github.com/kobotoolbox/kobo-install. You may manually customize the
-# files aftewards and stop using kobo-install if necessary.
-
+# files afterwards and stop using kobo-install if necessary.
version: '3' services: kobocat: - image: kobotoolbox/kobocat:2.020.01 + image: kobotoolbox/kobocat:2.020.18 hostname: kobocat env_file: - ../kobo-deployments/envfile.txt @@ -25,8 +24,6 @@ services: - KC_UWSGI_WORKERS_COUNT=2 - KC_UWSGI_CHEAPER_RSS_LIMIT_SOFT=134217728 - KC_UWSGI_CHEAPER_WORKERS_COUNT=1 - # `NGINX_PUBLIC_PORT` should be declared in docker-compose.frontend.override.yml - #- NGINX_PUBLIC_PORT=80 volumes: - ./.vols/static/kobocat:/srv/static - ./.vols/kobocat_media_uploads:/srv/src/kobocat/media @@ -45,7 +42,7 @@ services: - kobocat.docker.container kpi: - image: kobotoolbox/kpi:2.020.01 + image: kobotoolbox/kpi:2.020.18 hostname: kpi env_file: - ../kobo-deployments/envfile.txt @@ -58,15 +55,11 @@ services: sysctls: - net.core.somaxconn=2048 environment: - # `SECURE_PROXY_SSL_HEADER` should be declared in docker-compose.frontend.override.yml - #- SECURE_PROXY_SSL_HEADER=HTTP_X_FORWARDED_PROTO, https - SYNC_KOBOCAT_XFORMS=False # Should be True on at least one frontend environment - KPI_UWSGI_MAX_REQUESTS=512 - KPI_UWSGI_WORKERS_COUNT=2 - KPI_UWSGI_CHEAPER_RSS_LIMIT_SOFT=134217728 - KPI_UWSGI_CHEAPER_WORKERS_COUNT=1 - # `NGINX_PUBLIC_PORT` should be declared in docker-compose.frontend.override.yml - #- NGINX_PUBLIC_PORT=80 volumes: - ./.vols/static/kpi:/srv/static - ./log/kpi:/srv/logs @@ -74,7 +67,9 @@ services: - ./scripts/wait_for_postgres.bash:/srv/init/wait_for_postgres.bash:ro - ./scripts/runtime_variables_kpi.source.bash:/etc/profile.d/runtime_variables_kpi.source.bash.sh:ro - ./uwsgi/kpi_uwsgi.ini:/srv/src/kpi/uwsgi.ini - # Allow access to Kobocat's media uploads within KPI + # Persistent storage for FileFields when S3 not used (e.g. 
exports, uploaded map layers)
+      - ./.vols/kpi_media:/srv/src/kpi/media
+      # Allow access to KoBoCAT media uploads within KPI
       - ./.vols/kobocat_media_uploads:/srv/src/kobocat/media
     restart: on-failure
     networks:
@@ -94,12 +89,6 @@ services:
       - ../kobo-deployments/envfiles/kpi.txt
     environment:
       - TEMPLATED_VAR_REFS=$${PUBLIC_REQUEST_SCHEME} $${INTERNAL_DOMAIN_NAME} $${PUBLIC_DOMAIN_NAME} $${KOBOFORM_PUBLIC_SUBDOMAIN} $${KOBOCAT_PUBLIC_SUBDOMAIN} $${ENKETO_EXPRESS_PUBLIC_SUBDOMAIN}
-      # `NGINX_PUBLIC_PORT` should be declared in docker-compose.frontend.override.yml
-      #- NGINX_PUBLIC_PORT=80
-      # `ports` should to be declared in docker-compose.frontend.override.yml
-      # It allows `kobo-install` to specify the port without opening port 80 by default
-      #ports:
-      #  - 80:80
     volumes:
       - ./.vols/static:/srv/www:ro
       - ./log/nginx:/var/log/nginx
@@ -112,13 +101,7 @@ services:
       kobo-fe-network:
         aliases:
           - nginx
-          # These aliases must match the concatenation of `*_PUBLIC_SUBDOMAIN` and `INTERNAL_DOMAIN_NAME` found in `../kobo-deployments/envfile.txt`
-          # and should be declared in docker-compose.frontend.override.yml
-          #- kc.docker.internal
-          #- kf.docker.internal
-          #- ee.docker.internal
-  # Adapted from https://github.com/kobotoolbox/enketo-express/blob/docker/docker-compose.yml.
   enketo_express:
     image: kobotoolbox/enketo-express-extra-widgets:1.86.3-jnm-docker-ie11
     env_file:
diff --git a/mongo/backup-to-disk.bash b/mongo/backup-to-disk.bash
index 1454c82b..e956f59e 100644
--- a/mongo/backup-to-disk.bash
+++ b/mongo/backup-to-disk.bash
@@ -6,6 +6,11 @@
 BACKUP_FILENAME="mongo-${MONGO_MAJOR}-${PUBLIC_DOMAIN_NAME}-${DBDATESTAMP}.gz"
 cd /srv/backups
 rm -rf *.gz
-mongodump --archive="${BACKUP_FILENAME}" --gzip
+
+if [[ -n "${MONGO_INITDB_ROOT_USERNAME}" ]] && [[ -n "${MONGO_INITDB_ROOT_PASSWORD}" ]]; then
+    mongodump --archive="${BACKUP_FILENAME}" --gzip --username="${MONGO_INITDB_ROOT_USERNAME}" --password="${MONGO_INITDB_ROOT_PASSWORD}"
+else
+    mongodump --archive="${BACKUP_FILENAME}" --gzip
+fi
 echo "Backup file \`${BACKUP_FILENAME}\` created successfully."
diff --git a/mongo/backup-to-s3.py b/mongo/backup-to-s3.py
index dfba42c2..7da71f21 100644
--- a/mongo/backup-to-s3.py
+++ b/mongo/backup-to-s3.py
@@ -14,7 +14,18 @@
     DBDATESTAMP,
 )
-BACKUP_COMMAND = "mongodump --archive --gzip"
+MONGO_INITDB_ROOT_USERNAME = os.environ.get('MONGO_INITDB_ROOT_USERNAME')
+MONGO_INITDB_ROOT_PASSWORD = os.environ.get('MONGO_INITDB_ROOT_PASSWORD')
+
+if MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD:
+    BACKUP_COMMAND = 'mongodump --archive --gzip --username="{username}"' \
+                     ' --password="{password}"'.format(
+                         username=MONGO_INITDB_ROOT_USERNAME,
+                         password=MONGO_INITDB_ROOT_PASSWORD
+                     )
+else:
+    BACKUP_COMMAND = "mongodump --archive --gzip"
+
 yearly_retention = int(os.environ.get("AWS_BACKUP_YEARLY_RETENTION", 2))
 monthly_retention = int(os.environ.get("AWS_BACKUP_MONTHLY_RETENTION", 12))
diff --git a/mongo/entrypoint.sh b/mongo/entrypoint.sh
index f575dac2..4aa9cbf1 100644
--- a/mongo/entrypoint.sh
+++ b/mongo/entrypoint.sh
@@ -1,7 +1,7 @@
 #!/usr/bin/env bash
 # set -e
-BASH_PATH=$(which bash)
+BASH_PATH=$(command -v bash)
 export
KOBO_DOCKER_SCRIPTS_DIR=/kobo-docker-scripts $BASH_PATH $KOBO_DOCKER_SCRIPTS_DIR/toggle-backup-activation.sh @@ -9,5 +9,9 @@ $BASH_PATH $KOBO_DOCKER_SCRIPTS_DIR/toggle-backup-activation.sh echo "Copying init scripts ..." cp $KOBO_DOCKER_SCRIPTS_DIR/init_* /docker-entrypoint-initdb.d/ +$BASH_PATH $KOBO_DOCKER_SCRIPTS_DIR/upsert_users.sh + echo "Launching official entrypoint..." -$BASH_PATH /entrypoint.sh mongod +# `exec` here is important to pass signals to the database server process; +# without `exec`, the server will be terminated abruptly with SIGKILL (see #276) +exec $BASH_PATH /entrypoint.sh mongod diff --git a/mongo/init_01_add_index.sh b/mongo/init_01_add_index.sh index 34fb7fec..a9f839f4 100644 --- a/mongo/init_01_add_index.sh +++ b/mongo/init_01_add_index.sh @@ -1,10 +1,12 @@ #!/bin/bash # copyleft 2015 Serban Teodorescu - # creates the additional required index for kobo mongo -DB=formhub +echo "Creating index for ${MONGO_INITDB_DATABASE}..." + COL=instances -echo "db.$COL.createIndex( { _userform_id: 1 } )" | mongo $DB +echo "db.$COL.createIndex( { _userform_id: 1 } )" | mongo ${MONGO_INITDB_DATABASE} + +echo "Done!" diff --git a/mongo/init_02_create_user.sh b/mongo/init_02_create_user.sh new file mode 100644 index 00000000..21f59941 --- /dev/null +++ b/mongo/init_02_create_user.sh @@ -0,0 +1,19 @@ +#!/bin/bash + +echo "Creating user for ${MONGO_INITDB_DATABASE}..." + +mongo=( mongo --host 127.0.0.1 --port 27017 --quiet ) + +_js_escape() { + jq --null-input --arg 'str' "$1" '$str' +} + +"${mongo[@]}" "$MONGO_INITDB_DATABASE" <<-EOJS + db.createUser({ + user: $(_js_escape "$KOBO_MONGO_USERNAME"), + pwd: $(_js_escape "$KOBO_MONGO_PASSWORD"), + roles: [ { role: 'readWrite', db: $(_js_escape "$MONGO_INITDB_DATABASE") } ] + }) +EOJS + +echo "Done!" 
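The `_js_escape` helper used by `init_02_create_user.sh` (and by `upsert_users.sh` below) relies on `jq` to turn an arbitrary shell string into a safely quoted JSON string literal, which is also a valid JavaScript string literal, before splicing it into the `mongo` heredoc. That way a password containing quotes or backslashes cannot break out of the generated script. A standalone sketch of the idea (assumes `jq` is installed):

```shell
#!/usr/bin/env bash
# Same technique as _js_escape: jq emits a JSON string literal,
# quoting and escaping included, so the result can be interpolated
# verbatim into mongo shell code.
_js_escape() {
    jq --null-input --arg 'str' "$1" '$str'
}

_js_escape 'pa"ss\word'   # -> "pa\"ss\\word"
```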
diff --git a/mongo/toggle-backup-activation.sh b/mongo/toggle-backup-activation.sh
index ed764a5d..cfcbeccf 100644
--- a/mongo/toggle-backup-activation.sh
+++ b/mongo/toggle-backup-activation.sh
@@ -13,6 +13,8 @@ else
     # Pass env variables to cron task
     echo "MONGO_MAJOR=${MONGO_MAJOR}" >> /etc/cron.d/backup_mongo_crontab
     echo "PUBLIC_DOMAIN_NAME=${PUBLIC_DOMAIN_NAME}" >> /etc/cron.d/backup_mongo_crontab
+    echo "MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}" >> /etc/cron.d/backup_mongo_crontab
+    echo "MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}" >> /etc/cron.d/backup_mongo_crontab
     # To use S3 as storage, AWS access key, secret key and bucket name must be filled in
     USE_S3=1
@@ -20,48 +22,50 @@
     FALSE=0
     # Add only non-empty variables to cron tasks
-    if [ ! -z "${AWS_ACCESS_KEY_ID}" ]; then
+    if [ -n "${AWS_ACCESS_KEY_ID}" ]; then
         echo "AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}" >> /etc/cron.d/backup_mongo_crontab
     else
         USE_S3=$FALSE
     fi
-    if [ ! -z "${AWS_SECRET_ACCESS_KEY}" ]; then
+    if [ -n "${AWS_SECRET_ACCESS_KEY}" ]; then
         echo "AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}" >> /etc/cron.d/backup_mongo_crontab
     else
         USE_S3=$FALSE
     fi
-    if [ ! -z "${BACKUP_AWS_STORAGE_BUCKET_NAME}" ]; then
+    if [ -n "${BACKUP_AWS_STORAGE_BUCKET_NAME}" ]; then
         echo "BACKUP_AWS_STORAGE_BUCKET_NAME=${BACKUP_AWS_STORAGE_BUCKET_NAME}" >> /etc/cron.d/backup_mongo_crontab
     else
         USE_S3=$FALSE
     fi
-    if [ ! -z "${AWS_BACKUP_BUCKET_DELETION_RULE_ENABLED}" ]; then
+    if [ -n "${AWS_BACKUP_BUCKET_DELETION_RULE_ENABLED}" ]; then
         echo "AWS_BACKUP_BUCKET_DELETION_RULE_ENABLED=${AWS_BACKUP_BUCKET_DELETION_RULE_ENABLED}" >> /etc/cron.d/backup_mongo_crontab
     fi
-    if [ ! -z "${AWS_BACKUP_YEARLY_RETENTION}" ]; then
+    if [ -n "${AWS_BACKUP_YEARLY_RETENTION}" ]; then
         echo "AWS_BACKUP_YEARLY_RETENTION=${AWS_BACKUP_YEARLY_RETENTION}" >> /etc/cron.d/backup_mongo_crontab
     fi
-    if [ !
-z "${AWS_BACKUP_MONTHLY_RETENTION}" ]; then
+    if [ -n "${AWS_BACKUP_MONTHLY_RETENTION}" ]; then
         echo "AWS_BACKUP_MONTHLY_RETENTION=${AWS_BACKUP_MONTHLY_RETENTION}" >> /etc/cron.d/backup_mongo_crontab
     fi
-    if [ ! -z "${AWS_BACKUP_WEEKLY_RETENTION}" ]; then
+    if [ -n "${AWS_BACKUP_WEEKLY_RETENTION}" ]; then
         echo "AWS_BACKUP_WEEKLY_RETENTION=${AWS_BACKUP_WEEKLY_RETENTION}" >> /etc/cron.d/backup_mongo_crontab
     fi
-    if [ ! -z "${AWS_BACKUP_DAILY_RETENTION}" ]; then
+    if [ -n "${AWS_BACKUP_DAILY_RETENTION}" ]; then
         echo "AWS_BACKUP_DAILY_RETENTION=${AWS_BACKUP_DAILY_RETENTION}" >> /etc/cron.d/backup_mongo_crontab
     fi
-    if [ ! -z "${AWS_MONGO_BACKUP_MINIMUM_SIZE}" ]; then
+    if [ -n "${AWS_MONGO_BACKUP_MINIMUM_SIZE}" ]; then
         echo "AWS_MONGO_BACKUP_MINIMUM_SIZE=${AWS_MONGO_BACKUP_MINIMUM_SIZE}" >> /etc/cron.d/backup_mongo_crontab
     fi
     if [ "$USE_S3" -eq "$TRUE" ]; then
         echo "Installing virtualenv for Mongo backup on S3..."
-        apt-get install -y s3cmd --quiet=2 > /dev/null
-        apt-get install -y python-virtualenv --quiet=2 > /dev/null
-        virtualenv /tmp/backup-virtualenv
+        apt-get install -y python3-pip --quiet=2 > /dev/null
+        python3 -m pip install --upgrade --quiet pip
+        python3 -m pip install --upgrade --quiet virtualenv
+        python3 -m pip install --quiet s3cmd
+        virtualenv --quiet -p /usr/bin/python3 /tmp/backup-virtualenv
         . /tmp/backup-virtualenv/bin/activate
         pip install --quiet boto
         deactivate
@@ -69,7 +73,7 @@ else
         INTERPRETER=/tmp/backup-virtualenv/bin/python
         BACKUP_SCRIPT="/kobo-docker-scripts/backup-to-s3.py"
     else
-        INTERPRETER=$(which bash)
+        INTERPRETER=$(command -v bash)
         BACKUP_SCRIPT="/kobo-docker-scripts/backup-to-disk.bash"
     fi
diff --git a/mongo/upsert_users.sh b/mongo/upsert_users.sh
new file mode 100644
index 00000000..da9df558
--- /dev/null
+++ b/mongo/upsert_users.sh
@@ -0,0 +1,107 @@
+#!/bin/bash
+# Update users if the database has already been created.
+# User creation on init is left to `init_02_create_user.sh`
+BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
+MONGO_CMD=( mongo --host 127.0.0.1 --port 27017 --quiet )
+MONGO_ADMIN_DATABASE=admin
+UPSERT_DB_USERS_TRIGGER_FILE=".upsert_db_users"
+
+_js_escape() {
+    jq --null-input --arg 'str' "$1" '$str'
+}
+
+create_root() {
+    "${MONGO_CMD[@]}" "$MONGO_ADMIN_DATABASE" <<-EOJS
+    db.createUser({
+        user: $(_js_escape "$MONGO_INITDB_ROOT_USERNAME"),
+        pwd: $(_js_escape "$MONGO_INITDB_ROOT_PASSWORD"),
+        roles: [ { role: 'root', db: $(_js_escape "$MONGO_ADMIN_DATABASE") } ]
+    })
+EOJS
+}
+
+create_user() {
+    "${MONGO_CMD[@]}" "$MONGO_INITDB_DATABASE" <<-EOJS
+    db.createUser({
+        user: $(_js_escape "$KOBO_MONGO_USERNAME"),
+        pwd: $(_js_escape "$KOBO_MONGO_PASSWORD"),
+        roles: [ { role: 'readWrite', db: $(_js_escape "$MONGO_INITDB_DATABASE") } ]
+    })
+EOJS
+}
+
+delete_old_users() {
+    old_ifs="$IFS"
+    while IFS=$'\t' read -r -a lines
+    do
+        old_user="${lines[0]}"
+        db="${lines[1]}"
+        delete_user "$old_user" "$db"
+    done < "${BASE_DIR}/${UPSERT_DB_USERS_TRIGGER_FILE}"
+    IFS="$old_ifs"
+}
+
+delete_user() {
+    # Args:
+    #   $1: username
+    #   $2: database
+    "${MONGO_CMD[@]}" "$2" <<-EOJS
+    db.dropUser($(_js_escape "$1"))
+EOJS
+}
+
+get_user() {
+    # Args:
+    #   $1: username
+    #   $2: database
+    "${MONGO_CMD[@]}" "$2" <<-EOJS
+    db.getUser($(_js_escape "$1"))
+EOJS
+}
+
+update_password() {
+    # Args:
+    #   $1: username
+    #   $2: password
+    #   $3: database
+    "${MONGO_CMD[@]}" "$3" <<-EOJS
+    db.updateUser($(_js_escape "$1"), {
+        pwd: $(_js_escape "$2")
+    })
+EOJS
+}
+
+upsert_users() {
+    user=$(get_user "$KOBO_MONGO_USERNAME" "$MONGO_INITDB_DATABASE")
+    if [[ "$user" == "null" ]]; then
+        echo "Creating user for ${MONGO_INITDB_DATABASE}..."
+        create_user
+    else
+        echo "Updating user's password..."
+        update_password "$KOBO_MONGO_USERNAME" "$KOBO_MONGO_PASSWORD" "$MONGO_INITDB_DATABASE"
+    fi
+
+    root=$(get_user "$MONGO_INITDB_ROOT_USERNAME" "$MONGO_ADMIN_DATABASE")
+    if [[ "$root" == "null" ]]; then
+        echo "Creating super user..."
+        create_root
+    else
+        echo "Updating super user's password..."
+        update_password "$MONGO_INITDB_ROOT_USERNAME" "$MONGO_INITDB_ROOT_PASSWORD" "$MONGO_ADMIN_DATABASE"
+    fi
+    echo 'Done!'
+}
+
+# Update credentials only if `data/db` is not empty and `.upsert_db_users` exists.
+if [[ -d "${MONGO_DATA}/" ]] && [[ -n "$(ls -A ${MONGO_DATA})" ]]; then
+    # `.upsert_db_users` is created by KoBoInstall if it has detected a
+    # credentials change during setup.
+    if [[ -f "${BASE_DIR}/${UPSERT_DB_USERS_TRIGGER_FILE}" ]]; then
+        mongod --quiet &
+        sleep 5  # wait for mongo to be ready
+        delete_old_users
+        upsert_users
+        mongod --quiet --shutdown
+        rm -f "${BASE_DIR}/${UPSERT_DB_USERS_TRIGGER_FILE}"
+    fi
+fi
diff --git a/postgres/entrypoint.sh b/postgres/entrypoint.sh
index 6e1feba3..bb713125 100644
--- a/postgres/entrypoint.sh
+++ b/postgres/entrypoint.sh
@@ -15,8 +15,6 @@
 echo "Copying init scripts ..."
 cp $KOBO_DOCKER_SCRIPTS_DIR/shared/init_* /docker-entrypoint-initdb.d/
 cp $KOBO_DOCKER_SCRIPTS_DIR/$KOBO_POSTGRES_DB_SERVER_ROLE/init_* /docker-entrypoint-initdb.d/
-
-# Restore permissions
 if [ ! -d $POSTGRES_LOGS_DIR ]; then
     mkdir -p $POSTGRES_LOGS_DIR
 fi
@@ -25,16 +23,24 @@
 if [ ! -d $POSTGRES_BACKUPS_DIR ]; then
     mkdir -p $POSTGRES_BACKUPS_DIR
 fi

+# Restore permissions
 chown -R postgres:postgres $POSTGRES_LOGS_DIR
 chown -R postgres:postgres $POSTGRES_BACKUPS_DIR

 # if file exists. Container has already boot once
 if [ -f "$POSTGRES_DATA_DIR/kobo_first_run" ]; then
+    # Start server locally.
+ su - postgres -c "$(command -v pg_ctl) -D \"$PGDATA\" -o \"-c listen_addresses='127.0.0.1'\" -w start" + until pg_isready -h 127.0.0.1 ; do + sleep 1 + done + /bin/bash $KOBO_DOCKER_SCRIPTS_DIR/shared/init_02_set_postgres_config.sh + /bin/bash $KOBO_DOCKER_SCRIPTS_DIR/shared/upsert_users.sh + update-postgis.sh - # Update PostGIS as background task. - # FIXME There should be a better way to run this script - sleep 30 && update-postgis.sh & + # Stop server + su - postgres -c "$(command -v pg_ctl) -D \"$PGDATA\" -m fast -w stop" elif [ "$KOBO_POSTGRES_DB_SERVER_ROLE" == "slave" ]; then # Because slave is a replica. This script has already been run on master @@ -43,8 +49,10 @@ elif [ "$KOBO_POSTGRES_DB_SERVER_ROLE" == "slave" ]; then fi -BASH_PATH=$(which bash) +BASH_PATH=$(command -v bash) $BASH_PATH $KOBO_DOCKER_SCRIPTS_DIR/toggle-backup-activation.sh echo "Launching official entrypoint..." -/bin/bash /docker-entrypoint.sh postgres +# `exec` here is important to pass signals to the database server process; +# without `exec`, the server will be terminated abruptly with SIGKILL (see #276) +exec /bin/bash /docker-entrypoint.sh postgres diff --git a/postgres/shared/init_02_set_postgres_config.sh b/postgres/shared/init_02_set_postgres_config.sh index b8fc1bb8..691029cc 100644 --- a/postgres/shared/init_02_set_postgres_config.sh +++ b/postgres/shared/init_02_set_postgres_config.sh @@ -1,7 +1,5 @@ #!/usr/bin/env bash - - if [ ! -f "$POSTGRES_CONFIG_FILE.orig" ]; then echo "Let's keep a copy of current configuration file!" cp $POSTGRES_CONFIG_FILE "$POSTGRES_CONFIG_FILE.orig" @@ -24,4 +22,4 @@ echo "Applying new client authentication configuration file..." cp $KOBO_DOCKER_SCRIPTS_DIR/shared/pg_hba.conf "$POSTGRES_CLIENT_AUTH_FILE" echo "Creating hg_hba config file..." 
-sed -i "s/KOBO_POSTGRES_REPLICATION_USER/${KOBO_POSTGRES_REPLICATION_USER//\"/}/g" "$POSTGRES_CLIENT_AUTH_FILE" \ No newline at end of file +sed -i "s/KOBO_POSTGRES_REPLICATION_USER/${KOBO_POSTGRES_REPLICATION_USER//\"/}/g" "$POSTGRES_CLIENT_AUTH_FILE" diff --git a/postgres/shared/upsert_users.sh b/postgres/shared/upsert_users.sh new file mode 100644 index 00000000..7017bd1d --- /dev/null +++ b/postgres/shared/upsert_users.sh @@ -0,0 +1,91 @@ +#!/bin/bash +# Update users if database has been already created. +BASE_DIR="$(dirname $(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd))" +UPSERT_DB_USERS_TRIGGER_FILE=".upsert_db_users" +PSQL_CMD="" + +create_user() { + sql="CREATE USER $POSTGRES_USER WITH SUPERUSER CREATEDB CREATEROLE REPLICATION BYPASSRLS ENCRYPTED PASSWORD '$POSTGRES_PASSWORD';" + "${PSQL_CMD[@]}" -q -c "$sql" +} + +does_user_exist() { + if "${PSQL_CMD[@]}" -t -c '\du' | cut -d \| -f 1 | grep -qw "$POSTGRES_USER"; then + echo 1 + else + echo 0 + fi +} + +delete_user() { + user="$1" + echo "Deleting user \`$user\`..." + sql="DROP USER \"$user\";" + "${PSQL_CMD[@]}" -q -c "$sql" +} + +get_old_user() { + # `${BASE_DIR}/${UPSERT_DB_USERS_TRIGGER_FILE}` contains previous username + # and a boolean for deletion. 
+    # Its format should be: ``
+    old_ifs="$IFS"
+    IFS=$'\t' read -r -a line < "${BASE_DIR}/${UPSERT_DB_USERS_TRIGGER_FILE}"
+    IFS="$old_ifs"
+    echo "${line[0]}"  # echo OLD_USER
+    if [[ "${line[1]}" == "true" ]]; then
+        return 1
+    else
+        return 0
+    fi
+}
+
+get_psql_command() {
+    # We need to find the name of the default DB created by `init_db` to be
+    # able to perform the next commands
+    known_dbs=( kobo postgres $OLD_USER $POSTGRES_USER )
+    user="$1"
+    for db in "${known_dbs[@]}"; do
+        if psql -U "$user" -d "$db" -q -c "\du" | grep -vq "FATAL"; then
+            PSQL_CMD=( psql -U "$user" -d "$db" )
+            break
+        fi
+    done
+
+    if [[ "$PSQL_CMD" == "" ]]; then
+        echo "Could not connect with \`psql\`"
+        exit
+    fi
+}
+
+update_password() {
+    sql="ALTER USER $POSTGRES_USER WITH ENCRYPTED PASSWORD '$POSTGRES_PASSWORD';"
+    "${PSQL_CMD[@]}" -q -c "$sql"
+}
+
+upsert_users() {
+    if [[ $(does_user_exist) == "0" ]]; then
+        echo "Creating user..."
+        create_user
+    else
+        echo "Updating user's password..."
+        update_password
+    fi
+    echo 'Done!'
+}
+
+# Update credentials only if `/var/lib/postgresql/data` is not empty and `.upsert_db_users` exists.
+if [[ -d "${PGDATA}/" ]] && [[ -n "$(ls -A ${PGDATA})" ]]; then
+    # `.upsert_db_users` is created by KoBoInstall if it has detected that
+    # credentials changed during setup.
+    if [[ -f "${BASE_DIR}/${UPSERT_DB_USERS_TRIGGER_FILE}" ]]; then
+        OLD_USER=$(get_old_user)
+        delete=$?
+        get_psql_command "$OLD_USER"
+        upsert_users
+        if [[ "$delete" == "1" ]] && [[ "$POSTGRES_USER" != "$OLD_USER" ]]; then
+            get_psql_command "$POSTGRES_USER"
+            delete_user "$OLD_USER"
+        fi
+        rm -f "${BASE_DIR}/${UPSERT_DB_USERS_TRIGGER_FILE}"
+    fi
+fi
diff --git a/postgres/toggle-backup-activation.sh b/postgres/toggle-backup-activation.sh
index a9babe94..0bc7b5b3 100644
--- a/postgres/toggle-backup-activation.sh
+++ b/postgres/toggle-backup-activation.sh
@@ -26,48 +26,50 @@ else
     FALSE=0

     # Add only non-empty variable to cron tasks
-    if [ !
-z "${AWS_ACCESS_KEY_ID}" ]; then + if [ -n "${AWS_ACCESS_KEY_ID}" ]; then echo "AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}" >> /etc/cron.d/backup_postgres_crontab else USE_S3=$FALSE fi - if [ ! -z "${AWS_SECRET_ACCESS_KEY}" ]; then + if [ -n "${AWS_SECRET_ACCESS_KEY}" ]; then echo "AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}" >> /etc/cron.d/backup_postgres_crontab else USE_S3=$FALSE fi - if [ ! -z "${BACKUP_AWS_STORAGE_BUCKET_NAME}" ]; then + if [ -n "${BACKUP_AWS_STORAGE_BUCKET_NAME}" ]; then echo "BACKUP_AWS_STORAGE_BUCKET_NAME=${BACKUP_AWS_STORAGE_BUCKET_NAME}" >> /etc/cron.d/backup_postgres_crontab else USE_S3=$FALSE fi - if [ ! -z "${AWS_BACKUP_BUCKET_DELETION_RULE_ENABLED}" ]; then + if [ -n "${AWS_BACKUP_BUCKET_DELETION_RULE_ENABLED}" ]; then echo "AWS_BACKUP_BUCKET_DELETION_RULE_ENABLED=${AWS_BACKUP_BUCKET_DELETION_RULE_ENABLED}" >> /etc/cron.d/backup_postgres_crontab fi - if [ ! -z "${AWS_BACKUP_YEARLY_RETENTION}" ]; then + if [ -n "${AWS_BACKUP_YEARLY_RETENTION}" ]; then echo "AWS_BACKUP_YEARLY_RETENTION=${AWS_BACKUP_YEARLY_RETENTION}" >> /etc/cron.d/backup_postgres_crontab fi - if [ ! -z "${AWS_BACKUP_MONTHLY_RETENTION}" ]; then + if [ -n "${AWS_BACKUP_MONTHLY_RETENTION}" ]; then echo "AWS_BACKUP_MONTHLY_RETENTION=${AWS_BACKUP_MONTHLY_RETENTION}" >> /etc/cron.d/backup_postgres_crontab fi - if [ ! -z "${AWS_BACKUP_WEEKLY_RETENTION}" ]; then + if [ -n "${AWS_BACKUP_WEEKLY_RETENTION}" ]; then echo "AWS_BACKUP_WEEKLY_RETENTION=${AWS_BACKUP_WEEKLY_RETENTION}" >> /etc/cron.d/backup_postgres_crontab fi - if [ ! -z "${AWS_BACKUP_DAILY_RETENTION}" ]; then + if [ -n "${AWS_BACKUP_DAILY_RETENTION}" ]; then echo "AWS_BACKUP_DAILY_RETENTION=${AWS_BACKUP_DAILY_RETENTION}" >> /etc/cron.d/backup_postgres_crontab fi - if [ ! 
-z "${AWS_POSTGRES_BACKUP_MINIMUM_SIZE}" ]; then + if [ -n "${AWS_POSTGRES_BACKUP_MINIMUM_SIZE}" ]; then echo "AWS_POSTGRES_BACKUP_MINIMUM_SIZE=${AWS_POSTGRES_BACKUP_MINIMUM_SIZE}" >> /etc/cron.d/backup_postgres_crontab fi if [ "$USE_S3" -eq "$TRUE" ]; then echo "Installing virtualenv for PostgreSQL backup on S3..." - apt-get install -y s3cmd --quiet=2 > /dev/null - apt-get install -y python-virtualenv --quiet=2 > /dev/null - virtualenv /tmp/backup-virtualenv + apt-get install -y python3-pip --quiet=2 > /dev/null + python3 -m pip install --upgrade --quiet pip + python3 -m pip install --upgrade --quiet virtualenv + python3 -m pip install --quiet s3cmd + virtualenv --quiet -p /usr/bin/python3 /tmp/backup-virtualenv . /tmp/backup-virtualenv/bin/activate pip install --quiet humanize smart-open==1.7.1 deactivate @@ -75,7 +77,7 @@ else INTERPRETER=/tmp/backup-virtualenv/bin/python BACKUP_SCRIPT="/kobo-docker-scripts/backup-to-s3.py" else - INTERPRETER=$(which bash) + INTERPRETER=$(command -v bash) BACKUP_SCRIPT="/kobo-docker-scripts/backup-to-disk.bash" fi diff --git a/redis/.nfs000000020424530700000006 b/redis/.nfs000000020424530700000006 new file mode 100644 index 00000000..1462b268 --- /dev/null +++ b/redis/.nfs000000020424530700000006 @@ -0,0 +1,66 @@ +# Redis configuration for Enketo's XSLT Cache + +supervised no + +daemonize no + +pidfile /var/run/redis/redis-enketo-cache.pid + +port 6380 + +bind 127.0.0.1 ${CONTAINER_IP} + +timeout 0 + +tcp-keepalive 0 + +loglevel notice + +logfile /var/log/redis/redis-enketo-cache.log + +databases 16 + +save 3600 1 + +stop-writes-on-bgsave-error yes + +rdbcompression yes + +rdbchecksum yes + +dbfilename enketo-cache.rdb + +dir /data/ + +slave-serve-stale-data yes +slave-read-only yes +repl-disable-tcp-nodelay no +slave-priority 100 + +appendonly no + +lua-time-limit 5000 + +slowlog-log-slower-than 10000 + +slowlog-max-len 128 + +notify-keyspace-events "" + +hash-max-ziplist-entries 512 +hash-max-ziplist-value 64 
+list-max-ziplist-entries 512 +list-max-ziplist-value 64 +set-max-intset-entries 512 +zset-max-ziplist-entries 128 +zset-max-ziplist-value 64 +activerehashing yes +client-output-buffer-limit normal 0 0 0 +client-output-buffer-limit slave 256mb 64mb 60 +client-output-buffer-limit pubsub 32mb 8mb 60 +hz 10 +aof-rewrite-incremental-fsync yes + +requirepass "${REDIS_PASSWORD}" + +#include /etc/redis/conf.d/local.conf diff --git a/redis/.nfs000000020424530800000007 b/redis/.nfs000000020424530800000007 new file mode 100644 index 00000000..562147ff --- /dev/null +++ b/redis/.nfs000000020424530800000007 @@ -0,0 +1,66 @@ +# Redis configuration for Enketo's main database instance + +supervised no + +daemonize no + +pidfile /var/run/redis/redis-enketo-main.pid + +port 6379 + +bind 127.0.0.1 ${CONTAINER_IP} + +timeout 0 + +tcp-keepalive 0 + +loglevel notice + +logfile /var/log/redis/redis-enketo-main.log + +databases 16 + +save 300 1 + +stop-writes-on-bgsave-error yes + +rdbcompression yes + +rdbchecksum yes + +dbfilename enketo-main.rdb + +dir /data/ + +slave-serve-stale-data yes +slave-read-only yes +repl-disable-tcp-nodelay no +slave-priority 100 + +appendonly no + +lua-time-limit 5000 + +slowlog-log-slower-than 10000 + +slowlog-max-len 128 + +notify-keyspace-events "" + +hash-max-ziplist-entries 512 +hash-max-ziplist-value 64 +list-max-ziplist-entries 512 +list-max-ziplist-value 64 +set-max-intset-entries 512 +zset-max-ziplist-entries 128 +zset-max-ziplist-value 64 +activerehashing yes +client-output-buffer-limit normal 0 0 0 +client-output-buffer-limit slave 256mb 64mb 60 +client-output-buffer-limit pubsub 32mb 8mb 60 +hz 10 +aof-rewrite-incremental-fsync yes + +requirepass "${REDIS_PASSWORD}" + +#include /etc/redis/conf.d/local.conf diff --git a/redis/.nfs000000020442fe9e00000005 b/redis/.nfs000000020442fe9e00000005 new file mode 100644 index 00000000..3a0ca60b --- /dev/null +++ b/redis/.nfs000000020442fe9e00000005 @@ -0,0 +1,38 @@ +#!/usr/bin/env bash + 
+ORIGINAL_DIR="/tmp/redis" +REDIS_LOG_DIR="/var/log/redis" +REDIS_CONF_DIR="/etc/redis" +REDIS_CONF_FILE="${REDIS_CONF_DIR}/redis.conf" +REDIS_DATA_DIR="/data/" + +export CONTAINER_IP=$(awk 'END{print $1}' /etc/hosts) +export REDIS_PASSWORD=$(echo $REDIS_PASSWORD | sed 's/"/\\"/g') + +if [[ ! -d "$REDIS_LOG_DIR" ]]; then + mkdir -p "$REDIS_LOG_DIR" +fi + +if [[ ! -d "$REDIS_DATA_DIR" ]]; then + mkdir -p "$REDIS_DATA_DIR" +fi + +# install envsubst +apt-get update && apt-get -y install gettext-base + +cat "$REDIS_CONF_FILE.tmpl" \ + | envsubst '${CONTAINER_IP} ${REDIS_PASSWORD}' \ + > "$REDIS_CONF_FILE" + +# Make logs directory writable +chown -R redis:redis "$REDIS_LOG_DIR" +chown redis:redis "$REDIS_CONF_FILE" +chown -R redis:redis "$REDIS_DATA_DIR" + +if [[ "$KOBO_REDIS_SERVER_ROLE" == "main" ]]; then + BASH_PATH=$(command -v bash) + export KOBO_DOCKER_SCRIPTS_DIR=/kobo-docker-scripts + "$BASH_PATH" "$KOBO_DOCKER_SCRIPTS_DIR/toggle-backup-activation.sh" +fi + +su redis -c "redis-server /etc/redis/redis.conf" diff --git a/redis/backup-to-s3.py b/redis/backup-to-s3.py index 3ff91034..d4a72d2b 100644 --- a/redis/backup-to-s3.py +++ b/redis/backup-to-s3.py @@ -14,7 +14,7 @@ DBDATESTAMP, ) -BACKUP_COMMAND = "$(which bash) /kobo-docker-scripts/backup-to-disk.bash {}".format( +BACKUP_COMMAND = "$(command -v bash) /kobo-docker-scripts/backup-to-disk.bash {}".format( DUMPFILE ) diff --git a/redis/entrypoint.sh b/redis/entrypoint.sh index e3fc55d4..776a810f 100644 --- a/redis/entrypoint.sh +++ b/redis/entrypoint.sh @@ -2,36 +2,40 @@ ORIGINAL_DIR="/tmp/redis" REDIS_LOG_DIR="/var/log/redis" -REDIS_CONF_DIR="/etc/redis/" +REDIS_CONF_DIR="/etc/redis" REDIS_CONF_FILE="${REDIS_CONF_DIR}/redis.conf" REDIS_DATA_DIR="/data/" -CONTAINER_IP=$(awk 'END{print $1}' /etc/hosts) +export CONTAINER_IP=$(awk 'END{print $1}' /etc/hosts) +export REDIS_PASSWORD=$(echo $REDIS_PASSWORD | sed 's/"/\\"/g') -if [ ! -d "$REDIS_LOG_DIR" ]; then +if [[ ! 
-d "$REDIS_LOG_DIR" ]]; then mkdir -p "$REDIS_LOG_DIR" fi -if [ ! -d "$REDIS_DATA_DIR" ]; then +if [[ ! -d "$REDIS_DATA_DIR" ]]; then mkdir -p "$REDIS_DATA_DIR" fi +# install envsubst +apt-get update && apt-get -y install gettext-base -# Copy config file -cp "${REDIS_CONF_FILE}.tmpl" $REDIS_CONF_FILE - -# Create redis-server configuration file -sed -i "s~\${CONTAINER_IP}~${CONTAINER_IP//\"/}~g" "$REDIS_CONF_FILE" +cat "$REDIS_CONF_FILE.tmpl" \ + | envsubst '${CONTAINER_IP} ${REDIS_PASSWORD}' \ + > "$REDIS_CONF_FILE" # Make logs directory writable chown -R redis:redis "$REDIS_LOG_DIR" chown redis:redis "$REDIS_CONF_FILE" chown -R redis:redis "$REDIS_DATA_DIR" -if [ "${KOBO_REDIS_SERVER_ROLE}" == "main" ]; then - BASH_PATH=$(which bash) +if [[ "$KOBO_REDIS_SERVER_ROLE" == "main" ]]; then + BASH_PATH=$(command -v bash) export KOBO_DOCKER_SCRIPTS_DIR=/kobo-docker-scripts - $BASH_PATH $KOBO_DOCKER_SCRIPTS_DIR/toggle-backup-activation.sh + "$BASH_PATH" "$KOBO_DOCKER_SCRIPTS_DIR/toggle-backup-activation.sh" fi -su redis -c "redis-server /etc/redis/redis.conf" +# `exec` and `gosu` (vs. 
`su`) here are important to pass signals to the
+# database server process; without them, the server will be terminated abruptly
+# with SIGKILL (see #276)
+exec gosu redis redis-server /etc/redis/redis.conf
diff --git a/redis/redis-enketo-cache.conf.tmpl b/redis/redis-enketo-cache.conf.tmpl
index db66c5cc..1462b268 100644
--- a/redis/redis-enketo-cache.conf.tmpl
+++ b/redis/redis-enketo-cache.conf.tmpl
@@ -61,4 +61,6 @@
 client-output-buffer-limit pubsub 32mb 8mb 60
 hz 10
 aof-rewrite-incremental-fsync yes
-#include /etc/redis/conf.d/local.conf
\ No newline at end of file
+requirepass "${REDIS_PASSWORD}"
+
+#include /etc/redis/conf.d/local.conf
diff --git a/redis/redis-enketo-main.conf.tmpl b/redis/redis-enketo-main.conf.tmpl
index fabe56de..562147ff 100644
--- a/redis/redis-enketo-main.conf.tmpl
+++ b/redis/redis-enketo-main.conf.tmpl
@@ -61,4 +61,6 @@
 client-output-buffer-limit pubsub 32mb 8mb 60
 hz 10
 aof-rewrite-incremental-fsync yes
-#include /etc/redis/conf.d/local.conf
\ No newline at end of file
+requirepass "${REDIS_PASSWORD}"
+
+#include /etc/redis/conf.d/local.conf
diff --git a/redis/toggle-backup-activation.sh b/redis/toggle-backup-activation.sh
index 0fdfac07..4ba56d49 100644
--- a/redis/toggle-backup-activation.sh
+++ b/redis/toggle-backup-activation.sh
@@ -69,7 +69,7 @@ else
         INTERPRETER=/tmp/backup-virtualenv/bin/python
         BACKUP_SCRIPT="/kobo-docker-scripts/backup-to-s3.py"
     else
-        INTERPRETER=$(which bash)
+        INTERPRETER=$(command -v bash)
         BACKUP_SCRIPT="/kobo-docker-scripts/backup-to-disk.bash"
     fi