A metadata secretary for battery science
API client libraries:
The Galv backend provides a REST API powered by Django and Django REST Framework.
For more complete documentation, see the Galv Server documentation.
The Galv backend is deployed using Docker, and there are a number of ways to deploy it.
Each release is accompanied by a Docker image.
The latest stable version is tagged as latest.
You can acquire the image by pulling it from GitHub Packages:
docker pull ghcr.io/galv-team/galv-backend:latest
You can then run the image using the following command:
docker run -p 8001:80 ghcr.io/galv-team/galv-backend:latest
You will need to add a database and set the environment variables appropriately, as detailed below.
Galv can also be deployed using the Dockerfile provided in this repository; example usage is shown in the docker-compose.yml file. That setup is generally for development, however, so you will still need to add a database and set the environment variables appropriately.
You should ensure that all environment variables in the .env file are set correctly before deploying. These variables can be set by editing and including the .env file, by setting them in the environment, or by setting them via a hosting platform's interface.
Development is most easily done by using the provided Dockerfile and docker-compose.yml files. The docker-compose.yml file will start a postgres database and the Django server. The Django server will automatically reload when changes are made to the code. The following command will start the server:
docker-compose up app
The server will be available at http://localhost:8001.
- The docker-compose file only mounts the galv-backend directory, so if you add a new file or directory to the project root, you will need to rebuild the container.
- The app container is started with server.sh. If this file has acquired non-LF line endings, the container will report that it can't be found when starting.
To set up the development environment in PyCharm, make sure there's a project interpreter set up for the Docker container. Once you have that, create a Django server configuration with the following settings:
- Host: 0.0.0.0 (this allows you to reach the server from your host machine)
- Port: 80 (not 8001 - this is the port on the Docker container, not the host machine)
Documentation is generated using Sphinx. To make it easy to develop documentation, a Dockerfile is provided that will build the documentation and serve it using a webserver. It should refresh automatically when changes are made to the documentation.
The docs container is started with docker-compose up docs.
By default, it will serve at http://localhost:8005.
The documentation supports multiple versions.
To add a new version, add a new entry to docs/tags.json.
These tags must be in the format v*.*.* and must be available as a git tag. Tags that match v\d+\.\d+\.\d+ will be tagged as latest when released. Tags with a suffix, e.g. v1.0.0-beta, will not be tagged as latest.
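The tag rules above can be sketched with a quick check (a simplified sketch, not the actual release-workflow logic):

```python
import re

# Tags matching exactly vMAJOR.MINOR.PATCH are stable releases and are
# additionally tagged "latest"; tags with a suffix (e.g. -beta) are not.
STABLE_TAG = re.compile(r"^v\d+\.\d+\.\d+$")

def is_latest_candidate(tag: str) -> bool:
    """Return True if this git tag would be tagged as latest on release."""
    return bool(STABLE_TAG.match(tag))

print(is_latest_candidate("v1.2.3"))       # True
print(is_latest_candidate("v1.0.0-beta"))  # False
```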
There is a fairly complex workflow that will update the documentation for all versions when a new version is released.
This workflow is defined in .github/workflows/docs.yml, with help from docs/build_docs.py.
Tests are most easily run using the provided Dockerfile and docker-compose.yml files. The docker-compose.yml file will start a postgres database and run the tests. The following command will run the tests:
docker-compose run --rm app_test
We use a fairly complicated GitHub Actions flow to ensure we don't publish breaking changes. When you push to a branch, we do the following:
- Run the tests
- If tests succeed, and the branch or tag is v*.*.*, we check compatibility with the previous version:
  - If the API_VERSION in backend_django/config/settings_base.py is different to the branch/tag name, fail.
  - If incompatible, and we're not publishing a new major version, fail.
- Create clients for TypeScript (axios) and Python
- Create a Docker image hosted on GitHub Packages
- Create a GitHub release
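The version gate described above can be illustrated roughly as follows. This is a sketch of the two failure conditions, not the actual workflow code; the function name, the argument names, and the assumption that API_VERSION is written in the same v-prefixed form as the tag are all illustrative:

```python
import re

SEMVER = re.compile(r"^v(\d+)\.(\d+)\.(\d+)$")

def release_allowed(tag: str, api_version: str,
                    previous_tag: str, compatible: bool) -> bool:
    """Mirror the CI gate: the tag must match API_VERSION, and an
    incompatible API change is only allowed on a major version bump."""
    if tag != api_version:
        return False  # API_VERSION in settings must equal the tag name
    if not compatible:
        old_major = int(SEMVER.match(previous_tag).group(1))
        new_major = int(SEMVER.match(tag).group(1))
        return new_major > old_major  # breaking change needs a major bump
    return True

# A breaking change is fine when the major version increases:
print(release_allowed("v2.0.0", "v2.0.0", "v1.3.0", compatible=False))  # True
# ...but not on a minor bump:
print(release_allowed("v1.4.0", "v1.4.0", "v1.3.0", compatible=False))  # False
```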
To run the compatibility checks locally, run the following command:
docker-compose run --rm check_spec
You can optionally specify the REMOTE_SPEC_SOURCE environment variable to check against a different version of the galv-spec.
cp my_spec.json .dev/spec
# .dev/spec is mounted as a volume at /spec in the container
docker-compose run --rm -e REMOTE_SPEC_SOURCE=/spec/my_spec.json check_spec
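Conceptually, a compatibility check asks whether everything the old spec offered still exists in the new one. A toy illustration of that idea, not the real check_spec logic (which compares full OpenAPI schemas, not just path/method presence):

```python
def old_paths_preserved(old_spec: dict, new_spec: dict) -> list[str]:
    """Return path+method pairs present in the old spec but missing from
    the new one; an empty list means nothing was removed."""
    missing = []
    for path, methods in old_spec.get("paths", {}).items():
        for method in methods:
            if method not in new_spec.get("paths", {}).get(path, {}):
                missing.append(f"{method.upper()} {path}")
    return missing

# Minimal fabricated specs for illustration:
old = {"paths": {"/cells/": {"get": {}, "post": {}}}}
new = {"paths": {"/cells/": {"get": {}}}}
print(old_paths_preserved(old, new))  # ['POST /cells/']  -> incompatible
```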
We use Fly.io to host a few instances.
The configuration files are fly.*.toml in the root of the repository.
To deploy to Fly.io, you will need to install the Fly CLI and authenticate.
Once done, use fly deploy --app <app-name> --config <config-file> to deploy.
E.g. for the Battery Intelligence Lab staging instance, we would use:
fly deploy --app galv-stage-backend --config fly.stage.toml
You'll have to create and attach the Postgres DB to the app manually.
fly postgres create --name <app-name>-db --org <org-name-if-applicable> --vm-size shared-cpu-2x
fly postgres attach <app-name>-db --app <app-name>
Attaching will set the DATABASE_URL environment variable in the app to the connection string for the database. It is set as a secret, so it is not visible in the logs.
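DATABASE_URL is a standard connection-string URL; a quick sketch of the parts it carries (the value below is an invented example — Fly generates the real credentials and hostname when you attach the database):

```python
from urllib.parse import urlparse

# Example value only; a real DATABASE_URL is generated by `fly postgres attach`.
url = "postgres://galv:s3cret@galv-stage-backend-db.internal:5432/galv_stage_backend"
parts = urlparse(url)

print(parts.scheme)             # postgres
print(parts.hostname)           # galv-stage-backend-db.internal
print(parts.port)               # 5432
print(parts.path.lstrip("/"))   # galv_stage_backend (the database name)
```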
You may need to set other secrets using fly secrets set --app <app-name> --config <config-file> <SECRET_NAME>=<SECRET_VALUE>
if you're using AWS S3 for storage, etc.