- Install Docker Desktop: https://docs.docker.com/get-docker/
- Set up local environment variables. For local dev, create a `.env` file at the root of the `frontend/` directory using the `.env.example` file:

  ```
  cp .env.example .env
  ```

- Install the dependencies:

  ```
  npm install
  ```

- Run:

  ```
  npm run develop
  ```

- View the front page: http://127.0.0.1:80
- For local dev, create a `.env.local` file at the root of the `backend/` directory using the `.env.example` file:

  ```
  cp .env.example .env.local
  ```

  NOTE: You must pass an `ENVIRONMENT` variable to ensure you're running in the right env.

- Run the app using `docker compose`:

  ```
  ENVIRONMENT=local docker compose up backend database --build
  ```

  OR

  ```
  ./run.sh
  ```

- View the API docs: http://127.0.0.1:8080/docs
- Run all the alembic migrations to load the schema:

  ```
  ENVIRONMENT=local docker compose exec backend alembic upgrade head
  ```

- Seed the local database:

  ```
  ENVIRONMENT=local docker compose exec backend python -m scripts.seed_db
  ```

  This will add a fake user with language code `spa` and country code `ES`, then create some tasks the user can edit / submit.
To add a migration, run:

```
alembic revision -m "some_revision_name"
```

Make sure you add an `upgrade()` and a `downgrade()` step (see the sketch below). Once your migration is added, rebuild the backend container and test that we can upgrade and downgrade a couple of times without the DB getting into a broken state:

```
ENVIRONMENT=local docker compose exec backend alembic upgrade head
ENVIRONMENT=local docker compose exec backend alembic downgrade -1
```
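As a rough illustration only (the revision IDs, table, and column below are hypothetical placeholders, not taken from this repo), a migration file with both steps might look like:

```python
"""some_revision_name

Hypothetical example migration: adds a nullable column and removes it again on downgrade.
"""
from alembic import op
import sqlalchemy as sa

# Revision identifiers used by Alembic (placeholders; `alembic revision` generates these for you).
revision = "a1b2c3d4e5f6"
down_revision = "0123456789ab"
branch_labels = None
depends_on = None


def upgrade():
    # Apply the schema change (hypothetical table/column names).
    op.add_column("task", sa.Column("notes", sa.Text(), nullable=True))


def downgrade():
    # Reverse the change cleanly so `alembic downgrade -1` leaves the DB in a usable state.
    op.drop_column("task", "notes")
```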
To check the history of migrations, run:

```
ENVIRONMENT=local docker compose exec backend alembic history
```
The `docker compose` command will spin up the backend along with a Postgres container that you can interact with.

To connect to the DB inside of Docker, you can run:

```
ENVIRONMENT=local docker compose exec database psql instruct_multilingual -U backendapp
```

and interactively query data in there, such as:

```
select * from language_code;
```
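From that psql prompt, the standard meta-commands are also handy for exploring the schema: `\dt` lists all tables, `\d language_code` describes the `language_code` table, and `\q` quits:

```
\dt
\d language_code
\q
```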
We currently have some Cloud Run Jobs that run on a schedule via Cloud Scheduler. Each of these jobs runs in its own container and lives in the `backend/jobs/` directory, with each type of job in its own subdirectory (such as `daily/`) and its own Dockerfile with the command it runs. They use the `leaderboard_update_job.py` script and have the same dependencies as the FastAPI server.
Deploying jobs
To deploy and modify the schedule of the jobs, you can use the ops script:

```
ops/create-jobs.sh
```

which will tell you the options you can use. You must pass a job type and a cron expression as a schedule.
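For instance, scheduling the daily job for 06:00 UTC might look roughly like the following; the argument format here is purely illustrative, so run the script with no arguments to see the options it actually accepts:

```
./ops/create-jobs.sh daily "0 6 * * *"   # hypothetical arguments; the script prints its real usage
```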
Running jobs locally
You can run the leaderboard jobs locally with:

```
ENVIRONMENT=local docker compose up <job_type>_leaderboard_job --build
```

replacing `<job_type>` with the available options (see the `docker-compose.yaml` file).
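For example, assuming the daily job is exposed as a `daily_leaderboard_job` service in `docker-compose.yaml` (an assumption based on the `daily/` subdirectory; check the file for the real service names), that would be:

```
ENVIRONMENT=local docker compose up daily_leaderboard_job --build
```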
Currently, we have a `.env.test` file that gets read in by the unit tests. To run the unit tests, you'll need a local Postgres database, or you can run them in the GitHub Actions workflow.

Recommended: run the dockerized PostgreSQL instance using:

```
docker compose up database
```

then run the unit tests, which will use the dockerized server and a database called `testdb`:

```
ENVIRONMENT=test pytest
```
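pytest's usual selection flags also work here if you only want to run part of the suite; for example (the `-k` keyword below is illustrative, not a reference to specific tests in this repo):

```
ENVIRONMENT=test pytest -k "leaderboard" -v
```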
For connecting to the staging database, credentials should be retrieved from GCP Secret Manager, or via a secure message channel.
- Create a file called `.env.staging` in the root of the `backend/` project:

  ```
  cp .env.example .env.staging
  ```

- Retrieve the staging database URL from GCP Secret Manager
- Retrieve the staging Discord keys from GCP Secret Manager
- Replace the values in the `.env.staging` file with the staging values
- Remove the `POSTGRES_DB`, `POSTGRES_USER`, and `POSTGRES_PASSWORD` fields
- Start the server with `ENVIRONMENT=staging`:

  ```
  ENVIRONMENT=staging docker compose up backend --build
  ```
NOTE: Only run in production mode for read operations and debugging.
For connecting to the production database, credentials should be retrieved from GCP Secret Manager, or via a secure message channel.
- Create a file called `.env.production` in the root of the `backend/` project:

  ```
  cp .env.example .env.production
  ```

- Retrieve the production database URL from GCP Secret Manager
- Retrieve the production Discord keys from GCP Secret Manager
- Replace the values in the `.env.production` file with the production values
- Remove the `POSTGRES_DB`, `POSTGRES_USER`, and `POSTGRES_PASSWORD` fields
- Start the server with `ENVIRONMENT=production`:

  ```
  ENVIRONMENT=production docker compose up backend --build
  ```

  This will connect to the production database, so you don't need to run the local database container.
To deploy the frontend / backend to GCP Cloud Run (production), there are a few involved steps, namely:
- Building images
- Pushing to Container Registry
- Deploying the latest image to Cloud Run
This is automated via GCP Cloud Build. `cloudbuild-frontend.yaml` will automatically do the above steps if there are changes to the `frontend/` directory, and `cloudbuild-backend.yaml` will do the same if there are changes to the `backend/` directory.
For local dev, create a `.env.analytics-instruct-multilingual-app.local` file at the root of the `analytics/` directory and add the following environment variables to it:

```
INSTANCE_CONNECTION_NAME=""  # Cloud SQL instance connection name
DB_USER=""                   # user name to access the DB
DB_PASS=""                   # password for the DB
DB_NAME=""                   # name of the DB
C4AI_PROJECT_ID=""           # C4AI GCP project ID
GRADIO_SERVER_NAME="0.0.0.0"
GRADIO_SERVER_PORT=8080
APP_ENVIRONMENT="local"
```
Create a Python virtual environment and install the necessary libraries using the `requirements.txt` file:

```
pip install -r requirements.txt
```
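If you need a quick way to create and activate that virtual environment first, a minimal sketch (assuming Python 3 on a Unix-like shell) is:

```
python3 -m venv .venv        # create the virtual environment in ./.venv
source .venv/bin/activate    # activate it for the current shell session
```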
Run the app using `app.py`:

```
python app.py
```

OR

You can also run the app in reload mode using the command below:

```
gradio app.py
```