[Rollout Infra] Start reading the GCS feature flags and expose them in a global Var #4869
Conversation
…al var declared in base.py. Creates a helper to interact with them from both the TS and Python code. Reads the flags for autocomplete, the place page experiment, and place page GA.
env_for_bucket = os.environ.get('FLASK_ENV')
if env_for_bucket in ['local', 'integration_test', 'test', 'webdriver']:
  env_for_bucket = 'autopush'
elif env_for_bucket == 'production':
We should also think about what to do for the 'staging' environment and the various legacy custom Data Commons instances (e.g., 'feedingamerica' and others defined in server/app_env/*.py).
For staging, I think we should read the 'prod' bucket.
For the legacy custom DCs, we can either read the prod bucket (but that assumes the right read permissions are in place), or maybe we can just read a local feature flag file. Another option: keep the prod feature flag file checked into this repo, and for every environment beyond the ones you have listed here, read that locally checked-in version. wdyt?
I would argue that staging should still read its own bucket, but that we should aim to keep the staging flags and prod flags in sync. They should only differ if we need to test something in staging before launching it to prod, so only temporary diffs.
Do you agree?
For custom DC, I agree with your take that it should read the local file. If custom DC customers want to support launches on the fly, they can make their own bucket, store their flags there, and update the target. I'll leave that conversation for the following PR, where I actually add the local files! (A sketch of the combined bucket selection is below.)
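For reference, a minimal sketch of how the environment-to-bucket selection discussed above could look. The function name, the 'prod'/'staging' bucket suffixes, and the local-file fallback for custom DCs are assumptions drawn from this thread and the datcom-website-[ENV]-resources naming in the PR description, not the merged code:

```python
import os
from typing import Optional

# Environments that should read the autopush feature flag bucket.
_DEV_ENVS = ['local', 'integration_test', 'test', 'webdriver']


def bucket_name_for_env() -> Optional[str]:
  """Returns the GCS bucket to read flags from, or None to use a local file."""
  env = os.environ.get('FLASK_ENV')
  if env in _DEV_ENVS:
    env = 'autopush'
  elif env == 'production':
    env = 'prod'
  elif env == 'staging':
    # Staging reads its own bucket; flags stay in sync with prod except
    # for temporary pre-launch diffs.
    env = 'staging'
  else:
    # Legacy custom DCs and anything else unlisted: fall back to a
    # locally checked-in feature flag file (added in a follow-up PR).
    return None
  return f'datcom-website-{env}-resources'
```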
logging.error("Bucket not found: " + bucket_name) | ||
return {} | ||
|
||
blob = bucket.get_blob("feature_flags.json") |
This might need a try/except block too.
So actually, if you look at the method implementation, it already nests the read call within a try/except and returns None if there's a failure.
So this should never raise an exception. (A sketch of handling that None return is below.)
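For illustration, a minimal sketch of a loader built around that behavior, where bucket.get_blob returning None (rather than raising) signals a missing flag file. The exact error handling here is an assumption; only the log messages and the load_feature_flags name come from this PR:

```python
import json
import logging

from google.cloud import storage
from google.cloud.exceptions import NotFound


def load_feature_flags(bucket_name: str) -> dict:
  """Reads feature_flags.json from GCS, returning {} on any failure."""
  storage_client = storage.Client()
  try:
    bucket = storage_client.get_bucket(bucket_name)
  except NotFound:
    logging.error("Bucket not found: " + bucket_name)
    return {}
  # get_blob already wraps the read in a try/except and returns None on
  # failure, so no extra try/except is needed around this call.
  blob = bucket.get_blob("feature_flags.json")
  if blob is None:
    logging.warning("Feature flag file not found in the bucket.")
    return {}
  return json.loads(blob.download_as_bytes())
```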
logging.warning("Feature flag file not found in the bucket.") | ||
|
||
# Create the dictionary using a dictionary comprehension | ||
feature_flag_dict = { |
Why not just store the feature flag configuration as an object/dictionary in the JSON?
So I did load it up as JSON, but doing this plays it a little safer, since we now know that if we have a feature flag, it must have both a name and an enabled value.
So in case the file that was read in had bad data, this ensures it doesn't get added to the flags. (A sketch of that filtering is below.)
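For illustration, roughly what that comprehension does with the list format shown in the PR description; the intermediate variable names are assumptions:

```python
# Parsed contents of feature_flags.json, e.g.
# [{"name": "autocomplete", "enabled": true}, ...]
feature_flag_data = json.loads(blob.download_as_bytes())

# Create the dictionary using a dictionary comprehension, keeping only
# well-formed entries so bad data in the file never becomes a flag.
feature_flag_dict = {
    flag["name"]: flag["enabled"]
    for flag in feature_flag_data
    if "name" in flag and "enabled" in flag
}
```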
Thanks Gabe!
Last thing (can be in a follow-up): can you add some instructions, either to the README or inline in one of the feature_flags files, on how to enable feature flags and how to update the feature flag files (including which GCP account they're in)?
Yeah, absolutely. I'll do that after I'm done implementing, and I'll clean up the doc too!
Uploaded feature_flags.json to the datcom-website-[ENV]-resources GCS buckets in the following format:
[{ "name": "autocomplete", "enabled": true }, { "name": "dev_place_experiment", "enabled": false }, { "name": "dev_place_ga", "enabled": false }]
On server startup, the server ingests this file and exposes the feature statuses through api/features.
Also adds an api/features/{feature_name} endpoint to expose whether a specific feature is enabled. (See the sketch below for what these endpoints could look like.)
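A minimal sketch of what those endpoints could look like in Flask, assuming the flags end up in a global dict populated from GCS at startup; the blueprint layout and variable names are illustrative, not the PR's exact routing:

```python
from flask import Blueprint, jsonify

# Assumed to be populated from the GCS feature flag file at server startup.
FEATURE_FLAGS = {"autocomplete": True, "dev_place_experiment": False}

bp = Blueprint("features", __name__, url_prefix="/api/features")


@bp.route("/")
def list_features():
  # Full feature-name -> enabled mapping.
  return jsonify(FEATURE_FLAGS)


@bp.route("/<feature_name>")
def is_feature_enabled(feature_name):
  # Unknown flags are reported as disabled.
  return jsonify(enabled=FEATURE_FLAGS.get(feature_name, False))
```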
Starts reading the feature flags for place autocomplete, the place page experiment rollout, and the place page GA rollout.
This implementation follows the design outlined in https://docs.google.com/document/d/19CqgKt6aIbFs5v2zhm3TagDfPJF-K_SpH-9VPP9FeRw/edit?resourcekey=0-qM2XvR7dYp4j0jXshva04g&tab=t.0
In a follow-up, I will add the fallback flag values, ingest them in the load_feature_flags method, and write the script to auto-update the GCS buckets. See PR #4871.