User story
As a user, I want changes to a metadata record in DocDB to be reflected in the core schema files in S3, so that S3 and DocDB stay in sync.
Currently, the aind_bucket_indexer.py job checks for updates to records in DocDB and updates the metadata.nd.json files in S3.
We also want the individual core metadata JSONs (subject, rig, etc.) updated in the S3 buckets.
Acceptance criteria
Given the populate_s3_with_metadata_files.py job is run, the core fields from metadata.nd.json are saved to individual JSON files.
Given the populate_s3_with_metadata_files.py job is run and a {core_schema}.json already exists, its original contents are copied to another file as {core_schema}.old.json.
Given the aind_bucket_indexer.py job is run and there were updates to a metadata record in DocDB, the core schema JSONs in S3 are updated as well.
Given the aind_bucket_indexer.py job is run and a metadata.nd.json is found or created in S3, the core schema JSONs are also copied and kept in sync.
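The splitting and backup behavior in the criteria above could be sketched roughly as follows. This is a hypothetical illustration, not the actual job: the list of core schema names and the helper names are assumptions, and the real job would additionally use boto3 (head_object to detect an existing {core_schema}.json, copy_object to back it up, then put_object for the new contents).

```python
# Hypothetical sketch of splitting a metadata.nd.json record into per-schema
# files, as described in the acceptance criteria. Names are assumptions.
CORE_SCHEMAS = [
    "subject", "data_description", "procedures", "session",
    "rig", "processing", "acquisition", "instrument",
]  # assumed set of core schema fields inside metadata.nd.json


def extract_core_files(metadata: dict) -> dict:
    """Map each non-empty core field to its target file name and contents."""
    return {
        f"{name}.json": metadata[name]
        for name in CORE_SCHEMAS
        if metadata.get(name) is not None
    }


def backup_key(key: str) -> str:
    """Derive the backup name for an existing {core_schema}.json file."""
    stem, _, ext = key.rpartition(".")
    return f"{stem}.old.{ext}"
```

For example, extract_core_files on a record with only a subject field would yield a single subject.json entry, and backup_key("subject.json") yields "subject.old.json".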
Sprint Ready Checklist
1. Acceptance criteria defined
2. Team understands acceptance criteria
3. Team has defined solution / steps to satisfy acceptance criteria
4. Acceptance criteria is verifiable / testable
5. External / 3rd Party dependencies identified
6. Ticket is prioritized and sized
Notes
Add any helpful notes here.
Discussed with @dyf and @saskiad: we can write the original core schema JSONs to s3://{bucket}/{s3_prefix}/original_metadata/{core_schema}.{date_stamp}.json.
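A minimal sketch of building the archive key described above. The function name is hypothetical, and the date_stamp format is assumed to be an ISO date, which the source does not specify.

```python
from datetime import date


def original_metadata_key(s3_prefix: str, core_schema: str, stamp: date) -> str:
    """Build the S3 object key for archiving an original core schema file:
    {s3_prefix}/original_metadata/{core_schema}.{date_stamp}.json
    (date_stamp format assumed to be ISO, e.g. 2024-01-02)."""
    return f"{s3_prefix}/original_metadata/{core_schema}.{stamp.isoformat()}.json"
```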