fix(aci milestone 3): updates to dual writing data conditions #83275
Conversation
Codecov Report — ✅ All tests successful. No failed tests found.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master   #83275      +/-   ##
==========================================
+ Coverage   87.54%   87.55%   +0.01%
==========================================
  Files        9483     9475       -8
  Lines      537769   537546     -223
  Branches    21181    21122      -59
==========================================
- Hits       470776   470651     -125
+ Misses      66637    66537     -100
- Partials      356      358       +2
```
```diff
@@ -120,13 +138,75 @@ def migrate_metric_data_condition(
     alert_rule_trigger_data_condition = AlertRuleTriggerDataCondition.objects.create(
         alert_rule_trigger=alert_rule_trigger, data_condition=data_condition
     )
-    return data_condition, alert_rule_trigger_data_condition
+    return detector_trigger, data_condition, alert_rule_trigger_data_condition
```
🤔 It might be better for us to do the detector's data condition migrations with the detector, and the workflow's action filters in another method. That's mostly just a nit to make this a little clearer, or to help in case we have issues with the migration (e.g. if a workflow migrated correctly but a detector didn't, we could then reuse the migrate detector method to migrate just the detector and all of its associated conditions).
I think this could be difficult because of where the dual write methods are called in `logic.py`. @ceorourke, you might have a better sense of whether this is possible.
Yeah, the serializer first calls `create_alert_rule` and then later calls `create_alert_rule_trigger`, so we don't have any access to the trigger stuff at the rule creation step.
```python
detector_trigger = DataCondition.objects.create(
    comparison=resolve_threshold,
    condition_result=DetectorPriorityLevel.OK,
    type=threshold_type,
    condition_group=detector_data_condition_group,
)
```
💯
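The resolve detector trigger above compares against `resolve_threshold`, which the PR can now infer when the user never set one. As a hedged sketch only (the helper name and fallback rule are assumptions, not the PR's actual code), one plausible way to default the resolve threshold is to fall back to the least severe trigger threshold:

```python
def infer_resolve_threshold(threshold_type, critical_threshold, warning_threshold=None):
    """Hypothetical helper: pick a default resolve threshold when the
    user did not set one on the legacy alert rule.

    Assumption: an "above" alert resolves once the value drops back under
    the lowest trigger threshold, and a "below" alert resolves once it
    rises back over the highest one.
    """
    thresholds = [t for t in (critical_threshold, warning_threshold) if t is not None]
    return min(thresholds) if threshold_type == "above" else max(thresholds)
```

With only a critical trigger configured, this simply reuses the critical threshold as the resolve threshold.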
When dual writing `AlertRuleTrigger`s to the ACI tables, we must create 2 data conditions:

1. The "detector trigger," which compares against the trigger threshold value and returns a `DetectorPriorityLevel` as its condition result.
2. The "action filter," which checks the detector status and returns `True` as its condition result.

Each of these "action filters" has its own `DataConditionGroup` as well, which is linked back to the legacy alert rule's associated workflow. Implement this paradigm. Also add logic to automatically set a resolve threshold if it's not set by the user on the legacy alert rule.
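The two-condition shape described above can be sketched with plain dataclasses standing in for Sentry's Django models (all class and type names here mirror the description but the function, the `"issue_priority_equals"` type string, and the wiring are illustrative assumptions, not the PR's implementation):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, List, Tuple


class DetectorPriorityLevel(Enum):
    OK = 0
    MEDIUM = 1
    HIGH = 2


@dataclass
class DataConditionGroup:
    conditions: List["DataCondition"] = field(default_factory=list)


@dataclass
class DataCondition:
    type: str
    comparison: Any
    condition_result: Any
    condition_group: DataConditionGroup


def dual_write_trigger(
    threshold_type: str,
    trigger_threshold: float,
    priority: DetectorPriorityLevel,
    detector_group: DataConditionGroup,
    workflow_groups: List[DataConditionGroup],
) -> Tuple[DataCondition, DataCondition]:
    """Illustrative sketch: create both data conditions for one legacy trigger."""
    # 1. Detector trigger: compares against the trigger threshold and
    #    returns a DetectorPriorityLevel as its condition result.
    detector_trigger = DataCondition(
        type=threshold_type,
        comparison=trigger_threshold,
        condition_result=priority,
        condition_group=detector_group,
    )
    detector_group.conditions.append(detector_trigger)

    # 2. Action filter: checks the detector status and returns True; it
    #    gets its own DataConditionGroup, which would be linked back to
    #    the legacy alert rule's associated workflow.
    action_filter_group = DataConditionGroup()
    action_filter = DataCondition(
        type="issue_priority_equals",  # hypothetical condition type name
        comparison=priority,
        condition_result=True,
        condition_group=action_filter_group,
    )
    action_filter_group.conditions.append(action_filter)
    workflow_groups.append(action_filter_group)
    return detector_trigger, action_filter
```

Calling this once per trigger (warning and critical) yields one detector trigger in the shared detector group and one action filter group per trigger attached to the workflow, matching the paradigm above.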