jgraham: A 100% score would imply that we don't carry over by default. Is that 100% or some lower threshold?
chrishtr: popover, accessibility
bkardell: can we specifically choose a different approach (than the score)? It would depend on the type of work that is remaining
stubbornella: also worth discussing why progress stopped at a certain point. Long tail. Might be helpful to talk through the cost-benefit
nairnandu: For the cutoff score - do we look at stable or experimental?
chrishtr: agree to look at stable
gsnedders: there’s the risk that some work will carry over, stable being a lagging indicator
jgraham: agree. If there’s a case where we have reasonable confidence that we will hit 100% in stable, we can look at experimental scores as well
stubbornella: we can share, with reasonable confidence, the areas that will hit stable
chrishtr: same
jgraham: there have been situations where a couple of release cycles were needed to push something from experimental to stable
stubbornella: okay to use experimental scores as a cutoff
jgraham: as an optimization, let’s reduce the amount of work required later on. Whichever channel we pick, we can be a bit more aggressive with the cutoff
Consensus: carryover evaluation proposals will be created for all focus areas that are less than 99% in stable
2025 selection process - next steps for sharing of signals
jgraham: each org can come up with the list of areas they want to champion. We can resolve any conflicts (on scope or ownership) live in the meeting. Once the champions are identified, we can share signals or ask for data as needed.
Here is the proposed agenda for Oct 3rd, 2024