Since we already know that the output of some metrics changes slightly when we switch to numpy 2.0 (see #42), we should protect against this early on. The best way I see to catch this is to add regression tests that check for exact results on some sample data.
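A minimal sketch of what such an exact-value test could look like, assuming pytest and numpy; the reference file path, the metric (`mean_squared_error` stands in for the project's own algorithms), the seed, and the data shapes are all placeholders:

```python
import numpy as np
from sklearn.metrics import mean_squared_error  # illustrative metric, swap in our own

REFERENCE_FILE = "tests/data/metric_reference.npz"  # hypothetical path


def test_mean_squared_error_regression():
    """Compare a metric's output bit-for-bit against stored reference values."""
    # Regenerate the same seeded random input used when the reference was created.
    rng = np.random.default_rng(seed=42)
    y_true = rng.normal(size=100)
    y_pred = rng.normal(size=100)

    expected = np.load(REFERENCE_FILE)["mean_squared_error"]
    actual = mean_squared_error(y_true, y_pred)

    # Exact equality, not approximate: any drift (e.g. from a numpy upgrade)
    # should fail loudly so it can be reviewed deliberately.
    np.testing.assert_array_equal(actual, expected)
```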
- [ ] Create a regression test branching off `main`. Just use random data for each of the main algorithms.
- [ ] (Optional) Check whether any of the algorithms contain major branches that only trigger under specific conditions not met by the random data. (We can use the debugger for that.)
- [ ] (Optional) Also test algorithms that include randomness, keeping the seed fixed.
- [ ] Store the exact values produced by the algorithms on this data (see the sketch after this list).
- [ ] Document the tests.
- [ ] Merge the tests into the `dev` branch.
- [ ] Switch back to the poetry branch (using the updated scikit-learn) and check that the regression tests still pass.
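For the "store the exact values" step, one possible approach is a small one-off script that runs each algorithm on fixed-seed random data and dumps the results; the metric names and output path below are illustrative and chosen to match the test sketch above:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score  # illustrative metrics

OUTPUT_FILE = "tests/data/metric_reference.npz"  # hypothetical path


def generate_reference_values():
    """Run each algorithm once on seeded random data and store the exact outputs."""
    rng = np.random.default_rng(seed=42)  # fixed seed so inputs are reproducible
    y_true = rng.normal(size=100)
    y_pred = rng.normal(size=100)

    results = {
        "mean_squared_error": mean_squared_error(y_true, y_pred),
        "r2_score": r2_score(y_true, y_pred),
    }
    # np.savez stores the float64 values losslessly, so later
    # comparisons in the regression tests can be bit-for-bit.
    np.savez(OUTPUT_FILE, **results)


if __name__ == "__main__":
    generate_reference_values()
```

Committing the generated `.npz` file alongside the tests would pin the expected values to the environment they were produced in, which is exactly what we want to detect changes against.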