Describe the bug
Datasets are indicating results with failed assertions (e.g., a dataset is failing 3 / 464 assertions), but the validation tab only returns an arbitrary 100 results and shows “All assertions have passed” based on 100 out of 100 assertions. There is no way to see the failed assertions.
To Reproduce
Steps to reproduce the behavior:
Go to a dataset that has more than 100 assertions, some of which have failed.
Click on the validation tab.
Observe the X mark with the "failed assertions" description next to the dataset title, and compare it against the validation results shown below the dataset title to see the discrepancy.
Expected behavior
I expect to be able to easily see/identify the failed assertions on the Validation tab. Perhaps there should be an option to show all assertions and to sort by, or filter to, passed and failed assertions. I also expect consistency in the counts: if there are 464 assertions, the validation tab shouldn't show just 100.
Screenshots
Desktop (please complete the following information):
OS: Windows
Browser: Chrome
Version: 109.0.5414.75
it looks like it's hardcoded in GraphQL to only pull 100 assertions. Here is the code. A quick fix may be to increase this count to 1000; a better fix may be to actively load the next 100 rows as the user is scrolling the list. In any case, the assertion count needs to be consistent with the count displayed with the X mark near the dataset title.
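As a rough illustration of the hardcoded page size described above (the constant and function names here are made up for the sketch, not DataHub's actual identifiers):

```typescript
// Hypothetical sketch of a fixed-size assertions request.
// ASSERTION_PAGE_SIZE and buildAssertionsVariables are illustrative names.
const ASSERTION_PAGE_SIZE = 100;

function buildAssertionsVariables(datasetUrn: string) {
  return {
    urn: datasetUrn,
    start: 0,
    // Only the first 100 assertions are ever fetched, so a dataset with
    // 464 assertions can appear to be "all passing" if the failures fall
    // outside this first page.
    count: ASSERTION_PAGE_SIZE,
  };
}
```

Bumping `ASSERTION_PAGE_SIZE` is the quick fix; it only moves the cliff from 100 to 1000.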
hey @esxkang and @atulsaurav ! thanks so much for this issue and pointing this out. I think there are two options here:
1. a quick fix to increase that hardcoded number from 100 -> 1000
2. keep the request number 100, but paginate our assertions while also fixing the header on the validations tab to be correct.
I think 2 is the better solution but will take a bit more effort. So what I did was implement 1 to fix this situation in the short term, and then I filed a ticket for our team to pick up number 2 in our quality improvement work.
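A minimal sketch of what option 2 (client-side pagination) could look like, assuming a paged GraphQL endpoint that accepts `start`/`count` and returns a total; `fetchPage` and the `Assertion` shape are stand-ins, not DataHub's real API:

```typescript
// Illustrative pagination loop: keep requesting 100-assertion pages until
// the whole set is covered, then aggregate pass/fail counts so the header
// matches the true totals rather than the first page.
type Assertion = { urn: string; passed: boolean };
type Page = { total: number; items: Assertion[] };

async function fetchAllAssertions(
  fetchPage: (start: number, count: number) => Promise<Page>,
  pageSize = 100,
): Promise<{ total: number; failed: Assertion[] }> {
  const failed: Assertion[] = [];
  let start = 0;
  let total = 0;
  do {
    const page = await fetchPage(start, pageSize);
    total = page.total;
    failed.push(...page.items.filter((a) => !a.passed));
    start += pageSize;
  } while (start < total);
  return { total, failed };
}
```

In the real UI this would more likely be driven by scroll position (load the next page as the user nears the bottom of the list), but the aggregation idea is the same: the pass/fail summary must be computed over `total`, not over whichever page happens to be loaded.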