Improve support for snapshotting views with text field cursors to prevent tests from flaking #15

Closed
allisonmoyer opened this issue Aug 4, 2020 · 5 comments · Fixed by #206
Labels: enhancement (Request for a new feature or improvement to an existing feature)

Comments

@allisonmoyer
Collaborator

No description provided.

allisonmoyer added commits to allisonmoyer/AccessibilitySnapshot that referenced this issue Aug 4, 2020
@NickEntin NickEntin linked a pull request Aug 4, 2020 that will close this issue
@NickEntin NickEntin added the enhancement label Aug 4, 2020
@NickEntin NickEntin changed the title Add the ability to hide the cursor on a text field to prevent tests from flaking Improve support for snapshotting views with text field cursors to prevent tests from flaking Aug 5, 2020
@NickEntin
Collaborator

Updated the title to be less prescriptive about the solution to this problem. The root problem here is that snapshotting views that have layer animations can lead to flaky tests, since the view may be snapshotted at different points in the animation across different test runs. I can see a few solutions to this:

  • Hide the text field cursor. This can be done by modifying the tint color (the approach taken in #16, "Hide the cursor on text fields to fix flaky behavior of tests"), using private APIs to set the cursor view to hidden, etc.
  • Pause the layer animation. We can set the layer's speed to 0 and the timeOffset to some fixed number to pause the animation at a specific point. Depending on what timeOffset we use, we can set the cursor to be hidden or visible. (See the sketch after this list.)
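
For illustration, here's a minimal sketch of the pause-the-animation approach, operating directly on CALayer; the helper names (pauseAnimationsForSnapshot, resumeAnimations) are hypothetical and not part of this library:

```swift
import UIKit

extension CALayer {
    /// Freezes any in-flight animations on this layer at a fixed point in time,
    /// so the view renders identically on every test run.
    /// Returns the previous (speed, timeOffset) so the caller can restore it.
    func pauseAnimationsForSnapshot(at offset: CFTimeInterval = 0) -> (speed: Float, timeOffset: CFTimeInterval) {
        let previousState = (speed: speed, timeOffset: timeOffset)
        speed = 0
        timeOffset = offset
        return previousState
    }

    /// Restores the animation state captured before the snapshot was taken.
    func resumeAnimations(from state: (speed: Float, timeOffset: CFTimeInterval)) {
        speed = state.speed
        timeOffset = state.timeOffset
    }
}
```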

The second approach could potentially apply much more broadly than text fields, since this same problem exists for any layer that has an animation applied to it. We might want a targeted solution for now, though. In either case, I think we should make sure we:

  • Make it obvious what the behavior will be and make it easy to opt out. Consumers may be using the tintColor in other places in their code, or may have already set layer animations to a specific point, so we should let them control this behavior.
  • Restore the original state after snapshotting the view. Since there may be multiple snapshot calls (or other assertions) in each test method, we should make sure to restore any state that we set before the snapshot method returns. (See the sketch after this list.)
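
As a hedged sketch of the restore-state requirement (the wrapper name withStableCursor and its shape are hypothetical, not an API of this library):

```swift
import UIKit

/// Hypothetical wrapper illustrating the "restore after snapshotting" point:
/// any state mutated to stabilize the snapshot (tint color, layer speed/timeOffset)
/// is captured up front and put back before returning, so later assertions in the
/// same test method see the view exactly as the test configured it.
func withStableCursor<T>(on textField: UITextField, perform snapshot: () throws -> T) rethrows -> T {
    let originalTintColor = textField.tintColor
    let originalSpeed = textField.layer.speed
    let originalTimeOffset = textField.layer.timeOffset

    // Hide the blinking cursor and freeze any layer animations.
    textField.tintColor = .clear
    textField.layer.speed = 0
    textField.layer.timeOffset = 0

    defer {
        // Restore everything before the snapshot call returns.
        textField.tintColor = originalTintColor
        textField.layer.speed = originalSpeed
        textField.layer.timeOffset = originalTimeOffset
    }

    return try snapshot()
}
```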

@fbernutz
Contributor

Another idea might be to use the precision value from swift-snapshot-testing to prevent flaky tests for things like cursors in text fields. For some reason, in the project I'm currently working on (where we use this a11y snapshot library), we need 99.9% precision to make the tests pass in CI, so having the ability to adjust the precision would be a great advantage.
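
For reference, a sketch of what that could look like with swift-snapshot-testing's image strategy and its precision parameter (LoginFormView is a hypothetical view under test):

```swift
import SnapshotTesting
import XCTest

final class LoginFormSnapshotTests: XCTestCase {
    func testLoginForm() {
        let view = LoginFormView() // hypothetical view under test

        // Require 99.9% of pixels to match, so the blinking cursor and minor
        // rendering differences don't fail the test.
        assertSnapshot(matching: view, as: .image(precision: 0.999))
    }
}
```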

@NickEntin
Collaborator

I think it's still worth solving the cursor problem separately from adding support for specifying a precision for the comparison.

I'm generally not a fan of the precision approach, since it can end up letting unintentional (small) changes slip through. This is especially bad when a series of very small changes slips through, and then whoever happens to tip it over the precision limit has to go address all of the built-up issues.

Unfortunately, with iOS 13, Apple changed the simulator to use exclusively GPU-based rendering, which means that the resulting snapshots may differ slightly across machines (see pointfreeco/swift-snapshot-testing#313 and uber/ios-snapshot-test-case#109). The only effective workaround that I've seen for this is bumping the precision down (or tolerance up, for iOSSnapshotTestCase) to a level that ignores the differences between GPUs. I think it's still worth getting the snapshots as close as we can, though, so that the precision can be raised to a point where hopefully only the GPU differences are allowed (although I haven't found any magic number yet that accounts for the rendering differences without also letting regressions in shadows, pixel rounding, etc. slip through).
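
For comparison, a hedged sketch of raising the tolerance in iOSSnapshotTestCase, assuming the perPixelTolerance/overallTolerance parameters of the Uber fork's Swift helper (the view and the specific values are hypothetical):

```swift
import FBSnapshotTestCase

final class LoginFormToleranceTests: FBSnapshotTestCase {
    func testLoginForm() {
        let view = LoginFormView() // hypothetical view under test

        // Tolerate small per-pixel color differences from GPU-based rendering
        // on iOS 13+ simulators, while still catching real layout regressions.
        FBSnapshotVerifyView(view, perPixelTolerance: 0.02, overallTolerance: 0.001)
    }
}
```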

@fbernutz
Contributor

This is interesting! I didn't know the reasons for the differences across machines. 💡 Thanks for explaining, @NickEntin!

@kkorouei

kkorouei commented May 12, 2021

Disabling monochrome fixed the precision issue for me
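
For context, a hedged sketch of what disabling the monochrome rendering could look like, assuming AccessibilitySnapshot's useMonochromeSnapshot parameter on SnapshotVerifyAccessibility (LoginFormView is a hypothetical view under test):

```swift
import AccessibilitySnapshot
import FBSnapshotTestCase

final class LoginFormAccessibilityTests: FBSnapshotTestCase {
    func testAccessibility() {
        let view = LoginFormView() // hypothetical view under test

        // Rendering the accessibility snapshot in full color instead of
        // monochrome avoided the cross-machine precision failures described above.
        SnapshotVerifyAccessibility(view, useMonochromeSnapshot: false)
    }
}
```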
