Identifying errors in a certain window and choosing comparison strictness
This article explains how to:
Make the best use of the available tools when reviewing a single step within a test.
All errors are marked in purple; if they are hard to spot, you can highlight them. Clicking “Highlight Differences” (the flashlight icon) displays a short burst of circles around all the conflicts in the current test step.
Exact – Only steps that are exact, pixel-based replicas of the baseline will be considered a success.
Strict – The default mode. A smart comparison is performed between the actual result and the baseline, taking both layout and content changes into account, while Applitools Eyes smart image analysis ignores minor differences that would fail a test under pixel-based comparison.
Content – Applitools Eyes identifies only content differences, ignoring layout-based differences.
Layout – Applitools Eyes identifies only layout differences, ignoring content differences.
Hide – Applitools Eyes ignores all differences for this specific screen and will not fail a test because of them.
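The effect of the five levels can be sketched with a small conceptual model. This is NOT the real Applitools comparison engine (Strict, in particular, uses smart image analysis rather than simple equality); the `Screen` type and `matches` function below are hypothetical names used only to illustrate which kinds of differences each level considers.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Screen:
    """Hypothetical model of a rendered step."""
    content: tuple  # visible text fragments
    layout: tuple   # positions of major page elements
    pixels: bytes   # raw rendering


def matches(actual: Screen, baseline: Screen, level: str) -> bool:
    """Return True if `actual` passes comparison against `baseline`."""
    if level == "Exact":    # pixel-for-pixel replica required
        return actual.pixels == baseline.pixels
    if level == "Strict":   # both content and layout must match
        # (the real engine also forgives minor rendering noise here)
        return (actual.content == baseline.content
                and actual.layout == baseline.layout)
    if level == "Content":  # only content differences fail the step
        return actual.content == baseline.content
    if level == "Layout":   # only layout differences fail the step
        return actual.layout == baseline.layout
    if level == "Hide":     # all differences are ignored
        return True
    raise ValueError(f"unknown match level: {level}")


# A baseline, and an actual result where the same text has shifted position:
baseline = Screen(("Welcome",), ("header@0,0",), b"\x00\x01")
moved = Screen(("Welcome",), ("header@0,10",), b"\x00\x02")
```

With this model, the shifted screen passes at the Content and Hide levels (the text is unchanged) but fails at Layout, Strict, and Exact.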
Currently, the strictness level you choose applies only to your current review (it is not saved as the default comparison level for every subsequent run of that test). In future releases, you will be able to select the strictness level for different regions in each window and set this as the baseline for future tests (e.g. ignore only content changes in a region of the screen that changes constantly, like a date field).
- You can also select the strictness level via the eyes.open method.