Verifications are important in any test automation. Without proper verifications, some would say a test isn't even a test. When testing with machine learning, verifications become even more critical.
Selectors are not used when testing with machine learning, which is a fundamental shift in how test automation executes. This advancement allows the automation to be more flexible and to understand incremental changes to the site under test, which drastically reduces maintenance.
What does this mean?
This means our platform will select the most likely element for interactions, like clicks and inputs, based on the state of the page. The automation is not tied to a selector. By design, if the desired element is not on the page or the page has changed significantly, the platform will select the next best element for a click or input.
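To make this concrete, here is a minimal, hypothetical sketch of selection-by-ranking rather than selection-by-selector. The scoring function, candidate attributes, and element names are all invented for illustration; the platform's actual model is proprietary. The point is that selection always returns the highest-scoring candidate, so if the exact target is missing, the next best element is chosen instead.

```python
# Hypothetical sketch: element selection as a ranking problem, not a
# fixed selector lookup. All attribute names and candidates are invented.

def score_candidate(candidate, target_profile):
    """Score a candidate element by how many expected attributes it matches."""
    return sum(
        1.0
        for key, expected in target_profile.items()
        if candidate.get(key) == expected
    )

def select_element(candidates, target_profile):
    """Always returns *some* element: the highest-scoring candidate."""
    return max(candidates, key=lambda c: score_candidate(c, target_profile))

candidates = [
    {"tag": "button", "text": "Checkout", "region": "header"},
    {"tag": "button", "text": "Check out", "region": "basket"},
    {"tag": "a", "text": "Help", "region": "footer"},
]
target = {"tag": "button", "text": "Check out", "region": "basket"}

best = select_element(candidates, target)           # the exact match wins

# If the desired element disappears from the page, selection still returns
# an element -- the next best one -- which is why verifications matter.
fallback = select_element(candidates[:1] + candidates[2:], target)
```

Note how `fallback` is still a clicked element even though the intended button is gone; without a verification afterwards, the test would carry on as if nothing were wrong.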
An element will always be interacted with for clicks and inputs. This is an important fundamental difference and improvement over traditional automation. It also requires a shift in how we, as testers, think about automation in order to be successful.
If a test contains improper verifications and subsequently false passes, the platform will assume the desired flow was completed successfully. This, in turn, rewrites the machine learning data and reinforces training the system to select elements incorrectly.
Verifications are critical because they provide a feedback loop that helps the platform understand your desired goals and increases test stability. When testing with machine learning, you are working with a teammate, and, like any partnership, communication is essential. Verifications are that communication.
What Makes a Good Verification?
Verifications are used as a way to strategically ensure that a test is proceeding correctly through its expected workflow. The idea is to use the actual control logic of your application as a way to show all is well. But what makes for a good verification? In general, these fall into one of three categories.
- Page Load. Does the test step take you to a new page, such as the shopping basket? If so, make this a verification.
- Element Update. Will some element on the page change as a result of the test step? For instance, after a user logs in, their profile is displayed at the top right. Add a 'verify' step to confirm the profile is shown there.
- Application Logic. Does your code have some inbuilt verification logic? For instance, does it check to see whether a postcode field has been filled in correctly and display an error if not? If so, check that the error message isn’t shown by adding a conditional action.
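The three categories above can be sketched as plain assertions against a simulated page state. This is a hypothetical illustration only: the `page` dictionary, its field names, and the URL are invented, and in Architect these checks would be 'verify' steps rather than Python asserts.

```python
# Hypothetical page state after three test steps; all names are invented.
page = {
    "url": "https://shop.example.com/basket",   # after clicking "Add to basket"
    "profile_widget_visible": True,             # after logging in
    "postcode_error_visible": False,            # after entering a valid postcode
}

# 1. Page load: did the step take us to the basket page?
page_load_ok = page["url"].endswith("/basket")

# 2. Element update: is the user's profile now shown at the top right?
element_update_ok = page["profile_widget_visible"]

# 3. Application logic: the postcode error should NOT be displayed.
app_logic_ok = not page["postcode_error_visible"]

assert page_load_ok and element_update_ok and app_logic_ok
```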
Ideally, you should aim to include enough verifications to help your test fail as early as possible. Remember, the quicker a test fails, the fewer resources are wasted and the sooner you can start to debug. It's difficult to put exact figures on when to include verifications, but as a rule of thumb you should have at least one every 10-15 test steps.
Best Practices for Using Verifications
Use verifications throughout test case creation and ensure the verifications produce the desired effect.
Here are a few examples:
- Verifications should always occur after a new page load (even if the URL stays the same).
- If the state of a page changes after a given set of actions, a verification should be applied.
- In almost all scenarios, the last action of a test case should be a verification.
- Verifications should be on elements that are static and unique.
- Verifications must be on elements that are generated after a set of actions has completed successfully and not on elements that were already present on the page.
- Verify only a single element at a time, not a div that contains multiple elements, as verifying a container limits the self-healing properties of the test and makes failures harder to diagnose.
See also Verifications & the Elements Tab.
What Other Verifications Can Be Done in Architect?
One of the most powerful features in Architect is the ability to create custom verifications. For instance, Architect allows you to specify complex verifications based on image processing. You can ask it to compare a page against either a previous test run or a previous test step, and then specify the acceptable variance—how precise the comparison must be. Uniquely, the system is able to cope with computed CSS values and can apply logic to spot when these change unexpectedly.
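A minimal sketch of what a variance-based visual comparison means, under stated assumptions: real screenshots would be full bitmaps, but here two tiny grayscale "frames" stand in, and `variance` is the allowed fraction of differing pixels. The function and data are illustrative only, not Architect's implementation.

```python
# Hypothetical variance-based comparison of two same-sized grayscale frames.

def within_variance(baseline, current, variance):
    """True if the fraction of differing pixels is at most `variance`."""
    total = len(baseline) * len(baseline[0])
    diffs = sum(
        1
        for row_b, row_c in zip(baseline, current)
        for b, c in zip(row_b, row_c)
        if b != c
    )
    return diffs / total <= variance

baseline = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
current  = [[0, 0, 0], [0, 255, 0], [0, 0, 9]]   # one pixel changed

# 1 of 9 pixels differs (~11%): passes at 20% variance, fails at 5%.
loose  = within_variance(baseline, current, 0.20)
strict = within_variance(baseline, current, 0.05)
```

Tightening the variance makes the verification stricter; the right setting depends on how much cosmetic churn (ads, timestamps, animations) your page normally shows.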