Verifications are important in any test automation; without proper verifications, some would say a test isn’t even a test. When testing with machine learning, verifications become even more critical.
Selectors are not used when testing with machine learning, which is a fundamental shift in how test automation executes. This advancement allows the automation to be more flexible and to understand incremental changes to the site under test, which drastically reduces maintenance.
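For contrast, here is a minimal sketch of the selector-bound style of automation being replaced. It uses Playwright purely as a generic illustration; the site, selectors, and element names are hypothetical.

```typescript
import { test, expect } from '@playwright/test';

// A traditional, selector-bound step: the automation is only as stable as
// this CSS path. If the markup changes (a renamed class, a moved button),
// the click throws and the test fails, even though the feature still works.
test('add item to basket (selector-based)', async ({ page }) => {
  await page.goto('https://shop.example.com/products/42'); // hypothetical site
  await page.click('#main > div.product-detail > div.actions > button.btn-primary');
  await expect(page.locator('#basket-count')).toHaveText('1');
});
```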
What does this mean?
This means our platform will select the most likely element for interactions, like clicks and inputs, based on the state of the page. The automation is not tied to a selector. By design, if the desired element is not on the page or the page has changed significantly, the platform will select the next best element for a click or input.
For clicks and inputs, some element will always be interacted with. This is an important fundamental difference from, and improvement over, traditional automation. It also requires a change in how we, as testers, think about automation in order to be successful.
If a test contains improper verifications and consequently passes falsely, the platform will assume the desired flow was completed successfully. This, in turn, rewrites the machine learning data and trains the system to select elements incorrectly.
Verifications are critical to provide a feedback loop to the platform, helping it understand your desired goals and increasing test stability. When testing with machine learning you are working with a teammate, and as in any partnership, communication is essential. Verifications are that communication.
What Makes a Good Verification?
Verifications are used as a way to strategically ensure that a test is proceeding correctly through its expected workflow. The idea is to use the actual control logic of your application as a way to show all is well. But what makes for a good verification? In general, good verifications fall into one of three categories (a scripted sketch follows the list below).
- Page Load. Does the test step take you to a new page, such as the shopping basket? If so, make this a verification.
- Element Update. Will some element on the page change as a result of the test step? For instance, after a user logs in, their profile is displayed at the top right. Add a verify step to confirm this has happened.
- Application Logic. Does your code have some inbuilt verification logic? For instance, does it check to see whether a postcode field has been filled in correctly and display an error if not? If so, check that the error message isn’t shown by adding a conditional action.
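To make these categories concrete, here is a minimal sketch in a conventional scripted framework (Playwright); the page, labels, and test IDs are hypothetical assumptions. In Architect the same checks would be recorded as verify steps rather than hand-written code.

```typescript
import { test, expect } from '@playwright/test';

test('checkout flow with one verification per category', async ({ page }) => {
  await page.goto('https://shop.example.com'); // hypothetical site

  // 1. Page Load: the step navigates to a new page (the shopping basket),
  //    so verify that the navigation actually happened.
  await page.getByRole('link', { name: 'Basket' }).click();
  await expect(page).toHaveURL(/\/basket$/);

  // 2. Element Update: after logging in, the user's profile should appear
  //    at the top right, so verify that specific element is now visible.
  await page.getByRole('button', { name: 'Log in' }).click();
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('correct-horse');
  await page.getByRole('button', { name: 'Submit' }).click();
  await expect(page.getByTestId('profile-avatar')).toBeVisible();

  // 3. Application Logic: the app validates the postcode field and shows an
  //    error when it is invalid, so verify the error message is not shown.
  await page.getByLabel('Postcode').fill('SW1A 1AA');
  await expect(page.getByText('Please enter a valid postcode')).toHaveCount(0);
});
```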
Ideally, you should aim to include enough verifications to help your test fail as early as possible. Remember, the quicker a test fails, the fewer resources are wasted and the sooner you can start debugging. It’s difficult to put an exact figure on when to include verifications, but you should certainly have one every 10-15 test steps.
Best Practices for Using Verifications
Use verifications throughout test case creation, and make sure each verification confirms the effect you expect.
Here are a few examples:
- Verifications should always occur after a new page load (even if the URL stays the same).
- If the state of a page changes after a given set of actions, a verification should be applied.
- In almost all scenarios, the last action of a test case should be a verification.
- Verifications should be on elements that are static and unique.
- Verifications must be on elements that are generated after a set of actions has completed successfully and not on elements that were already present on the page.
- Verify only a single element at a time, not a div that contains multiple elements; verifying a container limits the self-healing properties of the test and makes failures harder to diagnose (see the sketch after this list).
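As a sketch of the last two points (again in Playwright, with hypothetical test IDs), verify a single, unique element that only appears after the action has completed, rather than a container div that was already on the page:

```typescript
import { test, expect } from '@playwright/test';

test('verify a single, newly generated element', async ({ page }) => {
  await page.goto('https://shop.example.com/basket'); // hypothetical site
  await page.getByRole('button', { name: 'Apply coupon' }).click();

  // Good: a single, unique element that only exists once the coupon has
  // been applied successfully.
  await expect(page.getByTestId('coupon-applied-badge')).toBeVisible();

  // Avoid: verifying a container that was already present before the click
  // and that wraps many unrelated elements; a pass proves little and a
  // failure is hard to diagnose.
  // await expect(page.locator('div#basket-summary')).toBeVisible();
});
```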
Other Verifications in Architect
One of the most powerful features in Architect is the ability to create custom verifications. For instance, Architect allows you to specify complex verifications based on image processing: you can ask it to compare a page against either the previous test run or a previous test step, and then specify the acceptable variance, that is, how precise the comparison must be. Uniquely, the system is able to cope with computed CSS values and can apply logic to spot when these change unexpectedly.
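Architect handles the image comparison for you, but the idea of an acceptable variance can be sketched with an off-the-shelf pixel-diff library. The snippet below uses pixelmatch and pngjs; the file names and the 2% variance are illustrative assumptions, not Architect defaults.

```typescript
import * as fs from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

// Compare a screenshot from the current run against a baseline taken from a
// previous run (or a previous test step) and report whether the proportion
// of differing pixels stays within the acceptable variance.
// Assumes both images have the same dimensions.
function withinVariance(baselinePath: string, currentPath: string, maxVariance = 0.02): boolean {
  const baseline = PNG.sync.read(fs.readFileSync(baselinePath));
  const current = PNG.sync.read(fs.readFileSync(currentPath));
  const { width, height } = baseline;
  const diff = new PNG({ width, height });

  const mismatched = pixelmatch(baseline.data, current.data, diff.data, width, height, {
    threshold: 0.1, // per-pixel colour sensitivity
  });

  return mismatched / (width * height) <= maxVariance;
}

console.log(withinVariance('previous-step.png', 'current-step.png'));
```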
Architect also allows you to test two-factor authentication flows, even ones that require an SMS for verification. It includes advanced tools such as a database explorer (allowing you to create tests that verify what happens on the backend). You can even store variables from one test that then get used in another test. Finally, Architect offers an advanced attribute editor and custom JavaScript, giving you fine-grained control over your tests as you record them.
Functionize collects test case data during test creation and execution for the purposes of self-healing. Our Machine Learning (ML) Engine is a self-healing selection engine; this is the root of ML Deep Analysis.
In Architect settings, ML Deep Analysis is on by default for all users in Architect version 1.1.71+.
- In Architect, toggle ADV to ON.
- Click the Settings gear icon.
- Set ML Deep Analysis to ON.
Best Practices for Performance Management
If you notice significant performance issues while using Architect to create a test case on a complex site, here are some tips and best practices for diagnosis.
- Close all other open tabs. If the browser is low on memory, that can affect Architect.
- Restart the active browser, keeping only one tab open.
- Debug tabs. To do this, go to Windows >> Task Manager, see which tabs are using excessive amounts of memory, and end those processes. Sometimes extensions can interfere with Architect performance.
- Turn off ML Deep Analysis in Architect settings. NOTE: leaving ML Deep Analysis set to ON is preferred. Turning it off is not ideal, because the initial execution of the test will use an older ML self-heal engine; although that engine is good, it is not as good. However, when the test is executed for the first time, and as long as the execution is successful, the Functionize ML engine pulls all the same data as if the setting had been on in Architect. Having this additional data allows our self-healing to be extremely robust when the site changes structure, when your elements change names, and so on.
- Show/Hide Architect. If a site is interacting with Architect because of a library or other code it uses, you can temporarily hide Architect by clicking the Architect icon in the Chrome plugin area.
If all other recommended troubleshooting options have been exhausted, including turning off the ML Deep Analysis setting, please submit a support ticket to us.