The goal of debugging is to figure out why the test failed and, once the problem is identified, to fix it.
What Types of Problems Can Be Encountered?
It is important to first determine what the problem is before deciding how to fix it. We have identified two categories of problems:
CATEGORY 1 - The product did not do what you expected.
- The wrong element was selected for clicking/text/input/checkbox/etc
- A click didn't work (nothing changed after the click)
- An input didn't work (nothing typed/erased)
- A dropdown didn't select the right item
- The site did not wait long enough for an element to appear
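The last item above — not waiting long enough for an element to appear — is essentially a polling problem. As a conceptual sketch (plain Python, not this product's API), an explicit wait boils down to retrying a condition until it succeeds or a timeout elapses:

```python
import time

def wait_for(condition, timeout=10.0, poll_interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the truthy value on success; raises TimeoutError otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Simulated element that only "appears" after a short delay.
appear_at = time.monotonic() + 0.5
element = wait_for(lambda: "button" if time.monotonic() >= appear_at else None)
```

A test tool that waits this way tolerates slow-loading pages up to the timeout, instead of failing the instant an element is missing.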
CATEGORY 2 - The site under test did not do what you expected.
- The site had an unexpected popup
- The site had an unexpected error
- The site took a significantly longer time to load than expected
- The site workflow changed
We provide many tools for figuring out what the problem is by letting you see what happened during the test's execution.
1. Screenshots
Screenshots are typically the easiest way to track down the cause of a failure. We capture four screenshots per action: Pre, Mid, Post, and Full Screen.
2. Anomaly/Self Heal/ML Tags
These tags only show up if the test has previously passed. The ML Tag indicates that robust machine learning data has been collected on the step; this data is used for object/element identification and to interact with that element at execution time.
The Anomaly Tag appears only on failed test cases and shows the attributes that have changed since the previous execution.
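To illustrate the idea behind the Anomaly Tag (this is a conceptual sketch, not the product's implementation), flagging changed attributes amounts to diffing the element's attribute sets from the two runs:

```python
def diff_attributes(previous, current):
    """Return attributes whose values differ between two runs,
    including attributes that were added or removed (value None)."""
    changed = {}
    for key in previous.keys() | current.keys():
        old, new = previous.get(key), current.get(key)
        if old != new:
            changed[key] = (old, new)
    return changed

# Hypothetical attribute snapshots from a previous and a current run.
prev = {"id": "submit-btn", "class": "btn primary", "text": "Submit"}
curr = {"id": "submit-btn", "class": "btn secondary", "text": "Submit", "disabled": "true"}
changes = diff_attributes(prev, curr)
```

Here `changes` would report that `class` changed and `disabled` was added — exactly the kind of drift an anomaly report surfaces.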
3. Previous Successful Run vs Current Run
This comparison of screenshots shows a side-by-side view of the previous successful run versus the current run.
4. Current Successful Run vs Architect
This comparison of screenshots shows a side-by-side view of the current successful run versus the architect run.
See Video: Current Successful Run vs Architect comparison. *NOTE: This comparison is only available on Architect runs. NLP runs are more challenging to diagnose because the test may not have been created as you expected.
5. Common Attributes
6. CV Failures - Element Screenshots
Computer Vision is based on the Element screenshots. If these screenshots don't show the expected value, CV will likely not work as expected. To locate the Element screenshot, look in the footer of the slider view and click the Element tab to see the previous successful run (Last Successful Execution) and the current run (Current Execution) in a side-by-side view. See the example below:
7. Video Enabled
If all other methods don't provide enough information, you can turn on video recording for the test, rerun, and then determine the root of the issue.
Fix the Issue
Once you have determined the reason for the problem through sleuthing, there are many methods for updating the failed test. We will even attempt to suggest updates!
1. Suggested Fixes
Our ML will suggest ways to fix the test with a one-click fix. This can help update the test when the wrong element is selected, when it looks like you may have entered an incorrect password, or to update verifications.
2. Update Settings
There are times when popups show up unexpectedly and you may want to mark an action as 'optional', or a website may have changed in one environment, making you want to 'suppress' an action so that it doesn't run.
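The effect of these two settings can be sketched in plain Python (a conceptual illustration, not the product's API): a suppressed step is skipped outright, while an optional step swallows its own failure instead of failing the test.

```python
def run_step(step, execute):
    """Execute a step dict, honoring 'suppress' and 'optional' settings.

    - suppressed steps are skipped entirely
    - optional steps swallow failures instead of failing the test
    """
    if step.get("suppress"):
        return "skipped"
    try:
        execute(step)
        return "passed"
    except Exception:
        if step.get("optional"):
            return "passed (optional step failed)"
        raise

def flaky_click(step):
    # Simulates an action that fails, e.g. a popup that did not appear.
    raise RuntimeError("popup not present")

status = run_step({"name": "dismiss cookie banner", "optional": True}, flaky_click)
```

Marking the popup-dismissal step optional lets the test pass whether or not the popup shows up; suppressing it removes it from execution entirely.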
3. Update the Executor
If a click didn't actually click, or an input didn't enter text as expected, you will want to change the executor.
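Switching executors is the manual version of a fallback chain. As a hedged sketch in plain Python (the executor names here are illustrative, not the product's), the idea is to try one click strategy and fall through to the next when it fails:

```python
def click_with_fallback(element, executors):
    """Try each (name, executor) pair in order until one succeeds.

    Returns the name of the executor that worked; raises if all fail.
    """
    errors = []
    for name, executor in executors:
        try:
            executor(element)
            return name
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all executors failed: {errors}")

def native_click(element):
    # Simulates a native click that silently can't reach the element.
    raise RuntimeError("element not interactable")

def js_click(element):
    # Simulates a JavaScript-dispatched click that succeeds.
    pass

winner = click_with_fallback("submit-button", [("native", native_click), ("javascript", js_click)])
```

When the default executor "clicks" but nothing changes on the page, manually selecting a different executor is the equivalent of forcing the next strategy in this chain.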
4. Update the Selection Method
We are at least 99.9% accurate in element selection, but we bias toward finding an element rather than failing - especially when clicking! This means we may occasionally select the wrong element; however, you can overrule the selection via selectors.
5. Force Fail a Passing Test
There are times when we've attempted to self-heal for you and it didn't do what was expected. If the test does not have a verification after the click that is unique to the new page (or new area of the page), the test can show a False Negative. In these cases, the test case most likely shows a "Self-Heal" flag. When you see this flag, you can force the test to be counted as failed; this ensures that we don't learn that this was the correct 'click' when the test is re-executed.
6. Live Debug
We recommend Live Debug as the most robust method to update a test case. Live Debug lets you interact with a test while it is running on our machines. This means you can quickly diagnose test failures or modify tests in the clean execution environments.
7. Live Edit
When performing a Live Edit, all of the ML data used to execute the test is recreated. Live Edit allows you to use your local Architect to make changes to both NLP and Architect test workflows, element selections, and verifications without remodeling.
8. Update the Action Log values
See User Guide: Action Log. *Note: MOST of these should NOT be edited.