The goal of debugging is to figure out why the test failed and then, once the problem is identified, to fix it.
What Types of Problems Can Be Encountered?
Before you can figure out how to fix an issue, you first need to determine what the problem is. We have identified two categories of problems:
CATEGORY 1 - The product did not do what you expected.
- The wrong element was selected for a click, text verification, input, checkbox, etc.
- A click didn't work (nothing changed after the click)
- An input didn't work (nothing typed/erased)
- A dropdown didn't select the right item
- The test did not wait long enough for an element to appear (illustrated in the sketch after these lists)
CATEGORY 2 - The site under test did not do what you expected.
- The site had an unexpected popup
- The site had an unexpected error
- The site took a significantly longer time to load than expected
- The site workflow changed
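Of the problems above, the timing case deserves a concrete illustration. Our platform handles waiting automatically, but the underlying idea is the same explicit wait you would write in any browser-automation script. Here is a minimal sketch using plain Selenium in Python (not our platform's internal code; the URL and selector are hypothetical placeholders):

```python
# Generic illustration of waiting for an element before acting on it.
# This is plain Selenium, not the platform's own wait logic; the URL
# and selector are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Polling until the element is clickable (or timing out) is what
# "waiting long enough for an element to appear" means in practice.
button = WebDriverWait(driver, timeout=30).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "button#submit"))
)
button.click()
driver.quit()
```

When a test fails because a wait like this expired, the screenshots described below usually show the page still loading at the moment the action fired.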
Sleuthing
We provide many tools that help you figure out what the problem is by letting you see what happened during the test's execution.
1. Screenshots
Screenshots are typically the easiest way to track down the cause of a failure. We capture four screenshots per Action: the Pre, Mid, and Post Action screenshots are found in the Screenshot Interval menu, and the fourth, Full Screen, is available by clicking the Expand button.
2. Anomalous Attribute Tags
These tags display on the Test Detail page and Slider View for Actions whose attributes have changed since the previous execution (conceptually, the comparison sketched below).
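The comparison behind these tags is simple in principle: the attributes captured for an element on the current run are diffed against those from the previous execution, and any differences are flagged. A toy sketch of that idea (the attribute values are hypothetical, not real captured data, and this is not our actual comparison logic):

```python
# Toy illustration of flagging attribute changes between two runs.
# The attribute values below are hypothetical placeholders.
previous = {"id": "submit", "class": "btn primary", "type": "submit"}
current = {"id": "submit", "class": "btn disabled", "type": "submit"}

anomalies = {
    key: (previous.get(key), current.get(key))
    for key in previous.keys() | current.keys()
    if previous.get(key) != current.get(key)
}
print(anomalies)  # {'class': ('btn primary', 'btn disabled')}
```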
3. Previous Successful Run vs Current Run
This comparison of screenshots shows a side-by-side view of the previous successful run versus the current run.
See Video: Previous Successful Run vs Current Run comparison
4. Current Successful Run vs Architect
This comparison of screenshots shows a side-by-side view of the current successful run versus the architect run.
See Video: Current Successful Run vs Architect comparison
NOTE: This comparison is available only for Architect runs. Tests created via NLP are harder to diagnose this way because the test may not have been created as you expected.
5. Common Attributes
6. CV Failures - Element Screenshots
Computer Vision is based on the Element screenshots. If these screenshots don't show the expected value, CV will likely not work as expected. To locate the Element screenshot, look in the footer of the Slider View and click the Element tab to see the previous successful run (Last Successful Execution) and the current run (Current Execution) in a side-by-side view.
7. Video Enabled
If all other methods don't provide enough information, you can turn on video recording for the test, rerun it, and then determine the root cause of the issue.
See User Guide: Recording Video of Test Executions
Fix the Issue
Once you have determined the cause of the problem through sleuthing, there are many ways to update the failed test. We will even attempt to suggest updates!
1. SmartFix
Our machine learning will suggest ways to fix the test with a one-click fix. This can help update the test when the wrong element was selected, when it looks like you may have entered an incorrect password, or when a verification needs updating.
2. Update Action Flags
There are times when a popup shows up unexpectedly and you may want to mark an action as 'optional', or a website changes in one environment and you may want to 'skip' an action so that it doesn't run (see the sketch after the link below).
See User Guide: Optional and Skipped Actions
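Conceptually, an 'optional' action is attempted but allowed to fail without failing the test, while a 'skipped' action is not attempted at all. A minimal sketch of that control flow (generic Python with a hypothetical `dismiss_popup` step; in the product these are flags you set on an Action, not code you write):

```python
# Sketch of 'optional' vs 'skipped' action semantics. The helper and
# step names are hypothetical; in the product these are flags on an
# Action, not code you write.
def run_action(action, optional=False, skip=False):
    if skip:
        print(f"Skipping action: {action.__name__}")
        return
    try:
        action()
    except Exception as exc:
        if optional:
            # An optional action may fail (e.g., a popup never appeared)
            # without failing the test as a whole.
            print(f"Optional action {action.__name__} failed: {exc}")
        else:
            raise

def dismiss_popup():  # hypothetical step that only sometimes applies
    raise RuntimeError("no popup present")

run_action(dismiss_popup, optional=True)  # the test continues either way
```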
3. Update the Executor
If a click didn't actually click, or an input didn't enter text as expected, you will want to change the executor (the idea is illustrated below).
See User Guide: Customizing Executors
See Video: How to Update an Executor
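The intuition behind changing the executor: a click can be performed in more than one way, and a page that ignores one kind of click often responds to another. A generic Selenium sketch of two click mechanisms (illustration only; in the product you choose the executor from a setting rather than writing code, and the URL and selector are hypothetical placeholders):

```python
# Two ways to execute a click, shown with plain Selenium for
# illustration; the URL and selector are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")

element = driver.find_element(By.CSS_SELECTOR, "button#submit")

# Mechanism 1: a native click, routed through the browser's input pipeline.
element.click()

# Mechanism 2: a synthesized JavaScript click, which can land when an
# overlay or custom event handler swallows the native click.
driver.execute_script("arguments[0].click();", element)

driver.quit()
```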
4. Update the Selection Method
Our element selection is at least 99.9% accurate, but we are also biased toward finding some element rather than failing outright, especially when clicking. This means we occasionally select the wrong element; when that happens, you can overrule our choice with selectors (example after the link below).
See User Guide: Customizing Tests with Selectors
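The selector syntax itself is standard CSS or XPath; you enter it through the product UI rather than in code. For example, either of these hypothetical selectors would pin selection to one specific button (the IDs and attributes are placeholders):

```python
# Standard selector syntax, shown as plain strings; the element IDs and
# attributes below are hypothetical placeholders.
css_selector = "form#checkout button[data-test-id='place-order']"
xpath_selector = "//form[@id='checkout']//button[text()='Place Order']"
```

The trade-off is brittleness: the more a selector depends on exact markup, the more likely it is to break when the page changes, so prefer stable attributes such as IDs or dedicated test hooks.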
5. Force Fail a Passing Test
There are times when we attempt to self-heal a test for you and the heal doesn't do what you expected. If the test has no verification after the healed click that is unique to the new page, or new area on the page, the test can pass anyway and report a false negative. In these cases, the test will most likely show a "Self-Heal" flag. When you see one, you can force the test to be counted as failed; this ensures that we don't learn that this was the correct 'click' when the test is re-executed.
See User Guide: Force Fail a Passing Test Case
See Video: Force Fail a Passing Test Case
6. Live Debug
We recommend Live Debug as the most robust method for updating a test case. Live Debug lets you interact with a test while it is running on our machines, so you can quickly diagnose test failures or modify tests in a clean execution environment.
7. Local Edit
When you perform a local edit, all of the ML data used to execute the test is recreated. Local Edit allows you to use your local Architect to make changes to both NLP and Architect test workflows, element selections, and verifications without remodeling.
8. Update the Action Settings
Each Action in your test has certain Settings, Flags, and Information associated with it. These can be accessed via the Test Details page or Slider View. A complete list of Action Types and the Settings available to each is linked below.
See User Guide: Action Settings