Debugging and Diagnosing Failed Test Cases
Overview
The biggest pain point in test automation is test case maintenance when changes occur in the application under test. This is one of the main challenges that Functionize aims to help you solve. In most cases, we will self-heal the test for you using a robust set of machine learning algorithms. If the test fails for other reasons, we provide several methods for you to view and debug the test or update it as needed.
The goal of debugging is to diagnose why a test failed and then, once the problem is identified, fix it. Problems can be categorized as either an automated test issue or an application under test (AUT) issue.
Learn more about test diagnosis in this webinar.
What Types of Problems Can Be Encountered?
To figure out how to fix a problem, it is important to first determine what the problem is. We have identified two categories of problems:
CATEGORY 1 - The automated test did not do what you expected.
- The wrong element was selected for a click, text, input, checkbox, etc.
- A click did not work: nothing changed after the click action was executed
- An input did not work: nothing was entered in the AUT
- A drop-down field did not select the correct item from the list
- The test did not wait long enough for an element to appear in the AUT
CATEGORY 2 - The AUT did not do what you expected; a defect was found.
- The AUT had an unexpected popup
- The AUT had an unexpected error
- The AUT took significantly longer to load than expected
- The AUT workflow changed
Self-Healing Tests
Self-Healing is completed in the cloud at the time of test execution and can be viewed on the Test Details page and Slider View. There are small, yellow tags that indicate a Self-Heal has taken place for a given action.
Test Details page
When you hover your mouse over the yellow tag, the Anomalous Attributes that changed from the Previous run to the Current run appear.
Slider View
If a test case fails in Functionize, it means the test case has not Self-Healed for one reason or another, and human input is required to resolve the failed action(s). Human input is typically needed if the site has changed significantly, the workflow has changed but the test hasn't been updated, or there is a valid defect in the application under test.
Note: Selenium and many other test automation tools capture only one data point (an XPath or other selector) for every element. Functionize identifies not just a single selector but thousands of data points per element, forming the basis of our machine learning. Functionize collects all of this data and analyzes it to find discrepancies and changing patterns from one test execution to the next, rating, ranking, and categorizing the elements. The result is the elimination of selector maintenance altogether, which is the basis of our Self-Heal feature.
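For contrast, here is a minimal sketch of the single-selector approach described above, assuming Selenium in Python with a hypothetical URL and locator. The one XPath is the only data point tying the step to the element, so any change to it breaks the step and forces manual maintenance:

```python
# A minimal sketch of the single-selector approach: one XPath is the
# only data point tying this step to the element. The URL and locator
# are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page

# If the id "login-btn" is renamed in the AUT, this raises
# NoSuchElementException and the script needs manual maintenance.
driver.find_element(By.XPATH, "//button[@id='login-btn']").click()

driver.quit()
```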
Debugging and Diagnosing Aids
We provide many tools to help you figure out what the problem is by letting you see what happened during the test's execution.
- Screenshots
Screenshots are typically the easiest way to track down the cause of failure. We capture 4 screenshots per action. The Pre, Mid, and Post Action screenshots are found in the Screenshot Interval menu. The fourth screenshot, Full Screen, is available by clicking the Expand button.
Capturing Pre-, Mid-, and Post-Action screenshots records the application at various stages of each action, helping to ensure correct functionality of the AUT.
Pre-Action Screenshot: The test captures a screenshot of the application before performing any actions. This establishes a baseline and ensures that the application is in the expected state before any actions are taken.
Mid-Action Screenshot: The test captures a screenshot during the performance of an action. This ensures the AUT functions correctly during the action and captures any errors or unexpected behavior.
Post-Action Screenshot: The test captures a screenshot after the action has been completed. This ensures that the AUT is in the expected state and verifies that any changes made during the action are saved correctly.
By comparing these three screenshots, the tester can identify discrepancies or issues that may have occurred during the action execution and can verify the correct function of the AUT. The screenshots are particularly useful for verifying the functionality of the AUT user interface, as they allow the tester to see exactly what the user sees at each stage of the interaction with the AUT.
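As a conceptual illustration only, the Pre/Mid/Post pattern can be expressed in plain Selenium terms. Functionize captures these screenshots automatically; the page URL and locator below are hypothetical:

```python
# Conceptual sketch of the Pre/Mid/Post screenshot pattern in plain
# Selenium. Functionize captures these automatically; the URL and
# locator are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/form")  # hypothetical page

driver.save_screenshot("pre_action.png")    # baseline state before the action

button = driver.find_element(By.ID, "submit")  # hypothetical locator
driver.save_screenshot("mid_action.png")    # state as the action begins

button.click()
driver.save_screenshot("post_action.png")   # verify the action's result persisted

driver.quit()
```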
- Live Debug
This gives you the ability to set breakpoints and access the Virtual Machine on the Functionize Cloud so you can watch and interact with the test during execution.
- Anomalous Attribute Tags
These tags are displayed on the Test Details page and Slider View for actions whose attributes have changed from the previous execution and have Self-Healed.
See Video: Self Heal example
- Previous Successful Run vs Current Run
Most importantly, Functionize offers the ability to view the Previous Successful Run along with multiple screenshots per step, allowing you to visually compare what passed previously with the current failed execution. This can be done across multiple steps and is accessible at the top left of any test action in the Slider View.
Note: The Previous Successful Run is possibly the most important and useful diagnostic tool offered because it allows a side-by-side comparison of a failed execution with a passed execution, making it easy to see the visual difference between the two screenshots.
In the screenshot below, Prev stands for Previous, Cur stands for Current, and Arc stands for Architect. Toggling between these will show the screenshots for the corresponding execution. Look at multiple steps leading up to the failed step to see when screenshots start to go wrong.
The comparison of screenshots shows a side-by-side view of the Previous Successful Run versus the Current Run.
See Video: Previous Successful Run vs Current Run comparison
- Current Successful Run vs Architect
This comparison of screenshots shows a side-by-side view of the Current Successful Run versus the Architect run.
- Common Attributes
Compare data attributes from when the test was modeled, the Previous Successful Run, and the Current Run.
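Conceptually, comparing Common Attributes amounts to diffing an element's attribute set across runs. A minimal sketch, with invented attribute values, showing how a changed attribute surfaces:

```python
# Minimal sketch: diff an element's attributes across runs to surface
# an anomalous attribute. All attribute values below are invented.
modeled  = {"id": "submit", "class": "btn btn-primary", "type": "submit"}
previous = {"id": "submit", "class": "btn btn-primary", "type": "submit"}
current  = {"id": "submit-btn", "class": "btn btn-primary", "type": "submit"}

def diff_attributes(old, new):
    """Return attributes whose values differ between two runs."""
    keys = old.keys() | new.keys()
    return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}

print(diff_attributes(previous, current))   # {'id': ('submit', 'submit-btn')}
print(diff_attributes(modeled, previous))   # {} -- nothing changed
```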
- CV Failures - Full Page Screenshots
Computer Vision (CV) is based on the Full Page screenshots. If these screenshots don't show the expected value, CV will likely not work as expected. To locate the Full Page screenshot, look in the header of the Slider View and click the Cur, Side-by-Side, or Base tabs to see the Current and/or Baseline screenshots in a side-by-side view. See the example below:
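As a further illustration of why CV depends on what the Full Page screenshots actually show, here is a bare-bones pixel comparison of two captures using the Pillow library. File names are hypothetical, and this is a sketch of the idea only, not Functionize's actual CV pipeline:

```python
# Bare-bones pixel comparison of two full-page captures using Pillow.
# File names are hypothetical; the images must be the same size.
# This illustrates the idea only; it is not Functionize's CV pipeline.
from PIL import Image, ImageChops

baseline = Image.open("baseline_fullpage.png").convert("RGB")
current = Image.open("current_fullpage.png").convert("RGB")

diff = ImageChops.difference(baseline, current)
bbox = diff.getbbox()  # bounding box of changed pixels, or None if identical
if bbox is None:
    print("Screenshots match.")
else:
    print("Screenshots differ in region:", bbox)
```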
- Video Enabled
If all other methods don't provide enough information, you can turn on video recording for the test, rerun it, and then determine the root of the issue.
See User Guide: Recording Video of Test Executions
How to Debug and Diagnose Failed Test Cases
- Open any Project
- The Test Listing page includes a column called Browsers, where you can quickly see the Pass/Fail status of your tests; the last run details are also on this page
- Open a test and then open the Browser tab for the failed execution
- Each action displays its Pass/Fail status in the banner: green indicates passed, red indicates failed, and yellow indicates a Self-Heal took place
- Expand the Failed action in order to see the Execution Error details
- If a Verify Action fails, the test will continue executing, so there may be a series of passed steps, a failed step, and then additional passed steps. This is expected; it allows maximum value to be extracted from test results and is controlled under Test Settings > Advanced tab (see the sketch after this list)
- From the failed step, click the View button to the right of the failed test step; this opens the Slider View, which offers fuller context, with screenshots, for the reason behind the failure
- Using the arrows, click through the actions before and after the failed action, looking for visual clues to the failure in the screenshots; these can include error messages, a page not loading, or already registered accounts
- Stop at the failed test action screenshot and use the information shown on and around the screenshot to diagnose the test case; in the example, Functionize is highlighting that the Found and Expected text do not match
- On the left side slider panel, information about the failed step is displayed
- The bottom panel highlights what data was found on previous executions of this step, which in this case matched the Expected text
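The continue-on-verify-failure behavior noted in the steps above can be sketched as a soft-assertion pattern. Functionize implements this internally; the function names here are illustrative only:

```python
# A minimal soft-assertion sketch: failed verifications are recorded,
# but execution continues so later steps still run. Functionize
# implements this internally; these names are illustrative only.
verification_failures = []

def verify(condition, message):
    """Record a failed verification without stopping the test."""
    if not condition:
        verification_failures.append(message)

def run_test():
    verify(1 + 1 == 2, "arithmetic check")           # passes
    verify("Welcome" == "Welcom", "greeting text")   # fails; run continues
    verify(True, "final step still executes")        # passes

run_test()
if verification_failures:
    print("Test failed with verification errors:")
    for failure in verification_failures:
        print(" -", failure)
```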