The biggest pain point in test automation is maintaining test cases when the application under test changes. This is one of the main problems Functionize aims to solve. In most cases, Functionize will self-heal the test for you using a robust set of machine learning algorithms. If the test fails for other reasons, we provide several ways to view, debug, and update the test as needed.
Self-Healing Tests
Self-healing happens in the cloud at the time of test execution. If a test case fails in Functionize, it means the test could not self-heal and human input is required. This is typically the case when the site has changed significantly, when the workflow has changed but the test hasn't been updated, or when there is a valid bug.
Note: Selenium and many other tools capture only one data point (an XPath or other selector) for each element. Functionize identifies not just a single selector but thousands of data points per element, which form the basis for machine learning. Functionize collects and analyzes all of this data to find discrepancies and changing patterns over time, rating, ranking, and categorizing elements with the goal of eliminating selector maintenance altogether.
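To make the contrast concrete, here is a minimal, purely illustrative sketch of the general idea: instead of relying on one hard-coded selector, an element is matched against many attributes captured at recording time, so a single changed attribute doesn't break the lookup. The attribute names, weights, and threshold below are invented for illustration; this is not Functionize's actual algorithm.

```python
# Illustrative sketch only -- NOT Functionize's actual implementation.
# A single XPath like //*[@id='submit-btn'] fails the moment the id
# changes; scoring candidates against many recorded attributes does not.

def score_candidate(candidate, recorded):
    """Score a live element against attributes captured at recording time."""
    weights = {"id": 3.0, "text": 2.0, "tag": 1.0, "class": 1.0, "position": 0.5}
    return sum(w for attr, w in weights.items()
               if candidate.get(attr) == recorded.get(attr))

def locate(candidates, recorded, threshold=3.0):
    """Return the best-scoring candidate, or None if nothing scores well enough."""
    best = max(candidates, key=lambda c: score_candidate(c, recorded))
    return best if score_candidate(best, recorded) >= threshold else None

# The element's id changed from "submit-btn" to "send-btn" in a new build,
# so a hard-coded single selector would fail here...
recorded = {"id": "submit-btn", "text": "Send", "tag": "button",
            "class": "primary", "position": "form-footer"}
candidates = [
    {"id": "send-btn", "text": "Send", "tag": "button",
     "class": "primary", "position": "form-footer"},
    {"id": "cancel-btn", "text": "Cancel", "tag": "button",
     "class": "secondary", "position": "form-footer"},
]
# ...but multi-attribute scoring still finds the right element.
match = locate(candidates, recorded)
print(match["id"])  # -> send-btn
```

A real system would track far more signals (visual position, DOM neighborhood, historical drift), but the principle is the same: no single attribute is a point of failure.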
Diagnosis and Maintenance
- Go to any Project
- When viewing a project, the Functional Tests tab will be the main page.
- Locate the failed test case after it has been executed. See the example below, highlighting all passed/failed test cases.
Red indicates a failed test
Green indicates a passed test
- Next to a failed test case, click the Actions button.
- Select Details
- The test steps will be displayed. If the test was created with NLP or had text-based steps added while Architecting, that text is displayed alongside the executed test steps.
- Passed steps are highlighted in green, failed steps in red.
- If a verification fails, the test continues executing, so there may be a series of passed steps, a failed step, and then additional passed steps. This is expected and allows maximum value to be extracted from each test run.
- Scroll down to the failed step. Click the screenshot on a test step a few steps before the failure. This opens Slider View, which provides fuller context for the failure.
- Using the blue arrows, click through to the failed step, looking for visual clues to the failure: error messages, a page that did not load, or an account that is already registered.
- Stop at the failed test step screenshot. Use the information shown on and around the screenshot to diagnose the test case. In the example shown below, Functionize is highlighting an execution error.
- On the left-hand side, information about the failed step is displayed.
- On the bottom, the specific operator and verification type are shown, along with the data found on previous executions of this step, which in this case is the same.
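The pass-then-fail-then-pass pattern described in the steps above (a failed verification does not stop the run) is sometimes called soft assertion. Here is a small, hypothetical sketch of that behavior, not Functionize code: actions are hard failures, while verifications are recorded and execution continues.

```python
# Illustrative sketch of "soft" verifications: a failed verification is
# recorded, but later steps still execute and report their own results.

class TestRun:
    def __init__(self):
        self.results = []  # list of (step name, "passed"/"failed")

    def action(self, step, fn):
        # Actions are hard failures: an exception here stops the run.
        fn()
        self.results.append((step, "passed"))

    def verify(self, step, condition):
        # Verifications are soft: record the outcome and keep going.
        self.results.append((step, "passed" if condition else "failed"))

run = TestRun()
run.action("open login page", lambda: None)
run.verify("banner text is 'Welcome'", False)  # fails, but the run continues
run.verify("login button is visible", True)    # still executed and reported
print(run.results)
# -> [('open login page', 'passed'),
#     ("banner text is 'Welcome'", 'failed'),
#     ('login button is visible', 'passed')]
```

This is why a failed run can show green steps after a red one: each verification after the failure still carries useful information.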
Previous Successful Run
Functionize also, and most importantly, offers the ability to view the previous successful run, with multiple screenshots per step, so the user can visually compare the passed execution with the failed one. This works across multiple steps and is accessible at the top right of any test step.
NOTE: Previous Successful Run is possibly the most important and useful diagnostic tool offered, because it lets you compare a failed execution against a passed one and easily see the visual differences between the two.
In the screenshot below, P stands for Previous, C for Current, and A for Architect. Toggling between these shows the screenshots for the corresponding execution. Look at multiple steps leading up to the failed step to see where the screenshots start to diverge.
Learn more about test diagnosis in this webinar.