Test case quality is often overlooked when people consider test automation. Yes, test automation will transform your testing and increase your productivity. However, the most intelligent test automation in the world cannot save you if you write bad test cases.
Essentials of an Effective Test Case
Writing effective test cases isn’t rocket science. A well-written test case needs to meet certain requirements. It should:
• Be easy to understand and execute. Every test case must make sense to anyone who reviews it. This is equally true for automated and manual test cases.
• Have a specific objective. Every test should aim to test a specific aspect of your UI. It’s no good having tests that simply replicate someone interacting at random.
• Have clear requirements. It is really important to know the requirements for your test case. For example, does it rely on the user being logged in?
• Have clear pass/fail criteria. One of the easiest traps to fall into is creating ambiguous test cases. That is, test cases where it isn’t clear whether the test has passed or failed.
• Be repeatable. It may sound obvious, but there is no point in creating a test case that will give different results every time it runs!
• Include regular verifications throughout. Test cases can often be dozens or even hundreds of steps long. Frequent verifications can help confirm that the test is proceeding correctly.
Best Practices for Writing Test Cases
Effective test cases are concise, targeted, and easy to execute and maintain. Follow our 10 best practices to help you write better test cases:
1. Don’t assume any existing knowledge. Write the test case for someone who has never used the application before, and don’t skip steps.
2. Use assertive language like “go to home page”, “enter data”, or “click submit button”.
3. Use clear naming so anyone knows exactly what each test does. For instance:
Search with invalid keyword
Login and logout verification for existing user
Add, edit, and delete bank account
4. Make sure any preconditions or assumptions are clearly stated. For instance, you can only test a user login flow if a user exists.
5. Describe any special setup that is needed. For instance, “Load the test database”. List any dependencies on other tests. For instance, does this test case need to be run after another test case?
6. Make sure test steps are natural and easy to follow. Test steps shouldn’t be too long, but equally, you shouldn’t make them too small. So, combining actions into a single step is fine as long as it is still easy to follow.
7. Give details of any test data. Often, a test relies on specific test data to work. For instance, it may need a particular test user account.
8. Make tests reusable. Many tests rely on the same set of steps with the same data. In these cases, test data provided for one test case can be used for other test cases.
9. Include frequent verifications to ensure the test is working. Where possible, any action in your test case should have a corresponding verification step.
10. Always provide the expected result of the test case. For example, “user ‘test case’ exists in user database”.
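To illustrate, the practices above can be sketched as a single scripted test. Everything here is hypothetical (the made-up `FakeBankApp` stands in for a real application under test); it is only meant to show the shape of a well-structured test: stated preconditions, explicit test data, a verification after each action, and a final expected result.

```python
class FakeBankApp:
    """Hypothetical stand-in for the application under test."""
    def __init__(self):
        self.users = {"test": "password"}  # precondition: the user exists (practice 4)
        self.logged_in = None
        self.accounts = []

    def login(self, username, password):
        if self.users.get(username) == password:
            self.logged_in = username

    def add_account(self, name):
        if self.logged_in:
            self.accounts.append(name)


def test_login_and_add_bank_account():
    """Clear name (practice 3): log in, then add a bank account."""
    app = FakeBankApp()                 # special setup stated up front (practice 5)

    app.login("test", "password")       # test data spelled out (practice 7)
    assert app.logged_in == "test"      # verification right after the action (practice 9)

    app.add_account("Checking")
    assert "Checking" in app.accounts   # explicit expected result (practice 10)


test_login_and_add_bank_account()
print("test passed")
```

Note that each action is immediately followed by an assertion, so a failure points directly at the step that broke.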
Creating Effective Test Cases with Functionize
Here at Functionize, we simplify test automation by applying machine learning (ML). ML allows us to create intelligent models of your site that are constantly evolving as your site grows and changes. Our system takes your test case and creates an intelligent test. Each time the test runs, the system learns more about your UI. This ensures your tests are robust, low-maintenance, and effective.
There are two approaches for creating Functionize tests. The first is to use Architect. This Chrome plugin is ideal for creating complex tests, for updating existing tests, or for adding single tests to your test suite. By contrast, NLP Test Creation is used for creating tests in bulk from a set of test plans. It uses natural language processing (NLP) to parse your test cases and create a model of your site. The following sections will help you create robust test cases using both approaches.
Creating Tests with Architect
At its heart, Architect is an intelligent test recorder powered by machine learning. However, it is much more than this. Architect also offers cutting-edge features that allow it to create incredibly complex test cases. Architect is available as a Google Chrome extension that you can install from the Chrome Web Store. Of course, to use it, you will need a Functionize account.
It’s really important to plan your test before you try to create it in Architect. You need to start with a clear step-by-step test case as described above. The vital thing is to make sure you identify good points for verifications throughout the test case.
What Makes a Good Verification?
Verifications are a way to strategically confirm that a test is proceeding correctly through its expected workflow. The idea is to use the actual control logic of your application to show that all is well. But what makes for a good verification? In general, verifications fall into one of three categories.
1. Page Load. Does the test step take you to a new page, such as the shopping basket? If so, make this a verification.
2. Element Update. Will some element on the page change as a result of the test step? For instance, after a user logs in, their profile is displayed at the top right. Add a 'verify' step to confirm this has happened.
3. Application Logic. Does your code have some inbuilt verification logic? For instance, does it check to see whether a postcode field has been filled in correctly and display an error if not? If so, check that the error message isn’t shown by adding a conditional action.
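To make the three categories concrete, here is a minimal, purely illustrative Python sketch. The `FakePage` object is invented for the example; in a real Functionize test, these checks would be verify steps against your live UI.

```python
class FakePage:
    """Hypothetical snapshot of the UI after a test step."""
    def __init__(self, title, elements, errors):
        self.title = title        # current page title
        self.elements = elements  # visible elements and where they appear
        self.errors = errors      # validation errors raised by application logic

# State after a logged-in user with a valid postcode opens the shopping basket.
page = FakePage(
    title="Shopping Basket",
    elements={"profile_badge": "top-right"},
    errors=[],
)

# 1. Page load: did the step take us to the basket page?
assert page.title == "Shopping Basket"

# 2. Element update: is the user's profile now shown at the top right?
assert page.elements.get("profile_badge") == "top-right"

# 3. Application logic: the postcode was valid, so no error should be shown.
assert "invalid postcode" not in page.errors

print("all verifications passed")
```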
Ideally, you should aim to include enough verifications to help your test fail as early as possible. Remember, the quicker a test fails, the fewer resources are wasted and the sooner you can start debugging. It’s difficult to put exact figures on when to include verifications, but as a rule of thumb you should have at least one every 10-15 test steps.
What Other Verifications Can Be Done in Architect?
One of the most powerful features in Architect is the ability to create custom verifications. For instance, Architect allows you to specify complex verifications based on image processing. So, you can ask it to compare a page against either the previous test run or against a previous test step. You can then specify the acceptable variance, that is, how precise the comparison must be. Uniquely, the system is able to cope with computed CSS values and can apply logic to spot when these change unexpectedly.
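This is not Functionize's actual algorithm, but the idea of an "acceptable variance" threshold can be sketched in a few lines: treat each rendering as a sequence of pixels and pass the comparison only if the fraction that differ stays below the tolerance.

```python
def within_variance(baseline, current, max_variance=0.02):
    """Return True if at most max_variance of the pixels differ.

    Illustrative sketch only: real visual comparison engines are far more
    sophisticated (anti-aliasing, dynamic regions, computed CSS, etc.).
    """
    if len(baseline) != len(current):
        return False  # different dimensions always fail
    diffs = sum(1 for a, b in zip(baseline, current) if a != b)
    return diffs / len(baseline) <= max_variance

# 1,000 "pixels" with 10 differing: 1% variance, inside a 2% tolerance.
baseline = [0] * 1000
current = [0] * 990 + [1] * 10
print(within_variance(baseline, current))     # 1% difference passes
print(within_variance(baseline, [1] * 1000))  # 100% difference fails
```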
Creating NLP Test Cases
The Functionize NLP Test Creation system is highly advanced and is specifically designed to understand test cases. It can cope with unstructured tests, but it is far better to provide clear, unambiguous test cases.
The following advice will help you create better test cases for the system to process.
1. Review your test plan before submitting it and ensure it meets the guidelines above.
2. Remove any test steps that are obsolete and/or unnecessary.
3. Ensure that verification actions are included for each step. Here are some common examples:
If the test step is “Open the URL https://example.com/login” the next step should check that the page opened correctly. For instance, “verify the page title is login” or “verify the login form is displayed”.
If the test step is “enter username ‘test’ and password ‘password’, then click on the submit button”, the next step should check whether the ‘test’ user is correctly logged in. For instance, “verify the username ‘test’ is displayed top right”.
If the test requires a long form to be filled in, you should add intermediate verification steps. For instance, on an address form, after the street, town, and postcode fields have been entered, verify that the postcode field displays a valid postcode. It will also help to capture a screenshot of the input steps for later verification.
4. Double-check every URL in the test case. In particular, the initial URL used during test job creation must be exactly correct. Pay particular attention to any variables being passed in the URL.
5. Don’t leave logical gaps in the flow. Our system is intelligent, but it isn’t a human! If there are any missing steps, it won’t know what to do. For instance, a human will just dismiss a popup warning message about weak passwords, but the NLP system won’t.
Format for Submitting NLP Test Cases
Functionize NLP Test Cases need to be submitted in a particular format.
1. The test plan should be a spreadsheet with a minimum of three columns, with headers given in CAPITALS as below:
a. TEST STEPS
b. TEST DATA
c. EXPECTED RESULT
The TEST DATA and EXPECTED RESULT columns can be empty, but must be present. Without these, the system won’t be able to read your test case.
2. The document must be either a CSV (.csv) or Excel (.xlsx) file.
3. Test cases must not contain any blank rows.
4. Any instructions or comments relating to the test case should be added as rows above the three mandatory headers listed above. These rows must use a double forward slash ‘//’ to indicate they are not part of the actual test.
5. Remove any unicode or special characters that may not load or display properly.
Note: If the system is unable to identify the correct column headers, or if they are not written correctly, you will be brought to a mapping modal. Use this to tell the system which columns contain the relevant information.
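To make the format concrete, the sketch below builds a tiny test plan with Python's standard csv module. The first column header ("TEST STEPS") is an assumption on our part, so confirm the exact header names your account expects; the comment rows, CAPITALS headers, and absence of blank rows follow the rules above.

```python
import csv
import io

# Assumed header names: confirm "TEST STEPS" against your Functionize account.
rows = [
    ["// Smoke test: login flow for https://example.com/login"],  # comment row above headers
    ["TEST STEPS", "TEST DATA", "EXPECTED RESULT"],
    ["Open the URL https://example.com/login", "", "Login form is displayed"],
    ["Enter username and password, then click submit",
     "username: test, password: password",
     "Username 'test' is displayed top right"],
]

# Write the plan as CSV; note there are no blank rows between entries.
buffer = io.StringIO()
csv.writer(buffer).writerows(rows)
print(buffer.getvalue())
```

In practice you would write to a .csv file (or build the same layout in an .xlsx sheet) and submit that.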
Modifying Test Cases
Sometimes, you may want to modify or update a test case. For instance, if your application logic has changed in some key way, Architect makes it really easy to go in and adjust the test case or record a new one. Moreover, with Architect you can even choose to override the built-in intelligence of the system. This allows you to adapt tests to cope with specific changes that might otherwise create ambiguity.
For example, you may be redesigning your checkout flow. Originally, your system simply assumed that the billing and delivery addresses are the same. When a user enters their address, they click ‘Next’ and are taken straight to the card payment page. In the new flow, there are two buttons: one says “Deliver to same address” and the other “Add billing address”. But now you have given our machine learning system a challenge. Which button should it choose? 99 times out of 100, the system is clever enough to make the correct choice, so you could just hope for the best. But Architect allows you to specify exactly which button you mean using a selector.
Testing is only ever as good as your test planning. Key to this is creating the best test cases you can. Poor test cases will result in inefficient and ineffective testing. Hopefully, you now understand what makes an effective test case and feel confident creating more effective tests with Functionize. Our system applies a huge amount of intelligence when running tests, but you can help it perform even better.