Test case quality is often overlooked when people consider test automation. Yes, test automation will transform your testing and increase your productivity. However, the most intelligent test automation in the world cannot save you if you write bad test cases.
Essentials of an Effective Test Case
Writing effective test cases isn’t rocket science. A well-written test case needs to meet certain requirements. It should:
- Be easy to understand and execute. Every test case must make sense to anyone who reviews it. This is equally true for automated and manual test cases.
- Have a specific objective. Every test should target specific aspects of your UI. It’s no good relying on tests that simply replicate someone interacting with the application at random.
- Have clear requirements. It is really important to know the requirements for your test case. For example, does it rely on the user being logged in?
- Have clear pass/fail criteria. One of the easiest traps to fall into is creating ambiguous test cases where it isn’t clear whether the tests have passed or failed.
- Be repeatable. It may sound obvious, but there is no point in creating a test case that will give different results every time it runs!
- Include regular verifications throughout. Test cases can often be dozens or even hundreds of steps long. Frequent verifications can help confirm that the test is proceeding correctly.
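The essentials above can be sketched as a minimal automated test. This is an illustrative example only, assuming a hypothetical login flow and a stand-in application object; it is not Functionize syntax:

```python
class FakeApp:
    """Stand-in for the application under test (invented for illustration)."""
    def __init__(self):
        self.page = "home"
        self.logged_in = False

    def goto(self, page):
        self.page = page

    def login(self, user, password):
        # Hypothetical rule: any non-empty credentials succeed.
        self.logged_in = bool(user and password)
        self.page = "dashboard" if self.logged_in else "login"

def test_login_flow():
    """Specific objective: verify a valid user can log in."""
    app = FakeApp()

    app.goto("login")
    assert app.page == "login"       # verify navigation before proceeding

    app.login("test_user", "secret")
    assert app.logged_in             # verify the action had the intended effect
    assert app.page == "dashboard"   # unambiguous pass/fail criterion

test_login_flow()
```

Note how each action is followed by a verification, and the final assertions give an unambiguous pass/fail result, so the test behaves the same on every run.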
Best Practices for Writing Test Cases
Effective test cases are concise, targeted, and easy to execute and maintain. Above all, they should be efficient as well as effective. Follow our 10 best practices to help you write better test cases:
- Don’t assume any existing knowledge. Write the test case for someone who has never used the application before, and don’t skip steps.
- Use direct, imperative language like “go to the home page”, “enter data”, or “click the submit button”.
- Use clear naming so anyone knows exactly what each test does. For instance:
  - Search with invalid keyword
  - Login and logout verification for an existing user
  - Add, edit, and delete bank account
- Make sure any preconditions or assumptions are clearly stated. For instance, you can only test a user login flow if a user exists.
- Describe any special setup that is needed. For instance, “Load the test database”. List any dependencies on other tests. For instance, does this test case need to be run after another test case?
- Make sure test steps are natural and easy to follow. Test steps shouldn’t be too long, but equally, you shouldn’t make them too small. So, combining actions into a single step is fine as long as it is still easy to follow.
- Give details of any test data. Often, a test relies on specific test data to work. For instance, it may need a particular test user account.
- Make tests reusable. Many tests rely on the same set of steps with the same data. In these cases, the steps and test data defined for one test case can be reused in other test cases.
- Include frequent verifications to ensure the test is working. Where possible, any action in your test case should have a corresponding verification step.
- Always provide the expected result of the test case. For example, “user ‘test user’ exists in the user database”.
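To illustrate several of these practices together, here is a hypothetical test case expressed as a structured Python object, with a simple check that the required elements are present. All names, steps, and data are invented for the example:

```python
# A hypothetical test case: clear name, stated preconditions, explicit
# test data, an action-plus-verification pair for every step, and a
# stated expected result. Not Functionize syntax.
test_case = {
    "name": "Add, edit, and delete bank account",
    "preconditions": [
        "User 'test_user' exists and is logged in",
        "Test database is loaded",
    ],
    "test_data": {"account_number": "12345678", "sort_code": "00-11-22"},
    "steps": [
        ("Go to the Accounts page", "Accounts page is displayed"),
        ("Click 'Add account' and enter the test data", "New account appears in the list"),
        ("Edit the account's sort code to '33-44-55'", "Updated sort code is shown"),
        ("Delete the account", "Account no longer appears in the list"),
    ],
    "expected_result": "Account list is empty for 'test_user'",
}

def validate(case):
    """Check the test case has every element the best practices call for."""
    assert case["name"], "every test case needs a clear name"
    assert case["preconditions"], "preconditions must be stated"
    # Every action must have a corresponding verification.
    assert all(action and verification for action, verification in case["steps"])
    assert case["expected_result"], "expected result must be provided"
    return True

validate(test_case)
```

Writing test cases in a consistent structure like this also makes it easy to review them for completeness before they are automated.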
Creating Effective Test Cases with Functionize
Here at Functionize, we simplify test automation by applying machine learning (ML). ML allows us to create intelligent models of your site that are constantly evolving as your site grows and changes. Our system takes your test case and creates an intelligent test. Each time the test runs, the system learns more about your UI. This ensures your tests are robust, low-maintenance, and effective.
There are two approaches for creating Functionize tests. The first is to use Architect. This Chrome plugin is ideal for creating complex tests, for updating existing tests, or for adding single tests to your test suite. By contrast, NLP Test Creation is used for creating tests in bulk from a set of test plans. It uses natural language processing (NLP) to parse your test cases and create a model of your site.
See also: Best Practices for Creating NLP Test Cases, Best Practices for Using Architect
Modifying Test Cases
Sometimes, you may want to modify or update a test case. For instance, if you know your application logic has changed in some key way, Architect makes it really easy to go in and adjust the test case or record a new test. Moreover, with Architect you can even choose to override the built-in intelligence within the system. This allows you to adapt tests to cope with specific changes that might otherwise create ambiguity.
For example, suppose you are redesigning your checkout flow. Originally, your system simply assumed that the billing and delivery addresses were the same: when a user entered their address, they clicked ‘Next’ and were taken straight to the card payment page. In the new flow, there are two buttons, one labeled “Deliver to same address” and the other “Add billing address”. This presents our machine learning system with a challenge: which button should it choose? 99 times out of 100, the system is clever enough to make the correct choice, so you could just hope for the best. However, Architect allows you to specify exactly which button you mean using a selector.
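As a rough sketch of the idea (this is not Architect’s actual API, and the element names are invented), an explicit selector resolves the ambiguity by matching exactly one element:

```python
# Two candidate buttons from the hypothetical redesigned checkout page.
buttons = [
    {"id": "deliver-same", "text": "Deliver to same address"},
    {"id": "add-billing",  "text": "Add billing address"},
]

def find_by_selector(elements, element_id):
    """Pick exactly one element by a hypothetical id selector."""
    matches = [e for e in elements if e["id"] == element_id]
    assert len(matches) == 1, "selector must match exactly one element"
    return matches[0]

# The test author overrides the automatic choice by naming the element:
chosen = find_by_selector(buttons, "deliver-same")
assert chosen["text"] == "Deliver to same address"
```

The key point is that the selector leaves no room for ambiguity: it matches one element or fails loudly, rather than letting the test silently click the wrong button.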
Testing is only ever as good as your test planning. Key to this is creating the best test cases you can. Poor test cases will result in inefficient and ineffective testing. Hopefully, you now understand what makes an effective test case and feel confident creating more effective tests with Functionize. Our system applies a huge amount of intelligence when running tests but you can help it perform even better.