Test automation enables organizations to reduce tedious, repetitive tasks so that QA teams can focus on testing new features. It means fewer human errors, less risk of bugs entering production, improved efficiency, reduced costs, and the ability to scale testing efforts much more rapidly.
LEAPWORK is committed to making test automation effortless, and following a few best practices helps you avoid common challenges along the way. In this article, we'll present best practices and solutions for tackling some of the most common challenges observed while implementing and executing automated testing.
Identify What to Automate
This is one of the most critical activities to complete before implementing test automation. Automating everything is neither possible nor recommended, because it can lead to more maintenance and, counter-intuitively, more effort. We therefore advise identifying the list of cases to automate before you implement automation. Some of the factors you should consider are:
- Automate predictable scenarios only. Each test case selected for automation should have a clearly defined objective
- Automate tests that are repetitive in nature. Regression tests that are repeated for every build are a good place to begin
- Automate data-intensive tests where a large set of data needs to be tested. This way, you can reduce the burden on your test team and test the application thoroughly with all data combinations
- Automate high-risk features with complex calculations to avoid critical mistakes caused by human error
- Consider automating tests that must be run across combinations of hardware, software, platforms, and browsers
- Automate time-consuming tests to save unnecessary manual effort
We strongly recommend considering the above points before you start with test automation.
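When triaging a large backlog of manual cases, the criteria above can be combined into a simple prioritization score. The sketch below is an illustration only, not a LEAPWORK feature; the weights (one point per criterion) and field names are assumptions:

```python
def automation_score(case):
    """Score a manual test case's suitability for automation (0-6).

    case is a dict of booleans, one per selection criterion; higher
    scores mean the case is a better early automation candidate.
    """
    criteria = ["predictable", "repetitive", "data_intensive",
                "high_risk", "cross_platform", "time_consuming"]
    return sum(1 for c in criteria if case.get(c, False))

# A typical regression case meets several criteria at once.
regression_case = {"predictable": True, "repetitive": True,
                   "time_consuming": True}
score = automation_score(regression_case)  # 3 of 6 criteria met
```

Sorting candidate cases by such a score gives the team a shared, explicit starting order instead of ad-hoc picks.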
Every automated test should be designed for a specific purpose. If the test fails, it should be easy to identify what needs to be fixed. When test flows and sub-flows are designed to serve a specific purpose, tests become more readable and maintainable, so your teammates can understand the context easily.
Another aspect to keep in mind is the autonomy of the test flows and sub-flows. Every flow should run completely independently, with no dependency on other tests in terms of test output or data. Dependencies between tests make them more prone to failure and harder to maintain and scale.
Define Naming Conventions
Naming conventions serve several purposes: they create a consistent view, help the team understand flows and sub-flows quickly based on previous experience, make flows easier to copy, change, and maintain, and enable easy collaboration within the team.
We recommend defining your internal naming convention before you begin with full-blown automation: the team should sit together, agree on naming conventions, and document them for reference in all future flow creation activities and conversations.
Avoid Using the 'Wait' Block
The behavior of the application under test depends on many factors, such as your machine's RAM and processor, network speed, or the current load on the application. It is therefore hard to predict how long a specific web or desktop element will take to load.
Sometimes you might try to fix these timing issues by adding LEAPWORK's static 'Wait' block to the test flow. However, this can lead to more test failures and poorer test performance: your test always waits for the full specified amount of time, yet if the application takes even a second longer than the time defined in the 'Wait' block, your test will still fail.
To help you with application synchronization issues, LEAPWORK provides dynamic waits such as "Timeout", "Await Dom Changes", "Await Requests", and "Await Timeout". You can use these independently or in combination to get the desired result. For more information on how to use them, see this article.
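LEAPWORK's dynamic waits are configured visually, but the underlying idea is the same as polling a condition until a timeout instead of sleeping for a fixed period. A minimal generic sketch of that idea in Python (the condition callable and timings are illustrative assumptions, not LEAPWORK internals):

```python
import time

def wait_for(condition, timeout=10.0, poll_interval=0.2):
    """Poll a condition until it returns True or the timeout expires.

    Unlike a static wait, this returns as soon as the condition is met,
    and only fails if the application is genuinely slower than the timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False

# Static wait: time.sleep(5) always pays the full delay, yet still
# fails if the app needs one second more than the fixed duration.

# Dynamic wait: succeeds the moment the element appears.
appeared = wait_for(lambda: True)  # stand-in for a real element check
```

The dynamic version is both faster on good days and more tolerant on slow days, which is exactly why it outperforms a fixed 'Wait' block.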
Take Action When Test Flows Fail
Automated tests can fail for various reasons: test environment slowness, network issues, database latency, or a bug in the application under test can all contribute to a failure. It is therefore very important to identify why a test failed, so that you avoid false-positive results, resolve the underlying issues, get more stable results, and give stakeholders confidence.
When a test fails, we recommend following the below process as a minimum:
The test fails due to the application under test or external factors
Log in to LEAPWORK Studio, go to the reporting section, and check the failure reason using LEAPWORK's video of the run and corresponding log messages. Once the reason for test failure has been identified, take the necessary actions - for example:
- If the test fails due to a bug in the application, log a bug and assign it to the development team
- If the test fails due to slowness in the application, hardware, or network, log a bug for the network or environment team
- If it is a database latency issue, log the bug and assign it to the database team
- If the test fails due to flow issues, log a flow bug and try to fix it
The test fails due to a flow issue (false positive)
If the test fails due to flow issues, follow the process below:
- Rerun the flow a couple of times or look at previous test run results to check the failure pattern
- If the test is failing due to timing issues then use the 'Await' options. For more information, refer to this article
- Ensure proxy configurations are applied in the LEAPWORK studio to avoid test failure due to proxy misconfiguration
- Once you have done all of the above, run the flow 5-10 times and make sure it passes consistently. For more information, refer to this blog.
Ideally, you should treat all test automation flows as production-grade and only put tests into production after they have passed your deployment criteria (i.e. run and passed a couple of times).
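The "run 5-10 times before deployment" criterion amounts to a simple stability gate. A hedged sketch of that gate, where run_flow is a hypothetical stand-in for triggering the flow and reading its result:

```python
def is_stable(run_flow, runs=10):
    """Return True only if the flow passes on every one of `runs` runs.

    run_flow is a callable returning True on pass, False on failure;
    in practice it would trigger the flow and read its result.
    """
    results = [run_flow() for _ in range(runs)]
    return all(results)

# A flow that passes consistently meets the deployment criterion;
# a single failure in the batch keeps it out of production.
deployable = is_stable(lambda: True, runs=5)
```

Gating on consecutive passes like this filters out flaky flows before they can erode trust in the nightly results.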
Create Change-Resistant Strategies
The user interface of an application under development is subject to change for various reasons. Early testing is important, but if it isn't done properly, it can lead to increased maintenance effort. Here are two uses of the object capture mechanism that should be avoided:
- Capturing position or coordinates to find an object is not recommended when the application is subject to change or controls are likely to move. Instead, hook onto a small image that can be found anywhere on the screen, or onto a back-end object of the application, such as a CSS selector or the control name
- Text-based object strategies are also subject to change from region to region (for example, when the UI is localized), so it is best to avoid them as much as possible
While creating new tests, try to pick the strategy that is the least likely to be impacted by any application change.
Manage Test Data Carefully
When it comes to automation, valid test data is essential. When designing automated test flows, be careful to avoid test data mismanagement, as it can lead to unstable and inaccurate test results.
While working with test data in LEAPWORK, you should always ensure the following:
- Test data should be part of the main test flow only. Sub-flows should not have test data
- Data should be passed to sub-flows as parameters; this helps increase sub-flow reusability
- Ideally, you should create test data first, then execute the test, and delete the generated test data at the end of the test
The other crucial aspects of test data are its quality, consistency, and availability. We therefore recommend always ensuring that data is ready before you commence test automation.
Stop Testing if Test Failure Reaches a Certain Threshold
If your test environment is not working or there is a bug in the application that does not allow you to test properly, then it makes sense to stop or skip the execution of your automated tests to save resources and ensure the team can take corrective actions quickly. This can be achieved using LEAPWORK Schedules.
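LEAPWORK Schedules handle this for you, but the threshold logic itself is easy to sketch in generic code: stop scheduling further tests once failures cross a limit (the function and parameter names are illustrative assumptions):

```python
def run_suite(tests, failure_threshold=3):
    """Run tests in order, aborting once failures reach the threshold.

    tests is a list of callables returning True on pass. Returns the
    results gathered before the abort, so the team can act quickly.
    """
    results, failures = [], 0
    for test in tests:
        if failures >= failure_threshold:
            break  # environment is likely broken; save resources
        passed = test()
        results.append(passed)
        if not passed:
            failures += 1
    return results

# Three early failures stop the remaining five tests from running.
outcome = run_suite([lambda: False] * 3 + [lambda: True] * 5,
                    failure_threshold=3)
```

Aborting early turns a wall of red results into one clear signal that the environment, not the tests, needs attention.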
Handle Test Dependencies Using Run Lists
There are situations where a test flow can only run after another flow has been executed. For example, an order validation test flow may only run after the order creation test flow has completed. These types of test dependencies can be handled by placing the flows in different steps within a LEAPWORK run list. Please refer to the article on working with run lists.
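Conceptually, keeping dependent flows in separate, ordered steps means running them sequentially and only proceeding when the previous step passed. A generic sketch of that ordering (the flow names and callables are hypothetical):

```python
def run_list(steps):
    """Execute steps in order; skip the rest as soon as a step fails.

    steps is an ordered list of (name, flow) pairs, where flow is a
    callable returning True on pass. Returns the names that completed.
    """
    completed = []
    for name, flow in steps:
        if not flow():
            break  # e.g. no point validating an order that was never created
        completed.append(name)
    return completed

# 'validate_order' only runs after 'create_order' has passed.
done = run_list([("create_order", lambda: True),
                 ("validate_order", lambda: True)])
```

Ordering the steps makes the dependency explicit instead of hiding it inside the flows themselves, which keeps each flow independently maintainable.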
Test Fast and Often
Reducing time to market is one of the key objectives of test automation. This can only be achieved by getting fast feedback and fixing bugs as soon as possible, hence it's important to test faster and as frequently as possible.
Testing faster means running more test execution cycles. If a cycle takes a long time due to large test suites, you can set up more parallel runs to get feedback to developers faster. The faster the feedback, the sooner bugs are caught in the initial phase, which means less time and cost spent fixing them, and thus a faster time to market.
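Parallel runs shorten the feedback cycle because total wall-clock time approaches that of the slowest suite rather than the sum of all suites. A generic sketch of the idea using Python's standard library (the suites here are trivial stand-ins for real test runs):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(suites, max_workers=4):
    """Run independent test suites concurrently and collect results.

    suites is a list of callables returning True on pass. With enough
    workers, feedback arrives after the slowest suite finishes, not
    after the sum of all suite durations.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves the input order of the suites in the results
        return list(pool.map(lambda suite: suite(), suites))

results = run_parallel([lambda: True, lambda: True, lambda: False])
```

Note that parallelism only pays off when suites are independent, which is one more reason to keep flows free of dependencies on each other's data or output.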
In the ideal scenario, all builds deployed by developers should be tested by automated test suites so that you have an up-to-date quality status on all builds and you can take corrective measures immediately if you see a critical test failing.
We have based these best practices on our experience implementing test automation at customer locations, as well as the experience of our partners. Whilst we cannot state that these practices are valid in all cases, many of our customers have already used them to successfully implement test automation within their organization. Please note that over time, these best practice guides might evolve alongside our product.
If you require further clarification or assistance, please contact our Priority Support.