A run list is a collection of one or more automation flows that are executed on one or more schedules. Execution of a run list is controlled by the Leapwork Controller. Run lists are used in many scenarios; the most common are:
- Executing larger collections of test flows against development and/or test environments
- Grouping the flows in steps and controlling the test execution order ("setup" and "teardown")
- Running one or more automation flows to trigger a business process on a recurring schedule
- Executing automation flows through a Continuous Integration and Delivery (CI/CD) pipeline
- Executing the same automation flows in different configurations, environments, machines, etc.
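For the CI/CD scenario, a pipeline job typically triggers a run list through the Controller's REST API. The sketch below only assembles the request; the host name, port, access key, and schedule ID are placeholders, and the `/api/v4/schedules/{id}/runNow` route is an assumption — consult your Leapwork REST API documentation for the exact endpoints and API version before using this in a pipeline.

```python
# Illustrative sketch of triggering a Leapwork run list from a CI/CD job.
# Host, port, access key, schedule ID, and the runNow route are placeholders/
# assumptions -- verify them against your Leapwork REST API documentation.

def build_run_now_request(controller: str, access_key: str, schedule_id: str) -> dict:
    """Assemble the URL and headers for a 'run now' call to the Controller."""
    return {
        "method": "PUT",
        "url": f"http://{controller}:9001/api/v4/schedules/{schedule_id}/runNow",
        "headers": {"AccessKey": access_key},
    }

request = build_run_now_request("leapwork-controller", "MY-ACCESS-KEY", "1a2b3c")
print(request["url"])
# The actual HTTP call (e.g. with the 'requests' library) would go here,
# inside the pipeline step, using the URL and headers built above.
```

A CI tool such as Jenkins or Azure DevOps would run this as a script step and then poll the Controller for the run result before deciding whether the build passes.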
Using the run list and scheduler features in Leapwork is straightforward, but in large projects it can become challenging if run lists are not organized properly. Here are some best practices to consider when creating run lists:
- Naming conventions used in run lists
- Organizing run lists
- Ordering flows in run lists
- Using setup and teardown
1. Naming conventions used in run lists
Naming conventions are key to readability and collaboration when creating run lists. Always give each run list a logical name, and make sure its steps are named correctly as well. This ensures everyone on the team can understand what the run list does, so it can easily be used when needed. It also helps to identify and filter test results quickly.
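A convention can be as simple as combining project, test type, and sprint into the run-list name. The `<Project>_<TestType>_<Sprint>` pattern below is purely illustrative and not something Leapwork enforces; adjust it to whatever your team agrees on.

```python
# Illustrative helper that builds run-list names from a hypothetical
# "<Project>_<TestType>_<Sprint>" convention. Leapwork does not enforce
# any naming scheme; this only shows how a consistent pattern keeps
# names readable and easy to filter in results.

def run_list_name(project: str, test_type: str, sprint: str) -> str:
    """Compose a consistent, filterable run-list name."""
    parts = [project.strip(), test_type.strip(), sprint.strip()]
    if not all(parts):
        raise ValueError("project, test_type, and sprint must be non-empty")
    return "_".join(parts)

print(run_list_name("WebShop", "Smoke", "Sprint-24"))  # WebShop_Smoke_Sprint-24
```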
2. Organizing run lists
Organizing run lists is as important as organizing flows in Leapwork: a run list provides an overview of what needs to be tested, defines the order of test execution, and helps generate execution reports. Here are a few best practices for organizing run lists:
Agile sprint run lists
The agile methodology is one of the most commonly followed methodologies in modern software development. Agile teams usually have bi-weekly sprint closures or releases, so one way to arrange run lists is to create a run list dedicated to a sprint. The run list can then include tests selected based on the features or change requests covered in that release or sprint. This approach is useful when we want to run only the tests relevant to a particular release, rather than the full test suite.
The image below shows an example of a sprint-focused execution/run list:
Run lists based on features under test
Enterprise projects are often implemented across multiple locations, which means features may be developed and tested by independent teams. These teams might therefore need to produce a feature-based test status. With Leapwork, users can create a feature-based run list to achieve feature-based testing and reporting. The image below shows an example of a feature-focused execution/run list:
Run lists based on test type
During application development, we can perform various types of functional testing, e.g. smoke, regression, and end-to-end testing. It is good to have these test types converted into run lists for easy understanding and to ensure all teams have access. This also helps to produce good test reports based on the test run type. The figure below shows an example of a test-type execution/run list:
Hybrid run lists
Enterprise projects need applications to be tested at multiple levels. For example, new features need to be tested at the feature level, and there is often multi-level testing such as smoke, build verification, and regression testing. Testing at the sprint/release level may also take place, so enterprises require a run-list organization that supports all of these scenarios. The figure below shows an example of a hybrid run list:
3. Ordering flows in run lists
The scheduler executes automated test cases in the order they are arranged in the run list, from the top down. We can rearrange the flow order in a run list by dragging and dropping the flows as required. See below:
4. Using setup and teardown
Some scenarios require test data to be set up before executing test cases. For example, if we are using virtual machines, these need to be up and running with the baseline data in place before our test cases are executed. This becomes the first step in our run list (see "Setup everything" in the screenshot below).
Then, after the tests are completed, the data that was created needs to be cleared. This clean-up activity ensures there is no data redundancy; alternatively, we may simply need to shut down virtual machines after all tests are completed. This can be achieved by implementing a "Teardown" step in our run list.
See here for an example of a setup and teardown step in a run list:
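Conceptually, a run list with setup and teardown behaves like the sketch below: the setup step runs first, the test flows run top-down, and teardown runs last even if a flow fails. The step names and functions are placeholders for illustration, not Leapwork APIs; in Leapwork these would simply be flows ordered in the run list.

```python
# Conceptual sketch of run-list execution order with setup and teardown.
# The steps are placeholder callables; in Leapwork they would be flows
# executed top-down by the Controller.

def execute_run_list(setup, flows, teardown):
    """Run setup once, then each flow in order; always run teardown."""
    executed = []
    setup(executed)
    try:
        for flow in flows:
            flow(executed)
    finally:
        teardown(executed)  # clean-up runs even if a flow raised an error
    return executed

log = execute_run_list(
    lambda log: log.append("Setup everything"),
    [lambda log: log.append("Test flow A"), lambda log: log.append("Test flow B")],
    lambda log: log.append("Teardown"),
)
print(log)  # ['Setup everything', 'Test flow A', 'Test flow B', 'Teardown']
```

The `try/finally` mirrors the guarantee we want from a run list: the teardown step should clear test data and shut down machines regardless of whether the test flows passed.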