CHAPTER 6: Transitioning Manual Testing to an Automated Environment
(2) Identify Steps Needed to Implement Automation within Regression Testing
Test interdependency
Test preconditions
Data sharing
Functional overlap
SUT coverage
Test execution time
Executable tests
Frequency of test execution
Large regression test sets
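The selection factors above can be combined into a simple ranking of which regression tests to automate first. The following is a minimal illustrative sketch, not a prescribed method: the class, field names, and weights are all assumptions chosen to reflect a few of the listed factors (frequency of execution, execution time, and test interdependency).

```python
# Hypothetical sketch: ranking regression tests as automation candidates.
# Field names and weights are illustrative assumptions, not syllabus values.
from dataclasses import dataclass

@dataclass
class RegressionTest:
    name: str
    runs_per_release: int      # frequency of test execution
    manual_minutes: int        # manual execution time per run
    has_dependencies: bool     # test interdependency with other tests
    covers_stable_feature: bool

def automation_score(t: RegressionTest) -> float:
    """Higher score = better automation candidate (illustrative weighting)."""
    score = t.runs_per_release * t.manual_minutes  # effort saved per release
    if t.has_dependencies:
        score *= 0.5   # interdependent tests are harder to automate reliably
    if not t.covers_stable_feature:
        score *= 0.7   # unstable features raise automation maintenance cost
    return score

tests = [
    RegressionTest("login_smoke", 20, 5, False, True),
    RegressionTest("report_export", 4, 30, True, True),
]
ranked = sorted(tests, key=automation_score, reverse=True)
print([t.name for t in ranked])  # ['login_smoke', 'report_export']
```

A frequently run five-minute test outranks a rarely run, interdependent half-hour test here, which matches the intuition behind the factors in the list.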
(3) Factors to Consider when Implementing Automation within New Feature Testing
In general, it is easier to automate test cases for new functionality, as the implementation is not yet finished (or, better, not yet started).
As new features are introduced into an SUT, testers are required to develop new tests against these new
features and corresponding requirements.
Changes to the TAS must be evaluated against the existing automated testware components so that changes or additions are fully documented, and do not affect the behavior (or performance) of existing TAS functionality.
If a new feature is implemented with, for example, a different class of object, it may be necessary to make updates or additions to the testware components.
New test requirements may affect existing automated tests and testware components. Therefore, prior to making any changes, existing automated tests should be run against the new/updated SUT to verify proper operation of the existing automated tests and to record any changes.
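This pre-change verification amounts to comparing recorded baseline results against results from the new/updated SUT. A minimal sketch, assuming results are held as simple name-to-verdict mappings (the test names and verdicts are illustrative):

```python
# Hypothetical sketch: diffing baseline results of existing automated tests
# against their results on the new/updated SUT to spot behavior changes.
baseline = {"test_login": "pass", "test_export": "pass", "test_search": "pass"}
new_sut  = {"test_login": "pass", "test_export": "fail", "test_search": "pass"}

changed = {name: (baseline[name], new_sut.get(name))
           for name in baseline
           if new_sut.get(name) != baseline[name]}
print(changed)  # {'test_export': ('pass', 'fail')}
```

Any entry in `changed` flags an existing automated test whose behavior must be investigated before the new feature's tests are added.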
Finally, one needs to determine whether the existing TAS will continue to meet current SUT needs.
(4) Factors to Consider when Implementing Automation of Confirmation Testing
Confirmation tests are prime candidates for automation.
Automated confirmation tests can be incorporated into a standard automated regression suite or, where practical, subsumed into existing automated tests.
Tracking automated confirmation tests allows for additional reporting of the time and number of cycles expended in resolving defects.
Impact analysis may be required to determine the appropriate scope of regression testing.
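The reporting described above needs only a record of each confirmation run per defect. A minimal sketch, assuming hypothetical defect IDs and timings:

```python
# Hypothetical sketch: tracking confirmation-test cycles per defect so that
# time and number of cycles spent resolving defects can be reported.
from collections import defaultdict

runs = defaultdict(list)  # defect id -> list of (minutes, passed)

def record_confirmation_run(defect_id: str, minutes: float, passed: bool) -> None:
    runs[defect_id].append((minutes, passed))

record_confirmation_run("DEF-101", 3.0, False)  # first fix attempt failed
record_confirmation_run("DEF-101", 2.5, True)   # second attempt confirmed

num_cycles = len(runs["DEF-101"])
total_minutes = sum(m for m, _ in runs["DEF-101"])
print(num_cycles, total_minutes)  # 2 5.5
```

Aggregating these records across defects gives the additional reporting the text mentions: cycles and time per defect.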
(1) Criteria for Automation
Prior to commencing an automated testing effort, one needs to consider the applicability and viability of creating automated vs. manual tests. The suitability criteria may include, but are not limited to:
Maturity of test process
Suitability of automation for the stage of the software product lifecycle
Compatibility of tool support
Sustainability of the automated environment
Complexity to automate
Controllability of the SUT
Frequency of use
Technical planning in support of ROI analysis
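The ROI analysis mentioned in the last criterion often reduces to a break-even estimate: how many test runs before the cost of building automation is recovered. A minimal sketch with illustrative figures (the function name, parameters, and numbers are assumptions, not syllabus values):

```python
# Hypothetical sketch: break-even estimate supporting an ROI analysis.
def runs_to_break_even(build_cost_hrs: float,
                       manual_run_hrs: float,
                       automated_run_hrs: float,
                       maint_hrs_per_run: float) -> float:
    """Number of runs after which automation costs less than manual testing."""
    saving_per_run = manual_run_hrs - (automated_run_hrs + maint_hrs_per_run)
    if saving_per_run <= 0:
        return float("inf")  # automation never pays back its build cost
    return build_cost_hrs / saving_per_run

# 40 h to build; manual run 6 h; automated run 0.5 h plus 1.5 h maintenance:
print(runs_to_break_even(40, 6, 0.5, 1.5))  # 10.0
```

With these figures, automation pays for itself after ten runs; if maintenance cost per run exceeds the manual saving, the function returns infinity, signalling that the test is a poor automation candidate on cost grounds alone.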
Prior to incurring the time and effort to develop automated tests, an assessment should be conducted of the intended and potential overall benefits and outcomes of implementing test automation.
Transitioning requires:
Education of the test team on the paradigm shift
Roles and responsibilities
Scope of the test automation effort
Cooperation between developers and test automation engineers
Correctness of test data and test cases
Parallel effort
Availability of tools in the test environment for test automation
Test automation reporting