CHAPTER 5: Test Automation Reporting and Metrics
(4) Test Automation Reporting
Test logs give detailed information about the execution steps, actions and responses of a test case and/or test suite, but the logs alone cannot provide a good overview of the overall execution result.
Content of the reports
A test execution report must contain a summary giving an overview of the execution results, the system being tested and the environment in which the tests were run, presented in a way that is appropriate for each of the stakeholders.
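A minimal sketch of how such a summary could be represented before it is rendered for different stakeholders; the class name, field names and example values are assumptions, not part of the source material:

```python
from dataclasses import dataclass

@dataclass
class ExecutionSummary:
    """Hypothetical summary block of a test execution report."""
    sut_name: str        # the system being tested
    sut_version: str
    environment: str     # the environment in which the tests were run
    total: int
    passed: int
    failed: int
    skipped: int

    @property
    def pass_rate(self) -> float:
        return self.passed / self.total if self.total else 0.0

# Example values (purely illustrative)
summary = ExecutionSummary("web-shop", "2.4.1", "staging, Ubuntu 22.04",
                           total=120, passed=112, failed=6, skipped=2)
print(f"{summary.sut_name} {summary.sut_version} on {summary.environment}: "
      f"{summary.pass_rate:.0%} passed")
```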
Publishing the reports
The report should be published for everyone interested in the execution results.
(2) Implementation of Measurement
Features of automation that support measurement and report generation
Reporting on a series of test runs needs an analysis feature that takes the results of previous test runs into account, so that it can highlight trends (such as changes in the test success rate).
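A minimal sketch of such an analysis, assuming each run's results are available as simple pass/total counts (the storage format and run names are assumptions):

```python
# Hypothetical history of test runs: (run id, passed, total), oldest first.
history = [("run-41", 95, 100), ("run-42", 91, 100), ("run-43", 85, 100)]

def success_rates(runs):
    """Success rate per run, so trends across a series of runs become visible."""
    return [(run_id, passed / total) for run_id, passed, total in runs]

rates = success_rates(history)
for (run_id, rate), (_, previous) in zip(rates[1:], rates):
    trend = "down" if rate < previous else "up or stable"
    print(f"{run_id}: {rate:.0%} success rate ({trend} versus previous run)")
```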
Distinguishing between expected and unexpected differences in the actual and expected outcomes of a test is not always trivial. Tool support can help greatly in defining comparisons that ignore the expected differences while highlighting any unexpected differences.
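For example, a comparison could first mask fields that are expected to differ (such as timestamps or generated IDs) and only then report differences; the masking patterns below are assumptions about a particular SUT:

```python
import re

# Differences we expect and want to ignore (assumed rules for this SUT).
EXPECTED_DIFFERENCES = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
    (re.compile(r"session id: \w+"), "session id: <ID>"),
]

def normalize(text):
    """Replace expected differences with placeholders so only unexpected ones remain."""
    for pattern, placeholder in EXPECTED_DIFFERENCES:
        text = pattern.sub(placeholder, text)
    return text

def outcomes_match(expected, actual):
    return normalize(expected) == normalize(actual)

print(outcomes_match("order ok at 2024-01-02 10:00:00",
                     "order ok at 2024-01-03 11:30:45"))    # True: only expected differences
print(outcomes_match("order ok", "order rejected"))          # False: unexpected difference
```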
Integration with other third party tools
When information from the execution of automated test cases is used in other tools, it should be possible to provide the information in a format that is suitable for these third-party tools.
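One widely used option is JUnit-style XML, which many continuous integration and reporting tools can import; a sketch using only the standard library (the result data is hypothetical):

```python
import xml.etree.ElementTree as ET

# Hypothetical results collected by the TAS: (test name, verdict, message).
results = [("login_ok", "pass", ""),
           ("checkout_total", "fail", "expected 10.00, got 9.99")]

suite = ET.Element("testsuite", name="regression",
                   tests=str(len(results)),
                   failures=str(sum(1 for _, verdict, _ in results if verdict == "fail")))
for name, verdict, message in results:
    case = ET.SubElement(suite, "testcase", name=name)
    if verdict == "fail":
        ET.SubElement(case, "failure", message=message)

ET.ElementTree(suite).write("results.xml", encoding="utf-8", xml_declaration=True)
```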
Visualization of results
Test results should be made visible in charts.
Management is particularly interested in visual summaries that show the test results at a glance; if more information is needed, they can still drill down into the details.
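A small charting sketch, assuming matplotlib is available and reusing the hypothetical counts and success rates from the examples above:

```python
import matplotlib.pyplot as plt

# Hypothetical counts for the latest run and success rates of recent runs.
counts = {"passed": 112, "failed": 6, "skipped": 2}
runs, rates = ["run-41", "run-42", "run-43"], [0.95, 0.91, 0.85]

fig, (left, right) = plt.subplots(1, 2, figsize=(8, 3))
left.pie(list(counts.values()), labels=list(counts.keys()), autopct="%1.0f%%")
left.set_title("Latest run")
right.plot(runs, rates, marker="o")
right.set_title("Success rate trend")
right.set_ylim(0, 1)
fig.tight_layout()
fig.savefig("results_overview.png")   # the image can be embedded in the published report
```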
(1) Selection of TAS Metrics
External TAS metrics
Ratio of failures to defects
Time to execute automated tests
Effort to maintain automated tests
Number of automated test cases
Effort to analyze automated test incidents
Number of pass and fail results
Effort to build automated tests
Number of false-fail and false-pass results (see the sketch after this list)
Automation benefits
Code coverage
Internal TAS metrics
Automation code defect density
Speed and efficiency of TAS components
Tool scripting metrics
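As an illustration of how some of the external metrics above could be collected, assuming each automated verdict can be compared against a manually confirmed verdict (the data layout is an assumption):

```python
# Hypothetical per-test data: (automated verdict, manually confirmed verdict).
results = [("pass", "pass"), ("fail", "fail"), ("fail", "pass"), ("pass", "fail")]

automated_failures = sum(1 for auto, _ in results if auto == "fail")
false_fails = sum(1 for auto, confirmed in results if auto == "fail" and confirmed == "pass")
false_passes = sum(1 for auto, confirmed in results if auto == "pass" and confirmed == "fail")

print(f"pass/fail results: {len(results) - automated_failures}/{automated_failures}")
print(f"false-fail results: {false_fails}, false-pass results: {false_passes}")
```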
(3) Logging of the TAS and the SUT
Logging is very important in the TAS, including logging for both the test automation itself and the SUT.
Important logging features include:
The TAS should log the expected and actual behavior
The TAS should log the actions to be performed
SUT logging and TAS logging should be synchronized
TAS logging (see the sketch after this list)
In the case of reliability testing / stress testing (where numerous cycles are performed), a cycle counter should be logged.
When test cases have random parts (e.g., random parameters, or random steps in state-machine testing), the random numbers/choices should be logged so the run can be reproduced.
Dynamic information about the SUT (e.g., memory leaks) that the test case was able to identify with the help of third-party tools.
Screenshots and other visual captures can be saved during test execution for further use during failure analysis.
Details should be logged at a high level (logging significant steps), including timing information.
All actions a test case performs should be logged in such a way that the log file (or parts of it) can be played back to re-execute the test with exactly the same steps and the same timing.
The status of the test case execution should be logged because, while failures can easily be identified in log files, the framework itself should also have this information and should report it via a dashboard.
The TAS should make sure that all information needed to analyze the problem is available/stored.
Which test case is currently under execution, including start and end time
Use of color can help to distinguish different types of logged information
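A consolidated sketch of several of the TAS logging points above, using Python's standard logging module; the wrapper name, log layout and example test content are assumptions, not a prescribed framework:

```python
import logging
import random

logging.basicConfig(filename="tas.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")  # timing information on every entry
log = logging.getLogger("tas")

def run_test_case(name, cycles=1, seed=None):
    """Hypothetical wrapper that adds the logging discussed above around a test case."""
    seed = seed if seed is not None else random.randrange(2**32)
    random.seed(seed)                                    # random choices become reproducible
    log.info("START %s (seed=%s)", name, seed)           # start of execution, with the seed
    for cycle in range(1, cycles + 1):
        log.info("%s cycle %d/%d", name, cycle, cycles)  # cycle counter for reliability/stress runs
        expected, actual = "order accepted", "order accepted"   # placeholder test step
        log.info("action: submit order | expected: %s | actual: %s", expected, actual)
        if expected != actual:
            log.error("%s FAILED in cycle %d", name, cycle)     # status also available to a dashboard
            return "fail"
    log.info("END %s: PASSED", name)
    return "pass"

run_test_case("checkout_smoke", cycles=3)
```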
SUT logging (see the sketch after this list):
The SUT can log all user interaction (directly via the available user interface, but also via network interfaces, etc.).
At startup of the system, configuration information should be logged to a file, consisting of the different software/firmware versions, configuration of the SUT, configuration of the operating system, etc.
When the SUT identifies a problem, all necessary information needed to analyze the issue should be logged, including date and time stamps, source location of the issue, error messages, etc.
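A small sketch of the SUT side: logging configuration information at startup and recording identified problems with the details needed for analysis; the version strings and module names are assumptions:

```python
import logging
import platform

logging.basicConfig(filename="sut.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s %(message)s")  # date and time stamps
log = logging.getLogger("sut")

SOFTWARE_VERSION = "2.4.1"   # hypothetical build identifiers
FIRMWARE_VERSION = "fw-1.7"

def log_startup_configuration():
    """Record software/firmware versions and operating system configuration at startup."""
    log.info("software=%s firmware=%s", SOFTWARE_VERSION, FIRMWARE_VERSION)
    log.info("os=%s %s machine=%s", platform.system(), platform.release(), platform.machine())

def report_problem(source, message):
    """Log an identified problem with its source location and error message."""
    log.error("problem in %s: %s", source, message)

log_startup_configuration()
report_problem("payment.gateway", "timeout waiting for authorization response")
```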