CTFL - Chapter 5: Managing the Test Activities
Test Planning
Breakdown
A test strategy or a test policy provides a generalized description of the test process, usually at the product or organizational level
The test approach is the implementation of the test strategy for a specific project or release
Definition
The project plan for the testing work to be done in development or maintenance projects
Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)
Communication
Assumptions & constraints of the test project
Risk register (e.g., product risk, project risk)
Context of testing (Define the test objectives - Testing scope - Test basis)
Budget & schedule
Purpose
Means for communication with team members & other stakeholders.
List of tasks & milestones in a baseline plan to track progress, as well as defining the shape & size of the test effort.
Identify & agree on the objectives of the testing.
Agile planning vs. traditional planning
Agile plan
Done at a high level at the beginning of the project
Refined throughout the project, allowing changes & regular feedback to be factored in
The first level is the product vision, where we have a very high-level view of what we want to build, along with a high-level budget & schedule to release the product.
The second level is product planning, where we have a big picture of all the user stories & the roadmap of what to deliver in each release until we reach the final desired product
Release planning & iteration planning
Release planning
A release is a group of iterations that results in the completion of a valuable deliverable on the project.
The product owner uses the release plan to provide visibility to all project people by telling them what to expect & when
After release planning is done, iteration planning for the first iteration starts
The release plan is generally outlined by the product owner
A release often takes a few months, typically 3 to 9 months
Release planning defines & redefines the product backlog & may involve refining larger user stories into a collection of smaller stories.
Project & quality risks are identified based on user stories & high-level effort estimation is performed
Iteration planning
An iteration is a short development period, typically two to four weeks in duration.
The release plans have no details other than a list of stories to be done by a specific date
Focus on the subset of the release plan stories that will be done in the next iteration or sprint & nothing beyond
During the iteration planning meeting (delivery team, testers, the scrum master, the product owner & other relevant stakeholders), the product owner comes with a backlog of user stories prioritized based on business value & describes the iteration goal
The business representatives must answer the team's questions about each story so the team can understand what they should implement & how to test each story
Tester's Contribution to Iteration & Release Planning
Release Planning
Estimate Test Effort Associated with User Stories
Determine the Test Approach
Participate in Project & Quality Risk Analyses
Plan the Testing for the Release
Participate in Writing Testable User Stories & Acceptance Criteria
Iteration Planning
Determine the Testability of User Stories
Break Down User Stories into Tasks
Participate in the Detailed Risk Analysis of User Stories
Estimate Test Effort for All Testing Tasks
Identify & Refine Functional & Non-functional Aspects of the Test Object
Entry Criteria and Exit Criteria
Entry Criteria
Preconditions
for undertaking a given test activity
Availability of test data and other necessary resources
Availability of budget
Availability of necessary test tools
Acceptable initial quality level of a test object
Availability of testable requirements, user stories, and/or test cases
Exit Criteria (Definition of Done)
Determine when a given test activity has been completed or when it should stop
The defined level of coverage has been achieved
All defects found are reported
The evaluated levels of quality characteristics are sufficient
All regression tests are automated
The number of unresolved defects is within an agreed limit
Static testing has been performed
Define what conditions must be achieved to declare a test level or a set of tests completed
Estimation Techniques
Estimation Categories
Metric-based Techniques
Collect data from previous projects or even previous iterations & derive the estimate from such data
The accuracy of this technique will depend heavily on the accuracy of the collected data
Types
Estimation Based on Ratios
Project effort: 1000, Testing effort: 300 (ratio 0.3) => Project effort: 2000, Testing effort: 600
Extrapolation
May extrapolate the test effort in the forthcoming iteration as the average effort from the last three iterations.
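The ratio and extrapolation techniques above can be sketched as follows (the functions and numbers are hypothetical, for illustration only):

```python
# Metric-based estimation: a minimal sketch with hypothetical numbers.

def estimate_by_ratio(project_effort, historical_ratio):
    """Derive test effort from a historical testing-to-project effort ratio."""
    return project_effort * historical_ratio

def estimate_by_extrapolation(recent_efforts):
    """Estimate the next iteration's test effort as the average of recent iterations."""
    return sum(recent_efforts) / len(recent_efforts)

# Historical data: 300 testing hours out of 1000 project hours -> ratio 0.3
ratio = 300 / 1000
print(estimate_by_ratio(2000, ratio))           # 600.0 for a 2000-hour project
print(estimate_by_extrapolation([40, 50, 60]))  # 50.0 hours for the next iteration
```

Note how the ratio estimate is only as good as the historical data it is derived from, mirroring the accuracy caveat above.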
Expert-based Techniques
Depend on using the experience of some stakeholders (business experts, test process consultants, developers, analysts & designers) to derive an estimate
Types
Wideband Delphi
Select a team of experts and provide each with a description of the problem to be estimated
Each expert provides an estimate of the effort in isolation (a breakdown of the problem into a list of tasks & an effort estimate for each task)
Estimates are shared, & each expert makes a new estimate based on the feedback, again in isolation
This process is repeated until a consensus is reached
Three-point Estimation
"O": Optimistic estimate, the best-case scenario
"M": Most likely estimate, which falls between the optimistic & pessimistic estimates
"P": Pessimistic estimate, the worst-case scenario
"E": The final estimate is the weighted arithmetic mean E = (O + 4*M + P) / 6, with standard deviation SD = (P - O) / 6
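The three-point formula in code (the example task and its O/M/P values are hypothetical):

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """PERT-style three-point estimate: weighted mean E and standard deviation SD."""
    e = (optimistic + 4 * most_likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6
    return e, sd

# Hypothetical task: O = 2 days, M = 4 days, P = 12 days
e, sd = three_point_estimate(2, 4, 12)
print(e)   # 5.0 days; SD is about 1.67 days
```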
Benefits
Estimate other elements like the time needed, the number of resources, & the budget needed
Estimate the effort needed to execute the plan
Test Case Prioritization
The test execution schedule - Factors
Confirmation tests
Regression tests
Dependencies
The most efficient sequence for executing the tests (Add record -> Modify record -> Print record -> Delete record)
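The dependency factor above can be handled programmatically; a minimal sketch using Python's standard `graphlib` (the test names follow the record example, but the dependency map itself is an assumption):

```python
# Dependency-aware scheduling sketch: a topological sort so each test runs
# only after the tests it depends on.
from graphlib import TopologicalSorter  # Python 3.9+

# Each test maps to the set of tests that must run before it.
deps = {
    "Modify record": {"Add record"},
    "Print record":  {"Modify record"},
    "Delete record": {"Print record"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['Add record', 'Modify record', 'Print record', 'Delete record']
```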
Prioritization
Risk-based Prioritization
The idea is to identify & prioritize test cases that address the most critical or high-impact risks
Test cases covering the most important risks are executed first (e.g., payment-process tests are higher priority since they address the critical risk of payment failure)
The order of test execution is determined by the results of a risk analysis
Coverage-based Prioritization
The goal is to maximize coverage by executing test cases that achieve the highest coverage first.
Test cases that achieve the highest coverage are executed first.
Requirements-based Prioritization
This ensures that critical functionalities or features are thoroughly tested early in the testing process.
Test cases related to the most important requirements, as defined by stakeholders, are executed first.
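Risk-based prioritization reduces to sorting test cases by the risk score (likelihood x impact) of the risk each one covers; a sketch with hypothetical test names and scores:

```python
# Risk-based prioritization sketch: highest risk score runs first.
# Test case names, likelihoods, and impacts are illustrative assumptions.
test_cases = [
    {"name": "print_record", "likelihood": 0.2, "impact": 2},
    {"name": "payment_flow", "likelihood": 0.6, "impact": 9},
    {"name": "login",        "likelihood": 0.4, "impact": 7},
]

ordered = sorted(test_cases,
                 key=lambda tc: tc["likelihood"] * tc["impact"],
                 reverse=True)
print([tc["name"] for tc in ordered])
# ['payment_flow', 'login', 'print_record']
```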
Test Pyramid
Characteristics
Typical test levels are unit, integration, system, & acceptance, from the base of the pyramid to the top
The pyramid layers represent groups of tests
Test Granularity
A large number of tests at the lower level, & as development moves to the upper levels, the number of tests decreases
At the base of the pyramid, unit tests have a fine granularity, are small in size, & focus on testing individual components or units of code in isolation. Each unit test typically targets a specific function or method
At the middle of the pyramid, integration tests have a coarser granularity compared to unit tests. They focus on verifying the interactions & collaborations between units or components
End-to-end or UI tests have the coarsest granularity, as they simulate end-to-end user interactions with the application. These tests validate the system's overall behavior from the user's perspective.
Test Isolation
The tests shouldn't rely on external systems, databases, or services
Mocks and stubs often isolate the unit under test from its dependencies
Integration tests involve multiple components & may interact with external systems such as databases or APIs
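A minimal sketch of test isolation, assuming a hypothetical `checkout` function and using Python's `unittest.mock` to stub the external payment gateway so the test never touches a real service:

```python
# Test isolation sketch: the unit under test (checkout) is cut off from its
# external dependency (a payment gateway) by a stubbed Mock object.
from unittest.mock import Mock

def checkout(cart_total, gateway):
    """Charge the given total via the gateway; return True on success."""
    response = gateway.charge(cart_total)
    return response == "ok"

gateway = Mock()
gateway.charge.return_value = "ok"   # stub: canned answer, no real call

assert checkout(100, gateway) is True
gateway.charge.assert_called_once_with(100)  # verify the interaction
```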
Test Execution Time
Run tests frequently during development to ensure that individual units of code behave as expected
Integration tests take longer to execute than unit tests but are generally faster than UI tests
Unit tests at the base of the pyramid are fast to execute since they don't involve the complexities of broader system interactions
End-to-end tests are the slowest to execute due to their end-to-end nature
Test Automation
Answers how much automation effort should be allocated at each level, since different levels of test automation support different goals
According to the shape of the pyramid, unit & integration level tests are automated & are created using API-based tools
At the top levels, the automated tests are created using GUI-based tools.
Benefits
Optimize Feedback Loop
Unit tests provide rapid feedback to developers, enabling them to iterate quickly
Maintainability
Unit tests contribute to code maintainability by acting as documentation for how individual units of code are expected to behave
Early Detection of Defects
Unit tests, being fine-grained & executed frequently, catch defects early in the development process when they are less costly to fix.
Cost-Efficiency
Unit tests are less expensive to write & maintain compared to higher-level tests
Focus on Integration
The middle layer of the pyramid represents Integration Tests, which ensure that different components or modules work together as expected.
End-to-End Confidence
UI tests, while slower, provide confidence that the entire system, including the user interface, is functioning correctly from the user's perspective
Scalability
The Test Pyramid is scalable, accommodating a larger number of unit tests, a moderate number of integration tests, & a smaller number of UI tests.
Testing Quadrants
Characteristics
Quadrant 1: Technology facing tests that support the team <Unit & Automation Testing>
Component Testing
Integration Testing
Unit Testing
Quadrant 2: Business facing tests that support the team <Automation & Manual Testing>
Examples
Story Tests
Functional Test
Prototypes
Simulations
Product Behavior
Apply dynamic testing rather than static testing
Quadrant 3: Business facing tests that critique the product <Manual Testing>
Scenarios
Usability Testing
Exploratory
User Acceptance Testing
Alpha/Beta Testing
Realistic Scenarios & Data
Quadrant 4: Technology facing tests that critique the product <Special Tools>
Load Testing
Security Testing
Performance Testing
Scalability Testing
Reliability Testing
Benefits
Balanced Testing Strategy
Alignment with Agile Principles
Comprehensive Test Coverage
Improved Communication & Collaboration
Early & Continuous Testing
Risk Definition and Risk Attributes
Risk
Involves the possibility of an event in the future that has negative consequences
The level of risk is determined by the likelihood of the event and the impact (the harm) from that event
Decide where & when to start testing & identify areas that need more attention
Risk Management Activities Approach
Analyze what can go wrong (risks)
Determine which risks are important to deal with
Implement actions to mitigate those risks
Make contingency plans to deal with the risks should they become actual events
Types
Product Risks <Software>
Definition
Product risk involves the possibility that a work product (e.g., a specification, component, system, or test) may fail to satisfy the legitimate needs of its users &/or stakeholders
Characteristics
Associated with specific quality characteristics of a product (e.g., functional suitability, reliability, performance efficiency, usability, security)
E.g. The software might not perform its intended functions according to the specification
E.g. Missing or wrong functionality
E.g. User experience (UX) feedback might not meet product expectations
Product Risk Analysis
Risk Identification
Brainstorming
A facilitator leads the team & helps turn their ideas into a list of risks
Interviews
Try to find everyone who might have an opinion & ask them what could cause trouble for the project
Risk workshops
Do some workshops where both the development team & the customer representatives work together to come up with risks.
Risk templates & Checklists
A huge list of questions built up over the years. Those questions are good mind openers for situations revealing future risks
Calling on past experience
Having previously worked on projects in the same application domain as the new one, with the same client, the same tools, & the same process.
Cause-effect diagrams
Some tools & diagrams can be used to help find the possible risks in specific situations
Risk Characterization Factors
Risk likelihood - the probability of the risk occurrence (greater than zero & less than one)
Risk impact (harm) - the consequences of this occurrence
Risk Analysis
Level of risk = Probability of the risk x Impact if it did happen
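The risk level formula in code (the 1-10 impact scale and the example numbers are assumptions for illustration):

```python
def risk_level(likelihood, impact):
    """Quantitative risk level = probability of occurrence x impact of occurrence."""
    if not (0 < likelihood < 1):
        raise ValueError("likelihood must be greater than zero & less than one")
    return likelihood * impact

# Hypothetical risk: 30% chance of a payment outage with impact 10 (1-10 scale)
print(risk_level(0.3, 10))  # 3.0
```

Ranking risks by this score is what drives the risk-based test prioritization described earlier.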
Product Risk Control
Risk Response
Mitigate
You'll lower the risk level
Transfer
Move the risk from your side to another party
Avoid
Take action so the risk cannot occur, making the risk level 0
Accept
Passively accept by simply waiting for the risk to happen & deciding what to do then, or actively accept by putting a contingency plan in place
Project Risks <Activities>
Definition
Project risk involves situations that, should they occur, may have a negative effect on a project's ability to achieve its objectives
Characteristics
Organizational Issues
Delays may occur in delivery, task completion, or satisfaction of exit criteria or definition of done
Inaccurate estimates
People Issues
Skills, training, & staff may not be sufficient
Personnel issues may cause conflicts & problems
Impact on the project schedule, budget or scope, which affects the project's ability to achieve its objectives.
Test Monitoring, Test Control and Test Completion
Test Control
Purpose
Re-evaluating whether a test item meets entry or exit criteria due to rework
Change the test schedule due to availability or unavailability of a test environment or other resources
Re-prioritizing tests when an identified risk occurs
Test Completion
Occurs at
The end of a test project
An agile iteration
The end of a maintenance release
Test Monitoring
Purpose
Gather information & provide feedback & visibility about test activities
Measure whether the test exit criteria or the testing tasks associated with an Agile project's definition of done are satisfied.
Metrics Used in Testing
Project Progress Metrics
Measures the progress of tasks in the testing project, indicating how much work has been completed
Tracks the overall effort invested in the testing process, helping assess efficiency & productivity
Test Progress Metrics
Test Case Implementation Progress: Indicates the completion status of test cases design & development
Number of Test Cases Run/Not Run: Tracks the execution status of planned test cases
Test Metrics
Show progress against the test plan
Product Quality Metrics
Availability: Reflects the system's uptime & accessibility
Response time: Measures the time taken for the system to respond to a user request
Mean time to Failure: Calculates the average time until a failure occurs, providing insights into system reliability
Test Reports
Definition
Summarize & communicate test activity information to project stakeholders, both during & at the end of a test activity
Help with the ongoing control of the testing process
Purposes
Help stakeholders understand & analyze the results of a test period.
Assure that the original test plan will lead us to achieve our testing goals.
Notify project stakeholders about test results & exit criteria status.
Types
Test progress reports
A test report prepared during a test activity may be referred to as a test progress report
Support the ongoing control of the testing
Provide enough information to make modifications to the test schedule, resources, or test plan when such changes are needed due to deviation from the plan or changed circumstances
Key Components
Test Progress: Summarizes the overall progress of testing activities; highlights any significant deviations from the planned schedule or milestones
Impediments for Testing & Their Workarounds: Lists obstacles, challenges, or issues that have affected or could potentially affect testing progress
Test Period: The time frame covered by the report, indicating when testing activities took place
Test Metrics: Metrics related to test execution, defect counts, coverage, & other relevant indicators
New & Changed Risks Within the Testing Period: Identifies any new risks that have emerged during the testing period
Testing Planned for the Next Period: Outlines the testing activities planned for the next reporting period
Test completion reports
A comprehensive document that summarizes the testing effort & outcomes of a software testing project, prepared during test completion when a project, test level, or test type is complete
The test manager issues the test completion report when the exit criteria are reached
Provides a summary of the testing performed based on the latest test progress report & any other relevant data
Test Summary
Briefly outlines the purpose & scope of the testing activities
Defines the features, functionalities, or areas covered during testing
Provide the time frame during which testing occurred
List of members of the testing team involved in the process
Communicating the Status of Testing
Verbal Communication
Audience Interaction: Engaging in face-to-face or virtual meetings to verbally communicate the test status is a common & effective method
Allows for immediate feedback, clarification, & the opportunity to address any questions or concerns team members or stakeholders may have
Dashboards
Visual Representation: Dashboards provide a visual representation of key metrics & progress
(CI/CD) dashboards can display build & deployment status, task boards showcase work progress
A quick & easily understandable snapshot of the project's health
Electronic Communication Channels
Real-Time Updates: Utilizing electronic channels such as email or chat enables real-time updates
Useful for conveying brief, time-sensitive information, discussing issues, or coordinating activities
Online Documentation
Comprehensive Information: Online documentation serves as a comprehensive repository of information related to test status
Accessible to team members & stakeholders, online documentation provides a detailed view of the testing process & outcomes
Management
Configuration Management
Purposes
Establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle
All configuration items, including test items & testware, are uniquely identified, version controlled, tracked for changes, & related to each other to maintain traceability throughout the test process.
E.g., in DevOps, automated configuration management is part of the automated DevOps pipeline (continuous integration, continuous delivery, continuous deployment)
Defect Management
Purposes
Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue.
Provide a means of tracking the quality of the work product.
Provide ideas for improvement of the development and test process
Include
Title with a short summary of the anomaly being reported
Date when the anomaly was observed, issuing organization, and author, including their role
Unique identifier
Identification of the test object and test environment
Context of the defect
Expected results and actual results
Severity of the defect on the interests of stakeholders or requirements
Status of the defect
References
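The defect report fields listed above can be captured in a simple record; a sketch in which the field names and the example values are illustrative, not a standard schema:

```python
# Defect report sketch: a dataclass holding the fields a report should include.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    identifier: str                  # unique identifier
    title: str                       # short summary of the anomaly
    observed_on: date                # date the anomaly was observed
    author: str
    author_role: str
    test_object: str
    test_environment: str
    context: str                     # context of the defect
    expected_result: str
    actual_result: str
    severity: str                    # impact on stakeholders or requirements
    status: str = "open"             # status of the defect
    references: list = field(default_factory=list)  # e.g., related test cases

report = DefectReport(
    identifier="DEF-001",
    title="Invoice total not updated after record deletion",
    observed_on=date(2024, 1, 15),
    author="A. Tester", author_role="Tester",
    test_object="Billing module", test_environment="staging",
    context="Deleted the last record of an open invoice",
    expected_result="Invoice total recalculated",
    actual_result="Invoice total unchanged",
    severity="major",
)
print(report.status)  # open
```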