test management
test organization
independent testing
a degree of independence makes the tester more effective at finding defects, due to differences between the author's and the tester's cognitive biases
Independent test team or group within the organization, reporting to project management or executive management
No independent testers; the only form of testing available is developers testing their own code
Independent testers from the business organization or user community, or with specializations in specific test types such as usability, security, performance, regulatory/compliance, or portability
Independent developers or testers within the development teams or the project team; this could be developers testing their colleagues’ products
Independent testers external to the organization, either working on-site (in-house) or off-site (outsourcing)
benefits of test independence
An independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system
Independent testers of a vendor can report in an upright and objective manner about the system under test, without (political) pressure from the company that hired them
Independent testers are likely to recognize different kinds of failures compared to developers, because of their different backgrounds, technical perspectives, and biases
drawbacks of test independence
Developers may lose a sense of responsibility for quality
Independent testers may be seen as a bottleneck
Isolation from the development team may lead to a lack of collaboration, delays in providing feedback to the development team, or an adversarial relationship with the development team
Independent testers may lack some important information (e.g., about the test object)
tasks of test manager & tester
tester
tasks
Prepare and acquire test data
Create the detailed test execution schedule
Design and implement test cases and test procedures
Execute tests, evaluate the results, document deviations from expected results
Design, set up, verify test environment(s), often coordinating with system administration & network management
Use appropriate tools to facilitate the test process
Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis (see the sketch after this list)
Automate tests as needed (may be supported by a developer or a test automation expert)
Analyze, review, assess requirements, user stories, acceptance criteria, specifications, models for testability (i.e., the test basis)
Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility, and portability
Review and contribute to test plans
Review tests developed by others
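To make the traceability task above concrete, here is a minimal Python sketch of an in-memory trace between the test basis, test conditions, and test cases; all class and field names (TestCase, TestCondition, basis_ref, etc.) are hypothetical, not from any specific tool.

    # Hypothetical in-memory traceability between test basis, test
    # conditions, and test cases (names are illustrative only).
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        case_id: str
        condition_id: str          # the test condition this case covers

    @dataclass
    class TestCondition:
        condition_id: str
        basis_ref: str             # test basis item, e.g. a requirement ID
        cases: list = field(default_factory=list)

    conditions = {"TC-01": TestCondition("TC-01", basis_ref="REQ-101")}
    conditions["TC-01"].cases.append(TestCase("CASE-001", "TC-01"))

    # Trace a test case back to the requirement it covers:
    case = conditions["TC-01"].cases[0]
    print(case.case_id, "->", conditions[case.condition_id].basis_ref)
    # CASE-001 -> REQ-101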
test manager
has overall responsibility for the test process and successful leadership of the test activities
the role may be performed by a professional test manager, or by a project manager, development manager, or quality assurance manager; in large projects or organizations, several test teams may report to a test manager, test coach, or test coordinator, each team being headed by a test leader or lead tester
tasks
Initiate the analysis, design, implementation, and execution of tests, monitor test progress & results, check the status of exit criteria (or definition of done), and facilitate test completion activities
Prepare and deliver test progress reports and test summary reports based on the information gathered
Adapt planning based on test results & progress (sometimes documented in test progress reports, and/or in test summary reports for other testing already completed on the project) and take any actions necessary for test control
Coordinate the test plan(s) with project managers, product owners, and others
Write and update the test plan(s)
Support setting up the defect management system and adequate configuration management of testware
Introduce suitable metrics for measuring test progress & evaluating the quality of the testing & the product
Support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tool(s)
Develop or review a test policy & test strategy for the organization
Plan the test activities by considering the context & understanding the test objectives & risks. Select test approaches, estimate test time, effort & cost, acquire resources, define test levels & test cycles, plan defect management
Share testing perspectives with other project activities, such as integration planning
Decide about the implementation of test environment(s)
Promote and advocate for the testers, the test team, and the test profession within the organization
Develop the skills and careers of testers (e.g., through training plans, performance evaluations, coaching, etc.)
The activities and tasks depend on the project and product context, the skills of the people in the roles, and the organization
Test Planning & Estimation
Test Strategy & Test Approach
test strategies
types
Directed (or consultative): is driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organization itself.
Methodical: relies on systematic use of some predefined set of tests or test conditions, such as a taxonomy of common or likely types of failures, a list of important quality characteristics, or company-wide look-and-feel standards for mobile apps or web pages.
Process-compliant (or standard-compliant): involves analyzing, designing, and implementing tests based on external rules and standards, such as those specified by industry-specific standards, by process documentation, by the rigorous identification and use of the test basis, or by any process or standard imposed on or by the organization.
Analytical: based on an analysis of some factor (e.g., requirements or risks).
Model-based: based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic (e.g., reliability). Examples of such models include business process models, state models, and reliability growth models.
Regression-averse: motivated by a desire to avoid regression of existing capabilities; includes reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites.
Reactive: testing is reactive to the component or system being tested and to the events occurring during test execution, rather than being pre-planned (as the preceding strategies are). Tests are designed and implemented, and may be executed immediately, in response to knowledge gained from prior test results.
test strategy provides a generalized description of the test process, usually at the product or organizational level.
test approach
tailors the test strategy for a particular project or release, based on decisions made in relation to the complexity & goals of the project, the type of product being developed, and product risk analysis
is the starting point for selecting test techniques, test levels, and test types, and for defining the entry criteria and exit criteria (or definition of ready and definition of done, respectively)
depends on the context and may consider factors such as risks, safety, available resources and skills, technology, the nature of the system (e.g., custom-built versus COTS), test objectives, and regulations.
Entry Criteria & Exit Criteria
Entry and exit criteria should be defined for each test level and test type, and will differ based on the test objectives
Entry Criteria:
types:
Availability of test items that have met the exit criteria for any prior test levels
Availability of test environment
Availability of testable requirements, user stories, and/or models
Availability of necessary test tools
Availability of test data and other necessary resources
If entry criteria are not met -> the activity is likely to prove more difficult, more time-consuming, more costly, and riskier
define the preconditions for undertaking a given test activity
Exit Criteria
types
A defined level of coverage (requirements, user stories, acceptance criteria, risks, code) has been achieved
The number of unresolved defects is within an agreed limit
Planned tests have been executed
The number of estimated remaining defects is sufficiently low
The evaluated levels of reliability, performance efficiency, usability, security, and other relevant quality characteristics are sufficient
define what conditions must be achieved in order to declare a test level or a set of tests completed
to exercise effective control over the quality of the software and of the testing, it is advisable to have criteria which define when a given test activity should start and when it is complete
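As a concrete illustration of such criteria, here is a minimal Python sketch of an automated exit-criteria check; the specific thresholds (90% coverage, at most 5 unresolved defects) are assumptions for the example, not values from the syllabus.

    # Hypothetical exit-criteria check for a test level; thresholds
    # are illustrative assumptions.
    def exit_criteria_met(coverage: float, unresolved_defects: int,
                          tests_planned: int, tests_executed: int) -> bool:
        return (coverage >= 0.90                     # agreed coverage achieved
                and unresolved_defects <= 5          # defects within agreed limit
                and tests_executed == tests_planned) # planned tests executed

    print(exit_criteria_met(coverage=0.95, unresolved_defects=3,
                            tests_planned=120, tests_executed=120))  # True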
Purpose & Content of a Test Plan
Test planning is a continuous activity and is performed throughout the product's lifecycle. (Note that the product’s lifecycle may extend beyond a project's scope to include the maintenance phase.)
Feedback from test activities is used to recognize changing risks -> adjust planning
Planning is documented in a master test plan and in separate test plans for test levels
Planning is influenced by the test policy, test strategy of the organization, development lifecycles, methods being used, scope of testing, objectives, risks, constraints, criticality, testability, availability of resources
Test planning activities
Integrate & coordinate test activities into the software lifecycle activities
Make decisions about what to test, the people and other resources required to perform the test activities, and how test activities will be carried out
Defining the overall test approach
Determine scope, objectives, risks of testing
Schedule of test analysis, design, implementation, execution, evaluation activities, either on particular dates (e.g., in sequential development) or in context of each iteration (e.g., in iterative development)
Selecting metrics for test monitoring and control
Budgeting for the test activities
Determine level of detail and structure for test documentation (e.g., by providing templates or example documents)
the test plan outlines test activities for development and maintenance projects
Factors Influencing the Test Effort
4 Factors
Product characteristics
complexity of the product domain
requirements for quality characteristics (security, reliability)
size of the product
required level of detail for test documentation
quality of the test basis
Requirements for legal and regulatory compliance
risks associated with the product
Test results
number and severity of defects found
amount of rework required
People characteristics
skills and experience of the people involved, especially with similar projects and products (domain knowledge)
Team cohesion and leadership
Development process characteristics
test approach
tools used
development model in use
test process
stability and maturity of the organization
Time pressure
Test effort estimation involves predicting the amount of test-related work needed to meet the objectives of testing for a particular project, release, or iteration
Test Execution Schedule
Once the various test cases and test procedures are produced (with some test procedures potentially automated) and assembled into test suites, the test suites can be arranged in a test execution schedule that defines the order in which they are to be run
The test execution schedule should take into account such factors as prioritization, dependencies, confirmation tests, regression tests, and the most efficient sequence for executing the tests
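One plausible way to derive such an order, sketched in Python: a dependency-respecting schedule that always runs the highest-priority ready suite next. The suite names, priorities, and dependencies are invented for the example.

    import heapq

    def build_schedule(suites, priority, depends_on):
        # Count unmet dependencies for each suite.
        indegree = {s: len(depends_on.get(s, [])) for s in suites}
        ready = [(priority[s], s) for s in suites if indegree[s] == 0]
        heapq.heapify(ready)                  # lowest number = highest priority
        order = []
        while ready:
            _, s = heapq.heappop(ready)
            order.append(s)
            for t in suites:                  # release suites that waited on s
                if s in depends_on.get(t, []):
                    indegree[t] -= 1
                    if indegree[t] == 0:
                        heapq.heappush(ready, (priority[t], t))
        return order

    print(build_schedule(["smoke", "regression", "performance"],
                         {"smoke": 1, "regression": 2, "performance": 3},
                         {"regression": ["smoke"], "performance": ["smoke"]}))
    # ['smoke', 'regression', 'performance']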
test estimation techniques
2 techniques
metrics-based technique: estimating the test effort based on metrics of former similar projects, or based on typical values
expert-based technique: estimating the test effort based on the experience of the owners of the testing tasks, or by experts
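A minimal worked example of the metrics-based technique in Python; the historical figures and the new project's size are invented for illustration.

    # Derive an average effort-per-test-case from similar past projects
    # and apply it to the new project's expected size (numbers invented).
    past_projects = [
        {"test_cases": 400, "effort_hours": 600},
        {"test_cases": 250, "effort_hours": 350},
    ]
    hours_per_case = (sum(p["effort_hours"] for p in past_projects)
                      / sum(p["test_cases"] for p in past_projects))

    new_project_cases = 300
    print(round(new_project_cases * hours_per_case), "hours")  # 438 hours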
Configuration Management
involves ensuring:
All items of testware are uniquely identified, version controlled, tracked for changes, related to each other and related to versions of the test item(s) so that traceability can be maintained throughout the test process
All identified documents and software items are referenced unambiguously in test documentation
All test items are uniquely identified, version controlled, tracked for changes, and related to each other
During test planning, configuration management procedures and infrastructure (tools) should be identified and implemented
purpose: establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle
Test Monitoring & Control
Test control
Examples
Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
Changing the test schedule due to availability or unavailability of a test environment or other resources
Re-evaluating whether a test item meets an entry or exit criterion due to rework
describes any guiding or corrective actions taken as a result of information and metrics gathered and (possibly) reported
test monitoring
Purpose: gather information and provide feedback and visibility about test activities
assess test progress
measure whether the test exit criteria, or the testing tasks associated with an Agile project's definition of done, are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria
Metrics Used in Testing
Metrics are collected during and at the end of test activities to assess: progress against the planned schedule and budget, the current quality of the test object, the adequacy of the test approach, and the effectiveness of the test activities with respect to the objectives
Common test metrics
Test coverage of requirements, user stories, acceptance criteria, risks, or code
Task completion, resource allocation, usage, effort
Defect information (e.g., defect density, defects found and fixed, failure rate, and confirmation test results)
Cost of testing, including the cost compared to the benefit of finding the next defect or the cost compared to the benefit of running the next test
Test case execution (number of test cases run/not run, test cases passed/failed, and/or test conditions passed/failed)
Percentage of planned work done in test case/test environment preparation (or percentage of planned test cases implemented)
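A minimal Python sketch computing a few of the metrics listed above from gathered raw numbers; all sample values are invented.

    # Sample raw numbers (invented) and a few derived test metrics.
    requirements_total, requirements_covered = 50, 46
    executed, planned, passed = 180, 200, 165
    defects_found, kloc = 42, 12.5

    print(f"requirements coverage: {requirements_covered / requirements_total:.0%}")
    print(f"test case execution: {executed}/{planned} run, {passed} passed")
    print(f"defect density: {defects_found / kloc:.1f} defects/KLOC")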
Purposes, Contents, Audiences for Test Reports
2 test report types
a test progress report
status of the test activities and progress against the test plan
Factors impeding progress
Testing planned for the next reporting period
quality of the test object
a test summary report
Status of testing and product quality with respect to the exit criteria or definition of done
Factors that have blocked or continue to block progress
Deviations from plan, including deviations in schedule, duration, or effort of test activities
Metrics of defects, test cases, test coverage, activity progress, and resource consumption
Information on what occurred during a test period
Residual risks
Summary of testing performed
Reusable test work products produced
test reports are tailored to the context of the project & the audience
purpose: summarize and communicate test activity information, both during and at the end of a test activity
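A minimal sketch of assembling a test progress report from gathered data, following the contents listed above; field names and values are invented.

    # Hypothetical gathered status data for one reporting period.
    status = {
        "period": "week 10",
        "executed": 120, "planned": 150,
        "impediments": ["test environment unavailable for 2 days"],
        "next_period": "execute remaining regression suites",
    }
    report = (f"Test progress ({status['period']}): "
              f"{status['executed']}/{status['planned']} planned tests executed.\n"
              f"Factors impeding progress: {'; '.join(status['impediments'])}\n"
              f"Testing planned for next period: {status['next_period']}")
    print(report)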
Risks & Testing
Product & Project Risks
Product Risk
When product risks are associated with specific quality characteristics of a product (e.g., functional suitability, reliability, performance efficiency, usability, security, compatibility, maintainability, and portability), they are also called quality risks
examples
A system architecture may not adequately support some non-functional requirement(s)
A particular computation may be performed incorrectly in some circumstances
Software might not perform its intended functions according to the specification, or according to user, customer, and/or stakeholder needs
A loop control structure may be coded incorrectly
Response-times may be inadequate for a high-performance transaction processing system
User experience (UX) feedback might not meet product expectations
involves the possibility that a work product (e.g., a specification, component, system, or test) may fail to satisfy the legitimate needs of its users and/or stakeholders
project risk
involves situations that, should they occur, may have a negative effect on a project's ability to achieve its objectives
examples
Technical issues
The test environment may not be ready on time
Data conversion, migration planning, and their tool support may be late
The requirements may not be met, given existing constraints
Weaknesses in the development process may impact the consistency or quality of project work products such as design, code, configuration, test data, and test cases
Requirements may not be defined well enough
Poor defect management and similar problems may result in accumulated defects and other technical debt
Organizational issues
Skills, training, and staff may not be sufficient
Personnel issues may cause conflict and problems
Users, business staff, or subject matter experts may not be available due to conflicting business priorities
Political issues
Testers may not communicate their needs and/or the test results adequately
Developers and/or testers may fail to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
There may be an improper attitude toward, or expectations of, testing (e.g., not appreciating the value of finding defects during testing)
Supplier issues
A third party may fail to deliver a necessary product or service, or go bankrupt
Contractual issues may cause problems to the project
Project issues
Delays occur in delivery, task completion, or satisfaction of exit criteria or definition of done
Inaccurate estimates, reallocation of funds to higher priority projects, or general cost-cutting across the organization may result in inadequate funding
Late changes may result in substantial re-work
affect both development activities and test activities
In some cases, project managers are responsible for handling all project risks, but it is not unusual for test managers to have responsibility for test-related project risks
Risk-based Testing & Product Quality
Testing is used to reduce the probability of an adverse event occurring, or to reduce the impact of an adverse event. Testing is used as a risk mitigation activity, to provide information about identified risks & to provide information on residual (unresolved) risks
A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk. It involves product risk analysis, which includes the identification of product risks and the assessment of each risk's likelihood and impact. The resulting product risk information is used to guide test planning, the specification, preparation and execution of test cases, and test monitoring and control. Analyzing product risks early contributes to the success of a project
Risk is used to decide where and when to start testing, and to identify areas that need more attention
the results of product risk analysis are used to
Determine the test techniques to be employed
Determine the particular levels and types of testing to be performed
Determine the extent of testing to be carried out
Prioritize testing in an attempt to find the critical defects as early as possible
Determine whether any activities in addition to testing could be employed to reduce risk (e.g., providing training to inexperienced designers)
Risk is used to focus the effort required during testing
Risk-based testing draws on the collective knowledge and insight of the project stakeholders to carry out product risk analysis
To ensure that the likelihood of a product failure is minimized, risk management activities provide a disciplined approach to
Determine which risks are important to deal with
Implement actions to mitigate those risks
Analyze (and re-evaluate on a regular basis) what can go wrong (risks)
Make contingency plans to deal with the risks should they become actual events
In addition, testing may identify new risks, help to determine what risks should be mitigated, and lower uncertainty about risks
Definition of Risk
Risk involves the possibility of an event in the future which has negative consequences
The level of risk is determined by the likelihood of the event and the impact (the harm) from that event
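A minimal Python sketch of this definition, scoring risk as likelihood times impact on 1-5 scales (the scales and the sample risks are assumptions) and sorting so that the highest risks get attention first:

    # Level of risk = likelihood of the event x impact (harm) of the event.
    def risk_level(likelihood: int, impact: int) -> int:
        return likelihood * impact           # 1-5 scales, higher = riskier

    risks = {"data loss": (2, 5), "slow search": (4, 2), "typo in help": (3, 1)}
    for name, (lik, imp) in sorted(risks.items(),
                                   key=lambda kv: -risk_level(*kv[1])):
        print(name, risk_level(lik, imp))    # address the top-scored risks first
    # data loss 10, slow search 8, typo in help 3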
Defect Management
Defects may be reported for issues in code or working systems, or in any type of documentation, including requirements, user stories and acceptance criteria, development documents, test documents, user manuals, or installation guides
defect reports have the following objectives
Provide developers and other parties with information about any adverse event that occurred, to enable them to identify specific effects, to isolate the problem with a minimal reproducing test, and to correct the potential defect(s) as needed, or to otherwise resolve the problem
Provide ideas for development and test process improvement
Provide test managers a means of tracking the quality of the work product and the impact on the testing (e.g., if a lot of defects are reported, the testers will have spent a lot of time reporting them instead of running tests, and there will be more confirmation testing needed)
In order to have an effective and efficient defect management process, organizations may define standards for the attributes, classification, and workflow of defects
A defect report filed during dynamic testing typically includes:
Expected and actual results
Scope or degree of impact (severity) of the defect on the interests of stakeholder(s)
The development lifecycle phase(s) in which the defect was observed
Urgency/priority to fix
Identification of the test item (configuration item being tested) and environment
State of the defect report (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed)
Date of the defect report, issuing organization, and author
Change history, such as the sequence of actions taken by project team members with respect to the defect, to isolate, repair, and confirm it as fixed
A title and a short summary of the defect being reported
Conclusions, recommendations and approvals
An identifier
Global issues, such as other areas that may be affected by a change resulting from the defect
A description of the defect to enable reproduction and resolution, including logs, database dumps, screenshots, or recordings (if found during test execution)
References, including the test case that revealed the problem
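A minimal Python sketch of a defect report record carrying a few of the typical attributes listed above; the class, field names, and sample values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class DefectReport:                # hypothetical record type
        identifier: str
        title: str
        severity: str                  # impact on stakeholder interests
        priority: str                  # urgency to fix
        state: str                     # e.g. open, deferred, closed
        expected_result: str
        actual_result: str
        test_item: str                 # configuration item and environment
        references: str                # e.g. the test case that revealed it

    report = DefectReport(
        "DEF-042", "Login fails on empty password",
        severity="high", priority="urgent", state="open",
        expected_result="validation message shown",
        actual_result="HTTP 500 returned",
        test_item="web-app v1.3 / staging environment",
        references="CASE-001")
    print(report.identifier, report.state)   # DEF-042 open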
The way in which defects are logged may vary, depending on the context of the component or system being tested, the test level, and the software development lifecycle model
Defects may be reported during coding, static analysis, reviews, dynamic testing, or use of a software product