Foundation Of Software Testing
Chap 1: Fundamentals of testing
Why testing is necessary
Defect in software can:
Users: frustration, wasted time
Company: financial loss, damage to personal or business reputation
Causes of software defect:
software is made by humans, and humans are fallible (a programmer makes a mistake (or error) ---> puts a defect (or fault or bug) into the program ---> the system exhibits a failure)
under time pressure, working with complex systems, interfaces, or code, dealing with changing technologies or highly interconnected systems
environmental conditions
Role of Testing
Reduce risk
Provide a way of measuring the system’s quality.
Find defects.
Provide learning opportunity
How much testing?
Depends on risk
What to test first?
What to test most?
How thoroughly to test each item?
What is testing?
Objective
Finding defects
Gaining confidence in the level of quality
Providing information for decision-making
Preventing defects
Activities in the test process
Planning and control
Analysis and design
Implementation and execution
Evaluating exit criteria and reporting
Test closure activities
Seven testing principles
Testing shows the presence of defects
Exhaustive testing is impossible
Early testing
Defect clustering
Pesticide paradox
Testing is context dependent
Absence-of-errors fallacy
Code of ethics
public
Client and employer
Product
Judgment
Management
Profession
Colleagues
Self
Psychology of testing
Curiosity
A critical eye
Professional pessimism
Attention to detail
Experience.
Good communication skills
independence of testing
Chap 2: Testing throughout the software life cycle
Software development model
V-Model
Testing needs to begin as early as possible in the life cycle
These activities should be carried out in parallel with development activities, and testers need to work with developers and business analysts
Iterative life cycles
the delivery is divided into increments or builds with each increment adding new functionality. The initial increment will contain the infrastructure required to support the initial build functionality
more testing will be required at each
subsequent delivery phase
Examples of iterative or incremental development models
Rapid Application Development
is formally a parallel development of functions and subsequent integration
quickly gives the customer something to see and use and provides feedback regarding the delivery and their requirements
The customer gets early visibility of the product, can provide feedback on the design and can make decisions based on the existing functionality
Agile development
Agile software development is a group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams
Testing within a life cycle model
Test level
Component testing
Based on : requirements and detailed design specifications applicable to the component
under test, as well as the code itself
Component under test: function, procedure, class, method, object, ...
Stub and Driver
Stub: a simulated program or component that stands in for a program or component not yet coded, so that testing can proceed
Driver: a software component or test tool that replaces a component, taking control of or calling another component or system
The testing of individual software components
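As a sketch of how a stub and a driver fit together in component testing — all names here (`tax_service_stub`, `calculate_invoice`) are invented for illustration, not taken from the source:

```python
# A minimal stub-and-driver sketch for component testing.
# The component under test, calculate_invoice, normally calls a tax
# service that is not yet implemented; the stub stands in for it.

def tax_service_stub(amount):
    """Stub: replaces the unfinished tax service with a canned answer."""
    return round(amount * 0.10, 2)  # fixed 10% rate for the test

def calculate_invoice(amount, tax_service):
    """Component under test: totals an amount plus tax."""
    return amount + tax_service(amount)

def driver():
    """Driver: calls the component under test and checks the result."""
    total = calculate_invoice(100.0, tax_service_stub)
    assert total == 110.0, f"expected 110.0, got {total}"
    return total

print(driver())  # 110.0
```

In a real project the stub would be replaced by the finished tax component, and the same driver could be rerun as a regression check.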
Integration testing
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems
based on: software and system design, system architecture, workflows or use cases
Items under test: infrastructure, interfaces, ...
more than one level of integration testing
component integration testing
system integration testing
Big-Bang integration
Incremental integration
Top-down
Bottom-up
Functional incremental
System testing
The process of testing an integrated system to verify that it meets specified requirements
Based on : risk analysis reports, system, functional, or software requirements specification, business processes, use cases,...
The system under test: the entire integrated system, system, user and operation manuals, system configuration information, and configuration data.
Functional Test
Non-functional test
test environment
Acceptance testing
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
Based on : user requirements, system requirements, use cases, business processes, and risk analysis reports
The system under test: the business, operational, and maintenance processes, user procedures, applicable forms, reports, and configuration data
Alpha testing
Beta testing
Test type
Functional Testing
requirements-based
business-process-based.
Black-box
Non-functional testing
Performance testing
Load testing
Stress testing
Usability testing
Maintainability testing
Reliability testing
Portability testing
Structural testing
Code coverage
White-box
Testing related to changes
Confirmation testing (re-testing)
Regression testing
Maintenance testing
Testing the changes to an operational system or the impact of a changed environment to an operational system.
Impact analysis and regression testing
Triggers for maintenance testing
Modifications
Planned modifications
perfective modifications
adaptive modifications
corrective planned modifications
Ad-hoc corrective modifications
Chap 3: Static techniques
Definition: testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static analysis.
Advantages
Review Process
Informal Review
Formal Review
Phases of a formal review
Planning
Step:
Defining the review criteria.
Selecting the personnel.
Allocating roles
Defining the entry and exit criteria for more formal review types (e.g. inspections)
Selecting which parts of documents to review
Checking entry criteria (for more formal review types).
Kick-off
Step:
Distributing documents
Explaining the objectives, process and documents to the participants.
Preparation
Step:
Preparing for the review meeting by reviewing the document(s)
Noting potential defects, questions and comments
Review meeting
Step:
Discussing or logging, with documented results or minutes (for more formal review types).
Noting defects, making recommendations regarding handling the defects, making decisions about the defects.
Examining, evaluating and recording issues during any physical meetings or tracking any group electronic communications.
Rework
Step:
Fixing defects found (typically done by the author).
Recording updated status of defects (in formal reviews)
Follow-up
Step:
Checking that defects have been addressed
Gathering metrics.
Checking exit criteria (for more formal review types)
Roles and responsibilities
The moderator
The author
The scribe
The reviewers
The manager
Types of review
Walkthrough
Technical review
Inspection
Success factors for reviews
Find a ‘champion’
Pick things that really count
Pick the right techniques
Explicitly plan and track review activities
Train participants
Manage people issues
Follow the rules but keep it simple
Continuously improve process and tools
Report results
Use testers
Static analysis by tools
Static analysis
Kinds of defects can programmers find during static analysis of code
Coding standards
A coding standard consists of a set of programming rules, naming conventions, and layout specifications
Code metrics
such as comment frequency, depth of nesting,
cyclomatic complexity number and number of lines of code.
cyclomatic complexity: the number of independent paths through a program:
L – N + 2P, where L = number of links (edges), N = number of nodes, P = number of disconnected parts of the graph
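The formula above can be sketched in code; the control flow graph here is an invented if/else example, not one from the source:

```python
# Cyclomatic complexity v(G) = L - N + 2P computed from a control flow
# graph given as an adjacency list. The graph models a simple if/else:
# entry -> (then | else) -> exit.

cfg = {
    "entry": ["then", "else"],
    "then":  ["exit"],
    "else":  ["exit"],
    "exit":  [],
}

L = sum(len(succs) for succs in cfg.values())  # links (edges): 4
N = len(cfg)                                   # nodes: 4
P = 1                                          # one connected part

v = L - N + 2 * P
print(v)  # 2 -> two independent paths through the if/else
```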
Code structure
control flow structure: the sequence in which the instructions are executed.
Data flow structure
An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction
Data structure: refers to the organization of the data itself, independent of the program
Chap 4: Test design techniques
The test development process
Formality of test documentation
formal testing
have extensive documentation which is well controlled, and would expect the documented detail of the tests to include the exact and specific input and expected outcome of the test
informal testing
have no documentation at all, or only
notes kept by individual testers
Test analysis: identifying test conditions
Test condition: is simply something that we could test
When identifying test conditions, we want to ‘throw the net wide’ to identify as many as we can, and then we will start being selective about which ones to take forward to develop in more detail and combine into test cases.
The test conditions that are chosen will depend on the test strategy or detailed test approach
Should be able to be linked back to their sources in the test basis – this is called traceability:
horizontal
through all the test documentation for a given test level
vertical
through the layers of development documentation
Test design: specifying test cases
Test case:
needs to have input values
Once a given input value has been chosen, the tester needs to determine what the expected result of entering that input would be and document it as part of the test case.
Ideally expected results should be predicted before the test is run
Test cases can now be prioritized
Test cases need to be detailed
Test implementation: specifying test procedures or scripts
test procedure/test script:
The document that describes the steps to be taken in running a set of tests
test execution schedule: says when a given script should be run and by whom. The schedule could vary depending on newly perceived risks affecting the priority of a script that addresses that risk
Categories of test design technique
Static testing techniques(see in chap 3)
Dynamic testing techniques
Specification-based (black-box) testing techniques
Equivalence partitioning
Boundary value analysis
Designing test cases:
one test case can cover one or more test conditions.
Do both:
- ensure every boundary is in some partition
- choose partition values that are NOT boundary values
Equivalence partitioning combined with two-value boundary value analysis is more efficient than three-value boundary value analysis.
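A minimal sketch of equivalence partitioning combined with two-value boundary value analysis, assuming a hypothetical input field that accepts integers from 1 to 100 (the field and its range are invented for illustration):

```python
# Equivalence partitioning plus two-value boundary value analysis for a
# hypothetical input accepting integers in the range 1..100.

LOW, HIGH = 1, 100

# Three equivalence partitions: below range, in range, above range.
partitions = {
    "invalid_low":  range(-10**6, LOW),
    "valid":        range(LOW, HIGH + 1),
    "invalid_high": range(HIGH + 1, 10**6),
}

# Two-value BVA: each boundary plus its nearest neighbour outside it.
# Note every boundary value also lies in some partition above.
boundary_values = [LOW - 1, LOW, HIGH, HIGH + 1]  # [0, 1, 100, 101]

def is_valid(x):
    """The behaviour under test: accept only values inside the range."""
    return LOW <= x <= HIGH

for v in boundary_values:
    print(v, is_valid(v))
# 0 False / 1 True / 100 True / 101 False
```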
Decision table testing
Why use decision tables?
is a good way to deal with combinations of things
referred to as a ‘cause–effect’ table
provide a systematic way of stating complex business rules
Decision tables aid the systematic selection of effective test cases and can have the beneficial side-effect of finding problems and ambiguities in the specification
works well in conjunction with equivalence partitioning
Using decision tables for test design
Step 1: identify a suitable function or subsystem that has a behaviour which reacts according to a combination of inputs or events
Step 2: identify all of the combinations of True and False
Step 3: identify the correct outcome for each combination
Step 4: write test cases to exercise each of the rules in our table
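The four steps above can be sketched for a hypothetical discount rule with two Boolean conditions; the business rule itself is invented for illustration:

```python
# Decision table sketch: two conditions give 2**2 = 4 rules (columns).
from itertools import product

def discount(is_member, order_over_100):
    """Step 3: the correct outcome (percent discount) for each rule."""
    if is_member and order_over_100:
        return 15
    if is_member:
        return 10
    if order_over_100:
        return 5
    return 0

# Step 2: enumerate all True/False combinations.
# Step 4: one test case per rule in the table.
for is_member, over_100 in product([True, False], repeat=2):
    print(is_member, over_100, discount(is_member, over_100))
```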
State transition testing
Four basic parts:
the states that the software may occupy (open/closed or funded/insufficient funds);
the transitions from one state to another (not all transitions are allowed);
the events that cause a transition (closing a file or withdrawing money);
the actions that result from a transition (an error message or being given your cash).
State transition testing
is used where some aspect of the system can be described in what is called a ‘finite state machine’. This simply means that the system can be in a (finite) number of different states, and the transitions from one state to another are determined by the rules of the ‘machine’
Testing for invalid transitions
State table
A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions
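A state table can be represented directly as a lookup, which makes both valid and invalid transitions easy to test; the document workflow here (open/write/close) is an invented example:

```python
# State table as a dict: (state, event) -> next state.
# Pairs absent from the table are the invalid transitions.

TRANSITIONS = {
    ("closed", "open"):  "open",
    ("open",   "write"): "open",
    ("open",   "close"): "closed",
}

def step(state, event):
    """Apply an event; invalid (state, event) pairs are rejected."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in state {state}")

# Valid path: closed -open-> open -write-> open -close-> closed
s = "closed"
for e in ["open", "write", "close"]:
    s = step(s, e)
print(s)  # closed

# Testing for an invalid transition: writing while closed is rejected.
try:
    step("closed", "write")
except ValueError as err:
    print(err)
```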
Use case testing
Use case testing is a technique that helps us identify test cases that exercise the whole system on a transaction-by-transaction basis from start to finish
Experience-based testing techniques
Error guessing and fault attacks
Exploratory testing
Structure-based (white-box) testing techniques
test coverage measurement
Types of coverage
Coverage can be measured at component-testing level, integration-testing level or at system- or acceptance-testing levels
We can measure coverage for each of the specification-based techniques
How to measure coverage
1 Decide on the structural element to be used, i.e. the coverage items to be counted.
2 Count the structural elements or items.
3 Instrument the code.
4 Run the tests for which coverage measurement is required.
5 Using the output from the instrumentation, determine the percentage of elements or items exercised
Statement coverage and statement testing
Decision coverage and decision testing
branch coverage
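The difference between statement and decision (branch) coverage can be shown with a small invented function: one test executes every statement yet exercises only one of the two decision outcomes:

```python
# With only the first test, statement coverage is 100% but decision
# coverage is 50%: the if's False outcome is never taken.

def cap(x, limit=10):
    if x > limit:
        x = limit
    return x

# Test 1 executes every statement (the if body and the return):
assert cap(15) == 10   # decision True  -> all statements covered

# Decision coverage additionally needs the False outcome:
assert cap(5) == 5     # decision False -> both outcomes now covered
```

This is why decision coverage subsumes statement coverage: a test set achieving 100% decision coverage necessarily executes every statement, but not the other way around.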
Structure-based test case design
If you are aiming for a given level of coverage (say 95%) but you have not reached your target (e.g. you only have 87% so far), then additional test cases can be designed with the aim of exercising some or all of the structural elements not yet reached
Choosing Test Techniques
Models used
Tester knowledge/experience
Likely defects
Test objective
Documentation
Life cycle model
Risk
Customer/contractual requirements
Regulatory requirements
Time and budget
Chap 5: Test Management
Test Organization
Independent and integrated testing
Team of testers who are independent and outside the development team
Working as a test leader
The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.
Working as a tester
A skilled professional who is involved in the testing of a component or system.
The skills test staff need
People involved in testing need basic professional and social qualifications
Application or business domain:
A tester must understand the intended behaviour, the problem the system will solve, the process it will automate and so forth,
Technology:
A tester must be aware of issues, limitations and capabilities of the chosen implementation technology
Testing:
A tester must know the testing topics discussed in this book – and often more advanced testing topics
Test planning and estimation
The purpose and substance of test plans
writing a test plan guides our thinking
The test planning process and the plan itself serve as vehicles for communicating with other members
The test plan also helps us manage change
What to do with your brain while planning tests
What is in scope and what is out of scope for this testing effort?
What are the test objectives?
What are the important project and product risks?
What constraints affect testing? What is most critical for this product and project?
Which aspects of the product are more (or less) testable?
What should be the overall test execution schedule and how should we decide the order in which to run specific tests?
(entry criteria – exit criteria)
Estimating what testing will involve and what it will cost
To identify the activities and tasks, we work both forward and backward.
Work forward:
we start with the planning activities and then move forward in time step by step, asking, ‘Now, what comes next?’
Work backward:
we consider the risks that we identified during risk analysis. For those risks which you intend to address through testing, ask yourself, ‘So, what activities and tasks are required in each stage to carry out this testing?’
Estimation techniques
consulting the people who will do the work and other people with expertise on the tasks to be done.
analyzing metrics from past projects and from industry data.
Factors affecting test effort
sufficient project documentation
non-functional quality characteristics
Complexity
Process factors include the availability of test tools
The life cycle
Process maturity, including test process maturity
Time pressure
people factors
The test strategies or approaches
Test approaches and strategies
A test strategy
is the general way in which testing will happen, within each of the levels of testing, independent of project, across the organization.
Analytical
Model-based
Methodical
Process- or standard-compliant
Dynamic
Consultative or directed
Regression-averse
Test approach:
implementation of the test strategy on a specific project.
Test progress monitoring and control
Monitoring the progress of test activities
Give the test team and the test manager feedback on how the testing work is going, allowing opportunities to guide and improve the testing and the project.
Provide the project team with visibility about the test results
Measure the status of the testing, test coverage and test items against the exit criteria to determine whether the test work is done.
Gather data for use in estimating future test efforts
failure rate
The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs.
defect density
The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
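A worked example of both metrics defined above, with invented numbers:

```python
# Failure rate and defect density, computed from invented figures.

defects_found = 30
size_kloc = 12.5              # size in thousands of lines of code
failures = 4
hours_of_operation = 200.0

defect_density = defects_found / size_kloc      # defects per KLOC
failure_rate = failures / hours_of_operation    # failures per hour

print(defect_density)  # 2.4
print(failure_rate)    # 0.02
```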
common metrics:
The extent of completion of test environment preparation
The extent of test coverage achieved, measured against requirements, risks, code, configurations or other areas of interest.
The status of the testing (including analysis, design and implementation) compared to various test milestones.
The economics of testing, such as the costs and benefits of continuing test
execution in terms of finding the next defect or running the next test.
Reporting test status
How will you assess the adequacy of the test objectives for a given test level and whether those objectives were achieved?
How will you assess the adequacy of the test approaches taken and whether they support the achievement of the project’s testing goals?
How will you assess the effectiveness of the testing with respect to these objectives and approaches?
Test control
Test control is about guiding and corrective actions to try to achieve the best possible outcome for the project
Configuration management
configuration management
is in part about determining clearly what the items are that make up the software or system. These items include source code, test scripts, third-party software (including tools that support testing), hardware, data and both development and test documentation.
Risk and Testing
Risks and levels of risk
it’s the possibility of a negative or undesirable outcome. In the future, a risk has some likelihood between 0% and 100%; it is a possibility, not a certainty.
Project risk
factors relating to the way the work is carried out
Product risk: factors relating to what is produced by the work
Product risks
the possibility that the system or software might fail to satisfy some reasonable customer, user, or stakeholder expectation
Risk-based testing
is the idea that we can organize our testing efforts in a way that reduces the residual level of product risk when the system ships
Risk-based testing starts with product risk analysis
For any risk, product or project, you have four typical options:
Mitigate: Take steps in advance to reduce the likelihood (and possibly the impact) of the risk.
Contingency: Have a plan in place to reduce the impact should the risk become an outcome.
Transfer: Convince some other member of the team or project stakeholder to reduce the likelihood or accept the impact of the risk.
Ignore: Do nothing about the risk, which is usually a smart option only when there's little that can be done or when the likelihood and impact are low.
Project risks
A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc
Tying it all together for risk management
Incident Management
What are incident reports for and how do I write good ones?
An incident
is any situation where the system exhibits questionable behaviour
Causes
: misconfiguration or failure of the test environment, corrupted test data, bad tests, invalid expected results and tester mistakes
An incident report
contains a description of the misbehaviour that was observed and classification of that misbehaviour.
Write a good incident report
What goes in an incident report?
What happens to incident reports after you file them?
Tool support for testing
Type of test tool
Meaning and Purpose
we can use tools directly in testing
We can use tools to help us manage the testing process
We can use tools as part of what’s called reconnaissance, or, to use a simpler term, exploration
We can use tools in a number of other ways, in the form of any tool that aids in testing
Purpose
We might want to improve the efficiency of our testing.
We might want to automate activities that would otherwise require significant resources to do manually
We might need to carry out activities that simply cannot be done manually, but which can be done via automated tools.
We might want to increase the reliability of our testing
Tool support for management of testing and tests
Test management tools
Requirements management tools
Incident management tools
Configuration management tools
Tool support for static testing
Review tools
Static analysis tools (D)
Modelling tools (D)
Tool support for test specification
Test design tools
Test data preparation tools
Tool support for test execution and logging
Test execution tools
Test harness/unit test framework tools (D)
Test comparators
Coverage measurement tools (D)
Security testing tools
Tool support for performance and monitoring
Dynamic analysis tools (D)
Performance-testing, load-testing and stress-testing tools
Monitoring tools
Tool support for specific application areas
Tool support using other tools
Tool support for specific testing needs
Effective use of tools
Potential benefits of using tools
reduction of repetitive work;
greater consistency and repeatability;
objective assessment;
ease of access to information about tests or testing.
Risks of using tools
Special considerations for some types of tools
Performance testing tools
Test execution tools
Static analysis tools
Test management tools
Introducing a tool into an organization
Main principles
assessment of the organization’s maturity
identification of the areas within the organization where tool support will help to improve testing processes;
evaluation of tools against clear requirements and objective criteria;
proof-of-concept to see whether the product works as desired and meets the requirements and objectives defined for it;
evaluation of the vendor (training, support and other commercial aspects) or open-source network of support;
identifying and planning internal implementation (including training, coaching and mentoring for those new to the use of the tool);
estimation of the return on investment (cost-benefit ratio) based on a concrete and realistic business case
Pilot project
to learn more about the tool (more detail, more depth);
to see how the tool would fit with existing processes or documentation, how those would need to change to work well with the tool and how to use the tool to streamline existing processes;
to decide on standard ways of using the tool that will work for all potential users (e.g. naming conventions, creation of libraries, defining modularity, where different elements will be stored, how they and the tool itself will be maintained);
to evaluate the pilot project against its objectives (have the benefits been achieved at reasonable cost?)
Success factors
incremental roll-out (after the pilot) to the rest of the organization
adapting and improving processes, testware and tool artefacts to get the best fit and balance between them and the use of the tool;
providing adequate support, training, coaching and mentoring of new users;
defining and communicating guidelines for the use of the tool, based on what was learned in the pilot;
implementing a continuous improvement mechanism as tool use spreads through more of the organization;
monitoring the use of the tool and the benefits achieved and adapting the use of the tool to take account of what is learned;
providing continuing support for anyone using test tools, such as the test team;
continuously improving tool use based on information gathered from all teams who are using test tools.