Chapter 2: Testing throughout the software life cycle
Maintenance testing
Objectives
Compare maintenance testing (testing an existing system) to testing a new application with respect to test types, triggers for testing and amount of testing. (K2)
Recognize indicators for maintenance testing (modification, migration and retirement). (K1)
Describe the role of regression testing and impact analysis in maintenance. (K2)
Terms
Maintenance testing
Testing the changes to an operational system or the impact of a changed environment to an operational system.
Impact analysis
The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.
Maintenance testing
When does it occur?
Once deployed, a system is often in service for years or even decades.
During this time the system and its operational environment are often corrected, changed or extended.
Testing that is executed during this life cycle phase is called ‘maintenance testing’.
Types of maintenance testing
Modifications of the software or system
can result from planned enhancement changes such as
those referred to as ‘minor releases’ that include new features and accumulated (non-emergency) bug fixes.
can also result from corrective and more urgent emergency changes.
can also involve changes of environment, such as
planned operating system
or database upgrades,
planned upgrade of Commercial-Off-The-Shelf software,
or patches to correct newly exposed or
discovered vulnerabilities of the operating system.
Migrations of the software or system
What does it do?
involves moving from one platform to another.
can involve abandoning a platform no longer supported
adding a new supported platform
What does it include?
operational tests of the new environment as well as of the changed software
conversion testing, where data from another application will be migrated into the system being maintained.
Retirement of the software or system
Note that maintenance testing is different from maintainability testing, which assesses how easy it is to maintain the system.
Impact analysis and regression testing
Two parts of maintenance testing
Testing the changes
regression tests to show that the rest of the system has not been affected by the maintenance work.
Impact analysis
What is it?
In addition to testing what has been changed, maintenance testing includes extensive regression testing to parts of the system that have not been changed.
A major and important activity within maintenance testing is impact analysis.
What does it do?
During impact analysis, together with stakeholders, a decision is made on what parts of the system may be unintentionally affected and therefore need careful regression testing.
Risk analysis will help to decide where to focus regression testing – it is unlikely that the team will have time to repeat all the existing tests.
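As a minimal sketch of how impact analysis can feed regression test selection (the test names, module names and coverage mapping below are all invented for illustration; in practice this mapping would come from traceability data and stakeholder input):

```python
# Minimal sketch: selecting regression tests from an impact analysis.
# The test-to-module mapping and the changed-module list are invented;
# real impact analysis is a judgement made with stakeholders.

TEST_COVERAGE = {
    "test_login": {"auth"},
    "test_invoice_totals": {"billing", "tax"},
    "test_report_export": {"reporting"},
    "test_tax_rates": {"tax"},
}

def select_regression_tests(changed_modules):
    """Return the tests that touch any module affected by the change."""
    changed = set(changed_modules)
    return sorted(
        name for name, modules in TEST_COVERAGE.items()
        if modules & changed
    )

print(select_regression_tests(["tax"]))
# → ['test_invoice_totals', 'test_tax_rates']
```

Even a simple mapping like this makes the point of the syllabus: when there is no time to repeat all existing tests, the change itself tells you where regression effort should be focused.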
Trigger for maintenance testing
Trigger
Modification
There are modifications for which testing can be planned,
and there are ad-hoc corrective modifications, which cannot be planned at all;
these take place when the search for a solution to a defect cannot be delayed.
Modifications are most often the main trigger for maintenance testing in most organizations.
Migration
Retirement
Planned modification
Types of
Perfective modifications
adapting software to the user’s wishes,
for instance by supplying new functions or enhancing performance
Adaptive modifications
adapting software to environmental changes such as new hardware,
new systems software or new legislation
Corrective planned modifications
deferrable correction of defects
Ad-hoc corrective modification
What is it concerned with?
defects requiring an immediate solution
Example
a production run which fails late at night,
a network that goes down with a few hundred users online
a mailing with incorrect addresses.
There are different rules and different procedures for solving problems of this kind
It will be impossible to take the steps required for a structured approach to testing
If, however, a number of activities are carried out prior to a possible malfunction, it may be possible to achieve a situation in which reliable tests can be executed in spite of ‘panic stations’ all round.
To some extent this type of maintenance testing is often like first aid – patching up – and at a later stage the standard test process is then followed to establish a robust fix, test it and establish the appropriate level of documentation.
How to deal with those problems?
A risk analysis of the operational systems should be performed to establish which functions or programs constitute the greatest risk to the operational services in the event of a disaster.
It is then established – in respect of the functions at risk – which (test) actions should be performed if a particular malfunction occurs.
Several types of malfunction may be identified and there are various ways of responding to them for each function at risk
A possible reaction might be that a relevant function at risk should always be tested, or that, under certain circumstances, testing might be carried out in retrospect (the next day, for instance).
If it is decided that a particular function at risk should always be tested whenever relevant, a number of standard tests, which could be executed almost immediately, should be prepared for this purpose.
The standard tests would obviously be prepared and maintained in accordance with the structured test approach.
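A pre-prepared standard test might look like the following sketch; it echoes the ‘mailing with incorrect addresses’ example above, and the scenario and all names are invented for illustration:

```python
# Sketch of a pre-prepared standard check for a function at risk,
# kept ready so it can run almost immediately after an emergency fix.
# The address-rendering scenario and all names are invented.

def smoke_check_mailing(render_address):
    """Quick standard test: the address renderer must fill every line."""
    sample = {"name": "A. Example", "street": "1 High St", "city": "Testville"}
    lines = render_address(sample)
    assert len(lines) == 3 and all(lines), "address lines missing or empty"

# Example renderer standing in for the repaired production code:
def render_address(record):
    return [record["name"], record["street"], record["city"]]

smoke_check_mailing(render_address)  # passes silently if the fix is sound
```

The value is not in the check itself but in having it written, maintained and executable in advance, so that ‘panic stations’ do not force the team to improvise.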
Software Development Models
Objectives
Explain the relationship between development, test activities and work products in the development life cycle, by giving examples using project and product types. (K2)
Recognize the fact that software development models must be adapted to the context of project and product characteristics. (K1)
Recall characteristics of good testing that are applicable to
any life cycle model. (K1)
Terms
Verification
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
Validation
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
V-model
A framework to describe the software development lifecycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development lifecycle.
Test level
A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.
Integration
The process of combining components or systems into larger assemblies.
Off-the-shelf software (commercial off-the-shelf software, COTS)
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Performance
The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.
Incremental development model
A development lifecycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this lifecycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases.
Iterative development model
A development lifecycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.
Agile software development
A group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.
Agile manifesto
A statement on the values that underpin agile software development. The values are: – individuals and interactions over processes and tools – working software over comprehensive documentation – customer collaboration over contract negotiation – responding to change over following a plan.
The waterfall model
How does it work?
Tasks are executed in sequential fashion.
We start at the top of the waterfall with a feasibility study and flow down through the various project tasks finishing with implementation into the live environment.
Design flows through into development, which in turn flows into build, and finally on into test
What are its weaknesses?
Testing tends to happen towards the end of the project life cycle so defects are
detected close to the live implementation date
it has been difficult to get feedback passed backwards up the waterfall
there are difficulties if we need to carry out numerous iterations for a particular phase.
Defects are found too late in the life cycle
testing is not involved until the end of the project
testing also adds lead time due to its late involvement
V-model
What are its advantages?
Defects are often found early
Characteristics of V-model
Testing begins as early as possible in the life cycle
Testing involves a variety of activities
These activities need to be performed before the end of the coding phase
These activities should be carried out in parallel with development activities
Testers need to work with developers and business analysts so they can perform these activities and tasks and produce a set of test deliverables
The work products produced by the developers and business analysts during development are the basis of testing in one or more levels
The V-model is the model that illustrates how testing activities (verification and validation) can be integrated into each phase of the life cycle.
Within the V-model, validation testing takes place especially during the early stages,
e.g. reviewing the user requirements, and late in the life cycle, e.g. during user
acceptance testing.
Four test levels of V-model
Component testing
searches for defects in and verifies the functioning of software components (e.g. modules, programs, objects, classes, etc.) that are separately testable;
Integration testing
tests interfaces between components, interactions to different parts of a system such as an operating system, file system and hardware or interfaces between systems;
System testing
concerned with the behaviour of the whole system/product as defined by the scope of a development project or product. The main focus of system testing is verification against specified requirements;
Acceptance testing
validation testing with respect to user needs, requirements, and business processes conducted to determine whether or not to accept the system.
Although variants of the V-model exist, a common type of V-model uses four test levels.
V-model In practice
A V-model may have more, fewer or different levels of development and testing, depending on the project and the software product.
For example, there may be component integration testing after component testing and system integration testing after system testing
Test levels can be combined or reorganized depending on the nature of the project or the system architecture
For the integration of a commercial off-the-shelf (COTS) software product into a system,
a purchaser may perform only integration testing at the system level (e.g. integration to the infrastructure and other systems) and at a later stage acceptance testing
This acceptance testing can include both testing of system functions but also testing of quality attributes such as performance and other non-functional tests.
The acceptance testing may be done from the perspective of the end user and may also be done from an operation point of view.
Iterative life cycles
Introduction
Features of iterative approach
delivery is divided into increments or builds with each increment adding new functionality
The initial increment will contain the infrastructure required to support the initial build functionality.
The increment produced by an iteration may be tested at several levels as part of its development
Subsequent increments will need testing for the new functionality, regression testing of the existing functionality, and integration testing of both new and existing parts
Regression testing is increasingly important on all iterations after the first one
This means that more testing will be required at each subsequent delivery phase which must be allowed for in the project plans.
This life cycle can give early market presence with critical functionality, can be simpler to manage because the workload is divided into smaller pieces, and can reduce initial investment although it may cost more in the long run.
Also early market presence will mean validation testing is carried out at each increment, thereby giving early feedback on the business value and fitness-for-use of the product.
Examples
Prototyping
Rapid Application Development
Rational Unified Process
Agile Development (Scrum)
Rapid Application Development
What is it?
Rapid Application Development (RAD) is formally a parallel development of functions and subsequent integration.
How does it work?
Components/functions are developed in parallel as if they were mini projects
The developments are time-boxed, delivered, and then assembled into a working prototype.
What are benefits of RAD?
It quickly gives the customer something to see and use, and provides feedback regarding the delivery and their requirements.
Characteristics of RAD
the product specification will need to be developed for the product at some point
the project will need to be placed under more formal controls prior to going into production
This methodology allows early validation of technology risks and a rapid response to changing customer requirements.
Dynamic System Development Methodology (DSDM)
What is it?
DSDM is a refined RAD process that allows controls to be put in place in order to stop the process from getting out of control.
Why do we need DSDM?
We need to have the essentials of good development practice in place in order for these methodologies to work.
We need to maintain strict configuration management of the rapid changes that we are making in a number of parallel development cycles.
From the testing perspective we need to plan this very carefully and update our plans regularly as things will be changing very rapidly (see Chapter 5 for more on test plans).
Agile Development
What is it?
Agile software development is a group of software development methodologies based on iterative incremental development
where requirements and solutions evolve through collaboration between self-organizing cross-functional teams
Characteristics of Agile
Most agile teams use Scrum, a management framework for iterative incremental development projects.
Typical agile teams are 5 to 9 people
the agile manifesto describes ways of working that are ideal for small teams
and that counteract the emphasis on process and documentation prevalent in the late 1990s.
Agile manifesto
individuals and interactions over processes and tools
working software over comprehensive documentation
customer collaboration over contract negotiation
responding to change over following a plan.
Characteristics of team using Scrum and XP
The generation of business stories (a form of lightweight use cases) to define the functionality, rather than highly detailed requirements specifications.
The incorporation of business representatives into the development process, as part of each iteration (called a ‘sprint’ and typically lasting 2 to 4 weeks), to provide continual feedback and to define and carry out functional acceptance testing.
The recognition that we can’t know the future, so changes to requirements are welcomed throughout the development process, as this approach can produce a product that better meets the stakeholders’ needs as their knowledge grows over time.
The concept of shared code ownership among the developers, and the close inclusion of testers in the sprint teams.
The writing of tests as the first step in the development of a component, and the automation of those tests before any code is written. The component is complete when it then passes the automated tests. This is known as Test-Driven Development.
Simplicity
building only what is necessary, not everything you can think of.
The continuous integration and testing of the code throughout the sprint, at
least once a day.
Testing in Agile approach
Each iteration (sprint) culminates in a short period of testing, often with an independent tester as well as a business representative.
Developers are to write and run test cases for their code, and leading practitioners use tools to automate those tests and to measure structural coverage of the tests (see Chapters 4 and 6).
Every time a change is made in the code, the component is tested and then integrated with the existing code, which is then tested using the full set of automated component test cases.
This gives continuous integration, by which we mean that changes are incorporated continuously into the software build.
What benefits and challenges does Agile development provide for tester?
Benefits
the focus on working software and good quality code;
the inclusion of testing as part of and the starting point of software development (test-driven development);
accessibility of business stakeholders to help testers resolve questions about expected behaviour of the system;
self-organizing teams where the whole team is responsible for quality and giving testers more autonomy in their work; and
simplicity of design that should be easier to test.
Challenges
Documents
Testers used to working with well-documented requirements will be designing tests from a different kind of test basis: less formal and subject to change.
The manifesto does not say that documentation is no longer necessary or that it has no value, but it is often interpreted that way.
The essential of tester
Because developers are doing more component testing, there may be a perception that testers are not needed.
But component testing and confirmation based acceptance testing by only business representatives may miss major problems.
System testing, with its wider perspective and emphasis on non-functional testing as well as end-to-end functional testing, is needed, even if it doesn’t fit comfortably into a sprint.
The tester's role is different
since there is less documentation and more personal interaction within an agile team,
testers need to adapt to this style of working, and this can be difficult for some testers
Testers may be acting more as coaches in testing to both stakeholders and developers, who may not have a lot of testing knowledge.
Time pressure
Although there is less to test in one iteration than a whole system
there is also a constant time pressure and less time to think about the testing for the new features.
Adequate regression suite
Because each increment is adding to an existing working system, regression testing becomes extremely important, and automation becomes more beneficial.
However, simply taking existing automated component or component integration tests may not make an adequate regression suite.
Testing within a life cycle model
Characteristics of good testing
for every development activity there is a corresponding testing activity;
each test level has test objectives specific to that level;
the analysis and design of tests for a given test level should begin during the corresponding development activity;
testers should be involved in reviewing documents as soon as drafts are available in the development cycle.
Test Levels
Introduction
Objectives
Compare the different levels of testing (K2)
Major objectives
Typical objects of testing
Typical targets of testing (e.g. functional or structural)
Related work products
People who test
Types of defects
Failures to be identified
Terms
Component testing (unit test, module testing)
The testing of individual software components. Note: the ISTQB Glossary also lists unit testing and module testing as synonyms.
Stub
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
Driver (test driver)
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system
Robustness testing
Testing to determine the robustness of the software product.
Efficiency testing
The process of testing to determine the efficiency of a software product.
Test-driven development
A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
Integration testing
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
System testing
The process of testing an integrated system to verify that it meets specified requirements. Note: The ISTQB Glossary derives from Hetzel’s book The Complete Guide to Software Testing, and the implied objective of verification might not be adequate (or even appropriate) for all projects when doing system testing.
Functional requirement
A requirement that specifies a function that a component or system must perform.
Non-functional requirement
A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
Test environment
(Test bed)
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
Acceptance testing
(acceptance, user acceptance testing)
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
Maintenance
Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.
Alpha testing
Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
Beta testing
(Field testing)
Operational testing by potential and/ or existing users/ customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/ customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
Identifying test levels – for each level, the following can be identified:
The process, product and project objectives, ideally with measurable effectiveness and efficiency metrics and targets.
The test basis, which are the work products used to derive the test cases.
The item, build, or system under test (also called the test object).
The typical defects and failures that we are looking for.
Any applicable requirements for test harnesses and tool support.
The approaches we intend to use.
The individuals who are responsible for the activities required to carry out the fundamental test process for the test level.
Component testing
What is it?
also known as unit, module and program testing, searches for defects in, and verifies the functioning of software items (e.g. modules, programs, objects, classes, etc.) that are separately testable.
What is it based on?
the requirements and
detailed design specifications applicable to the component under test,
as well as the code itself (which we’ll discuss in Chapter 4 when we talk about white box testing).
What components would be tested?
(component under test or test object)
Individual components (or even entire programs)
Data conversion and migration programs used to enable a new release
database tables, joins, views, modules, procedures, integrity and field constraints, and even whole databases.
How is component testing in isolation made possible?
Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system.
Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner.
A stub is called from the software component to be tested;
a driver calls a component to be tested (see Figure 2.5).
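A minimal sketch of a stub and a driver in code (all names are invented for illustration; the point is only the calling direction described above):

```python
# Sketch: a stub is CALLED BY the component under test,
# a driver CALLS the component under test. All names invented.

# Component under test: depends on a (not yet available) tax service.
def price_with_tax(net, tax_service):
    return net + tax_service.tax_for(net)

# Stub: a simple stand-in replacing the real tax service,
# simulating the interface with fixed, predictable behaviour.
class TaxServiceStub:
    def tax_for(self, net):
        return round(net * 0.20, 2)

# Driver: test code that calls the component under test.
def test_price_with_tax():
    assert price_with_tax(100.0, TaxServiceStub()) == 120.0

test_price_with_tax()
```

Here the stub stands in for missing software below the component, while the driver stands in for missing software above it, so the component can be tested in isolation.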
What testing activities does it include?
Testing of functionality
Testing of non-functional characteristics
Resource-behavior
performance
Robustness testing
Structural testing
Where are test cases derived from?
from work products such as the software design or the data model.
How does it work?
Typically, component testing occurs with access to the code being tested
and with the support of the development environment,
Unit test frame-work
Debugging tool
programmers who wrote the code
Sometimes, depending on the applicable level of risk, component testing is carried out by a different programmer thereby introducing independence
Defects are typically fixed as soon as they are found, without formally recording the incidents found.
Component testing in XP
Test cases are prepared and automated before coding
This is called the test-first approach or test-driven development
How does it work?
This approach is highly iterative and is based on cycles of developing test cases
then building and integrating small pieces of code
and executing the component tests until they pass.
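A minimal test-first sketch (the leap-year component is invented for illustration): the test is written first and fails, then just enough code is written to make it pass:

```python
# Test-driven development sketch: this test exists (and fails)
# before the implementation does; the component is "done" when
# it passes. The leap-year example is invented for illustration.

def test_is_leap_year():
    assert is_leap_year(2024) is True
    assert is_leap_year(1900) is False   # century year, not divisible by 400
    assert is_leap_year(2000) is True

# Minimal implementation written to make the test pass:
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_is_leap_year()
```

In XP this loop repeats in small cycles: add a failing test, write code, run all component tests until they pass, then refactor.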
Integration testing
What does it do?
tests interfaces between components, interactions to different parts of a system such as an operating system, file system and hardware or interfaces between systems
What is it based on?
Software and system design (both low-level and high-level)
The system architecture (relationships between components or objects)
the workflows or use cases by which the stakeholders will employ the system
What does the test object include?
(item under test)
builds including some or all of the components or objects in the system
the database elements applicable to the item under test
the system infrastructure
the interfaces between components or objects
the system configuration
Configuration data
Who does the test?
often carried out by the integrator
but preferably by a specific integration tester or test team.
Features
There may be more than one level of integration testing and it may be carried out on test objects of varying size.
The greater the scope of integration, the more difficult it becomes to isolate failures to a specific interface, which may lead to an increased risk
This leads to varying approaches to integration testing
Two approaches of Integration testing
Big-bang integration testing
How does it work?
all components or systems are integrated simultaneously, after which everything is tested as a whole
Advantage
everything is finished before integration testing starts.
There is no need to simulate (as yet unfinished) parts.
Disadvantage
time-consuming and difficult to trace the cause of failures with this late integration.
When do we use the big-bang approach?
when planning the project, being optimistic and expecting to find no problems.
If one thinks integration testing will find defects, it is good practice to consider whether time might be saved by breaking down the integration test process.
Incremental testing
How does it work?
all programs are integrated one by one
and a test is carried out after each step (incremental testing)
Advantage
defects are found early in a smaller assembly when it is relatively easy to detect the cause.
Disadvantage
A disadvantage is that it can be time-consuming since stubs and drivers have to be developed and used in the test.
Architectures of incremental integration
top-down
testing takes place from top to bottom, following the control flow or architectural structure (e.g. starting from the GUI or main menu). Components or systems are substituted by stubs.
bottom-up
testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers.
Functional incremental
integration and testing take place on the basis of the functions or functionality, as documented in the functional specification.
Hints for performing integration testing efficiently
Integration sequence and the number of integration steps
The preferred integration sequence and the number of integration steps required depend on the location in the architecture of the high-risk interfaces.
Preventing major defects at the end of the integration test stage
The best choice is to start integration with those interfaces that are expected to cause most problems.
Tester involvement
Ideally testers should understand the architecture and influence integration planning.
Reduce the risk of late defect discovery
integration should normally be incremental rather than ‘big-bang’.
If integration tests are planned before components or systems are built
they can be developed in the order required for most efficient testing.
At each stage of integration
testers concentrate solely on the integration itself
Example
if they are integrating component A with component B they are interested in testing the communication between the components,
not the functionality of either one.
Testing of specific non-functional characteristics (e.g. performance) may also be included in integration testing.
Both functional and structural approaches may be used.
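A minimal sketch of an integration test that targets only the communication between two components, as in the A-and-B example above (all names are invented for illustration):

```python
# Sketch: at the A-B integration step we check the call across the
# interface, not the internals of either component. Names invented.

class ComponentB:
    def store(self, record):
        self.last = record          # remember what crossed the interface
        return "OK"

class ComponentA:
    def __init__(self, b):
        self.b = b
    def submit(self, data):
        # the interface under test: A must pass B a dict with these keys
        return self.b.store({"id": data[0], "payload": data[1]})

b = ComponentB()
a = ComponentA(b)
assert a.submit((7, "hello")) == "OK"
assert b.last == {"id": 7, "payload": "hello"}  # correct data crossed over
```

The assertions deliberately say nothing about how A builds the record or how B stores it; they only verify that the agreed data reached the other side.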
Who carries out the test?
Developers
or by a separate team
specialist integration testers
specialist group of developers/integrators including non-functional specialists.
System testing
What is it?
System testing is concerned with the behaviour of the whole system/product as defined by the scope of a development project or product.
What is it based on?
risk analysis reports
system, functional and software requirements specifications
business processes
use cases
other high-level descriptions of system behaviour
interactions with the operating system
system resources
What does the test object include?
(the system under test)
The entire integrated system
system, user and operation manuals
system configuration information
configuration data
What is the purpose of system testing?
System testing is most often the final test on behalf of development to verify that the system to be delivered meets the specification
and to find as many defects as possible.
Who is carried out the test?
most often carried out by specialist testers forming a dedicated, and sometimes independent, test team within development,
reporting to the development manager or project manager.
In some organizations, it is carried out by
a third party team
or business analysts
the required level of independence is based on the applicable risk level and this will have a high influence on the way system testing is organized.
What does it investigate?
functional requirements
techniques
black-box
specification-based
techniques for the aspect of the system to be tested.
For example, a decision table may be created for combinations of effects described in business rules.
white-box
structure-based
used to assess the thoroughness of testing elements such as menu dialog structure or web page navigation (see Chapter 4 for more on the various types of technique).
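As a hedged sketch of the decision-table idea mentioned above (the discount business rule and all names are invented for illustration): each row pairs a combination of conditions with its expected effect, and the rows drive the tests:

```python
# Sketch: a decision table for an invented discount business rule,
# executed as data-driven test cases.

# conditions: (is_member, order_over_100) -> expected discount %
DECISION_TABLE = [
    (True,  True,  15),
    (True,  False, 10),
    (False, True,   5),
    (False, False,  0),
]

# Stand-in implementation of the business rule under test:
def discount(is_member, order_over_100):
    if is_member:
        return 15 if order_over_100 else 10
    return 5 if order_over_100 else 0

# Each row of the table becomes one test case:
for is_member, over_100, expected in DECISION_TABLE:
    assert discount(is_member, over_100) == expected
```

Because the table enumerates every combination of conditions, it makes gaps in the business rules visible before the tests are even run.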
non-functional requirements
include
performance
reliability
Tester challenges
incomplete requirements
undocumented requirements
What does it require?
a controlled test environment with regard to
among other things
control of software versions
testware
test data
The system test is usually executed by the development organization in a (properly controlled) environment.
The test environment should correspond to the final target or production environment as much as possible
in order to minimize the risk of environment-specific failures not being found by testing.
Who executes the test?
by the development organization in a (properly controlled) environment
Acceptance testing
When does it occur?
When the development organization has performed its system test
and has corrected all or most defects,
the system will be delivered to the user or customer for acceptance testing (or user acceptance testing)
What does it base on?
user requirements
system requirements
use cases
business processes
risk analysis report
What does the test object include?
(the system under test)
The business, operational and maintenance processes (evaluated on a fully integrated system)
user procedures
applicable forms
reports
configuration data
What questions does it answer?
Can the system be released?
What, if any, are the outstanding (business) risks?
Has development met its obligations?
Who takes responsibility for this test?
most often the responsibility of the user or customer
although other stakeholders may be involved as well
What does execution of this test require?
a test environment that is, for most aspects, representative of the production environment (‘as-if production’).
What is the goal of acceptance testing?
to establish confidence in the system, part of the system or specific non-functional characteristics, e.g. usability, of the system.
What does acceptance testing focus on most?
a validation type of testing
whereby we are trying to determine whether the system is fit for purpose.
Finding defects should not be the main focus in acceptance testing.
Although it assesses the system’s readiness for deployment and use,
it is not necessarily the final level of testing
it may occur at more than just a single level
example
A Commercial Off The Shelf (COTS) software product may be acceptance tested when it is installed or integrated.
Acceptance testing of the usability of a component may be done during component testing.
Acceptance testing of a new functional enhancement may come before system testing.
Two main test types of acceptance testing of business-supporting systems
The user acceptance test
purpose
focuses mainly on the functionality thereby validating the fitness-for-use of the system by the business user
Who does it?
is performed by the users and application managers.
In terms of planning, it usually links tightly to the system test and will, in many cases, be organized partly overlapping in time.
The acceptance test for a subsystem that complies with the exit criteria of the system test can start while another subsystem may still be in the system test phase.
The operational acceptance test
(production acceptance test)
purpose
validates whether the system meets the requirements for operation
Who does it?
system administrators
When does it occur?
shortly before the system is released
include testing of
backup/restore
data load and migration tasks
disaster recovery
user management
maintenance tasks
periodic check of security vulnerabilities.
Other types of acceptance testing
contract acceptance testing
is performed against a contract’s acceptance criteria for producing custom-developed software.
Acceptance should be formally defined when the contract is agreed.
regulatory acceptance testing
(regulation acceptance testing)
Regulatory acceptance testing is performed against the regulations which must be adhered to, such as governmental, legal or safety regulations.
Two stages of acceptance testing of a system for the mass market
reason
testing it with each individual user or customer is not practical, or even possible, in some cases.
two stages
alpha testing
it is performed before beta testing
where does it occur?
developer's site
who is invited to use the system to provide feedback?
A cross-section of potential users
members of the developer’s organization
how does it work?
The user is invited to use the system
Developers observe the users and note problems
who carries out the test?
Developer
or independent test team
beta testing
(field testing)
how does it work?
The development organization sends the system to a cross-section of users, who install it and use it under real-world working conditions.
The users send records of incidents with the system to the development organization where the defects are repaired.
Other terms
factory acceptance testing
site acceptance testing
for systems that are tested before and after being moved to a customer’s site.
Test Types
Objectives
Compare four software test types
Functional
Non-functional
Structural
Change related
Recognize that functional and structural tests may occur at any test level
Identify and describe non-functional test types based on non-functional requirements
Identify and describe test types based on the analysis of a software system structure or architecture
Describe the purpose of confirmation testing and regression testing
Terms
Test type
A group of test activities aimed at testing a component or system focused on a specific test objective, e.g. functional test, usability test, regression test etc. A test type may take place on one or more test levels or test phases.
Functional testing
Testing based on an analysis of the specification of the functionality of a component or system.
Black-box testing
(Specification based testing)
Testing, either functional or non-functional, without reference to the internal structure of the component or system.
Functionality testing
The process of testing to determine the functionality of a software product.
Interoperability testing
The process of testing to determine the interoperability of a software product.
Security
Attributes of a software product that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data.
Security testing
Testing to determine the security of the software product.
Performance testing
The process of testing to determine the performance of a software product.
Load testing
A type of performance testing conducted to evaluate the behaviour of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system
Stress testing
A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified work loads, or with reduced availability of resources such as access to memory or servers.
Usability testing
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions
Maintainability testing
The process of testing to determine the maintainability of a software product.
Reliability testing
The process of testing to determine the reliability of a software product.
Portability testing
The process of testing to determine the portability of a software product.
Functionality
The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions
Robustness
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
Usability
The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
Efficiency
The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions
Maintainability
The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
Portability
The ease with which the software product can be transferred from one hardware or software environment to another.
Black-box (specification-based) test design technique
Procedure to derive and/or select test cases based on an analysis of the specification, either functional or nonfunctional, of a component or system without reference to its internal structure.
White-box testing
(Structural testing, Structural-based testing)
Testing based on an analysis of the internal structure of the component or system.
Code coverage
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
White-box (structural, structural-based) test design technique
Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
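The code coverage term defined above can be made concrete with a minimal, stdlib-only line tracer. This is only a sketch of what a coverage tool measures (statement coverage), not the API of any real coverage tool; the `absolute` function is an assumed example under test.

```python
import sys

def trace_lines(covered):
    # Build a trace function that records every executed line number.
    def tracer(frame, event, arg):
        if event == "line":
            covered.add(frame.f_lineno)
        return tracer
    return tracer

def measure_coverage(func, *args):
    # Run func under the line tracer and return the set of executed line numbers.
    covered = set()
    sys.settrace(trace_lines(covered))
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return covered

def absolute(x):
    # Example function under test: one branch per sign.
    if x < 0:
        return -x
    return x
```

Calling `measure_coverage(absolute, 3)` and `measure_coverage(absolute, -3)` yields different line sets, because each input exercises only one of the two return statements; full statement coverage needs both inputs.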
Testing of functions (functional testing)
Where is it described?
In a requirement specification
A functional specification
Use cases
In Agile
User stories
Implicit requirements
There may be some functions that are ‘assumed’ to be provided that are not documented that are also part of the requirement for a system,
though it is difficult to test against undocumented and implicit requirements.
Functional tests are based on these functions, described in documents or understood by the testers and may be performed at all test levels
Functional testing considers the specified behaviour and is often also referred to as black-box testing (specification based testing). This is not entirely true, since black-box testing also includes non-functional testing (see Section 2.3.2).
What does it focus on?
Suitability
Interoperability Testing
Security
Accuracy
Compliance
What are the two perspectives from which it can be done?
Requirement-based testing
uses a specification of the functional requirements for the system as the basis for designing tests.
A good way to start is to use the table of contents of the requirements specification as an initial test inventory or list of items to test (or not to test).
We should also prioritize the requirements based on risk criteria (if this is not already done in the specification) and use this to prioritize the tests.
This will ensure that the most important and most critical tests are included in the testing effort.
Business-process-based testing
uses knowledge of the business processes.
Business processes describe the scenarios involved in the day-to-day business use of the system.
Example
a personnel and payroll system may have a business process along the lines of:
someone joins the company,
he or she is paid on a regular basis,
and he or she finally leaves the company.
Use cases originate from object-oriented development, but are nowadays popular in many development life cycles.
They also take the business processes as a starting point, although they start from the tasks to be performed by users.
Use cases are a very useful basis for test cases from a business perspective.
Technique
The techniques used for functional testing are often specification-based, but experienced-based techniques can also be used (see Chapter 4 for more on test techniques).
Test conditions and test cases are derived from the functionality of the component or system.
As part of test design, a model may be developed, such as a process model, state transition model or a plain-language specification.
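A state transition model, as mentioned above, can drive test derivation directly: one test case per valid transition. The account states and events below are illustrative assumptions, and `apply_event` is a stand-in for the system under test.

```python
# Assumed state model for a hypothetical user account:
# (current state, event) -> next state.
TRANSITIONS = {
    ("new", "activate"): "active",
    ("active", "suspend"): "suspended",
    ("suspended", "reinstate"): "active",
    ("active", "close"): "closed",
}

def derive_transition_tests(transitions):
    # One test case per valid transition:
    # (start state, event, expected end state).
    return [(start, event, end) for (start, event), end in transitions.items()]

def apply_event(state, event, transitions):
    # Stand-in for the system under test: follow a valid transition,
    # otherwise stay in the current state (invalid events are ignored).
    return transitions.get((state, event), state)
```

Executing each derived case against `apply_event` gives full valid-transition coverage; invalid-event cases (e.g. activating a closed account) can be added on top for negative testing.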
Testing of software product characteristics
(Non-functional testing)
What is it?
is the testing of the quality characteristics, or non-functional attributes, of the system (or component or integration group).
Here we are interested in how well or how fast something is done.
We are testing something that we need to measure on a scale of measurement, for example time to respond.
What does it include?
Performance testing
Load testing
Stress testing
Usability testing
Maintainability testing
Reliability testing
Portability testing
Software quality characteristics and sub-characteristics
Functionality
Suitability
Accuracy
Security
Interoperability
compliance
Reliability
Maturity (Robustness)
Fault-tolerance
Recoverability
Compliance
Usability
Understandability
Learnability
Operability
Attractiveness
Compliance
Efficiency
Time behaviour (Performance)
Resource utilization
Compliance
Maintainability
Analyzability
Changeability
Stability
Testability
Compliance
Portability
Adaptability
Installability
Co-existence
Replaceability
Compliance
Misconception
A common misconception is that non-functional testing occurs only during higher levels of testing such as system test, system integration test, and acceptance test
In fact
non-functional testing may be performed at all test levels; the higher the level of risk associated with each type of non-functional testing, the earlier in the life cycle it should occur.
Ideally, non-functional testing involves tests that quantifiably measure characteristics of the systems and software.
Example
in performance testing we can measure transaction throughput, resource utilization, and response times
Generally, non-functional testing defines expected results in terms of the external behaviour of the software. This means that we typically use black-box test design techniques.
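The point above about quantifiable measurement (e.g. response times) can be sketched as a minimal timing harness. `transaction` is an assumed stand-in for the operation under test, and the 1-second budget is an illustrative threshold, not a standard value.

```python
import time

def measure_response_time(func, *args, repeats=5):
    # Return the best-of-N wall-clock duration of one call, in seconds.
    # Taking the minimum reduces noise from other processes.
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        best = min(best, time.perf_counter() - start)
    return best

def transaction():
    # Stand-in for the operation whose performance is being tested.
    sum(range(1000))
```

A non-functional test then becomes an assertion against a measured value on a scale (here seconds), e.g. `measure_response_time(transaction) < 1.0`, rather than a pass/fail check of functional behaviour.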
Testing of software structure/architecture
What is it?
Structural testing is often referred to as ‘white-box’ or ‘glass-box’ testing because we are interested in what is happening ‘inside the box’.
What is it used for?
is most often used as a way of measuring the thoroughness of testing through the coverage of a set of structural elements or coverage items.
Which test level does it occur?
It can occur at any test level
although it tends to be applied mostly at the component and integration levels, and is generally less likely at higher test levels, except for business-process testing.
At the component integration level it may be based on the architecture of the system, such as a calling hierarchy.
The test basis for system, system integration or acceptance testing could be a business model or menu structure.
Testing related to changes
(Confirmation and regression testing)
Confirmation testing (re-testing)
What is it?
When a test fails and we determine that the cause of the failure is a software defect,
the defect is reported,
and we can expect a new version of the software that has had the defect fixed.
In this case we will need to execute the test again to confirm that the defect has indeed been fixed.
This is known as confirmation testing (also known as re-testing).
How do we prepare for confirmation testing?
ensure that the test is executed in exactly the same way as it was the first time, using the same inputs, data and environment
What does it mean when the re-test passes?
we now know that at least one part of the software is correct – where the defect was
But this is not enough. The fix may have introduced or uncovered a different defect elsewhere in the software.
The way to detect these ‘unexpected side-effects’ of fixes is to do regression testing.
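The confirmation-testing steps above can be sketched as follows. The leap-year defect and its fix are an invented example: the re-test repeats the exact failing input with the exact expected result, which is the defining property of confirmation testing.

```python
# Hypothetical defect: the original leap-year check ignored the 400-year
# rule, so is_leap_year(1900) wrongly returned True. Below is the fixed code.
def is_leap_year(year):
    # Gregorian rule: divisible by 4, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def confirmation_test():
    # Re-test: exactly the same input (1900) and expected result (False)
    # as the test that originally failed and triggered the defect report.
    return is_leap_year(1900) is False
```

A passing `confirmation_test()` only shows that this one defect is fixed; regression tests over the other inputs (e.g. 1996, 2000) are still needed to catch side effects of the change.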
Regression testing
What is it?
Like confirmation testing, regression testing involves executing test cases that have been executed before.
The difference is that, for regression testing, the test cases probably passed the last time they were executed (compare this with the test cases executed in confirmation testing – they failed the last time).
What is the purpose of regression testing?
to verify that modifications in the software or the environment have not caused unintended adverse side effects and that the system still meets its requirements
Regression test suite
(Regression test pack)
What is it?
This is a set of test cases that is specifically used for regression testing.
What does it do?
It collectively exercises most functions (certainly the most important ones) in a system, but does not test any one in detail.
It is appropriate to have a regression test suite at every level of testing (component testing, integration testing, system testing, etc.).
All of the test cases in a regression test suite would be executed every time a new version of software is produced and this makes them ideal candidates for automation.
If the regression test suite is very large it may be more appropriate to select a subset for execution.
When would we execute regression tests?
Regression tests are executed whenever the software changes, either as a result of fixes or new or changed functionality.
It is also a good idea to execute them when some aspect of the environment changes, for example when a new version of a database management system is introduced or a new version of a source code compiler is used.
Maintenance of a regression test suite
Maintenance of a regression test suite should be carried out so it evolves over
time in line with the software.
As new functionality is added to a system new regression tests should be added and as old functionality is changed or removed so too should regression tests be changed or removed.
As new tests are added a regression test suite may become very large.
If all the tests have to be executed manually it may not be possible to execute them all every time the regression suite is used.
In this case a subset of the test cases has to be chosen.
This selection should be made in light of the latest changes that have been made to the software.
Sometimes a regression test suite of automated tests can become so large that it is not always possible to execute them all.
It may be possible and desirable to eliminate some test cases from a large regression test suite for example if they are repetitive (tests which exercise the same conditions) or can be combined (if they are always run together).
Another approach is to eliminate test cases that have not found a defect for a long time (though this approach should be used with some care!).
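The change-based subset selection described above can be sketched as follows. The test names and the test-to-module traceability mapping are assumptions for illustration; in practice such a mapping might come from coverage data or documentation.

```python
# Assumed traceability: which modules each regression test exercises.
TEST_TO_MODULES = {
    "test_login": {"auth", "session"},
    "test_checkout": {"cart", "payment"},
    "test_profile": {"auth", "profile"},
    "test_search": {"catalog"},
}

def select_regression_subset(changed_modules, test_to_modules):
    # Keep only the tests that touch at least one changed module,
    # sorted by name for a stable execution order.
    changed = set(changed_modules)
    return sorted(
        name for name, modules in test_to_modules.items()
        if modules & changed
    )
```

For example, if only the `auth` module changed, the subset contains just the two tests that exercise it, and the `catalog` and `payment` tests can be deferred to a full regression run.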