Testing
Test Design techniques
Identifying test conditions and designing test cases
Formality of test documentation
Test analysis: identifying test conditions
Why is traceability important?
A set of tests that has run OK in the past has started to have serious problems.
Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification. We have the list of the tests that have passed - was every requirement tested?
The requirements for a given function or feature have changed
Test design: specifying test cases
Test implementation: specifying test procedures or scripts
Categories of test design techniques
Recall reasons that both specification-based (black-box) and structure-based (white-box) approaches to test case design are useful, and list the common techniques for each. (K1)
Explain the characteristics and differences between specification-based testing, structure-based testing and experience-based testing. (K2)
Specification-based or black-box techniques
Equivalence partitioning: a good all-round specification-based (black-box) technique. It can be applied at any level of testing and is often a good technique to use first.
Equivalence partitions: also known as equivalence classes - the two terms mean exactly the same thing.
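A minimal sketch of equivalence partitioning, assuming a hypothetical field that accepts ages 18 to 65: that gives one valid partition (18-65) and two invalid ones (below 18, above 65), and one representative value per partition is normally enough.

    # Hypothetical function under test: accepts ages in the valid partition 18-65.
    def is_eligible(age: int) -> bool:
        return 18 <= age <= 65

    # One representative test value per equivalence partition:
    assert is_eligible(10) is False   # invalid partition: below 18
    assert is_eligible(40) is True    # valid partition: 18-65
    assert is_eligible(70) is False   # invalid partition: above 65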
Boundary value analysis is based on testing at the boundaries between partitions. If you have ever done 'range checking', you were probably using the boundary value analysis technique, even if you weren't aware of it. Note that we have both valid boundaries (in the valid partitions) and invalid boundaries (in the invalid partitions).
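Boundary value analysis picks the values on each side of every boundary; a sketch, reusing the assumed is_eligible function from above:

    def is_eligible(age: int) -> bool:   # same hypothetical function as above
        return 18 <= age <= 65

    assert is_eligible(17) is False   # invalid boundary value, just below the valid partition
    assert is_eligible(18) is True    # valid lower boundary
    assert is_eligible(65) is True    # valid upper boundary
    assert is_eligible(66) is False   # invalid boundary value, just above the valid partition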
Structure-based or white-box techniques
Structure-based testing: see white-box testing
Code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage
Decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage
Statement coverage: The percentage of executable statements that have been exercised by a test suite (a short coverage sketch follows these definitions)
Structural testing: see white-box testing
White-box testing: Testing based on an analysis of the internal structure of the component or system
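To illustrate why 100% decision coverage implies 100% statement coverage but not the reverse, here is a minimal sketch with a hypothetical apply_discount function:

    def apply_discount(total: float) -> float:
        # Hypothetical rule: 10% discount on orders over 100.
        discount = 0.0
        if total > 100:
            discount = total * 0.10
        return total - discount

    # This single test executes every statement (100% statement coverage)...
    assert apply_discount(200) == 180.0

    # ...but it exercises only the True outcome of the decision. For 100%
    # decision coverage the False outcome must be exercised too:
    assert apply_discount(50) == 50.0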
Experience-based techniques
Error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
Exploratory testing: An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests
Fault attack: see attack
Attack: Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur
Choosing a test technique
Tester knowledge/experience: How much testers know about the system and about testing techniques will clearly influence their choice of testing techniques.
Models used: Since testing techniques are based on models, the models available will to some extent govern which testing techniques can be used.
Likely defects: Knowledge of the likely defects will be very helpful in choosing testing techniques
Test objective: If the test objective is simply to gain confidence that the software will cope with typical operational tasks, then use cases would be a sensible approach
Documentation: Whether or not documentation exists and whether or not it is up to date will affect the choice of testing techniques
Life cycle model: A sequential life cycle model will lend itself to the use of more formal techniques, whereas an iterative life cycle model may be better suited to using an exploratory testing approach
Fundamentals of testing
Why is testing necessary?
Keyword
bug
defect
error
failure
fault
mistake
quality
risk
software
testing
exhaustive testing
Testing is necessary because we all make mistakes.
Software systems context
Testing is context dependent
A risk is something that has not happened yet and it may never happen; it is a potential problem.
Causes of software defects
Do our mistakes matter?
what might go wrong
errors in the specification, design and implementation of the software and system;
errors in use of the system;
environmental conditions;
intentional damage;
potential consequences of earlier errors, intentional damage, defects and failures.
When do defects arise?
cost of defects
What is testing?
Is the software defect-free?
The driving test - an analogy for software testing
We use the words 'test' and 'testing' in everyday life, and earlier we said testing could be described as 'checking the software is OK'.
The test is therefore carried out to show that the driver satisfies the requirements for driving and to demonstrate that they are fit to drive.
Defining software testing
First, let's look at testing as a process
Process - Testing is a process rather than a single activity - there are a series of activities involved.
All life cycle activities (chapter 4)
Both static and dynamic (chapter 3)
Planning - Activities take place before and after test execution.
Preparation - We need to choose what testing we'll do, by selecting test conditions and designing test cases. (chapter 4)
Evaluation - As well as executing the tests, we must check the results and evaluate the software under test and the completion criteria, which help us decide whether we have finished testing and whether the software product has passed the tests.
Software products and related work products - We don't just test code
The second part of the definition covers some of the objectives for testing - the reasons why we do it
Determine that (software products) satisfy specified requirements - Some of the testing we do is focused on checking products against the specification for the product;
Demonstrate that (software products) are fit for purpose - This is slightly different to the point above; after all the specified requirements might be wrong or incomplete.
Detect defects - We most often think of software testing as a means of detecting faults or defects that in operational use will cause failures.
Software test and driving test compared
Planning and preparation
Static and dynamic - Both dynamic (driving the car or executing the software) and static (questions to the driver or a review of the software) tests are useful.
Evaluation
Determine that they satisfy specified requirements
Demonstrate that they are fit for purpose
Detect defects - The examiner and tester both look for and log faults.
Testing principles
Testing shows presence of defects
Exhaustive testing is impossible
Early testing
Defect clustering
Pesticide paradox
Testing is context dependent
Absence-of-errors fallacy
Fundamental test process
Test planning and control
Determine the scope and risks and identify the objectives of testing
Determine the test approach (techniques, test items, coverage, identifying and interfacing with the teams involved in testing, testware)
Implement the test policy and/or the test strategy
Determine the required test resources (e.g. people, test environment, PCs)
Schedule test analysis and design tasks, test implementation, execution and evaluation
Determine the exit criteria
Test analysis and design
Review the test basis (such as the product risk analysis, requirements, architecture, design specifications, and interfaces)
Identify test conditions based on analysis of test items, their specifications, and what we know about their behavior and structure.
Design the tests (Chapter 4)
Evaluate testability of the requirements and system.
Design the test environment set-up and identify any required infrastructure and tools.
Test implementation and execution
Implementation
Develop and prioritize our test cases, using the techniques you'll see in Chapter 4, and create test data for those tests.
Create test suites from the test cases for efficient test execution.
Implement and verify the environment.
Execution
Execute the test suites and individual test cases, following our test procedures.
Log the outcome of test execution and record the identities and versions of the software under test, test tools and testware.
Compare actual results (what happened when we ran the tests) with expected results (what we anticipated would happen).
Where there are differences between actual and expected results, report discrepancies as incidents.
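A minimal sketch of this execute-compare-log cycle, with hypothetical test-case data and software under test:

    def is_eligible(age: int) -> bool:   # hypothetical software under test
        return 18 <= age <= 65

    test_cases = [
        {"id": "TC-01", "input": 40, "expected": True},
        {"id": "TC-02", "input": 70, "expected": False},
    ]

    incidents = []
    for tc in test_cases:
        actual = is_eligible(tc["input"])            # execute the test case
        passed = actual == tc["expected"]            # compare actual with expected
        print(f"{tc['id']}: expected={tc['expected']} actual={actual} "
              f"-> {'pass' if passed else 'fail'}")  # log the outcome
        if not passed:
            incidents.append(tc["id"])               # report discrepancy as an incident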
The psychology of testing
Independent testing - who is a tester?
tests by the person who wrote the item under test;
tests by another person within the same team, such as another programmer;
tests by a person from a different organizational group, such as an independent test team;
tests designed by a person from a different organization or company, such as outsourced testing or certification by an external body.
Why do we sometimes not get on with the rest of the team?
Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it.
Explain that by knowing about this now we can work round it or fix it so the delivered system is better for the customer.
Start with collaboration rather than battles. Remind everyone of the common goal of better quality systems.
Static testing
static testing advantages
early feedback
increase development productivity
low rework costs
increased awareness of quality issues
Planning
Kick-off
Preparation
Review meeting
Rework
Follow-up
focuses identified for reviews
focus on standards, e.g. naming conventions
focus on related documents at the same level, e.g. interfaces
focus on higher-level documents, e.g. does the design comply to requirements
focus on usage, e.g. for testability or maintainability
performing the entry check
The document to be reviewed is available with line numbers.
The document has been cleaned up by running any automated checks that apply.
short check of a product by the moderator
The documents needed for the inspection are stable and available.
The document author is prepared to join the review team and feels confident with the quality of the document.
Roles and responsibilities
The author: learns as much as possible with regard to improving the quality of the document
The scribe: records each defect mentioned and any suggestions for process improvement
The moderator: leads the review process
The reviewers: check any material for defects, mostly prior to the meeting
The manager: allocates time in project schedules and determines whether process objectives have been met
Testing throughout the software life cycle
Software Development Models
V-Models
component testing: searches for defects in and verifies the functioning of software components (e.g. modules, programs, objects, classes etc.) that are separately testable;
integration testing: tests interfaces between components, interactions with different parts of a system such as an operating system, file system and hardware, or interfaces between systems;
system testing: concerned with the behavior of the whole system/product as defined by the scope of a development project or product. The main focus of system testing is verification against specified requirements;
acceptance testing: validation testing with respect to user needs, requirements, and business processes conducted to determine whether or not to accept the system
Iterative life cycles
Rapid Application Development
Agile development
It promotes pair programming and shared code ownership amongst the developers.
It states that component test scripts shall be written before the code is written and that those tests should be automated (see the test-first sketch after this list).
It demands an on-site customer for continual feedback and to define and carry out functional acceptance testing.
It states that integration and testing of the code shall happen several times a day.
It promotes the generation of business stories to define the functionality.
It states that we always implement the simplest solution to meet today's problems.
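A minimal sketch of the test-first practice, using a hypothetical total_price function: the automated component test is written first and fails until the simplest passing code exists.

    # Test written first (it fails until total_price below exists and is correct):
    def test_total_price():
        assert total_price([3.0, 4.5]) == 7.5
        assert total_price([]) == 0.0

    # Simplest solution that meets today's problem:
    def total_price(prices):
        return sum(prices)

    test_total_price()   # a runner such as pytest would collect this automatically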
Test within a life cycle model
the analysis and design of tests for a given test level should begin during the corresponding development activity;
testers should be involved in reviewing documents as soon as drafts are available in the development cycle;
for every development activity there is a corresponding testing activity;
each test level has test objectives specific to that level;
Test types: the targets of testing
Testing of function (functional testing)
Testing functionality can be done from two perspectives: requirements-based or business-process-based.
The techniques used for functional testing are often specification-based, but experience-based techniques can also be used
Functional testing considers the specified behavior and is often also referred to as black-box testing
Function (or functionality) testing can, based upon ISO 9126, be done focusing on suitability, interoperability, security, accuracy and compliance.
Testing of software product characteristics (non-functional testing)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing.
This standard (ISO 9126) is gaining increasing recognition in the industry, enabling development, testing and their stakeholders to use a common terminology for quality characteristics and thereby for non-functional testing
usability, which is divided into the sub-characteristics understandability, learnability, operability, attractiveness and compliance;
efficiency, which is divided into time behavior (performance), resource utilization and compliance;
reliability, which is defined further into the sub-characteristics maturity (robustness), fault-tolerance, recoverability and compliance;
maintainability, which consists of five sub-characteristics: analyzability, changeability, stability, testability and compliance;
functionality, which consists of five sub-characteristics: suitability, accuracy, security, interoperability and compliance; this characteristic deals with functional testing as described in Section 2.3.1;
portability, which also consists of five sub-characteristics: adaptability, installability, co-existence, replaceability and compliance.
Non-functional testing, like functional testing, is performed at all test levels
Testing of software structure/architecture (structural testing)
Structural testing is most often used as a way of measuring the thoroughness of testing through the coverage of a set of structural elements or coverage items
The techniques used for structural testing are structure-based techniques, also referred to as white-box techniques
Control flow models are often used to support structural testing
Testing related to changes (confirmation and regression testing)
Confirmation testing (re-testing)
When a test fails and we determine that the cause of the failure is a software defect, the defect is reported, and we can expect a new version of the software that has had the defect fixed.
Regression testing
For regression testing, the test cases probably passed the last time they were executed (compare this with the test cases executed in confirmation testing - they failed the last time)
The final target of testing is the testing of changes.
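A minimal sketch of the confirmation/regression distinction above, using hypothetical pytest markers (the confirmation and regression labels are assumptions here, not pytest built-ins; registering them in pytest.ini avoids warnings):

    import pytest

    def validate_amount(amount: int) -> bool:   # hypothetical software under test
        return amount > 0

    # Confirmation test (re-test): the test that exposed the reported defect,
    # re-run on the fixed version to confirm the fix.
    @pytest.mark.confirmation
    def test_rejects_negative_amount():
        assert validate_amount(-5) is False

    # Regression test: passed on the previous version; re-run to show the
    # fix has not broken existing behavior.
    @pytest.mark.regression
    def test_accepts_positive_amount():
        assert validate_amount(10) is True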
Test levels
Component testing
Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system
Component testing may include testing of functionality and specific non-functional characteristics
Component testing, also known as unit, module or program testing, searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes) that is separately testable.
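Where a component's collaborators are not yet available, component testing can replace them with stubs or mocks; a minimal sketch with a hypothetical get_display_name component and a stubbed repository:

    from unittest.mock import Mock

    def get_display_name(user_id, repository):   # hypothetical component under test
        user = repository.find(user_id)
        return f"{user['first']} {user['last']}"

    def test_get_display_name_in_isolation():
        stub_repo = Mock()                        # stands in for the real repository
        stub_repo.find.return_value = {"first": "Ada", "last": "Lovelace"}
        assert get_display_name(42, stub_repo) == "Ada Lovelace"
        stub_repo.find.assert_called_once_with(42)

    test_get_display_name_in_isolation()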
Integration testing
There may be more than one level of integration testing and it may be carried out on test objects of varying size
At each stage of integration, testers concentrate solely on the integration itself.
Integration testing tests interfaces between components, interactions with different parts of a system such as an operating system, file system and hardware, or interfaces between systems.
System testing
System testing should investigate both functional and non-functional requirements of the system
System testing requires a controlled test environment with regard to, amongst other things, control of the software versions, testware and the test data
System testing is concerned with the behavior of the whole system/product as defined by the scope of a development project or product.
Acceptance testing
The goal of acceptance testing is to establish confidence in the system, part of the system or specific non-functional characteristics, e.g. usability, of the system
Other types of acceptance testing that exist are contract acceptance testing and compliance acceptance testing
When the development organization has performed its system test and has corrected all or most defects, the system will be delivered to the user or customer for acceptance testing
Maintenance testing
Impact analysis and regression testing
testing the changes
regression tests to show that the rest of the system has not been affected by the maintenance work.
Triggers for maintenance testing
adaptive modifications (adapting software to environmental changes such as new hardware, new systems software or new legislation)
corrective planned modifications (deferrable correction of defects)
perfective modifications (adapting software to the user's wishes, for instance by supplying new functions or enhancing performance)
formal review step