SWT301
Chapter 4: Test design techniques
2. Testing techniques - Dynamic testing (Chapter 4.2)
2.1. Specification-based (or black-box) techniques (Chapter 4.3)
2.1.3. Decision tables
A good way to deal with combinations of things
Also referred to as a 'cause-effect' table
Provide a systematic way of stating complex business rules
Can be used in test design
Aid the systematic selection of effective test cases
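As a rough illustration, a decision table can be held as a mapping from condition combinations (causes) to actions (effects). The discount policy below is a made-up example, not from the text:

```python
# Sketch of a decision table for a hypothetical discount policy.
# Conditions (causes): has_loyalty_card, has_coupon.
# Action (effect): discount percentage. All values are illustrative.
RULES = {
    # (has_loyalty_card, has_coupon): discount %
    (False, False): 0,
    (False, True): 10,
    (True, False): 15,
    (True, True): 15,  # assumed business rule: discounts do not combine
}

def discount(has_loyalty_card: bool, has_coupon: bool) -> int:
    """Look up the action for one combination of conditions."""
    return RULES[(has_loyalty_card, has_coupon)]

# Each column (rule) of the table yields one test case, so four rules
# give systematic coverage of every combination of conditions:
test_cases = [(conditions, action) for conditions, action in RULES.items()]
```

Selecting one test per rule exercises every business-rule combination exactly once, which is the systematic selection the technique aims for.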
2.1.4. State transition
Used where some aspect of the system can be described in what is called a 'finite state machine'
A finite state system is often shown as a state diagram
A state transition model has four basic parts
The states that the software may occupy (open/closed or funded/insufficient funds)
The transitions from one state to another (not all transitions are allowed)
The actions that result from a transition (an error message or being given your cash)
The events that cause a transition (closing a file or withdrawing money)
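The four basic parts above can be sketched as a transition table; the file open/close states, events and action strings below are illustrative:

```python
# Sketch of a finite state machine: states, events, transitions, actions.
# (current state, event) -> (next state, action); entries are illustrative.
TRANSITIONS = {
    ("closed", "open_file"):  ("open", "file handle returned"),
    ("open", "close_file"):   ("closed", "buffers flushed"),
    ("open", "read"):         ("open", "data returned"),
}

def step(state: str, event: str) -> tuple:
    """Apply an event to a state; disallowed transitions give an error."""
    # Not all transitions are allowed: e.g. reading a closed file fails.
    return TRANSITIONS.get((state, event), (state, "error message"))
```

Test cases can then be derived by walking valid transitions and by attempting the disallowed ones.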
2.1.2. Boundary value analysis
Based on testing at the boundaries between partitions.
"range checking"
Valid boundaries (in the valid partitions)
Invalid boundaries (in the invalid partitions)
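For an integer range, this 'range checking' can be sketched as two-value boundary analysis; the 1..100 range is an assumed example:

```python
def boundary_values(low: int, high: int):
    """Return (valid, invalid) boundary test values for the inclusive
    integer range low..high -- a sketch of two-value boundary analysis."""
    valid = [low, high]            # boundaries inside the valid partition
    invalid = [low - 1, high + 1]  # just outside, in the invalid partitions
    return valid, invalid

valid, invalid = boundary_values(1, 100)
# valid boundaries: 1 and 100; invalid boundaries: 0 and 101
```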
2.1.5. Use case testing
Use case testing is a technique that helps us identify test cases
A use case is a description of a particular use of the system by an actor
Use cases are defined in terms of the actor, not the system
Use cases can uncover integration defects
Use cases describe the process flows through a system based on its most likely use.
System requirements can also be specified as a set of use cases
2.1.1. Equivalence partitioning
Can be applied at any level of testing and is often a good technique to use first
A common sense approach to testing
Equivalence partitions are also known as equivalence classes - the two terms mean exactly the same thing
Requires that we need test only one condition from each partition
Where to apply technique?
at all levels of testing: component testing, component integration testing, system testing and acceptance testing
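The 'one condition per partition' idea can be sketched for an assumed input field that accepts 1..100 (the bounds are illustrative, not from the text):

```python
# Sketch: equivalence partitions for an input where 1..100 is valid.
# The partition bounds are an assumed example.
partitions = {
    "invalid_below": range(-1000, 1),   # values below the valid range
    "valid":         range(1, 101),     # the valid partition, 1..100
    "invalid_above": range(101, 1000),  # values above the valid range
}

# The technique requires testing only one representative per partition:
representatives = {name: min(r) for name, r in partitions.items()}
```

Three test values then stand in for the whole input domain, one per partition.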
2.2. Structure-based (white-box) testing techniques (Chapter 4.4)
2.2.2. Decision
Decision coverage = (Number of decision outcomes exercised / Total number of decision outcomes) x 100%
Black-box testing: 40% - 60% decision coverage
Typical ad hoc testing: 20% decision coverage
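The decision coverage formula above translates directly into code; the outcome counts below are a made-up run, not from the text:

```python
def decision_coverage(outcomes_exercised: int, total_outcomes: int) -> float:
    """Decision coverage = (decision outcomes exercised /
    total decision outcomes) x 100%."""
    return outcomes_exercised / total_outcomes * 100

# Assumed example: one 'if' and one 'while' give 4 decision outcomes
# (true/false for each); a test run exercising 3 of them scores 75%:
coverage = decision_coverage(3, 4)  # 75.0
```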
2.2.3. Condition
2.2.1. Statement
Statement coverage = (Number of statements exercised / Total number of statements) x 100%
Black-box testing: 60% - 75% statement coverage
Typical ad hoc testing: around 30% statement coverage
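The statement coverage formula can likewise be demonstrated by instrumenting a toy function; the statement labels and grading rule are illustrative assumptions:

```python
# Sketch: measuring statement coverage by tagging three representative
# statements of a toy function and recording which ones execute.
executed = set()

def grade(score: int) -> str:
    executed.add("s1"); result = "fail"      # statement 1
    if score >= 50:
        executed.add("s2"); result = "pass"  # statement 2 (conditional)
    executed.add("s3"); return result        # statement 3

TOTAL_STATEMENTS = 3

grade(30)  # exercises s1 and s3 only; s2 is skipped
statement_coverage = len(executed) / TOTAL_STATEMENTS * 100  # ~66.7%
```

A second test with a score of 50 or more would execute s2 and raise coverage to 100%.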
2.2.4. Multiple condition
Where to apply technique?
at all levels of testing: component testing, component integration testing, system testing and acceptance testing
Purposes:
structural test case design
test coverage measurement
2.3. Experience-based testing techniques (Chapter 4.5)
2.3.1. Error guessing
A technique that should always be used as a complement to other more formal techniques
No rules for error guessing
2.3.2. Exploratory testing
A hands-on approach in which testers are involved in minimum planning and maximum test execution.
The test design and test execution activities are performed in parallel typically without formally documenting the test conditions, test cases or test scripts.
Test logging is undertaken as test execution is performed, documenting the key aspects of what is tested, any defects found and any thoughts about possible further testing.
A key aspect of exploratory testing is learning: learning by the tester about the software
its strengths
its weaknesses
its use
Where to apply technique?
at some levels of testing: where there is no specification, where the specification is inadequate or out of date, or for low-risk systems
3. Choosing a test technique (Chapter 4.6)
Factors
Internal
Models used
Tester knowledge / experience
Likely defects
Test objective
Documentation
Life cycle model
External
Customer / contractual requirements
Type of system
Risk
Regulatory requirements
Time and budget
1. Identifying test conditions and designing test cases (Chapter 4.1)
1.3. Test design: specifying test cases
Test conditions can be rather vague, covering quite a large range of possibilities
One test case covers a number of conditions
A test case needs to have input values
A test assesses whether the system does what it is supposed to do.
1.1. Formality of test documentation
The level of formality is also influenced by your organization
Testing may be performed with varying degrees of formality
1.4. Test implementation: specifying test procedures or scripts
To group the test cases in a sensible way for executing them and to specify the sequential steps that need to be done to run the test
The document that describes the steps to be taken in running a set of tests is called a test procedure in IEEE 829, and is often also referred to as a test script
The test procedures, or test scripts, are then formed into a test execution schedule that specifies which procedures are to be run first - a kind of superscript.
1.2. Test analysis: identifying test conditions
It could be a system requirement, a technical specification, the code itself (for structural testing), or a business process.
A test condition is simply something that we could test.
The process of looking at something that can be used to derive test information
Chapter 1: Fundamentals of testing
1.1 Why is testing necessary?
1.1.2 Software systems context
When we discuss risks, we need to consider how likely it is that the problem would occur and the impact if it happens
Testing Principle - Testing is context dependent: testing is done differently in different contexts
1.1.4 Role of testing in software development, maintenance and operations
Give reasons why testing is necessary by giving examples.
1.1.5 Testing and quality
The more rigorous our testing, the more defects we'll find
Organizations should consider testing as part of a larger quality assurance strategy
1.1.3 Causes of software defects
In some cases, where the defect is too serious, the system may have to be de-installed completely.
It is quite often the case that defects detected at a very late stage are much more expensive to fix
People also design and build the software and they can make mistakes during the design and build.
1.1.1 Introduction
We need to check everything and anything we produce because things can always go wrong
Describe, with examples, the way in which a defect in software can cause harm to a person, to the environment or to a company.
1.1.6 How much testing is enough?
Testing Principle - Exhaustive testing is impossible: testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases
In testing we need to decide how we will focus our testing, based on the risks.
1.4 FUNDAMENTAL TEST PROCESS
1.4.6 Test closure activities
Test closure activities include the following major tasks
Finalize and archive testware, such as scripts
Hand over testware to the maintenance organization
Evaluate how the testing went and analyze lessons learned for future releases and projects.
Check which planned deliverables we actually delivered and ensure all incident reports have been resolved through defect repair or deferral
1.4.2 Test planning and control
Test plan
Determine the test approach
Determine the scope and risks and identify the objectives of testing
Determine the required test resources
Determine the exit criteria
Schedule test analysis and design tasks, test implementation, execution and evaluation
Implement the test policy and/or the test strategy
Test control
Provide information on testing
Monitor and document progress, test coverage and exit criteria
Measure and analyze the results of reviews and testing
Initiate corrective actions
Make decisions
1.4.1 Introduction
we can divide the activities within the fundamental test process into the following basic steps:
Test closure activities
Implementation and execution
Analysis and design
Planning and control
Evaluating exit criteria and reporting
1.4.4 Test implementation and execution
Execution
Where there are differences between actual and expected results, report discrepancies as incidents
Repeat test activities as a result of action taken for each discrepancy
Log the outcome of test execution and record the identities and versions of the software under test, test tools and testware
Compare actual results
Execute the test suites and individual test cases
Implementation
Implement and verify the environment
Develop and prioritize our test cases
Create test suites from the test cases for efficient test execution
1.4.3 Test analysis and design
Evaluate testability of the requirements and system
Design the test environment set-up and identify any required infrastructure and tools
Identify test conditions based on analysis of test items, their specifications, and what we know about their behavior and structure
Design the tests
Review the test basis
1.4.5 Evaluating exit criteria and reporting
Evaluating exit criteria has the following major tasks
Write a test summary report for stakeholders: It is not enough that the testers know the outcome of the test
Check test logs against the exit criteria specified in test planning
Assess if more tests are needed or if the exit criteria specified should be changed
1.3 TESTING PRINCIPLES
Early testing
Pesticide paradox
Defect clustering
Testing shows presence of defects
Exhaustive testing is impossible
Testing is context dependent
Absence-of-errors fallacy
1.2 WHAT IS TESTING?
Testing can show that defects are present, but cannot prove that there are no defects.
Preventing defects.
Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.
Finding defects
Gaining confidence in and providing information about the level of quality
1.5 THE PSYCHOLOGY OF TESTING
1.5.2 Why do we sometimes not get on with the rest of the team?
we need to be careful when we are reviewing and when we are testing:
Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it
Explain that by knowing about this now we can work round it or fix it so the delivered system is better for the customer
Remind everyone of the common goal of better quality systems
1.5.1 Independent testing
Tests designed by a person from a different organization or company
Tests by a person from a different organizational group, such as an independent test team
Tests by the person who wrote the item under test
Tests by another person within the same team, such as another programmer
Chapter 3: Static Techniques
3.2 Review process
3.2.3 Types of review
Inspection
Remove defects efficiently, as early as possible;
Train new employees in the organization's development process;
Improve product quality, by producing documents with a higher level of quality;
Help the author to improve the quality of the document under inspection;
Create a common understanding by exchanging information among the inspection participants;
Technical review
Ensure, at an early stage, that technical concepts are used correctly;
Inform participants of the technical content of the document.
Assess the value of technical concepts and alternatives in the product and project environment.
Establish consistency in the use and representation of technical concepts.
Walkthrough
-To present the document
-To explain the contents of the document
-To establish a common understanding of the document
-To examine and discuss the validity of proposed solutions and the viability of alternatives, establishing consensus.
3.2.1 PHASES OF A FORMAL REVIEW
Follow-up.
The moderator is responsible for ensuring that satisfactory actions have been taken on all (logged) defects, process improvement suggestions and change requests.
Kick-off
To get everybody on the same wavelength regarding the document under review and to commit to the time that will be spent on checking.
Rework
Based on the defects detected, the author will improve the document under review step by step.
Review meeting
The meeting typically consists of the following elements (partly depending on the review type): logging phase, discussion phase and decision phase
Preparation
All issues are recorded, preferably using a logging form.
Planning
The moderator always performs an entry check and defines at this stage formal exit criteria.
3.2.2 ROLES AND RESPONSIBILITIES
Within a review team, four types of participants can be distinguished: moderator, author, scribe and reviewer.
-The moderator
-The author
-The scribe
-The reviewers
-The manager
3.2.4 Success factors for reviews
Find a 'champion'
Continuously improve process and tools
Explicitly plan and track review activities
Report results
Just do it
Follow the rules but keep it simple
Pick things that really count
Train participants
Manage people issues
3.3 Static analysis by tools
3.3.1 Coding standards
The first action to be taken is to define or adopt a coding standard.
Checking for adherence to coding standards is certainly the most well-known of all features.
3.3.2 Code metrics
When performing static code analysis, information is usually calculated about structural attributes of the code, such as comment frequency, depth of nesting, cyclomatic number and number of lines of code
3.3.3 Code structure
There are several aspects of code structure to consider:
-Control flow structure
-Data flow structure
-Data structure
3.1 Reviews and the test process
Static testing
Software work products are examined manually, or with a set of tools, but not executed.
Dynamic testing
Executed using a set of input values and its output is then examined and compared to what is expected.
Chapter 2: Testing throughout the life cycle
Test types: the targets of testing (Chapter 3.1)
3.2 Testing of software product characteristics (non-functional testing)
A second target for testing is the testing of the quality characteristics, or nonfunctional attributes of the system.
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing.
3.4 Testing related to changes (confirmation and regression testing)
The final target of testing is the testing of changes.
This category is slightly different to the others because if you have made a change to the software, you will have changed the way it functions, the way it performs (or both) and its structure.
The way to detect these 'unexpected side-effects' of fixes is to do regression testing.
3.3 Testing of software structure/architecture (structural testing)
Coverage measurement tools assess the percentage of executable elements (e.g. statements or decision outcomes) that have been exercised by a test suite.
The techniques used for structural testing are structure-based techniques, also referred to as white-box techniques.
Control flow models are often used to support structural testing.
The third target of testing is the structure of the system or component.
3.1 Testing of function (functional testing)
Function (or functionality) testing can be done focusing on suitability, interoperability, security, accuracy and compliance.
The function of a system (or component) is 'what it does'. This is typically described in a requirements specification, a functional specification, or in use cases.
Software developments models (Chapter 2.1)
1.2 Testing within a life cycle model
In summary, whichever life cycle model is being used, there are several characteristics of good testing:
Testers should be involved in reviewing documents as soon as drafts are available in the development cycle.
The analysis and design of tests for a given test level should begin during the corresponding development activity.
Each test level has test objectives specific to that level.
For every development activity there is a corresponding testing activity.
Manifesto for Agile Software Development
The 12 principles of Agile
1. Satisfying customers through early and continuous delivery of valuable work.
2. Breaking big work down into smaller tasks that can be completed quickly.
3. Recognizing that the best work emerges from self-organized teams.
4. Providing motivated individuals with the environment and support they need and trusting them to get the job done.
5. Creating processes that promote sustainable efforts.
6. Maintaining a constant pace for completed work.
7. Welcoming changing requirements, even late in a project.
8. Assembling the project team and business owners on a daily basis throughout the project.
9. Having the team reflect at regular intervals on how to become more effective, then tuning and adjusting behavior accordingly.
10. Measuring progress by the amount of completed work.
11. Continually seeking excellence.
12. Harnessing change for a competitive advantage.
The 4 values of Agile
Customer collaboration over contract negotiation.
Responding to change over following a plan.
Individuals and interactions over processes and tools.
Working software over comprehensive documentation.
Maintenance testing
4.2. Triggers for maintenance testing.
Since modifications are most often the main part of maintenance testing for most organizations, this will be discussed in more detail.
As stated, maintenance testing is done on an existing operational system.
A risk analysis of the operational systems should be performed.
Planned modifications: The following types of planned modification may be identified:
Perfective modifications (adapting software to the user's wishes).
Corrective planned modifications (deferrable correction of defects).
Adaptive modifications (adapting software to environmental changes).
Ad-hoc corrective modifications are concerned with defects requiring an immediate solution.
Ad-hoc corrective modifications.
4.1. Impact analysis and regression testing
Usually maintenance testing will consist of two parts:
Regression tests to show that the rest of the system has not been affected by the maintenance work.
Testing the changes.
In addition to testing what has been changed, maintenance testing includes extensive regression testing to parts of the system that have not changed.
Note that maintenance testing is different from maintainability testing, which defines how easy it is to maintain the system.
The development and test process applicable to new developments does not change fundamentally for maintenance purposes.
Testing that is executed during this life cycle phase is called 'maintenance testing'.
Test levels (Chapter 2.2)
2.3 System testing
System testing is concerned with the behavior of the whole system/product as defined by the scope of a development project or product.
2.4 Acceptance testing
When the development organization has performed its system test and has corrected all or most defects, the system will be delivered to the user or customer for acceptance testing.
2.1 Component testing
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable.
2.2 Integration testing
Integration testing tests interfaces between components, interactions to different parts of a system such as an operating system, file system and hardware or interfaces between systems. Note that integration testing should be differentiated from other integration activities. Integration testing is often carried out by the integrator, but preferably by a specific integration tester or test team.