Software Testing
Test design techniques
Specification-based or black-box techniques
Structure-based or white-box techniques
Statement coverage:
The percentage of executable statements that have been exercised by a test suite
Structural testing:
See white box testing
Decision coverage:
The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage
Structure-based testing:
See white-box testing
Code coverage:
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage
White-box testing:
Testing based on an analysis of the internal structure of the component or system
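To make the coverage definitions above concrete, here is a minimal sketch in Python (the function and tests are hypothetical, not from any standard): one test exercises every executable statement but only one of the two decision outcomes, so statement coverage is 100% while decision coverage is only 50%.

```python
# A minimal sketch: why 100% statement coverage does not imply
# 100% decision coverage. apply_discount() is a hypothetical example.

def apply_discount(price, is_member):
    # Decision point: the if has two outcomes (True and False).
    if is_member:
        price = price * 0.9  # this statement runs only when is_member is True
    return price

# One test reaches every statement (100% statement coverage) but only
# the True outcome of the decision (50% decision coverage):
assert apply_discount(100, True) == 90.0

# Exercising the False outcome as well gives 100% decision coverage:
assert apply_discount(100, False) == 100
```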
Categories of test design
Experience-based techniques
Exploratory testing:
An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests
Fault attack:
See attack
Error guessing:
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
Attack:
Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur
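A minimal sketch of error guessing, assuming a hypothetical parse_age function: the tester's experience suggests typical programmer mistakes (unstripped whitespace, empty input, negative values), and tests are written specifically to expose them. pytest is assumed only for its raises helper; plain assertions would work too.

```python
import pytest  # assumed available; any assertion style would do

def parse_age(text):
    value = int(text.strip())
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

def test_guessed_defects():
    assert parse_age(" 42 ") == 42    # stray whitespace around the input
    with pytest.raises(ValueError):
        parse_age("")                  # empty string
    with pytest.raises(ValueError):
        parse_age("-1")                # negative age
```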
Identifying test conditions and designing test cases
Formality of test documentation
Test analysis: identifying test conditions
Why is traceability important?
The requirements for a given function or feature have changed
A set of tests that has run OK in the past has started to have serious problems.
Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification
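One lightweight way to answer all three questions is a traceability matrix from requirements to tests. The sketch below uses a plain Python dict with hypothetical requirement and test-case IDs; real projects usually keep this in a test management tool.

```python
# A minimal traceability sketch (hypothetical requirement and test IDs).
traceability = {
    "REQ-001": ["TC-010", "TC-011"],
    "REQ-002": ["TC-020"],
    "REQ-003": [],  # uncovered requirement
}

# Before a release: which specified requirements have no tests at all?
untested = [req for req, tests in traceability.items() if not tests]
print("Untested requirements:", untested)          # -> ['REQ-003']

# When REQ-001 changes: which tests must be reviewed and re-run?
print("Tests impacted by REQ-001:", traceability["REQ-001"])
```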
Test design: specifying test cases
Test implementation: specifying test procedures or scripts
Choosing a test technique
Models used
- Since testing techniques are based on models, the models available will to some extent govern which testing techniques can be used.
Likely defects
- Knowledge of the likely defects will be very helpful in choosing testing techniques
Test objective
- If the test objective is simply to gain confidence that the software will cope with typical operational tasks then use cases would be a sensible approach.
Documentation
- Whether or not documentation exists and whether or not it is up to date will affect the choice of testing techniques.
Life cycle model
- A sequential life cycle model will lend itself to the use of more formal techniques whereas an iterative life cycle model may be better suited to using an exploratory testing approach.
Tester knowledge / experience
- How much testers know about the system and about testing techniques will clearly influence their choice of testing techniques.
Fundamentals of Testing
Testing principles
Test Principle 1
Test Principle 2
Fundamental test process
Test planning and control
Determine the scope and risks and identify the objectives of testing
Determine the test approach (techniques, test items, coverage, identifying and interfacing with the teams involved in testing, testware)
Implement the test policy and/or the test strategy
Determine the required test resources
Schedule test analysis and design tasks, test implementation, execution and evaluation
Determine the exit criteria
Test analysis and design
Review the test basis
Identify test conditions based on analysis of test items, their specifications, and what we know about their behavior and structure.
Design the tests, using techniques to help select representative tests that relate to particular aspects of the software which carry risks or which are of particular interest, based on the test conditions and going into more detail.
Evaluate testability of the requirements and system.
Design the test environment set-up and identify any required infrastructure and tools.
Test implementation and execution
Implementation
Develop and prioritize our test cases
Create test suites from the test cases for efficient test execution
Implement and verify the environment
Execution
Where there are differences between actual and expected results, report discrepancies as incidents
Compare actual results with expected results
Log the outcome of test execution and record the identities and versions of the software under test, test tools and testware.
Execute the test suites and individual test cases
Repeat test activities as a result of action taken for each discrepancy
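The sketch below strings these execution steps together in Python: run each case, compare actual with expected results, log the outcome together with the version identifiers, and collect discrepancies as incidents. The test-case format and the version strings are illustrative assumptions.

```python
import datetime

SOFTWARE_VERSION = "1.4.2"   # assumed identifiers recorded in the log
TESTWARE_VERSION = "2024-06"

def run_suite(test_cases):
    incidents = []
    for name, func, expected in test_cases:
        actual = func()
        passed = actual == expected
        # Log the outcome and the identities/versions of SUT and testware.
        print(f"{datetime.datetime.now().isoformat()} "
              f"SUT={SOFTWARE_VERSION} testware={TESTWARE_VERSION} "
              f"{name}: {'PASS' if passed else 'FAIL'}")
        if not passed:
            # A discrepancy between actual and expected becomes an incident.
            incidents.append((name, expected, actual))
    return incidents

incidents = run_suite([
    ("addition", lambda: 2 + 2, 4),
    ("broken",   lambda: 2 + 2, 5),   # deliberately failing case
])
print("Incidents to report:", incidents)
```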
Evaluating exit criteria and reporting
Test closure activities
What is testing?
The driving test - an analogy for software testing
The test is planned and prepared for.
The test has known goals
The test is therefore carried out to show that the driver satisfies the requirements for driving and to demonstrate that they are fit to drive.
Defining software testing
First
Process
All life cycle activities
Both static and dynamic
Planning
Preparation
Evaluation
Software products and related work products
Second
Determine that (software products) satisfy specified requirements
Demonstrate that (software products) are fit for purpose
Detect defects
When can we meet our test objectives?
Focusing on defects can help us plan our tests
The defect clusters change over time
Debugging removes defects
Is the software defect-free?
If we don't find defects does that mean the users will accept the software?
The psychology of testing
Independent testing - who is a tester?
tests by the person who wrote the item under test;
tests by another person within the same team, such as another programmer;
tests by a person from a different organizational group, such as an independent test team;
tests designed by a person from a different organization or company, such as outsourced testing or certification by an external body.
Why do we sometimes not get on with the rest of the team?
Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it
Explain that by knowing about this now we can work round it or fix it so the delivered system is better for the customer
Start with collaboration rather than battles
Testing is necessary
Testing and quality
Keyword
Fail
Defect
Error
Failure
Mistake
Quality
Risk
Software
Testing
Exhaustive testing
Software systems context
Testing is context dependent
A risk is something that has not happened yet and it may never happen; it is a potential problem.
Some of the problems we encounter when using software are quite trivial, but others can be costly and damaging - with loss of money, time or business reputation - and even may result in injury or death.
Why is testing necessary?
Causes of software defects
Do our mistakes matter?
Errors in the specification, design and implementation of the software and system;
Errors in use of the system
Environmental conditions
Intentional damage
Potential consequences of earlier errors, intentional damage, defects and failures
Type of error and defect
Cost of defect
Role of testing in software development, maintenance and operations
How much testing is enough?
Time
Budget
Scope
Testing throughout the software life cycle
Test levels
Integration testing
Integration testing tests interfaces between components, interactions with different parts of a system such as an operating system, file system and hardware, or interfaces between systems.
There may be more than one level of integration testing and it may be carried out on test objects of varying size
At each stage of integration, testers concentrate solely on the integration itself.
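A minimal illustration, assuming two hypothetical components (a record formatter and a file-append helper): component tests would check each in isolation, while the integration test below exercises only their interface, i.e. that formatted output round-trips correctly through the file system.

```python
import os
import tempfile

def format_record(user, amount):
    # Component A: produces one CSV line.
    return f"{user},{amount:.2f}\n"

def append_record(path, line):
    # Component B: persists a line to the file system.
    with open(path, "a", encoding="utf-8") as f:
        f.write(line)

def test_formatter_filesystem_integration():
    # The test focuses solely on the interaction between A and B.
    path = os.path.join(tempfile.mkdtemp(), "ledger.csv")
    append_record(path, format_record("alice", 9.5))
    with open(path, encoding="utf-8") as f:
        assert f.read() == "alice,9.50\n"

test_formatter_filesystem_integration()
```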
System testing
System testing is concerned with the behavior of the whole system/product as defined by the scope of a development project or product.
System testing should investigate both functional and non-functional requirements of the system.
System testing requires a controlled test environment with regard to, amongst other things, control of the software versions, testware and the test data
Component testing
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes) that are separately testable.
Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system
Component testing may include testing of functionality and specific non-functional characteristics
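A minimal component-test sketch using Python's standard unittest (the normalize function is hypothetical): one case verifies functionality in isolation from the rest of the system, another checks a simple non-functional characteristic with a crude, purely illustrative timing bound.

```python
import time
import unittest

def normalize(name):
    # Unit under test: collapse whitespace and title-case a name.
    return " ".join(name.split()).title()

class NormalizeTest(unittest.TestCase):
    def test_functionality(self):
        self.assertEqual(normalize("  ada   LOVELACE "), "Ada Lovelace")

    def test_performance_characteristic(self):
        # Illustrative non-functional check, not a standard threshold.
        start = time.perf_counter()
        for _ in range(10_000):
            normalize("grace hopper")
        self.assertLess(time.perf_counter() - start, 1.0)

if __name__ == "__main__":
    unittest.main()
```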
Acceptance testing
When the development organization has performed its system test and has corrected all or most defects, the system will be delivered to the user or customer for acceptance testing
The goal of acceptance testing is to establish confidence in the system, part of the system or specific non-functional characteristics, e.g. usability, of the system.
Other types of acceptance testing that exist are contract acceptance testing and compliance acceptance testing.
Test types: the targets of testing
Testing of function (functional testing)
Functional testing considers the specified behavior and is often also referred to as black-box testing.
Function (or functionality) testing can, based upon ISO 9126, be done focusing on suitability, interoperability, security, accuracy and compliance.
Testing functionality can be done from two perspectives: requirements-based or business-process-based.
The techniques used for functional testing are often specification-based, but experience-based techniques can also be used
Testing of software structure/architecture (structural testing)
Structural testing is most often used as a way of measuring the thoroughness of testing through the coverage of a set of structural elements or coverage items.
The techniques used for structural testing are structure-based techniques, also referred to as white-box techniques.
Control flow models are often used to support structural testing.
Testing of software product characteristics (non-functional testing)
Non-functional testing, like functional testing, is performed at all test levels
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing.
The ISO 9126 standard is gaining more and more recognition in the industry, enabling development, testing and their stakeholders to use a common terminology for quality characteristics and thereby for non-functional testing.
functionality
, which consists of five sub-characteristics: suitability, accuracy, security, interoperability and compliance; this characteristic deals with functional testing
reliability
, which is defined further into the sub-characteristics maturity (robustness), fault-tolerance, recoverability and compliance
usability
, which is divided into the sub-characteristics understandability, learnability, operability, attractiveness and compliance
efficiency
, which is divided into time behavior (performance), resource utilization and compliance
maintainability
, which consists of five sub-characteristics: analyzability, changeability, stability, testability and compliance
portability
, which also consists of five sub-characteristics: adaptability, installability, co-existence, replaceability and compliance
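As one concrete instance of non-functional testing, the sketch below measures the efficiency sub-characteristic time behavior for a hypothetical search function; the 50 ms threshold is purely illustrative, not a value taken from ISO 9126.

```python
import statistics
import time

def search(items, key):
    return key in items   # linear scan over a list

data = list(range(100_000))
samples = []
for _ in range(20):
    t0 = time.perf_counter()
    search(data, 99_999)           # worst case for the linear scan
    samples.append(time.perf_counter() - t0)

median = statistics.median(samples)
print(f"median={median * 1e3:.2f} ms")
# Illustrative time-behavior requirement:
assert median < 0.05, "time-behavior requirement violated"
```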
Testing related to changes (confirmation and regression testing)
The final target of testing is the testing of changes.
Confirmation testing (re-testing)
When a test fails and we determine that the cause of the failure is a software defect, the defect is reported, and we can expect a new version of the software that has had the defect fixed.
Regression testing
For regression testing, the test cases probably passed the last time they were executed (compare this with the test cases executed in confirmation testing - they failed the last time).
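The contrast can be shown in a few lines, assuming a hypothetical price_with_tax function whose first version failed test_tax: after the fix, re-running the previously failing test is confirmation testing, while re-running the previously passing tests is regression testing.

```python
def price_with_tax(price):   # fixed version (v2) of the hypothetical function
    return round(price * 1.2, 2)

def test_tax():              # failed against v1 -> confirmation (re-test)
    assert price_with_tax(10.0) == 12.0

def test_zero():             # passed against v1 -> regression tests
    assert price_with_tax(0.0) == 0.0

def test_rounding():
    assert price_with_tax(0.10) == 0.12

for t in (test_tax, test_zero, test_rounding):
    t()
print("confirmation and regression tests passed")
```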
Software development models
V-Model
Iterative life cycles
RAD Model (Rapid Application Development)
Agile development
It demands an on-site customer for continual feedback and to define and carry out functional acceptance testing.
It promotes pair programming and shared code ownership amongst the developers.
It promotes the generation of business stories to define the functionality.
It states that component test scripts shall be written before the code is written and that those tests should be automated.
It states that integration and testing of the code shall happen several times a day.
It states that we always implement the simplest solution to meet today's problems.
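A minimal test-first sketch (the fizzbuzz example is hypothetical, not prescribed by any agile method): the automated component test is written before the code, then the simplest implementation that makes it pass is added, and the whole thing runs automatically on every integration.

```python
# Step 1: write the failing component test first.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2: implement the simplest solution that makes the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

test_fizzbuzz()  # would run automatically several times a day in CI
```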
Test within a life cycle model
each test level has test objectives specific to that level
the analysis and design of tests for a given test level should begin during the corresponding development activity
for every development activity there is a corresponding testing activity
testers should be involved in reviewing documents as soon as drafts are available in the development cycle
Maintenance testing
Impact analysis and regression testing
testing the changes
regression tests to show that the rest of the system has not been affected by the maintenance work.
Triggers for maintenance testing
perfective modifications (adapting software to the user's wishes, for instance by supplying new functions or enhancing performance)
adaptive modifications (adapting software to environmental changes such as new hardware, new systems software or new legislation)
corrective planned modifications (deferrable correction of defects)
Static techniques
Review process
Informal review:
A review not based on a formal (documented) procedure
Formal review:
A review characterized by documented procedures and requirements, e.g. inspection
Technical review:
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken
Inspection:
A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure.
Entry criteria:
The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria
Metric:
A measurement scale and the method used for measurement.
Moderator/Inspection leader:
The leader and main person responsible for an inspection or other review process.
Peer review:
A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough
Scribe:
The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable
Walkthrough:
A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content
Static analysis by tools
Complexity:
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify
Control flow:
A sequence of events (paths) in the execution through a component or system.
Data flow:
An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction
Static analysis:
Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts
Compiler:
A software tool that translates programs expressed in a high order language into their machine language equivalents
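A small illustration of static analysis by a tool, using Python's standard ast module: the source fragment is analyzed without being executed, and a simple data-flow check flags a variable that is assigned but never used. The analyzed code and the rule are illustrative assumptions.

```python
import ast

source = """
def pay(amount):
    fee = amount * 0.01   # assigned but never used
    total = amount
    return total
"""

# Parse the source without executing it.
tree = ast.parse(source)
assigned, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)   # variable written
        elif isinstance(node.ctx, ast.Load):
            used.add(node.id)       # variable read

print("assigned but never used:", assigned - used)   # -> {'fee'}
```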
Reviews and the test process
Static testing:
Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis
Review:
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough
Dynamic testing:
Testing that involves the execution of the software of a component or system.