SWT301 - Software Testing
Chapter 1 - Fundamentals of testing
1.1. Why testing is necessary
1.1.2 Software system context
When we discuss risk, we need to consider how likely it is that the problem will occur and what its impact would be
Some of the problems can be costly and damaging
A risk is something that has not happened yet and it may never happen - a potential problem
1.1.3 Causes of software defects
Someone makes an error (a mistake), which can result in the software not behaving as we expected
The flaws introduced by these mistakes are called defects, bugs or faults
Do our mistakes matter?
Defects in software, systems or documents may result in failures, but not all defects do cause failures
Failures can be caused by misunderstandings, carelessness, tiredness or time pressure
They can also be caused by environmental conditions
When we think about what might go wrong, consider defects and failures arising from:
errors in specifications, design and implementation of the software and system
errors in use of the system
environmental conditions
intentional damage
potential consequences of earlier errors
When do defects arise?
What is the cost of defects?
the cost of finding and fixing defects rises considerably across the life cycle
if a defect is introduced early but not found until acceptance testing, or even once the system has been implemented, it will be much more expensive to fix
because one defect in the requirements may well propagate into several places in the design and code
all the testing work done up to that point will need to be repeated to regain the required level of confidence in the software
the project team may have delivered what they were asked to deliver, but not what the users wanted
1.1.4. Role of testing in software development, maintenance, and operations
Executing tests helps move towards improved quality of the product and service
we may also be required to carry out software testing to meet contractual or legal requirements, or industry-specific standards
we don't always test all the code - too expensive
1.1.5. Testing and quality
testing helps measure quality in terms of the number of defects found, the tests run and the amount of the system covered by the tests
We can do this for both functional and non-functional requirements and characteristics.
What is quality
The delivered system must meet the specification - checking this is known as validation and verification
We need to understand what the customers understand by quality and what their expectations are
what is root cause analysis?
when we detect failures we might try to track them back to their root cause - the real reason that they happened
As testers we want to think about and report on defects and any potential causes of failures
We use testing to reduce the risk of failures occurring in an operational environment
Fixing a defect has some chance of introducing another defect or of being done incorrectly or incompletely
Organization should consider testing as part of a larger quality assurance strategy, which includes other activities.
1.1.6. How much testing is enough
Testing everything completely is something we simply cannot afford
Aligning the testing we do with the risks for the customers, the stakeholders, the project and the software
Assessing and managing risks is a key activity and reason for testing
take account of the level of risk, including technical and business risks related to the product and project constraints such as time and budget
The effort put into the quality assurance and testing activities needs to be tailored to the risks and costs associated with the project
1.2. What is testing
1.2.1 The driving test - an analogy for software testing
Testing could be described as 'checking the software is OK'
A single severe fault is enough to fail the whole test, but a small number of minor faults might still mean the test is passed
The format of testing
The test is planned and prepared for
The test has known goals
The test is representative and allows the examiner to make a risk-based decision about the driver
Asking several questions
1.2.2 Defining software testing
Process of testing
Process: Testing is a process rather than a single activity
All life cycle activities
Both static and dynamic
Planning: activities take place before and after test execution
Preparation: choose what testing we'll do, by selecting test conditions and designing test cases
select test conditions
designing test cases
Evaluation: execute the tests, check the results and evaluate the software under test and the completion criteria (a minimal sketch of this cycle follows below)
executing the test
check the result
evaluate the software
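To make the prepare-execute-check-evaluate cycle concrete, here is a minimal Python sketch; the discount function and the figures are hypothetical, not part of the course material.

```python
# A minimal sketch of the test process, assuming a hypothetical
# function `discount` is the software under test.

def discount(order_total):
    """Software under test: 10% discount on orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Preparation: a test condition ("orders at the boundary get the discount")
# turned into concrete test cases of (input, expected result).
test_cases = [
    (99, 99),      # just below the boundary: no discount
    (100, 90.0),   # at the boundary: discount applied
    (150, 135.0),  # above the boundary: discount applied
]

# Execution and evaluation: run each case and compare actual vs expected.
for order_total, expected in test_cases:
    actual = discount(order_total)
    result = "PASS" if actual == expected else "FAIL"
    print(f"{result}: discount({order_total}) = {actual}, expected {expected}")
```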
Software products and related work products
test the requirement
design specification
test related document
the definition covers some of the objectives of testing
Determine that the software satisfies specified requirements
review the design to see if it meets the requirements
execute the code to check that it meets the design
Demonstrate that software products are fit for purpose
look at whether the software does what the users expect
Detect defects
improve the quality of the products
improve the development processes
make fewer mistakes in future
1.2.3. Software testing
The approach
Planning and preparation
static and dynamic
evaluation
determine that they satisfy specified requirements
demonstrate that they are fit for purpose
detect defects
in some cases, we use a lightweight outline providing the goals and general direction of the test
in other cases, we use a detailed script showing the steps of the test and documenting what we expect to happen
test activities exist before and after test execution
1.2.4 When can we meet our test objectives
use both dynamic testing and static testing as a means for achieving similar test objectives
Testing can have different goals and objectives
finding defects
gaining confidence in and providing information about the level of quality
preventing defects
Many types of review and testing activities take place at different stages in life cycle
early testing: finds defects early on when they are cheap to find and fix
gather information and measure the software
take the form of attribute measures such as
mean time between failures, to assess reliability
assessment of the defect density in the software - see the worked sketch after this list
regression testing - testing to ensure that nothing has changed that should not have changed
development testing: cause as many failures as possible so that defects in the software are identified and fixed
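As a worked illustration of the attribute measures mentioned above, here is a small sketch with hypothetical figures; the numbers are made up for the example, and the formulas are the usual definitions (defects per thousand lines of code, and operating time divided by number of failures).

```python
# Hypothetical project figures, for illustration only.
defects_found = 45          # defects found in the release
size_kloc = 30              # size of the code base in thousands of lines
operating_hours = 2_000     # total hours of operation during the period
failures_observed = 8       # failures observed in that period

defect_density = defects_found / size_kloc     # defects per KLOC
mtbf = operating_hours / failures_observed     # mean time between failures

print(f"Defect density: {defect_density:.1f} defects/KLOC")
print(f"MTBF: {mtbf:.0f} hours")
```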
1.2.5 Focusing on defects can help us plan our tests
1.2.6. The defect clusters change over time
A typical test improvement initiative will initially find more defects as the testing improves
when defect prevention kicks in, defect numbers drop
As the 'hot spots' for bugs are cleaned up, the focus moves to the next set of risks
1.2.7 Debugging removes defects
debugging - examine the code for the immediate cause of the problem, repair the code and check that the code now executes as expected
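A minimal sketch of the split between testing and debugging, using a hypothetical average() function: testing exposes the failure, debugging locates and repairs the defect, and a confirmation test re-runs the failing case.

```python
def average_buggy(values):
    return sum(values) / (len(values) - 1)   # defect: off-by-one divisor

def average_fixed(values):
    return sum(values) / len(values)         # repaired code

failing_input, expected = [2, 4, 6], 4

# Testing reveals the failure...
assert average_buggy(failing_input) != expected
# ...debugging repairs the code, and the confirmation test now passes.
assert average_fixed(failing_input) == expected
print("Confirmation test passed after the fix")
```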
1.2.8 However many tests we execute without finding a bug, we have not shown 'there are no bugs'
1.2.9. If we don't find defects does that mean the users will accept the software
customers are not interested in defects or numbers of defects, except when they are directly affected by instability of the software
they are interested in the software supporting them in completing tasks efficiently and effectively
1.3. Testing principles
Testing shows presence of defects but cannot prove that there are no defects
Exhaustive testing is impossible. Instead, use risks and priorities to focus testing efforts
Early testing
defect clustering
Pesticide paradox: like pests becoming resistant to a pesticide, the same tests repeated over and over again eventually stop finding new defects
testing is context dependent: testing is done differently in different contexts
absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations
1.4. Fundamental test process
1.4.2. test planning and control
Test planning
make sure we understand the goals and objectives of the customers, stakeholders and the project, and the risks which testing is intended to address
set the goals and objectives for the testing itself and derive an approach and plan for the tests
test policy: gives rules for testing
test strategies: the overall high-level approach
Test planning has the following major tasks
Determine the scope and risks and identify the objectives of testing
determine the test approach
consider how we will carry out the testing, the techniques to use,
who to get involved
determine the required test resources
decide what we are going to produce as part of the testing
implement the test policy and/or test strategy
schedule test analysis and design tasks, test implementation, execution and evaluation
determine exit criteria to track whether we are completing the test activities correctly
Test control
is an ongoing activity
compare actual progress against the planned progress
report to the project manager and customer on the current status of testing
test control has the following major tasks
measure and analyze the results of reviews and testing
monitor and document progress, test coverage and exit criteria
provide information on testing
initiate corrective actions
make decisions
1.4.3. test analysis and design
is the activity where general testing objectives are transformed into tangible test conditions and test designs
test analysis and design has the following major tasks
review the test basis
we use test basis to help us build our tests
we often identify gaps and ambiguities in the specifications
prevents defects appearing in the code
identify test conditions
we use test techniques to help us define the test conditions (see the sketch after this list)
start to identify the type of generic test data we might need
Design the tests: define test cases and test procedures
evaluate testability of requirements and system
design the test environment setup and identify any required infrastructure and tools
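As an illustration of how a test technique turns part of the test basis into test conditions and test cases, here is a sketch applying boundary value analysis to a hypothetical rule ("age must be between 18 and 65 inclusive"); the rule and the code are assumptions made for the example.

```python
LOWER, UPPER = 18, 65

def is_valid_age(age):
    """Hypothetical behaviour derived from the requirement."""
    return LOWER <= age <= UPPER

# Test conditions: values on and just outside each boundary.
boundary_cases = [
    (17, False),  # just below lower boundary
    (18, True),   # on lower boundary
    (65, True),   # on upper boundary
    (66, False),  # just above upper boundary
]

for age, expected in boundary_cases:
    assert is_valid_age(age) == expected, f"age {age}: expected {expected}"
print("All boundary value test cases passed")
```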
1.4.4 Test implementation and execution
transform our test conditions into test cases and test procedures
set up an environment where we will run the tests and build test data
implementation
develop and prioritize our test cases
create test suites from the test cases for efficient test execution
implement and verify the environment
execution
execute the test suites and individual test cases, following our test procedures
log the outcome of test execution and record the identities and versions of the software under test, test tools and testware
compare actual results with expected results
where there are differences between actual and expected results, report discrepancies as incidents
repeat test activities as a result of action taken for each discrepancy
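A minimal sketch of the execution tasks above, assuming a hypothetical add() function as the software under test: each test case is run, the outcome and version identity are logged, and differences between actual and expected results are collected as incidents.

```python
SOFTWARE_UNDER_TEST_VERSION = "1.2.0"   # illustrative version identifier

def add(a, b):
    return a + b   # hypothetical software under test

# Test suite of (name, callable, expected result).
test_suite = [
    ("adds positives", lambda: add(2, 3), 5),
    ("adds negatives", lambda: add(-2, -3), -5),
]

test_log, incidents = [], []

for name, run, expected in test_suite:
    actual = run()
    passed = actual == expected
    test_log.append((SOFTWARE_UNDER_TEST_VERSION, name, "PASS" if passed else "FAIL"))
    if not passed:
        incidents.append(f"{name}: expected {expected}, got {actual}")

print(test_log)
print("Incidents:", incidents or "none")
```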
1.4.5. Evaluating exit criteria and reporting
evaluating exit criteria is the activity where test execution is assessed against the defined objectives
based on the risk assessment, we set criteria against which we will measure 'enough'
exit criteria should be set and evaluated for each test level
evaluating exit criteria has the following major tasks
check test logs against the exit criteria specified in test planning (see the sketch after this list)
assess if more tests are needed or if criteria specified should be changed
we may need to change exit criteria to lower them
write test summary report for stakeholders
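A sketch of how exit criteria might be evaluated against measured results, assuming hypothetical criteria (requirement coverage and open critical defects) and illustrative figures.

```python
# Hypothetical exit criteria set during test planning.
exit_criteria = {
    "min_requirement_coverage": 0.90,   # at least 90% of requirements tested
    "max_open_critical_defects": 0,     # no unresolved critical defects
}

# Illustrative figures taken from the test logs.
test_results = {
    "requirement_coverage": 0.93,
    "open_critical_defects": 1,
}

coverage_met = test_results["requirement_coverage"] >= exit_criteria["min_requirement_coverage"]
defects_met = test_results["open_critical_defects"] <= exit_criteria["max_open_critical_defects"]

if coverage_met and defects_met:
    print("Exit criteria met: testing can stop for this level")
else:
    print("Exit criteria not met: more tests are needed, or the criteria must be revisited")
```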
1.4.6 test closure activities
we collect data from completed test activities to consolidate experience, including
checking and filing testware
analyzing facts and numbers
major tasks
check which planned deliverables we actually delivered and ensure all incident reports have been resolved
document acceptance or rejection of the software system
Finalize and archive testware
helps us save time and effort
compare the results of testing between software versions
hand over the testware to the maintenance organization that will support the software and make any bug fixes or maintenance changes
evaluate how the testing went and analyze lessons learned for future releases and projects
1.5. the psychology of testing
1.5.1. Independent testing - who is the tester
programmers are testers
several levels of independence can be identified
tests by the person who wrote the item under test
tests by another person within the same teams
tests by a person from a different organizational group
tests designed by a person from a different organization or company
1.5.2 why do we sometimes not get on with the rest of the team
when someone else identifies a defect we might take this personally and get annoyed with the other person, especially if we are under time pressure
because testing can be seen as a destructive activity, we need to take care to report on defects and failures as objectively and politely as possible
Communicate
don't gloat
don't blame
be constructively critical and discuss the defect
Explain
Start with collaboration rather than battles
Testers and test leaders need good interpersonal skills to communicate