CTFL - Chapter 2: Testing Throughout the Software Development Lifecycle
Software Development Model
Vary
Discipline
Interest
Time to market
Documentation
Objective
Types of activity performed at each stage
Sequential development models
A phase should begin only when the previous phase is complete
There is no overlap of phases
Beneficial to have early feedback from the following phase
Waterfall Model
User Requirement -> System Requirement -> Global Design -> Detailed Design -> Coding -> Testing
System Requirement
Global Design
High-level design.
Map the system requirements to computer modules.
User Requirement
Collected from the user in the user requirements stage.
Analyze the user requirements and capture them in a document called the system requirements, system specification, or functional specification.
Starts with an idea or a need
Detailed Design
Low-level design.
Detail what each module should do individually.
Testing
Testing happens towards the end of the project life cycle.
Coding
Developers convert the detailed design into working software using one or more programming languages.
The V-Model
Testing activities are carried out in parallel with development activities
Objective
System Testing
Testing the behavior of the whole system.
Integration Testing
Test interfaces and interaction between different components.
Acceptance Testing
Validation testing against user needs & requirements, conducted to determine whether to accept the system.
Component Testing
Searches for defects in implemented components.
Iterative and incremental development models
Incremental Models (Integrate smaller part -> Perfect product)
Iterative Models (Grow subset of the overall set of features -> Final software)
Example
Rational Unified Process
Each iteration -> long
The feature increments -> large
Spiral
Create experimental increments
May be heavily reworked or rejected in subsequent development work
Scrum
Each iteration -> short
The feature increments -> small (few enhancements, two or three new features)
Kanban
Implemented without fixed-length iterations
Impact of the Software Development Lifecycle on Testing
Type of product being developed
Business priorities
Project goal
Identified product and project risks
Organizational and cultural issues
SDLC Impact
Level of detail of test documentation
Choice of test techniques and test approach
Scope and timing of test activities
Extent of test automation
Role and responsibilities of a tester
Testing as a Driver for Software Development
Test-Driven Development (TDD)
Tests should be written before the code is written (focus on the code)
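The TDD cycle above can be sketched as a small example (calc_total is a hypothetical function, not from the syllabus): the test is written first and fails, then just enough code is written to make it pass.

```python
# TDD sketch: the test exists before the implementation.

def test_total_includes_tax():
    # Written first; fails (red) until calc_total is implemented.
    assert calc_total([10.0, 20.0], 0.1) == 33.0

def calc_total(prices, tax_rate):
    # Written second: the smallest implementation that makes the test pass.
    return round(sum(prices) * (1 + tax_rate), 2)

test_total_includes_tax()  # red -> green once the implementation exists
```

In real TDD the failing run is observed before the implementation is added; here both steps are shown in one file for brevity.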
Acceptance Test-Driven Development (ATDD)
Based on communication between business customers, developers & testers
Define acceptance criteria & test during the creation of user stories
Behavior-Driven Development (BDD)
Focus on testing the code based on the expected behavior of the software (focus on the result)
Given -> When -> Then
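The Given -> When -> Then structure can be sketched as a plain test whose steps mirror the three clauses (the Account class is a hypothetical example, not from the syllabus):

```python
# BDD-style test: each comment marks one clause of Given/When/Then.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the user withdraws 30
    account.withdraw(30)
    # Then the balance is 70
    assert account.balance == 70

test_withdrawal_reduces_balance()
```

Dedicated BDD tools (e.g. Cucumber-style frameworks) express the same three clauses in natural language and bind them to code like this.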
DevOps and Testing
Isolation Issues
Frequent Errors & Downtime
Lack of communication between development and operations -> misconfiguration, environment inconsistencies -> downtime & system failures when software is deployed to production
Inefficient Processes
Manual & ad-hoc processes for testing, deployment, infrastructure maintenance were time-consuming & ineffective.
Slow Software Delivery
Developers build the software -> hand it off to operations for deployment
Limited Visibility
Developers had limited visibility into how their code performed in production -> challenging to identify & address performance & reliability issues.
Resistance to Change
The traditional model discouraged frequent updates & changes due to the perceived risk of disrupting production systems
DevOps
Definition
Cultural philosophies, practices & tools -> deliver applications & services at high velocity, evolving & improving products at a faster pace.
Benefit
Promotes team autonomy, fast feedback, integrated tooling & technical practices such as CI & CD.
Enables teams to build, test & release high-quality code faster through the DevOps delivery pipeline.
Collaborative, automated, continuous approach to software delivery & infrastructure management.
Shift-Left Approach
Do not wait for code to be implemented or for components to be integrated to start testing.
Writing test cases before the code is written (TDD).
Static analysis of source code before dynamic testing.
Saves a lot of effort & cost over the duration of the project.
Retrospectives and Process Improvement
Improved quality of the test basis
Better team bonding & learning can result from raising issues, listening to other team members
Increased test effectiveness & efficiency
Better cooperation between development & testing as collaboration is reviewed & optimized regularly.
Increased quality of testware
Test Levels
System testing
Objectives
Build confidence in the quality of the system as a whole
Find defects
Validate that the system is complete & will work as expected
Prevent defects from escaping to higher test levels or production
Verify whether the functional & non-functional behaviors of the system are as designed & specified
Verifying data quality may also be an objective
Reduce risk
Test Basis
Epics & user stories
State Diagrams
Models of system behavior
Risk analysis reports
Use cases
System & software requirement specifications
System and user manuals
Typical Defects & Failures
Incorrect control and/or data flows within the system
Failure to properly and completely carry out end-to-end functional tasks
Incorrect or unexpected system functional or non-functional behavior
Failure of the system to work properly in the production environments
Incorrect calculations
Failure of the system to work as described in system and user manuals
Characteristic
Early involvement of testers in user story refinement or static testing activities (such as reviews) reduces the incidence of such situations.
Independent testers typically carry out system testing.
Focus on the overall, end-to-end behavior of the system, both functional and non-functional.
Integration testing
Objectives
Build confidence in the quality of the interfaces
Find defects
Verify whether the functional & non-functional behaviors of the interfaces are as designed & specified
Prevent defects from escaping to higher test levels
Reduce risk
Characteristic
Focus on the interactions and interfaces between integrated components & is performed after component testing
This type of testing is usually carried out by developers & is generally automated
Part of the continuous integration process in iterative & incremental development
System Integration testing
Characteristic
Focus on the interactions and interfaces between systems, packages, and microservices
Can cover interactions and interfaces provided by external organizations such as web services
Typical Defects and Failures
Failure to comply with mandatory security regulations
Incorrect assumptions about the meaning, units or boundaries of the data being passed between systems
Failures in communication between systems
Interface mismatch
Incorrect data, missing data, or incorrect data encoding
Component testing
Characteristic
Tested in isolation
Done by the developers
Lowest level of testing
Ensure that the code written meets its specification before its integration with other units
Objective
Prevent defects from escaping to higher test levels
Reduce the risk of delivering a bad component
Finding defect in the component
Build confidence in the component's quality
Verify whether the functional & non-functional behaviors are as designed & specified
Stubs and Drivers
Stubs -> replace a called sub-function and return fake data (example: calcSalary calls calcBonus; a stub stands in for calcBonus)
Drivers -> call the function under test (example: a driver calls calcSalary to test it)
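A minimal sketch of the calcSalary/calcBonus example above (the function bodies and values are hypothetical): the stub replaces the real bonus calculation with canned data so calc_salary can be tested in isolation, and the driver is the code that calls the component under test.

```python
# Stub: stands in for the real calcBonus and returns fixed fake data.
def calc_bonus_stub(employee_id):
    return 500

# Component under test: calls its sub-function through a parameter so
# the stub can be injected in place of the real calcBonus.
def calc_salary(employee_id, base, bonus_fn=calc_bonus_stub):
    return base + bonus_fn(employee_id)

# Driver: calls the component under test and checks the result.
def driver():
    assert calc_salary(employee_id=1, base=3000) == 3500
    print("calc_salary OK")

driver()
```

Passing the sub-function as a parameter is one simple way to inject a stub; test frameworks usually offer mocking utilities for the same purpose.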
Acceptance testing
Characteristic
Validate the system is complete and will work as expected.
Verify that functional & non-functional behaviors of the system are as specified.
Establish confidence in the quality of the system as a whole.
Test Basis
System requirements
Installation procedures
Use cases
Risk analysis reports
Regulations, legal contracts & standards
Business processes
User or business requirements
Test Objects
System configuration & configuration data
Business processes for a fully integrated system
System under test
Operational & maintenance processes
Reports
Typical Defects & Failures
Business rules are not implemented correctly
Non-functional failures
System workflows do not meet business or user requirements
The system does not satisfy contractual or regulatory requirements
Common Forms
Operational acceptance testing
Installing, uninstalling & upgrading
Disaster recovery
Testing of backup & restore
User management
Checks for security vulnerabilities
Contractual & regulatory acceptance testing
User acceptance testing
Build confidence that the users can use the system to meet their needs, fulfill requirements, and perform business processes with minimum difficulty, cost, and risk.
Alpha & beta testing
(A): Performed at the developing organization's site by potential customers and/or operators, or an independent test team.
(B): Performed by potential customers and/or operators at their own locations.
Test Types
Black-Box Testing
Objectives
Specification-based testing
Examine the behavior of a software system without looking at its internal code, structure, or implementation details
Documents serve as the blueprint for understanding how the software is expected to function
Assess whether the software behaves in accordance with its specified requirements and expectations
Non-Functional Testing
Objectives
Maintainability testing
Reliability testing
Usability testing
Portability testing
Compatibility testing
Security testing
Performance testing
White-Box Testing
Objectives
Cover the underlying structure with tests to an acceptable level.
Functional Testing
Objectives
Functional correctness: The degree to which a component or system provides the correct results with the needed degree of precision.
Functional appropriateness: The degree to which the functions facilitate the accomplishment of specified tasks and objectives.
Functional completeness: The degree to which the set of functions covers all the specified tasks & user objectives.
Others
Confirmation Testing & Regression Testing
Confirmation Testing
Testing -> Developing -> Confirmation testing (re-testing)
Purpose: To confirm whether the original defect has been successfully fixed
Regression Testing
Re-test areas that were already tested and working to make sure they still work after any kind of change to the software
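The difference between the two can be sketched in one file (apply_discount and its rounding bug are hypothetical): after a fix, the confirmation test re-runs the case that originally failed, while the regression tests re-run cases that already passed to make sure the fix broke nothing.

```python
# Component after a hypothetical defect fix.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def test_confirmation_fixed_defect():
    # Confirmation (re-testing): the case that originally exposed the defect.
    assert apply_discount(19.99, 15) == 16.99

def test_regression_existing_behavior():
    # Regression: previously passing cases re-run after the change.
    assert apply_discount(100, 10) == 90.0
    assert apply_discount(50, 0) == 50.0

test_confirmation_fixed_defect()
test_regression_existing_behavior()
```

In practice the regression tests would be the existing automated suite, selected with the help of impact analysis.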
Maintenance Testing
Takes place on a system in operation in the live environment.
Triggers for Maintenance
Modification
Migration
From one platform to another; data conversion
Retirement
Testing of data migration; archiving the data; ensure remaining functionality still works
Impact Analysis
Determine how the existing system may be affected by changes
Decide how much regression testing to do
Identify the impact of a change on existing tests
The size of the change
The degree of risk of the change
The size of the existing system