Software Testing
Manual Testing
Definition
A type of software testing where testers manually execute test cases without using any automation tools
Manual testing is the most primitive of all testing types and helps find bugs in the software system
Any new application must be manually tested before its testing can be automated
Manual testing requires more effort but is necessary before deciding on automation
Manual testing does not require knowledge of any testing tool
One of the software testing fundamentals is "100% automation is not possible". This makes manual testing imperative
Goal
The goal of manual testing is to ensure that the application is error-free and works in conformance with the specified functional requirements
Test suites or cases are designed during the testing phase and should provide 100% test coverage
It also ensures that reported defects are fixed by developers and that the fixes are re-tested by testers
Basically, this testing checks the quality of the system and delivers a bug-free product to the customer
Types
Acceptance testing
Black Box
White Box
Unit Testing
System testing
Integration testing
In fact, any type of software testing can be executed both manually and using an automation tool
Myths
Myth 1: Anyone can do manual testing
Fact: Testing requires many skill sets
Myth 2: Testing ensures a 100% defect-free product
Fact: Testing attempts to find as many defects as possible; identifying all possible defects is impossible
Myth 3: Automated testing is more powerful than manual testing
Fact: 100% test automation cannot be done; manual testing is also essential
Myth 4: Testing is easy
Fact: Testing can be extremely challenging; testing an application for possible use cases with a minimum of test cases requires high analytical skill
Automation Testing
What is Automation Testing?
Automation Testing, or Test Automation, is a software testing technique that uses special automated testing software tools to execute a test case suite
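For illustration, a minimal automated check might look like the sketch below, written with Selenium (one of the tools listed later in this section). It is a sketch only: the URL, credentials, and element locators are placeholder assumptions, not part of the diagram, and a real suite would point them at the actual application under test.
```python
# Minimal Selenium sketch of an automated UI check (assumes Chrome is installed;
# Selenium 4.6+ fetches a matching driver automatically).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")                      # placeholder URL
    driver.find_element(By.NAME, "username").send_keys("demo")   # placeholder locators/data
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # The assertion replaces the tester's manual "look and verify" step.
    assert "Dashboard" in driver.title, "login did not reach the dashboard"
finally:
    driver.quit()
```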
Why Test Automation?
Manual testing of all workflows, all fields, and all negative scenarios is time- and money-consuming
It is difficult to test multilingual sites manually
Test automation does not require human intervention. You can run automated tests unattended (overnight)
Test Automation increases the speed of test execution
Automation helps increase Test Coverage
Manual Testing can become boring and hence error-prone.
Which Test Cases to Automate?
High Risk – Business Critical test cases
Test cases that are repeatedly executed
Test Cases that are very tedious or difficult to perform manually
Test Cases which are time-consuming
Test cases that are not suitable for automation
Test Cases that are newly designed and not executed manually at least once
Test Cases for which the requirements are frequently changing
Test cases which are executed on an ad-hoc basis.
Automated Testing Process
Step 1) Test Tool Selection
Step 2) Define scope of Automation
Step 3) Planning, Design and Development
Step 4) Test Execution
Step 5) Maintenance
Define the scope of Automation
The features that are important for the business
Scenarios which have a large amount of data
Common functionalities across applications
Technical feasibility
The extent to which business components are reused
The complexity of test cases
Ability to use the same test cases for cross-browser testing
Benefits of Automation Testing
70% faster than manual testing
Wider test coverage of application features
Reliable in results
Ensure Consistency
Saves Time and Cost
Improves accuracy
Human Intervention is not required while execution
Increases Efficiency
Better speed in executing tests
Re-usable test scripts
Test Frequently and thoroughly
More execution cycles can be achieved through automation
Early time to market
Framework for Automation
Data Driven Automation Framework
Keyword Driven Automation Framework
Modular Automation Framework
Hybrid Automation Framework
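As an illustration of the data-driven idea, the sketch below uses pytest's parametrize feature so that one test body runs once per data row, keeping test data separate from test logic; the login() function is a hypothetical stand-in for the application under test.
```python
# Data-driven sketch with pytest: test data lives in a table, apart from test logic.
import pytest

def login(username, password):
    # Hypothetical stand-in for the real system under test.
    return username == "admin" and password == "s3cret"

@pytest.mark.parametrize("username,password,expected", [
    ("admin", "s3cret", True),   # valid credentials
    ("admin", "wrong",  False),  # wrong password
    ("",      "",       False),  # empty input
])
def test_login(username, password, expected):
    assert login(username, password) == expected
```
A keyword-driven or hybrid framework layers named, reusable actions on top of the same separation of data and logic.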
Types of Automated Testing
Smoke Testing
Unit Testing
Integration Testing
Functional Testing
Keyword Testing
Regression Testing
Data Driven Testing
Black Box Testing
How to Choose an Automation Tool?
Environment Support
Ease of use
Testing of Database
Object identification
Image Testing
Error Recovery Testing
Object Mapping
Scripting Language Used
Support for various types of testing, including functional, test management, mobile, etc.
Support for multiple testing frameworks
Easy to debug the automation software scripts
Ability to recognize objects in any environment
Extensive test reports and results
Minimize training cost of selected tools
Automation Testing Tools
Ranorex Studio
Kobiton
ZAPTEST
LambdaTest
Parasoft Continuous Quality Suite
Avo Assure
Selenium
QTP (MicroFocus UFT)
Rational Functional Tester
Watir
SilkTest
Manual vs. Automation Testing
Manual testing is the process in which QA analysts execute tests one-by-one in an individual manner. The purpose of manual testing is to catch bugs and feature issues before a software application goes live.
Automation testing is the process in which testers utilize tools and scripts to automate testing efforts.
Test Execution
Manual: done manually by QA testers
Automation: done automatically using automation tools and scripts
Test Efficiency
Manual: time-consuming and less efficient
Automation: more testing in less time and greater efficiency
Types of Tasks
Manual: entirely manual tasks
Automation: most tasks can be automated, including real user simulations
Test Coverage
Manual: difficult to ensure sufficient test coverage
Automation: easy to ensure greater test coverage
Automation Testing Strengths
More Testing in Less Time
More Test Coverage
Manual Testing Strengths
The biggest pro of manual testing over automation or continuous testing is its focused attention. When a tester manually creates and executes tests, there is more ability to handle complex and nuanced test scenarios.
Automation testing will not replace manual testing. You need both manual and automation testing.
System Testing
What is System Testing?
System Testing is a level of testing that validates the complete and fully integrated software product
The purpose of a system test is to evaluate the end-to-end system specifications
System Testing is actually a series of different tests whose sole purpose is to exercise the full computer-based system.
What do you verify in System Testing?
Testing the fully integrated applications including external peripherals in order to check how components interact with one another and with the system as a whole.
Verify thorough testing of every input in the application to check for desired outputs
Testing of the user’s experience with the application.
Software Testing Hierarchy
Unit Testing => Integration Testing => System Testing => Acceptance Testing
Types of System Testing
Types of system testing that a large software development company would typically use include: Usability Testing, Load Testing, Regression Testing, Functional Testing, Hardware/Software Testing,...
The specific types used by a tester depend on several variables
Who the tester works for
Time available for testing
Resources available to the tester
Software Tester’s Education
Testing Budget
Unit Testing
It is also called component testing
It is performed on a stand-alone module to check whether it is developed correctly
Unit testing is done by developers, but in the practical world developers are often either reluctant to test their own code or do not have time to unit test
Many times, much of the unit testing is done by testers
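As an illustration, a unit test exercises one module in isolation, as in the pytest sketch below; the discount() function is a hypothetical stand-in for a stand-alone module under test.
```python
# Unit-test sketch with pytest: one function checked in isolation.
import pytest

def discount(price, percent):
    # Hypothetical stand-in for the module under test.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_applies_percentage():
    assert discount(200.0, 25) == 150.0

def test_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        discount(200.0, 120)
```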
Integration testing
Definition
It is also termed 'I&T' (Integration & Test), 'String Testing', or 'Thread Testing'
A type of testing where software modules are integrated logically and tested as a group
Integration testing focuses on checking data communication among the modules
Why do Integration testing?
To verify the software modules work in unity
New requirements that are changed by clients may not have been unit tested, hence integration testing becomes very necessary
Interfaces of the software modules with the database could be erroneous
External hardware interfaces, if any, could be erroneous
Inadequate exception handling could cause issues
Approaches
Big Bang approach
Incremental approach
Bottom-up Approach
Top-down Approach
Sandwich Approach
How to do Integration Testing?
1. Prepare the Integration Test Plan
2. Design the test scenarios, cases, and scripts
3. Execute the test cases, followed by reporting the defects
4. Track and re-test the defects
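As an illustration of checking data communication between modules, the sketch below wires together two hypothetical modules (a service and a repository) and verifies the data passed between them; both classes are stand-ins introduced for this example, not part of the diagram.
```python
# Integration-test sketch: two modules exercised together, not in isolation.
class InMemoryOrderRepository:
    def __init__(self):
        self.saved = []
    def save(self, order):
        self.saved.append(order)

class OrderService:
    def __init__(self, repository):
        self.repository = repository
    def place_order(self, item, quantity):
        order = {"item": item, "quantity": quantity}
        self.repository.save(order)      # data handed from one module to the other
        return order

def test_order_is_persisted_through_the_repository():
    repo = InMemoryOrderRepository()
    service = OrderService(repo)
    service.place_order("book", 2)
    assert repo.saved == [{"item": "book", "quantity": 2}]
```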
Non Functional Testing
What is Non-Functional Testing?
Non-functional testing is defined as a type of software testing that checks the non-functional aspects (performance, usability, reliability, etc.) of a software application
Objectives of Non-functional testing
Optimize the way the product is installed, set up, executed, managed, and monitored
Non-functional testing should increase usability, efficiency, maintainability, and portability of the product.
Helps to reduce production risk and cost associated with non-functional aspects of the product.
Collect and produce measurements, and metrics for internal research and development.
Improve and enhance knowledge of the product behavior and technologies in use.
Most common Types of Non-Functional Testing
Performance Testing, Load Testing, Failover Testing, Compatibility Testing, Usability Testing, Stress Testing, Maintainability Testing, Scalability Testing, Security Testing, Reliability Testing,...
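As a small illustration of the performance side, the sketch below asserts a response-time budget; process_request() and the 0.2-second budget are illustrative assumptions rather than values from the diagram.
```python
# Performance-check sketch: a measurable (non-subjective) timing budget.
import time

def process_request():
    time.sleep(0.05)   # placeholder for the real work under test
    return "ok"

def test_request_completes_within_budget():
    start = time.perf_counter()
    result = process_request()
    elapsed = time.perf_counter() - start
    assert result == "ok"
    assert elapsed < 0.2, f"took {elapsed:.3f}s, budget is 0.200s"
```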
Characteristics of Non-functional testing
Non-functional testing should be measurable, so there is no place for subjective characterization like good, better, best, etc.
Exact numbers are unlikely to be known at the start of the requirement process
Important to prioritize the requirements
Ensure that quality attributes are identified correctly in Software Engineering.
Non-functional testing Parameters
Security, Reliability, Survivability, Availability, Usability, Scalability, Interoperability, Efficiency, Flexibility, Portability, Reusability
Regression Testing
Need
Whenever there is a requirement to change the code, we need to test whether the modified code affects other parts of the software application
When a new feature is added to the software application, and when defects or performance issues are fixed
Techniques
Retest all
Regression test selection
Prioritization of test cases
Definition
A type of software testing to confirm that a recent program or code change has not adversely affected existing features
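As an illustration of regression test selection, the sketch below tags regression tests with a custom pytest marker so that only that subset runs after a change (e.g. pytest -m regression); the marker name and the checkout() function are illustrative assumptions.
```python
# Regression-selection sketch: tests tagged "regression" can be run on their own
# with `pytest -m regression` (register the marker in pytest.ini to avoid warnings).
import pytest

def checkout(cart_total, coupon=None):
    # Hypothetical existing feature that a recent code change might affect.
    return cart_total * 0.9 if coupon == "SAVE10" else cart_total

@pytest.mark.regression
def test_existing_coupon_still_applies():
    assert checkout(100.0, coupon="SAVE10") == 90.0

def test_checkout_without_coupon():
    assert checkout(100.0) == 100.0
```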
Smoke Testing Vs Sanity Testing
Smoke testing
It is performed to ascertain that the critical functionalities of the program are working fine
Objective is to verify the “stability” of the system in order to proceed with more rigorous testing
Performed by the developers or testers
Usually documented or scripted
Is a subset of Acceptance testing
Exercises the entire system from end to end
Sanity Testing
It is done to check that new functionality works as expected and that reported bugs have been fixed
Objective is to verify the “rationality” of the system in order to proceed with more rigorous testing
Performed by testers
Usually not documented and is unscripted
Is a subset of Regression Testing
Exercises only the particular component of the entire system
Similarity
Both Sanity and Smoke testing are ways to avoid wasting time and effort by quickly determining whether an application is too flawed to merit any rigorous testing.
Both smoke and sanity tests can be executed manually or using an automation tool.
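As an illustration, a smoke suite is a handful of fast checks on critical functionality that run before any deeper testing; the app_starts() and user_can_log_in() functions below are hypothetical stand-ins for real health and login checks.
```python
# Smoke-test sketch: a few quick checks of critical functionality, run first.
def app_starts():
    return True   # placeholder: e.g. the service boots and answers a health check

def user_can_log_in():
    return True   # placeholder: e.g. a known account can authenticate

def test_smoke_application_starts():
    assert app_starts()

def test_smoke_critical_login_path():
    assert user_can_log_in()
```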