CHAP 6 – System Integration Quality
Performance Testing
type of software testing to ensure software applications will perform well under their expected workload
Software application's performance
Response time
Reliability
Scalability
not to find bugs but to eliminate performance bottlenecks
REASONS to DO PERFORMANCE TESTING
Provide stakeholders with information about their application regarding speed, stability, and scalability.
determine whether their software meets speed, scalability and stability requirements under expected workloads
Types
Load testing
checks the application's ability to perform under anticipated user loads
identify performance bottlenecks before the software application goes live
Stress testing
testing an application under extreme workloads to see how it handles high traffic or data processing
identify the breaking point of an application
Endurance testing
make sure the software can handle the expected load over a long period of time
Spike testing
tests the software's reaction to sudden large spikes in the load generated by users
Volume testing
check software application's performance under varying database volumes
Scalability testing
determine the software application's effectiveness in "scaling up" to support an increase in user load
helps plan capacity addition to your software system
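For concreteness, here is a minimal load-generation sketch in Python (standard library only) that illustrates the types above. It is a sketch, not a real tool: the target URL, user counts, and timeout are assumptions. The same pattern covers load testing (anticipated user counts) and spike testing (a sudden jump to a much larger count).

```python
# Illustrative load-generation sketch: simulates N concurrent "users", each
# issuing one request, and records per-request response times.
# TARGET_URL and the user counts below are hypothetical placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"   # hypothetical system under test

def one_request():
    """Issue a single request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load(concurrent_users):
    """Fire `concurrent_users` simultaneous requests (one per simulated user)."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        return list(pool.map(lambda _: one_request(), range(concurrent_users)))

if __name__ == "__main__":
    # Load test: anticipated load. Spike test: a sudden jump well above it.
    for users in (10, 50, 500):
        times = run_load(users)
        print(f"{users:>4} users: max {max(times):.2f}s, "
              f"avg {sum(times) / len(times):.2f}s")
```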
Common Performance Problems
Long Load time
initial time it takes an application to start
Load time should be kept under a few seconds if possible
Poor response time
time it takes from when a user inputs data into the application until the application outputs a response to that input
should be very quick
if the user has to wait too long, they lose interest
Poor scalability
software product suffers from poor scalability when it cannot handle the expected number of users
Bottlenecking
obstructions in a system which degrade overall system performance
coding errors or hardware issues cause a decrease in throughput under certain loads
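A bottleneck can be made concrete with a little arithmetic: a resource with a fixed number of parallel workers and a fixed per-request service time has a hard throughput ceiling, and any offered load above that ceiling only builds a queue (and so degrades response time). The pool size and service time below are invented for illustration.

```python
# Sketch of how a bottleneck caps throughput. A resource with `WORKERS`
# parallel servers and a fixed service time per request can sustain at most
# WORKERS / SERVICE_TIME_S requests per second; load above that only queues.
WORKERS = 8                 # e.g. a DB connection pool size (hypothetical)
SERVICE_TIME_S = 0.05       # seconds of work per request on that resource

capacity_rps = WORKERS / SERVICE_TIME_S          # 160 requests/second ceiling

for offered_rps in (50, 100, 160, 200, 400):
    served = min(offered_rps, capacity_rps)
    backlog_per_s = max(0.0, offered_rps - capacity_rps)  # requests queuing up
    print(f"offered {offered_rps:>3} rps -> served {served:.0f} rps, "
          f"queue grows by {backlog_per_s:.0f} req/s")
```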
Performance Testing Process
Identify your testing environment
Identify the performance acceptance criteria
Plan & design performance tests
Configure the test environment
Implement test design
Run the tests
Analyze, tune and retest
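Steps 2 and 7 of this process are easier to repeat if the acceptance criteria are captured as data and checked automatically after every run. A minimal sketch, with hypothetical thresholds and measured values:

```python
# Sketch: acceptance criteria as data (step 2) plus a pass/fail check used
# during "analyze, tune and retest" (step 7). All numbers are hypothetical.
CRITERIA = {
    "avg_response_time_s": 2.0,
    "p95_response_time_s": 4.0,
    "throughput_rps": 100.0,
    "error_rate": 0.01,
}

def evaluate(results):
    """Return a list of criteria that the measured results violate."""
    failures = []
    for metric, limit in CRITERIA.items():
        value = results[metric]
        # Throughput must meet or exceed its limit; the others must stay under it.
        ok = value >= limit if metric == "throughput_rps" else value <= limit
        if not ok:
            failures.append(f"{metric}: measured {value} vs limit {limit}")
    return failures

print(evaluate({"avg_response_time_s": 1.4, "p95_response_time_s": 4.8,
                "throughput_rps": 120.0, "error_rate": 0.002}))
```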
Performance Testing Metrics: Parameters Monitored
Memory use
amount of physical memory available to processes on a computer.
Disk time
amount of time disk is busy executing a read or write request.
Bandwidth
shows the bits per second used by a network interface.
Private bytes
number of bytes a process has allocated that can't be shared amongst other processes
Committed memory
amount of virtual memory used
Memory pages/second
number of pages written to or read from the disk in order to resolve hard page faults.
Page faults/second
overall rate at which page faults are processed by the processor
CPU interrupts per second
avg. number of hardware interrupts a processor is receiving and processing each second
Disk queue length
avg. no. of read and write requests queued for the selected disk during a sample interval.
Network output queue length
length of the output packet queue in packets
Network bytes total per second
rate at which bytes are sent and received on the interface, including framing characters
Response time
time from when a user enters a request until the first character of the response is received
Throughput
rate at which a computer or network receives requests, usually measured per second
Amount of connection pooling
number of user requests that are met by pooled connections
Maximum active sessions
maximum number of sessions that can be active at once
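Two of these metrics, response time and throughput, can be derived directly from raw per-request records. A small sketch, assuming each record is a (start time, duration) pair and using invented sample data:

```python
# Sketch of deriving response-time statistics and throughput from raw
# per-request records of the form (start_timestamp_s, duration_s).
import statistics

samples = [(0.0, 0.21), (0.3, 0.35), (0.5, 0.19), (1.1, 0.80),
           (1.4, 0.25), (2.0, 0.31), (2.2, 1.10), (2.9, 0.27)]

durations = [d for _, d in samples]
# Observation window: from the first request start to the last request finish.
window = max(start + d for start, d in samples) - min(start for start, _ in samples)

print(f"avg response time : {statistics.mean(durations):.2f} s")
print(f"max response time : {max(durations):.2f} s")
print(f"throughput        : {len(samples) / window:.1f} requests/s")
```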
Example Performance Test Cases
Verify response time is not more than 4 secs when 1000 users access the website simultaneously
Check database execution time when 500 records are read/written simultaneously
Check the maximum number of users that the application can handle before it crashes
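As a rough illustration of the second test case, the sketch below times writing and then reading 500 records against an in-memory SQLite database. The record count comes from the case above; the 2-second threshold, the schema, and the sequential access (rather than truly simultaneous reads/writes) are simplifying assumptions, and a real test would target the application's actual database and concurrency model.

```python
# Sketch: time how long it takes to write and then read back 500 records.
# Uses an in-memory SQLite database purely for illustration.
import sqlite3
import time

RECORDS = 500
MAX_SECONDS = 2.0     # hypothetical acceptance threshold

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT)")

start = time.perf_counter()
conn.executemany("INSERT INTO orders (payload) VALUES (?)",
                 [(f"row-{i}",) for i in range(RECORDS)])
conn.commit()
rows = conn.execute("SELECT * FROM orders").fetchall()
elapsed = time.perf_counter() - start

assert len(rows) == RECORDS
print(f"wrote+read {RECORDS} records in {elapsed:.3f} s "
      f"({'PASS' if elapsed <= MAX_SECONDS else 'FAIL'})")
```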
Performance Test Tools
LoadNinja
Empowers teams to record and instantly play back load tests without complex dynamic correlation, and to run them in real browsers at scale
NeoLoad
performance testing platform designed for DevOps that seamlessly integrates into your existing Continuous Delivery pipeline
HP LoadRunner
capable of simulating hundreds of thousands of users, putting applications under real-life load to determine how they behave under expected conditions.
JMeter
one of the leading tools used for load testing of web and application servers
Service Level Agreements (SLA)
contract between a service provider and its internal or external customers
documents what services the provider will furnish and defines the service standards the provider is obligated to meet
helps the service provider manage customer expectations and defines the circumstances under which it is not liable for outages or performance issues
describe the performance characteristics of the service
SLA Checklist
Statement of objectives.
Scope of services to be covered.
Service provider responsibilities.
Customer responsibilities.
Performance metrics
Penalties for contract breach/exclusion.
BENEFITS
Increased service delivery efficiencies;
Improved resource utilisation;
Continuous improvement of service quality;
Clear performance expectations of both the customer and service provider;
Performance metrics
Availability and uptime percentage
amount of time services are running and accessible to the customer
Specific performance benchmark
the benchmark against which actual performance will be periodically compared
Service provider response time
time it takes the service provider to respond to a customer's issue or request
Resolution time
time it takes for an issue to be resolved once logged by the service provider
Usage statistics that will be provided
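Availability targets are easier to reason about once converted into allowed downtime. A worked example (the percentages below are common illustrative targets, not figures from any specific SLA):

```python
# Convert SLA uptime targets into the downtime they allow per month and year.
MINUTES_PER_MONTH = 30 * 24 * 60        # ~43,200 (using a 30-day month)
MINUTES_PER_YEAR = 365 * 24 * 60        # 525,600

for target in (99.0, 99.9, 99.99):
    allowed = 1 - target / 100
    print(f"{target}% uptime -> {allowed * MINUTES_PER_MONTH:.0f} min/month, "
          f"{allowed * MINUTES_PER_YEAR / 60:.1f} h/year of allowed downtime")
# e.g. 99.9% uptime allows about 43 minutes of downtime per month.
```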
Penalties: Repercussions for breaking terms
service provider may cap performance penalties at a maximum dollar amount to limit exposure.
include a section detailing exclusions
list might include events such as natural disasters or terrorist acts
Who needs a service-level agreement?
Companies that establish SLAs include IT service providers, managed service providers and cloud computing service providers
Corporate IT organizations, particularly those that have embraced IT service management (ITSM), enter into SLAs with their in-house customers
IT department creates an SLA so that its services can be measured, justified and perhaps compared with those of outsourcing vendors