Testing Best Practices in WL - Coggle Diagram
How to identify them
Who can propose best practices
Members of OZI Workshop
The testing community
Sources of potential best practices
WIN Awards
FS Labs
SDCO
Continuous Improvement Program
Techforum
FS Agile Coaching Hub
Operational Gates Initiative - PS
External benchmarks and testing practices
Researchers: Gartner, etc.
How to transfer them
How to identify beneficiary scopes
Members of OZI Workshop
Communication about the best practice, requesting interested beneficiaries to come forward
Role of the Best Practice "owner"
Incentives to transfer the Knowledge
Scope
Employees
Methodology
Awareness sessions
Training sessions
Change agents
Set-up of a continuous improvement project
Regular follow-up sessions
Duration of the knowledge transfer exercise
Effort requested
For the best practice owner
For the beneficiary
How to define them
Eligibility criteria for a best practice
Example of best practice
How to evaluate (continuous improvement)
Micro-level
OZI Workshop level
Measurement - KPIs
Best practices :check:
Define a clear testing strategy for the whole platform life-cycle
During the release deployment
regularly after go-live
Implement a Management CAB when the level of risk is higher than acceptable but the business pushes for production
Focus tests on a selected set of business applications and/or infrastructures having recurrent testing issues
Create a community sharing best testing practices to help people embody their roles
Run failover tests with a given periodicity (e.g. ~once a year) to ensure they still work as designed
Foster test automation
short-term investment
long-term benefits only
Make business aware of the damage due to improper testing
Set up KPIs so that people in development have objectives tied to incidents during the run phase
Involve architects to ensure that the test environment is an exact, up-to-date replica of production when possible
Include non-functional testing in the testing procedures
Update the DRP when needed with both schedule and performance/capacity/availability tests
Operational Gates Initiative (PS) :star:
Check that services promoted to production have gone through the mandatory steps
Check that expected deliveries are provided
Check that basic requirements are fulfilled
Focus on unit testing and non-regression testing
Investigate model-based testing
Have dedicated testers, distinct from the people in charge of development and integration
Adopt test practices on "Infrastructure as Code", including compliance with standards (analysis of terraform files, etc.), testing resulting infra...
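The "Infrastructure as Code" practice above (analysis of terraform files) can be sketched as a minimal compliance check. This is an illustrative assumption, not the initiative's actual tooling: the rule (every `aws_instance` must carry a `tags` block) and the sample file are hypothetical, and real setups would lean on `terraform validate` or policy-as-code tools.

```python
import re

# Illustrative rule (assumption): every aws_instance resource must
# declare a "tags" block. The regex captures each resource's name and
# body, stopping at the closing brace on its own line.
RESOURCE_RE = re.compile(
    r'resource\s+"aws_instance"\s+"(\w+)"\s*\{(.*?)\n\}', re.S
)

def untagged_instances(tf_source: str) -> list[str]:
    """Return names of aws_instance resources missing a tags block."""
    return [name for name, body in RESOURCE_RE.findall(tf_source)
            if "tags" not in body]

# Hypothetical terraform snippet: "web" is compliant, "worker" is not.
sample = '''
resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = "t3.micro"
  tags = { env = "prod" }
}

resource "aws_instance" "worker" {
  ami           = "ami-123456"
  instance_type = "t3.micro"
}
'''

print(untagged_instances(sample))
```

Such a check would typically run in the delivery pipeline alongside `terraform fmt -check` and `terraform validate`, so non-compliant files are caught before the resulting infrastructure is deployed and tested.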
Cross-topics
How to create awareness about OZI Testing Best Practice Initiative
WL Intranet - ONE
Challenges :warning:
Ensure appropriate testing prior to every change in production
Define who should do which tests
Promote successful tests as key drivers for quality
Support Agile/DevOps trends
Reconcile testing practices and means
Tooling aspects
Resource issues
Rely on goodwill and guidance, not on blame
Highlight collective responsibility (no finger pointing)
Deal with parties external to production
Manage exceptions causing test process breaches (i.e. tests skipped)
Handle Total Cost of Ownership (TCO) question
Working upfront to build quality software, then running activities at lower cost...
...or rushing to production in order to invoice the customer, at higher run and maintenance costs
Define relevant incentives so that people see their own interest in doing proper testing
Find the right balance for costs related to the alignment of acceptance & production platforms
Consider customer specificity
Encountered issues :fire:
Business pressure to overrule the risks and speed up changes into production
No clear process expectation to do regular tests after go-live
Improper testing
Lack of clear requirements
No time to test
Missing procedure
Tests not well designed
Missing non-functional aspects
Capacity testing
Performance testing
Lack of failover tests after service deployment
Last minute changes to the release
Tests already done are no longer valid
No time left to retest
No authority to reject this decision
Misalignment between business (GBL) & run (PS) on what the delivered service should be
Changes in the low-level design not reflected in the agreed high-level design/architecture
Production & acceptance misalignment
Production expectations not considered
Test environment inconsistencies