Software Testing
Test management
Test organization
Independent and integrated testing
There are several levels of independence:
- Tests by the person who wrote the item.
- Tests by another person within the same team, like another programmer.
- Tests by a person from a different group, such as an independent test team.
- Tests by a person from a different organization or company, such as outsourced testing or certification by an external body.
How independent the test team is:
- The programmer performs testing within the programming team.
- A team of independent testers sits outside the development team.
- A separate test team may report into the organization at a point equal to the development or project team.
Working as a test leader
- Tend to be involved in the planning, monitoring, and control of the testing activities and tasks.
- Test leaders and stakeholders devise the test objectives, organizational test policies (if any), test strategies and test plans.
- Estimate the testing to be done and negotiate with management to acquire the necessary resources.
- Recognize when test automation is appropriate.
- They may consult with other groups – e.g. programmers – to help them with their testing.
- Lead, guide and monitor the analysis, design, implementation and execution of the test cases, test procedures and test suites.
- They ensure proper configuration management of the testware produced and traceability of the tests to the test basis.
- They make sure the test environment is put into place before test execution and managed during test execution.
- Write summary reports on test status.
Working as a tester
- Testers analyze, review and assess requirements and design specifications.
- Identify test conditions and create test designs, test cases, test procedure specifications and test data.
- Set up the test environments.
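The tester tasks above can be sketched in code. A minimal illustration, using a hypothetical validator `is_adult` as the test object: from the test condition "age is within the accepted range" we derive boundary-value test data and concrete test cases.

```python
# Sketch: deriving test cases from a test condition, using a
# hypothetical test object is_adult(age) that accepts ages 18-120.

def is_adult(age):
    """Hypothetical test object: True for ages 18 through 120."""
    return 18 <= age <= 120

# Boundary-value test data for the condition "age is in range":
# each tuple is (input, expected result).
test_cases = [
    (17, False),   # just below the lower boundary
    (18, True),    # lower boundary
    (120, True),   # upper boundary
    (121, False),  # just above the upper boundary
]

for age, expected in test_cases:
    actual = is_adult(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"
print("all boundary tests passed")
```

The function name and the 18–120 range are illustrative assumptions, not from the source; the point is the mapping from condition to test data to expected results.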
Defining the skills test staff need
- Application or business domain.
- Technology.
- Testing.
CONFIGURATION MANAGEMENT
- Configuration management is in part about determining clearly what items make up the software or system.
- These items include source code, test scripts, third-party software, hardware, data and both development and test documentation.
RISK AND TESTING
Risks and levels of risk
- A risk is the possibility of a negative or undesirable outcome.
Product risks
- Product risk is the possibility that the system or software might fail to satisfy some reasonable customer, user, or stakeholder expectation.
Project risks
- A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc.
Tying it all together for risk management
- Deal with test-related risks to the project and product
INCIDENT MANAGEMENT
- An incident is any result that varies from the expected result.
- An incident report contains a description of the misbehaviour that was observed and a classification of that misbehaviour.
- A good incident report is a technical document, clear about its goals and audience; like any good report, it grows out of a careful approach to researching and writing it.
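A minimal sketch of the fields such a report might capture; the field names and values here are illustrative, not a standard schema.

```python
# Illustrative incident report as structured data; the fields mirror
# what a defect-tracking tool typically records (names are assumed).
incident_report = {
    "id": "INC-0042",
    "summary": "Login button unresponsive after session timeout",
    "steps_to_reproduce": [
        "Log in and stay idle until the session expires",
        "Click the Login button on the timeout page",
    ],
    "expected_result": "User is returned to the login form",
    "actual_result": "Button click has no effect; no error shown",
    "severity": "major",   # classification: impact of the failure
    "priority": "high",    # classification: urgency of fixing it
    "status": "open",
}

# The classification fields let a team triage and track the defect.
print(incident_report["severity"], incident_report["priority"])
```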
Fundamentals of testing
Why is testing necessary?
The role of testing, and its effect on quality
- The role of testing in software development, maintenance and operations.
- Where there are defects, there are risks of failure.
- While nothing can reduce the level of risk to zero, we certainly can - and should - try to reduce risk to an acceptable level before releasing the software to customers and users.
Fundamental test process
Planning and control: planning involves defining the objectives of testing and the activities needed to satisfy those objectives and the general mission of testing. In test control we continuously compare actual progress against the plan, adjust the plan, report the test status and any necessary deviations from the plan, monitor test activities, and take whatever actions are necessary to meet the mission and objectives of the project.
Test analysis and design: we transform the more general testing objectives defined in the test plan into tangible test conditions and test cases.
- Review the test basis.
- Identify and prioritize specific test conditions
- Design and prioritize high level (i.e. abstract or logical) test cases
- Evaluate the testability of the test basis and test objects
Evaluating exit criteria and reporting
We assess test execution against the objectives which we defined in the test plan.
Have to perform the following major tasks:
- Check the test logs gathered during test execution against the exit criteria specified in test planning.
- Assess if more tests are needed or if the exit criteria specified should be changed.
- Write a test summary report for stakeholders.
Test closure activities: these should occur at major project milestones.
- Check that the customer received the deliverables they expected and make sure that all issues are resolved.
- Finalize and archive testware, such as scripts, test environments, and any other test infrastructure, for later reuse.
- Hand over testware to the maintenance organization, which will support the software and make any bug fixes or maintenance changes, for use in validation and regression testing.
- Evaluate how the testing went and record lessons learned for future releases and projects.
What is testing ?
Static testing is any evaluation of the software or related work products (such as user stories) that occurs without executing the software itself.
Dynamic testing is an evaluation of the software or related work products that does involve executing the software.
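The contrast can be sketched in a few lines: a static check inspects the source without running it, while a dynamic test executes the code and compares actual against expected results. This uses Python's standard `ast` module purely as an example of tool-supported static evaluation.

```python
# Static vs dynamic evaluation of the same small piece of source code.
import ast

source = "def double(x):\n    return x * 2\n"

# Static: parse the source and inspect its structure without executing it.
tree = ast.parse(source)
func_names = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
assert func_names == ["double"]  # found the function by reading, not running

# Dynamic: execute the code and compare actual results against expected ones.
namespace = {}
exec(source, namespace)
assert namespace["double"](21) == 42  # behaviour verified by execution
```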
Some of the major activities in the test process :
- Test Planning: We establish (and update) the scope, approach, resources, schedule, and specific tasks in the intended test activities that comprise the rest of the test process.
- Test control: We develop and carry out corrective actions to get a test project back on track when we deviate from the plan.
- Test analysis: We identify what to test, choosing the test conditions we need to cover, each of which can be verified using one or more test cases.
- Test design: We determine how we will test what we decided to test during test analysis.
- Test implementation: We carry out the remaining activities required to be ready for test execution, such as developing and prioritizing our test procedures, creating test data, and setting up test environments.
- Test execution: We run our tests against the test object.
- Checking results: we see the actual results of the test case, the consequences and outcomes. These include outputs to screens, changes to data, reports, and communication messages sent out. We must compare these actual results against expected results to determine the pass/fail status of the test.
- Evaluating exit criteria: exit criteria are a set of conditions that would allow some part of a process to complete. Exit criteria are usually defined during test planning.
- Test results reporting: we want to report our progress against exit criteria, as described above. This often involves details related to the status of the test project, the test process.
- Test closure: Test closure involves collecting test process data related to the various completed test activities in order to consolidate our experience, reusable testware, important facts, and relevant metrics.
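The "checking results" and "evaluating exit criteria" activities above can be sketched together: run test cases against a test object, compare actual with expected outcomes, then check a pass-rate exit criterion. The 95% threshold and the trivial test object are assumptions for illustration only.

```python
# Sketch of test execution, result checking, and exit-criteria evaluation.

def run_tests(cases, test_object):
    """Run each (inputs, expected) case and record pass/fail status."""
    results = []
    for inputs, expected in cases:
        actual = test_object(*inputs)            # actual result of the run
        results.append("pass" if actual == expected else "fail")
    return results

# Hypothetical test object and its cases: (inputs, expected result).
cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
results = run_tests(cases, lambda a, b: a + b)

# Exit criterion (assumed here): at least 95% of planned tests pass.
pass_rate = results.count("pass") / len(results)
EXIT_CRITERION = 0.95
print(f"pass rate {pass_rate:.0%}; exit criterion met: {pass_rate >= EXIT_CRITERION}")
```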
Seven testing principles
Testing shows the presence of defects: Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software.
Exhaustive testing is impossible: Testing everything (all combinations of inputs and preconditions) is not feasible except in trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
Early testing: To find defects early, testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.
Defect clustering: Defects are not distributed randomly; rather, they tend to be found in clusters. If we can identify these clusters, we can focus on finding defects around those areas, which is one of the most effective ways to test.
Pesticide paradox: If the same set of test cases is run over and over, eventually it stops finding new defects. The effectiveness of test cases declines after a number of runs, so test cases must be reviewed and revised regularly.
Testing is context dependent: We have to approach testing in many different contexts. For example, a piece of security software will be tested differently than an e-commerce website.
Absence-of-errors fallacy: Not finding defects does not mean the product is ready to ship. Tests may only cover the specified requirements, while the system can still fail to meet users' real needs.
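The second principle can be made concrete with a small sketch: even one 32-bit input already has billions of possible values, so instead of trying them all we partition the input space into equivalence classes and test one representative value per class. The discount rule used here is a hypothetical example.

```python
# Why exhaustive testing is impractical, and what we do instead.

# A single 32-bit integer input alone has ~4.3 billion possible values.
total_inputs = 2 ** 32
print(f"{total_inputs:,} possible values for one 32-bit input")

def has_discount(total):
    """Hypothetical rule under test: discount applies from 100 upward."""
    return total >= 100

# Equivalence partitions for that rule, one representative value each.
partitions = {
    "invalid (negative)": -5,
    "valid, no discount (0-99)": 50,
    "valid, discount (>= 100)": 150,
}

for name, representative in partitions.items():
    print(name, "->", has_discount(representative))
```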
Code of ethics
- Public
- Client and employer
- Product
- Judgment
- Management
- Profession
- Colleagues
- Self
Tool support for testing
Types of test tool
Understanding the Meaning and Purpose of Tool Support for Testing
- To start with the most visible use, we can use tools directly in testing.
- We can use tools to help us manage the testing process.
- We can use tools as part of what's called reconnaissance or, to use a simpler term, exploration.
Test tool classification
- Test Management Tools
- Requirements Management Tools
- Incident Management Tools (Defect Tracking Tools)
- Configuration Management Tools
Tool support for static testing
- Review Tools
- Static Analysis Tools
- Modeling Tools
Tool support for test specification
- Test design tools
- Test data preparation tools
Tool support for test execution and logging
- Test execution tools
- Test harness/unit test framework tools (D)
- Test comparators
- Coverage measurement tools (D)
- Security testing tools
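As a concrete taste of the "test harness/unit test framework" category above, here is a minimal sketch using Python's built-in `unittest` as a representative framework; the function under test is invented for illustration.

```python
# Sketch: a unit test framework discovering, running, and logging tests.
import unittest

def reverse_words(text):
    """Small function under test (illustrative)."""
    return " ".join(reversed(text.split()))

class TestReverseWords(unittest.TestCase):
    def test_two_words(self):
        self.assertEqual(reverse_words("hello world"), "world hello")

    def test_empty_string(self):
        self.assertEqual(reverse_words(""), "")

if __name__ == "__main__":
    # The framework collects the TestCase methods, runs them, and
    # reports pass/fail for each; exit=False keeps the process alive.
    unittest.main(argv=["ignored"], exit=False)
```

Comparable frameworks exist for most languages (e.g. JUnit for Java, pytest for Python); the structure of fixture, assertion, and automated run/report is the common idea.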
Tool support for performance and monitoring
- Dynamic analysis tools (D)
- Performance-testing, load-testing and stress-testing tools
- Monitoring tools
Tool support for specific application areas
Tool support using other tools
Tool support for specific testing needs
Static techniques
Review Process
Types of review
1. Informal review: an undocumented review, e.g. checking one's own or a colleague's work without a formal process.
2. Walkthrough: a review session led by the author, typically with a group.
3. Technical review: a discussion meeting that focuses on achieving consensus about the technical content of a document.
4. Inspection: a formal review following a documented procedure, with planning and recorded results.
Static analysis by tools
Coding standards
- Checking for adherence to coding standards is certainly the best-known of all static analysis features.
- The main advantage of this is that it saves a lot of effort.
- If you take a well-known coding standard, there will probably be checking tools available that support it.
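What such a checking tool does can be sketched in a few lines: scan source code against simple rules and report each violation with its line number. The two rules here (a 79-character line limit and no tab characters) are assumed examples, loosely in the spirit of Python's PEP 8.

```python
# Minimal sketch of a coding-standard checker: rules are predicates
# over a source line; violations are reported as (line number, message).

RULES = [
    ("line too long", lambda line: len(line) > 79),
    ("tab character used", lambda line: "\t" in line),
]

def check_source(source):
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for message, broken in RULES:
            if broken(line):
                violations.append((lineno, message))
    return violations

sample = "x = 1\n\tindented_with_tab = 2\n"
print(check_source(sample))  # -> [(2, 'tab character used')]
```

Real tools (e.g. pylint, clang-tidy) apply hundreds of such rules, which is exactly why they save so much manual review effort.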
Code metrics
- The cyclomatic complexity metric is based on the number of decisions in a program.
- It provides an indication of the amount of testing (including reviews) necessary to detect a sufficient number of defects and to have adequate confidence in the system.
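The basic idea can be sketched: cyclomatic complexity is the number of decision points plus one. This toy version counts branching nodes in a Python function's syntax tree; real metric tools are considerably more thorough.

```python
# Toy cyclomatic complexity: count decision nodes in the AST, add 1.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    decisions = sum(isinstance(n, DECISION_NODES) for n in ast.walk(tree))
    return decisions + 1

code = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "no divisor found"
"""
# Two ifs and one for loop: 3 decisions + 1 = 4.
print(cyclomatic_complexity(code))  # -> 4
```

A higher value means more independent paths through the code, hence more test cases needed to cover them.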
Code structure
- There are several aspects of code structure to consider: control flow structure; data flow structure; data structure.