Test planning
The aim of the test plan is to describe how all tests will be carried out, including the resources, schedules and tools required. An effective approach to developing the test plan is to identify and assess the level of risk of the new IT system: the more business-critical a function, the greater the testing effort it requires.
Because not all functions carry the same weight, risk-based planning ensures an appropriate distribution of testing effort and an execution sequence adapted to the needs of the system or software product being developed.
Breakdown of testing effort
Software testing should consume around 40% of a project's total effort.
This 40% includes all the tests that may take place during a project: unit (TU), integration (TI), system (TS), acceptance (TA), performance, volume, pre-production and other tests.
The more immature the development process, or the riskier the project, the faster the 40% mark will be reached.
A structured testing program will reduce the effort to 30% after a few months.
Implementing quality assurance and quality control (QA/QC) practices upstream of the testing phase (shift-left testing) will reduce the effort to less than 20% after a few months.
Whatever the percentage, it is important to respect three rules (illustrated in the sketch after the list):
- Integration testing effort must be at least twice that of unit testing.
- Integration testing effort should be similar to system testing effort.
- Acceptance testing effort should be limited to roughly the same volume as unit testing, provided each team has respected its own testing level and effort.
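As a minimal illustration, the following Python sketch splits a testing budget across the four levels while respecting these three rules. The 40% share, the person-day unit and the function name are assumptions made for the example, not prescriptions.

```python
# Minimal sketch: splitting a testing budget across the four test levels
# while respecting the three rules above (TI >= 2 x TU, TI ~ TS, TA ~ TU).
# The weights, the 40% share and the person-day unit are illustrative.

def split_testing_effort(project_effort_days: float, testing_share: float = 0.40) -> dict:
    """Distribute the testing budget over TU, TI, TS and TA."""
    weights = {"TU": 1, "TI": 2, "TS": 2, "TA": 1}  # satisfies all three rules
    total_weight = sum(weights.values())
    budget = project_effort_days * testing_share
    return {level: round(budget * w / total_weight, 1) for level, w in weights.items()}

# Example: a 600 person-day project with the default 40% testing share
print(split_testing_effort(600))
# -> {'TU': 40.0, 'TI': 80.0, 'TS': 80.0, 'TA': 40.0}
```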
Test documentation
When carrying out tests, the test team must produce several deliverables:
Test strategy
This details how all tests (unit, integration, system, acceptance, etc.) will be carried out. A test strategy is normally based on a risk analysis.
Test plan
This specifies how tests will be carried out for a single test phase; there is normally one test plan per phase. Of course, if TI and TS are carried out by the same test team, a single plan will combine the two phases. Like the strategy, a test plan is normally based on a risk analysis.
Test cases
This deliverable documents test scenarios and test cases step by step. It is produced during the design phase.
Test schedule
This shows the sequence of test case execution, day by day. It is produced during the design phase.
User and training guide
Describes how to use the product. It is produced by the Technical Writer, who should be part of the test team. It is initiated during the design phase and evolves as the product is developed.
Unit test proof
This is the list of minimum elements to be tested by developers during unit testing. It is produced during the design phase, at the same time as the test sets.
Test log
Records the daily progress of tests. It is produced during test execution.
Test report
This report presents the test results and enables an objective decision to be made on whether or not to deploy a delivery. It is produced at the end of a test period.
Anomaly management
Good anomaly management makes it possible to track the anomalies detected during testing. The list is updated as development and testing progress, and contains not only test anomalies but also quality anomalies (non-conformities).
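As a simple illustration, such a list can be kept as structured records that distinguish test anomalies from quality anomalies. The sketch below assumes a lightweight in-house tracker; the field names are hypothetical, not a specific tool's schema.

```python
# Minimal sketch of an anomaly record for a lightweight in-house tracker;
# the field names are hypothetical, not a specific tool's schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Anomaly:
    identifier: str
    summary: str
    kind: str                  # "test" (defect) or "quality" (non-conformity)
    severity: str              # "High", "Medium" or "Low"
    status: str = "open"       # updated as development and testing progress
    detected_on: date = field(default_factory=date.today)

anomaly_list = [
    Anomaly("AN-001", "Report download times out", kind="test", severity="High"),
    Anomaly("AN-002", "Design document not reviewed", kind="quality", severity="Medium"),
]
```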
Risk analysis
An effective approach to developing the test strategy and plan is to identify and assess the level of risk of the new IT system. Risk is the major thread running through the design of test cases. It helps determine the time, budget and resources allocated to testing.
First, the test team needs to understand and investigate the system's characteristics in order to assess the magnitude of the potential risks (business risks). It then develops its test strategy based on these risks.
To do this, the test team must list all the functions and quality attributes required by the project (system or software product quality attributes; see the ISO/IEC 25010 model, e.g. security, reliability, portability). Quality attributes should be recorded whenever they matter to the customer and stakeholders; they will need to be tested, and such tests are generally classified as non-functional or “white box” tests.

New functions, modified functions and functions for which regression tests are required are therefore listed. For each function and attribute, a severity rating is assigned. During risk analysis, two major components must be considered when assigning a severity rating:
- The probability of a negative event occurring for that function or attribute.
- The impact or potential loss associated with that event.
► Risk severity = probability × impact.
Probability
The probability of occurrence takes into account the rate or frequency of use of a function. Thus, when analyzing a function, we determine its probability according to the following scale:
- Inevitable: all users necessarily use this function (e.g. system access).
- Frequent: users frequently use this function, but not always (e.g. printing or downloading a report).
- Occasional: an average user does not use this function; it is used by more experienced users in the event of a particular need (e.g. setting options).
- Rare: most users never use this function, which is only used for complex operations (e.g. toolbar modifications).
Impact
Once the risk areas (functions and quality attributes) have been identified, we analyze the impact of a malfunction on the user, rated on the following scale (a sketch combining both scales follows the list):
- Catastrophic: if this function fails, the impact is major (the machine crashes, the software stops working, backup is impossible, etc.).
- Serious: if this function fails, the application still works, but the risk of losing data or using corrupted data is high; the computer must be restarted to resolve the problem (e.g. communication with the host computer interrupted, data update interrupted).
- Moderate: the problem inconveniences the user but can be worked around with additional actions (e.g. a directory is full; create a new backup directory).
- Annoying: if this function does not work properly, the application can still be used, but subsequent problems may arise (e.g. the number of characters allowed when entering a file name).
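To make the formula concrete, here is a hedged Python sketch that maps the qualitative probability and impact levels above onto numeric scales and buckets their product into High, Medium or Low severity. The numeric values and thresholds are assumptions to be calibrated with the customer and stakeholders.

```python
# Hedged sketch of "risk severity = probability x impact": the numeric values
# and thresholds below are assumptions to be calibrated with stakeholders.
PROBABILITY = {"inevitable": 4, "frequent": 3, "occasional": 2, "rare": 1}
IMPACT = {"catastrophic": 4, "serious": 3, "moderate": 2, "annoying": 1}

def risk_severity(probability: str, impact: str) -> str:
    """Bucket the probability x impact product into High/Medium/Low."""
    score = PROBABILITY[probability] * IMPACT[impact]
    if score >= 9:    # e.g. frequent (3) x serious (3) and above
        return "High"
    if score >= 4:    # e.g. occasional (2) x moderate (2) and above
        return "Medium"
    return "Low"

print(risk_severity("inevitable", "catastrophic"))  # -> High
print(risk_severity("rare", "moderate"))            # -> Low
```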
Severity
Severity determines the priority of the functions to be tested, as well as their relative importance. We also aim to automate the testing of “High” severity functions.

Each function and attribute of the software product is therefore assigned a high, medium or low level of risk. “High” functions or attributes require more testing, and therefore more effort and test cases. “Medium” functions need to be tested half as much as “High” ones, and “Low” functions or attributes half as much as “Medium” ones. The test sequence must also respect the severity level, with the highest severities tested first.
1/3 rule
The only important rule to respect is that the severities allocated to the functions and attributes of a system or software product must be equally distributed (see the sketch after the list):
- 1/3 “high severity” functions or attributes,
- 1/3 “medium severity”,
- 1/3 “low severity”.
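The following illustrative Python sketch combines the 1/3 rule with the 2:1 ratio between adjacent severity levels described above (each “High” item gets twice the test effort of a “Medium” one, which gets twice that of a “Low” one). The tolerance, function names and budget figure are arbitrary assumptions.

```python
# Illustrative sketch combining the 1/3 rule with the 4:2:1 severity weighting
# implied above. Tolerance and budget figures are assumptions.
from collections import Counter

def respects_one_third_rule(severities: list[str], tolerance: float = 0.10) -> bool:
    """Check that High/Medium/Low are roughly equally distributed."""
    counts = Counter(severities)
    total = len(severities)
    return all(abs(counts[level] / total - 1 / 3) <= tolerance
               for level in ("High", "Medium", "Low"))

def effort_per_severity(testing_budget_days: float) -> dict:
    """Split the budget with a 4:2:1 weighting between severity levels."""
    weights = {"High": 4, "Medium": 2, "Low": 1}
    total = sum(weights.values())
    return {level: round(testing_budget_days * w / total, 1)
            for level, w in weights.items()}

severities = ["High"] * 10 + ["Medium"] * 11 + ["Low"] * 9
print(respects_one_third_rule(severities))  # -> True (within 10% tolerance)
print(effort_per_severity(240))             # -> {'High': 137.1, 'Medium': 68.6, 'Low': 34.3}
```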
Risk is not a fixed or absolute value: it depends on the context and perception of each individual. What may be perceived as risky for one person may not be for another, depending on their experience, knowledge and risk tolerance. A good practice is to carry out the exercise in a team with the customer or the Product Owner (PO), and above all not to forget the stakeholders.
Conclusion
Risk-based planning is an essential approach in software development, enabling efforts to be prioritized where they are most critical. By identifying and assessing potential risks from the outset, the test team can focus its resources on the most vulnerable areas, minimizing surprises at the end of the cycle. This proactive approach not only enhances the quality and reliability of the final product, but also ensures a more predictable and controlled delivery.
To find out more:
Quality: The Basics and Best Practices of Software Testing
Quality: plan and execute your software testing activities
Software testing: implementing a team in the organization
Software Testing: Applying the Best Practices Using the Agile Mode