
Sunday, May 8, 2011

Economic principles of testing - why test, what and how long

This week I received an email asking the following questions:
  1. How do you determine whether a functionality is critical or not? Is it based on business turnover or something else?
  2. Consider this situation: around 40 systems, approx. 800 interfaces, and thousands of functionalities. How do you effectively assess the importance (in terms of being business critical) of individual functionalities and their relationships? Are there any dependency algorithms for this?
  3. Is it possible to define the end of testing by a criterion such as a bug count below some threshold before going into production? Or is it necessary to go to production only when the bug count is zero in all components?
As all of these questions revolve around the economic understanding of testing, I realized that I had not yet explained the basic principles, so I decided to remedy that.
So here are the basic economic principles of testing.


We test because it pays off, and only as long as it pays off

In business, every step we take is taken with the prospect of future profit. If we offer employees various benefits, we do it because we want to attract quality employees and reduce the costs caused by staff turnover. If we invest in quality products, we do it because better quality brings us more money, either because we do not have to spend large amounts on fixing defects discovered by customers, or because it brings us more satisfied customers.

This is related to the fact that we test only as long as it pays off. If, at some stage, the cost of discovering and fixing defects is higher than the estimated cost of leaving them in, it makes no sense to continue testing.

Sometimes a manager gets into a situation where the application is very buggy, testing is effective, and two or three more months of testing and bug fixing would help, but for some reason, for example a legal deadline, it is still more cost-effective to deploy the application.
It is the customer's call, knowing the overall situation, to decide when to deploy the application. This decision belongs to the manager of the part of the company that will use the application or sell it on the mass market. The test manager only provides the information needed for this decision, and in particular warns about the quality risks.
If only 20% of an application is fully and reliably functional, 10% has too many defects to be usable, and the rest has not been tested yet, what would deploying it mean?

Thus the tester is not the one who defines the acceptance criteria or decides when to release the application into production. That is the customer's call: the customer decides how many defects of what severity to tolerate, or whether to base the decision not on quality but on the market situation.

We monitor the effectiveness of testing by the number of newly discovered defects


As mentioned above, we should test as long as it is financially worthwhile. What does this mean? It means that we should stop testing at the point where the future cost of finding and correcting defects would exceed the cost of stopping and risking that the remaining defects will be discovered by customers. Determining that moment is not trivial, however. Apart from the need to work with probabilities, it also requires sufficiently detailed information on defect and testing costs. Most companies do not track costs at this level of detail, so the information is simply not available. A less accurate but simple and reasonable alternative is to track the number of newly found defects. The following picture shows a typical example of how the number of newly found defects fluctuates during testing.


Initially, the test team finds a lot of new defects. Over time the number decreases, and it appears that further testing could not bring any surprises. At this point an inexperienced test manager terminates the testing, thinking the product is ready. An experienced test manager continues, knowing that testing normally catches a second wind and the number of discovered bugs will begin to grow again. Moreover, the test manager will support this growth by varying the tests. Even without a change of testing approach, however, the increase is to be expected. Ending the testing is advisable after the second or third trough. Of course, this is only one method of determining when to stop; there are others based on data gathering and monitoring.
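One simple way to implement this stopping heuristic is to look for the troughs (local minima) in the per-cycle counts of newly found defects. The sketch below is my own illustration of the idea, not an established tool; the function names and the sample counts are invented.

```python
def find_troughs(new_defects_per_cycle):
    """Return indices of cycles where the count of newly found
    defects stops falling and starts rising again (local minima)."""
    d = new_defects_per_cycle
    troughs = []
    for i in range(1, len(d) - 1):
        if d[i - 1] > d[i] and d[i] < d[i + 1]:
            troughs.append(i)
    return troughs

def suggest_stop(new_defects_per_cycle, after_trough=2):
    """Suggest stopping after the n-th trough (the 'second or third
    recess' above), or None if that trough has not appeared yet."""
    troughs = find_troughs(new_defects_per_cycle)
    if len(troughs) >= after_trough:
        return troughs[after_trough - 1]
    return None

# Illustrative data: defects found in each of ten test cycles.
counts = [40, 28, 12, 9, 21, 26, 14, 6, 11, 4]
print(find_troughs(counts))   # local minima at cycles 3 and 7
print(suggest_stop(counts))   # stop after the second trough: cycle 7
```

Note that the heuristic only fires once the counts rise again, so in practice it always costs one extra cycle of testing to confirm that a trough really was a trough.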


Testing is an investment. Money should be put where it will have the greatest effect

The reason for prioritizing functionalities is to focus the testing effort appropriately. What is important for the business is, again, determined by the customer. In complex systems, the criticality of applications and business processes is usually well known; to find it, it is often enough to contact the risk manager or look into the contingency plans. Criticality is determined by the impact that the application's unavailability or malfunction would have.
For example, the losses from one day of unavailability are estimated, and the resulting ranges are matched with importance labels.
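As a minimal sketch, such a mapping can be just a few thresholds. The loss ranges and labels below are invented for illustration; a real contingency plan would define its own.

```python
def criticality_label(daily_loss_eur):
    """Map the estimated loss from one day of unavailability
    to an importance label. Thresholds are illustrative only."""
    if daily_loss_eur >= 1_000_000:
        return "critical"
    if daily_loss_eur >= 100_000:
        return "high"
    if daily_loss_eur >= 10_000:
        return "medium"
    return "low"

print(criticality_label(250_000))    # high
print(criticality_label(2_000_000))  # critical
```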

Regarding the importance of individual tests, you can draw mainly on the priority of use cases. The customer identifies their priorities during requirements gathering and specification. To determine priorities, they usually consider how many people use a function, whether it touches critical resources (money, personal data, etc.), and how essential the business processes handled by the function are. Often, however, the customer, especially on smaller projects, assigns importance by gut feeling, following the contractor's instruction to pick the 20% most important functionalities. A popular and easily understood scheme assigns three levels of priority:

Critical to quality - Without the functions with this label, the product cannot be used and would fail to fulfill even the most basic needs of the customer.

Good enough quality – This label marks functions that are needed but not the most critical.

Nice to have – This label marks features that are the proverbial icing on the cake. They can improve the experience, make work a little easier, or simply be another thing that someone might use in the future.
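These three levels could be modeled, for example, as a small enum whose order also drives the order of testing. The use cases below are hypothetical examples, not from any real project.

```python
from enum import Enum

class Priority(Enum):
    CRITICAL_TO_QUALITY = 1
    GOOD_ENOUGH_QUALITY = 2
    NICE_TO_HAVE = 3

# Hypothetical use cases with priorities assigned by the customer.
use_cases = {
    "transfer money": Priority.CRITICAL_TO_QUALITY,
    "export statement": Priority.GOOD_ENOUGH_QUALITY,
    "choose color theme": Priority.NICE_TO_HAVE,
}

# Order the test effort: critical-to-quality functions come first.
test_order = sorted(use_cases, key=lambda uc: use_cases[uc].value)
print(test_order)
```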

So, back to the start: the answer to all three questions is "ask the customer what they want". The customer is the one who invests, and the one who will judge the return on that investment.

Saturday, December 5, 2009

Why should we test our software?

This article is not only for testers but also for managers interested in what software testing could bring them and perhaps does not bring now. I therefore start with definitions of two terms that I use often here:

Quality Assurance - planned and systematic processes designed to ensure the suitability of the product for its intended purpose.

Testing - the process of collecting and sorting information obtained by examining the product; a part of the more general quality assurance.

If we feel that testing and quality assurance are not given enough attention and that this side of software development is constantly underestimated, perhaps we have not managed to convince others of the importance of testing and its expert management.

So what are the benefits of software testing and quality assurance? Which items should we mention when presenting its benefits?

The quality assurance process affects three managerially important areas:
- Marketing
- Risk management
- Reducing costs

Marketing


That a good product delivered to a customer has a positive impact on the company's reputation, while highly unreliable software destroys it, is a simple logical conclusion. It is harder to predict, or even to realize, how strong this influence is. A poor product affects reputation in waves: some are immediate and quickly fade, others hurt the business for several years.

The first wave is the response of the direct customer, who decides about subsequent cooperation based on their satisfaction.

The second wave is the reaction of end customers, who can cause a number of inconveniences if they receive the product badly. Their constant complaints and bug reports can push the project unexpectedly over budget, so that an initially profitable project becomes unprofitable. Furthermore, their negative reception of the product may change management's view of the supplier, or reach the general public through newspaper and television news.

The third wave is the reaction of current employees and of others in the software development field. The reputations of software companies spread through constant staff turnover and persist for many years. Competent employees leave firms that are not able to deliver software of the required quality, either because they do not want to be associated with the next failure or because they are simply not motivated to improve there. Where there are no objective criteria of quality, there is always a lack of effort to improve. For the same reasons, good potential employees avoid such a firm: they have heard negative things about it from former or current employees.

The fourth wave, which hits a software company with the largest delay, is the reluctance of other customers and software firms to cooperate with it. A lack of effort to ensure quality and poor processes cannot be concealed from employees. If those employees are convinced that they themselves would never give their firm a contract for software development, they will be just as unwilling years later, when they make such decisions from the position of managers and directors. The same goes for anyone who has learned about poorly designed processes from colleagues and friends.

Through these four waves, each project influences the company's business, positively or negatively, according to its quality and impact.

Risk management


Testing is a tool for obtaining objective information on the status of the developed software. This information is the most important input for risk management of software development. Testing is intertwined with the entire development process. From the very beginning, it checks that development matches the customer's needs and that the outputs are clear and logical. It prevents misunderstandings and unnecessary waste of resources by informing about shortcomings and errors in time. Quality assurance moreover defines procedures and monitors development in terms of simplicity, clarity, accuracy, precision, speed and other quality standards. As a result, the likelihood of quality problems during and after development is significantly reduced, and the impact of any problem that does occur is limited.

Testing significantly reduces the risk of malicious bugs. A single malicious bug in the financial, medical or another critical sector can cause a loss that exceeds the entire budget for the development of the software.

Without testing, there are only two unpleasant ways to reduce the risk associated with software development:
- Find a subcontractor that assumes responsibility to some extent
- Where possible, insure against certain types of problems

Reducing costs


Although one of the most common management mistakes is sacrificing quality during cost reduction, it is precisely effective quality assurance that brings the greatest savings. From an economic point of view, testing should last as long as the estimated average cost of finding and correcting a bug discovered in the next test cycle is less than the average cost of a bug discovered by the customer multiplied by the probability of its discovery. Simply put, testing should last as long as it is financially more profitable than not testing. Quality assurance is an investment, so it is useful to monitor its return.
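The stopping rule above boils down to a one-line comparison. The sketch below only illustrates the formula; the cost figures are invented.

```python
def keep_testing(cost_next_cycle_per_bug, avg_customer_bug_cost,
                 p_customer_discovery):
    """Continue testing while finding and fixing a bug in the next
    cycle is cheaper than the expected cost of letting the customer
    discover it (average customer-bug cost times the probability
    that the customer actually hits it)."""
    expected_cost_of_escape = avg_customer_bug_cost * p_customer_discovery
    return cost_next_cycle_per_bug < expected_cost_of_escape

# Illustrative figures: finding a bug in the next cycle costs 5 000,
# a customer-found bug costs 20 000 and has a 60% chance of being hit.
print(keep_testing(5_000, 20_000, 0.6))   # True:  5 000 < 12 000
print(keep_testing(15_000, 20_000, 0.6))  # False: 15 000 > 12 000
```

Note that both inputs are estimates: as testing proceeds, the cost of finding the next bug rises while the pool of escapable bugs shrinks, so the comparison has to be re-evaluated cycle by cycle.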

The problem is that it is very difficult to run quality management with maximum efficiency and minimum cost. Such a task requires the experience, intuition and excellent knowledge of a professional. Therefore, when you assure quality, you need to do it well.

How to start?


Setting up an effective testing process is not a one-time thing; it is created by constant tuning based on reasonably chosen metrics that warn you when something is wrong and let you monitor deterioration or improvements in efficiency.

Understanding the key role of quality management in software development, setting standards and a quality control process, and having excellent professionals with hands-on testing experience is a good basis for any quality assurance department.