Monday, September 7, 2009

Murphy's laws. Why are they true?

A program is good when it is bug-free - which is impossible.

There are people who do not believe in the existence of bug-free software, and people who think that testing ensures perfection. (There are other groups of people, of which the worst are those who do not care about quality. But let's leave that aside.)
Bug-free programs do exist, but only trivial ones - for example, a program whose only job is to print "Hello World!" on the screen and exit.
The more complicated a program is, the harder it is to verify that it is free of errors.
Even a simple program of a few lines may have so many different execution paths that a person could not test all of them in ten years.
Therefore, with any non-trivial program, and especially with complex applications and systems, we must accept that no matter how much we test, some bugs will always remain.
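As a minimal illustration (not from the original text, written in C with made-up names), consider how quickly execution paths multiply: each independent condition doubles the number of paths, so four one-line conditions already give 2^4 = 16 combinations, and thirty such conditions would give over a billion.

```c
#include <stdio.h>

/* Hypothetical example: every independent `if` doubles the number of
   possible execution paths, so even this short function already has
   2^4 = 16 distinct paths that a thorough tester would have to cover. */
static int classify(int a, int b, int c, int d) {
    int score = 0;
    if (a > 0) score += 1;
    if (b > 0) score += 2;
    if (c > 0) score += 4;
    if (d > 0) score += 8;
    return score;  /* 16 possible outcomes from four trivial conditions */
}

int main(void) {
    printf("%d\n", classify(1, -1, 1, -1));  /* prints 5 */
    return 0;
}
```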

Undetectable errors are infinite in variety, in contrast to detectable errors, which by definition are limited.

This law is merely a consequence of Dijkstra's axiom that testing can be used to show the presence of errors, but never their absence.

Every non-trivial program contains at least one bug.
Every non-trivial program can be simplified by at least one line of code.
The conclusion of the last two laws: every non-trivial program can be simplified to one line of code, and it will contain a bug.


There is almost nothing to add. Perhaps only that it is a self-fulfilling prophecy: the more changes a person makes to the code, the greater the probability of introducing a bug.

A working program is one that has only unobserved bugs.

In fact, this law could read: the software nobody complains about has only undiscovered bugs, and that is because nobody uses it.
And that is how it is in reality. The aim of testing is to achieve a certain (preferably high) degree of quality, not to guarantee that the customer will never report an error.

It is enough that the customer feels good about the product: bugs do not bother him frequently, and when he does find one, it is nothing serious and it is quickly fixed.

The number of bugs always exceeds the number of lines found in a program.


This is a generalized observation. The law often holds because there are so many changes during the development lifecycle that although the final program has x lines, the programmers have written many times more.


The chance of a program doing what it's supposed to do is inversely proportional to the number of lines of code used to write it.


This is easy to understand. The more complicated a program is, the harder it is to understand and keep in mind, and the easier it is to introduce a bug.

No matter how many resources you have, it is never enough.

A non-trivial program can never be completely tested, and we cannot determine how many bugs remain to be discovered. It is therefore possible to test indefinitely.
What matters is achieving the greatest effect with the available resources - this is the true art of a competent manager. If discovering and fixing a bug costs more than having it discovered by an (angry) customer, it is time to stop testing.

A patch is a piece of software which replaces old bugs with new bugs.


Although a patch is a relatively small piece of software, it still needs to be tested. Relying on everything running as intended, without anyone actually testing it, is a sure recipe for disaster.

Bugs will appear in one part of a working program when another 'unrelated' part is modified.

There are two reasons why a change to only one part of a program can suddenly cause bugs where none existed before.
The first reason is that the parts are connected with each other in some way, but nobody realized it.
For example, one of them calls a function that in turn calls another function which was removed by the change.
The second reason is that although the two parts are not related, they use the same resources. The change may thus expose a bug that has been in the software for some time, as happens with poorly managed shared memory or a pointer that points to random data.
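A minimal sketch of the second case, written in C with made-up names: two "unrelated" parts of a program share the same scratch buffer, and a seemingly harmless change in one of them silently corrupts data the other still relies on.

```c
#include <stdio.h>

/* Hypothetical example: module A and module B look unrelated,
   but both reuse the same static scratch buffer. */
static char scratch[32];

/* Module A: formats a user id and hands out a pointer into the buffer. */
static const char *format_user(int id) {
    snprintf(scratch, sizeof scratch, "user-%d", id);
    return scratch;              /* caller assumes this string stays valid */
}

/* Module B: a later, "unrelated" change made it use the same buffer. */
static void log_status(const char *msg) {
    snprintf(scratch, sizeof scratch, "[%s]", msg);  /* overwrites A's data */
    puts(scratch);
}

int main(void) {
    const char *who = format_user(42);
    log_status("ok");            /* the change in module B ...                */
    printf("%s\n", who);         /* ... makes this print "[ok]", not "user-42" */
    return 0;
}
```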


The subtlest bugs cause the greatest damage and problems.


Small and inconspicuous errors have either little or no effect (for example, a logo shifted from its specified position) or very serious consequences (in one case out of a hundred an amount is rounded the wrong way, which may result in a loss several times greater than the price of the software). If an error is obvious, the user notices it immediately, and if its consequences are serious, the situation can still be remedied and the bug fixed. But if an error escapes attention, its impact accumulates until the avalanche breaks loose and causes very serious problems.
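A minimal sketch of how such a subtle error accumulates, again in C: adding one tenth a million times in single precision drifts visibly from the exact total, even though every individual step looks harmless.

```c
#include <stdio.h>

/* Hypothetical example: the same running total kept in float and in double.
   Each addition of 0.10 is off by an invisibly small rounding error,
   but after a million additions the float total is noticeably wrong. */
int main(void) {
    float  total_f = 0.0f;
    double total_d = 0.0;
    for (int i = 0; i < 1000000; i++) {
        total_f += 0.10f;
        total_d += 0.10;
    }
    printf("float : %.2f\n", total_f);   /* visibly far from 100000.00 */
    printf("double: %.2f\n", total_d);   /* still inexact, but very close */
    return 0;
}
```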


Software bugs are impossible to detect by anybody except the end user.


Although testers reveal a large number of bugs, some bugs are difficult for anyone but the end user to detect. Even if a tester tries to look at the software from a user's perspective, he is not an average user and lacks knowledge of how users actually behave.
The only way to know how users react to the software is to watch them work with it.

Any problem, no matter how complex, can be found by simple inspection.
Corollary: A nagging intruder with unsought advice will spot it immediately.


If the tester has exhausted all his ideas and discovered all the bugs he could, it is necessary to change the approach, or perhaps step back and look from a greater distance, to find further bugs.

Walking on water and developing software to specification are easy as long as both are frozen.


Any change in the specification brings risks into development:
• Not everybody who needs to know about the change will be told about it
• The change is unclear or poorly understood
• Parts that were already tested must be dropped and rewritten, and the new code is potentially full of errors
• The amount a developer must keep in mind grows, and it becomes easier to make a mistake
• ...

Law of Anti-security
The best way past a pesky security feature is a 13-year-old.


In an effort to secure software against unwelcome intruders, people tend to focus on the most common and best-known ways of penetrating a system and build sophisticated defences against them. In doing so, they may overlook an indirect but simple way in. One could say that this law is the result of not seeing the trees for the forest.
