Why does Cem Kaner consider a test not revealing a bug a waste of time?

What about confirming functionality with positive tests, proving that it works – is that a waste of time? What concept is behind this quote?

Unsuccessful tests, i.e. tests that do not find errors are a waste of
time.

Web Engineering: The Discipline of Systematic Development of Web Applications quoting Cem Kaner.


I wrote most of Testing Computer Software over 25 years ago. I’ve since pointed to several parts of the book that I consider outdated, or simply wrong. See http://www.kaner.com/pdfs/TheOngoingRevolution.pdf

You can see more (current views, but without explicit pointers back to TCS) at my site for the Black Box Software Testing Course (videos and slides available for free), www.testingeducation.org/BBST

The testing culture back then was largely confirmatory.

In modern testing, the approach to unit testing is largely confirmatory–we write large collections of automated tests that simply verify that the software continues to perform as intended. The tests serve as change detectors–if something changes in other parts of the code and this part now has problems, or if data values that used to be impossible in the real world are now reaching the application, then the change detectors fire, alerting the programmer to a maintenance problem.
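A minimal sketch of such a change detector, assuming a hypothetical `parse_price` function (Python, pytest-style; the function and values are invented for illustration):

```python
def parse_price(text):
    # Hypothetical function under test: parses a price string
    # like "12.50" into an integer number of cents.
    return int(round(float(text) * 100))

def test_parse_price_pins_current_behavior():
    # A confirmatory "change detector": it does not probe for new
    # bugs, it pins down current behavior so that if a later change
    # elsewhere alters it, the test fires and alerts the maintainer.
    assert parse_price("12.50") == 1250
    assert parse_price("0") == 0
```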

I think the confirmatory mindset is appropriate for unit testing, but imagine a world in which all of system testing was confirmatory (for folks who make a distinction, please interpret “system integration testing” and “acceptance testing” as included in my comments on system testing.) The point of testing was to confirm that the program met its specifications and the dominant approach was to build a zillion (or at least a few hundred) system-level regression tests that mapped parts of the spec to behaviors of the program. (I think spec-to-behavior confirmation is useful, but I think it is a small portion of a larger objective.)

There are still test groups that operate this way, but it is no longer the dominant view. Back then, it was. I wrote emphatically and drew sharp contrasts to make a point to people who were consistently being trained in this mindset. Today, some of the sharp contrasts (including the one quoted here) are outdated. They get misinterpreted as attacks on the wrong views.

As I see it, software testing is an empirical process for learning quality-related information about a software product or service.

A test should be designed to reveal useful information.

Back then, by the way, no one talked about testing as a method for revealing “information”. Back then, testing was either for (some version of …) finding bugs or for (some version of … ) verifying (checking) the program against specifications. I don’t think that the assertion that tests are for revealing useful information came into the testing vocabulary until this century.

Imagine rating tests in terms of their information value. A test that is very likely to teach us something we don’t know about the software would have a very high information value. A test that is very likely to confirm something that we already expect and that has already been demonstrated many times before, would have a low information value. One way to prioritize tests is to run higher information value tests before lower information value tests.
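The prioritization idea above can be sketched in a few lines; the test descriptions and scores below are invented judgments, not computed properties:

```python
# Sketch: ordering a test backlog by estimated information value.
# The "info_value" numbers are hypothetical tester judgments.
tests = [
    {"name": "spec happy path, demonstrated 200 times before", "info_value": 1},
    {"name": "unicode input never tried", "info_value": 9},
    {"name": "boundary at max field length", "info_value": 7},
]

# Run higher-information-value tests before lower-value ones.
run_order = sorted(tests, key=lambda t: t["info_value"], reverse=True)
for t in run_order:
    print(t["name"])
```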

If I was to oversimplify this prioritization so that it would attract the attention of a programmer, project manager, or process manager who is clueless about software testing, I would say “A TEST THAT IS NOT DESIGNED TO REVEAL A BUG IS A WASTE OF TIME.” It’s not a perfect translation, but for readers who cannot or will not understand any subtlety or qualification, that’s as close as it’s going to get.

Back then, and I see it again here, some of the people who don’t understand testing would respond that a test designed to find corner cases is a waste of time compared to a test of a major use of a major function. They don’t understand two things.

First, by the time testers find time to check boundary values, the major uses of the major functions have already been exercised several times. (Yes, there are exceptions, and most test groups will pay careful attention to those exceptions.)

Second, the reason to test with extreme values is that the program is more likely to fail with extreme values. If it doesn’t fail at the extreme, you test something else. This is an efficient rule. On the other hand, if it DOES fail at an extreme value, the tester might stop and report a bug, or the tester might troubleshoot further, to see whether the program fails in the same way at more normal values. Who does that troubleshooting (the tester or the programmer) is a matter of corporate culture. Some companies budget the tester’s time for this, some budget the programmer’s, and some expect programmers to fix corner-case bugs whether they are generalizable or not, so that troubleshooting is not relevant.

The common misunderstanding that testers are wasting time (rather than maximizing efficiency) by testing extreme values is another reason that “A test that is not designed to reveal a bug is a waste of time” is an appropriate message for testers. It’s a counterpoint to the encouragement from some programmers to (in effect) never run tests that might challenge the program. The message is oversimplified, but the entire discussion is oversimplified.
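The extreme-value rule can be illustrated with a hypothetical clamp function (Python; the function and ranges are invented). Off-by-one bugs cluster at the edges, so tests there carry more information than yet another mid-range check:

```python
def clamp_percent(x):
    # Hypothetical function under test: clamp x into [0, 100].
    if x < 0:
        return 0
    if x > 100:
        return 100
    return x

assert clamp_percent(50) == 50     # typical value: low information
assert clamp_percent(0) == 0       # lower boundary
assert clamp_percent(-1) == 0      # just below it
assert clamp_percent(100) == 100   # upper boundary
assert clamp_percent(101) == 100   # just above it
```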

By the way, “information value” can’t be the only prioritization system. It’s not my rule when I design unit test suites. It’s not my rule when I design build verification tests (aka sanity checks). In both of those cases, I’m more interested in types of coverage than in the power of the individual tests. There are other cases (e.g. high-volume automated tests that are cheap to set up, run and monitor) where power of individual tests is simply irrelevant to my design. I’m sure you can think of additional examples.

But as a general rule, if I could state only one rule (e.g. speaking to an executive whose head explodes if he tries to process more than one sentence), it would be that a low information-value test is usually a waste of time.


The idea is, according to Kaner, “since you will run out of time before running out of test cases, it is essential to use the time available as efficiently as possible.”

The concept behind the quote you ask about is presented and explained in good detail in the book Testing Computer Software by Cem Kaner, Jack Falk, and Hung Quoc Nguyen, in the chapter “THE OBJECTIVES AND LIMITS OF TESTING”:

SO, WHY TEST?

You can’t find all the bugs. You can’t prove the program correct, and you don’t want to. It’s expensive, frustrating, and it doesn’t win you any popularity contests. So, why bother testing?

THE PURPOSE OF TESTING A PROGRAM IS TO FIND PROBLEMS IN IT

Finding problems is the core of your work. You should want to find as many as possible; the more serious the problem, the better.

Since you will run out of time before running out of test cases, it is essential to use the time available as efficiently as possible. Chapters 7, 8, 12, and 13 consider priorities in detail. The guiding principle can be put simply:


A test that reveals a problem is a success. A test that did not reveal a problem was a waste of time.


Consider the following analogy, from Myers (1979). Suppose that something’s wrong with you. You go to a doctor. He’s supposed to run tests, find out what’s wrong, and recommend corrective action. He runs test after test after test. At the end of it all, he can’t find anything wrong. Is he a great tester or an incompetent diagnostician? If you really are sick, he’s incompetent, and all those expensive tests were a waste of time, money, and effort. In software, you’re the diagnostician. The program is the (assuredly) sick patient…


You see, the point of the above is that you should prioritize your testing wisely. Testing takes a limited amount of time, and it’s impossible to test everything in the time given.

Imagine that you spent a day (week, month) running tests, found no bugs, and let some bug slip through because you didn’t have time to run a test that would have revealed it. If this happens, you can’t just say “it’s not my fault because I was busy running other tests” to justify the miss – you will still be held responsible.

You wasted time running tests that did not reveal bugs, and because of this, you missed a test that would have found a bug.

(In case you wonder, misses like the one above are generally unavoidable no matter how hard you try, and there are ways to deal with them, but that would be more a topic for a separate question… and probably a better fit for SQA.SE.)


Well, I don’t know Mr. Kaner, but IMHO

tests that do not potentially find errors

are a waste of time. That includes the situation where you already have some tests (it does not matter whether they are automated or just on a checklist) and you add new tests that validate essentially the same cases you already cover. Your new tests won’t find any more errors than the existing ones.

Such a situation can happen, for example, if you just throw a list of randomly – I could even say “brainlessly” (forgive the word) – chosen test cases at your program, without thinking about whether they check a new edge case or a new equivalence class of your input data, or whether they increase code coverage relative to the tests already written.
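The equivalence-class point can be sketched with a hypothetical age validator (Python; the function, names, and range are invented for illustration):

```python
def is_valid_age(age):
    # Hypothetical validator: accepts integers in [0, 130].
    return isinstance(age, int) and 0 <= age <= 130

# These inputs sit in the SAME equivalence class (valid mid-range).
# After the first, the next two are unlikely to reveal anything new:
assert is_valid_age(25)
assert is_valid_age(30)
assert is_valid_age(42)

# Each of these covers a DIFFERENT class or boundary, so each one
# can catch a distinct kind of error:
assert not is_valid_age(-1)      # below the valid range
assert not is_valid_age(131)     # above the valid range
assert not is_valid_age("25")    # wrong type
assert is_valid_age(0)           # lower boundary
assert is_valid_age(130)         # upper boundary
```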

In my opinion this quote refers to tests that are too general or not robust enough.

If you write a test for a function that validates emails and you only provide valid emails, that test is nearly useless. You would have to test this function with “any” possible string: invalid emails, too-long emails, Unicode characters (áêñç…), and so on.

If you code a test that only checks that [email protected] returns true and name@com returns false, that test is little better than no test at all.
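A sketch of what a more informative suite might look like, using a deliberately naive, hypothetical validator (real email validation is far messier than this regex):

```python
import re

def looks_like_email(s):
    # Naive, illustrative check: local@domain.tld with no
    # whitespace and exactly one "@". Not production-grade.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", s) is not None

# A suite of only valid addresses confirms what we already expect
# and teaches us little; the negative and edge cases do the work:
assert looks_like_email("user@example.com")
assert not looks_like_email("name@com")         # no dot in domain
assert not looks_like_email("@example.com")     # empty local part
assert not looks_like_email("a b@example.com")  # whitespace
assert not looks_like_email("x@y@z.com")        # two @ signs
```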
