fuzztester 4 days ago

The Art of Software Testing. New York: Wiley, 1979.

The Art of Software Testing, Second Edition, with Tom Badgett and Todd M. Thomas. New York: Wiley, 2004.

It is by Glenford Myers (and others).

https://en.m.wikipedia.org/wiki/Glenford_Myers

From the top of that page:

[ Glenford Myers (born December 12, 1946) is an American computer scientist, entrepreneur, and author. He founded two successful high-tech companies (RadiSys and IP Fabrics), authored eight textbooks in the computer sciences, and made important contributions in microprocessor architecture. He holds a number of patents, including the original patent on "register scoreboarding" in microprocessor chips.[1] He has a BS in electrical engineering from Clarkson University, an MS in computer science from Syracuse University, and a PhD in computer science from the Polytechnic Institute of New York University. ]

I got to read it early in my career, and applied it, when I could, in commercial software projects that I was part of or led.

Very good book, IMO.

There is a nice small testing-related question at the start of the book that many people don't answer well or fully.

pfdietz 4 days ago | parent [-]

As I recall this was a book that included the orthodoxy at the time that random testing was the worst kind of testing, to be avoided if possible.

That turned out to be bullshit. Today, with computers many orders of magnitude faster, using randomly generated tests is a very cost-effective way of testing, compared to carefully handcrafted tests. Use extremely cheap machine cycles to save increasingly expensive human time.

fuzztester 4 days ago | parent [-]

Interesting. Don't remember that from the book, but then, I read it long ago.

I agree that random testing can be useful. For example, one kind of fuzzing is using tons of randomly generated test data against a program to try to find unexpected bugs.
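A minimal sketch of that kind of fuzzing, assuming a hypothetical `parse_csv_line` as the function under test: generate random printable strings and record any input that makes the function raise.

```python
import random
import string

def parse_csv_line(line):
    # Hypothetical function under test: a naive CSV field splitter.
    return [field.strip() for field in line.split(",")]

def fuzz(fn, trials=1000, seed=0):
    """Throw randomly generated strings at fn; return inputs that crash it."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 40)))
        try:
            fn(data)
        except Exception as exc:
            failures.append((data, exc))
    return failures

failures = fuzz(parse_csv_line)
print(f"{len(failures)} crashing inputs found")
```

Real fuzzers (AFL, libFuzzer, Hypothesis) add coverage feedback and input minimization on top of this basic loop, but the core idea is the same.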

But I think both kinds of testing, random and handcrafted, have their place.

Also, I think the author might have meant that random testing is bad when used with a small amount of test data. In that case I'd agree with him, because an equally small amount of carefully crafted test data would be the better option, e.g. using some test data from each equivalence class of the input.
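To illustrate the equivalence-class idea with a made-up example: for a hypothetical `ticket_price` function, a handful of representatives, one per input class plus the boundaries between classes, exercises every behavior that thousands of random ages would.

```python
def ticket_price(age):
    # Hypothetical function under test, with three valid input classes.
    if age < 0:
        raise ValueError("negative age")
    if age < 18:
        return 5   # child fare
    if age < 65:
        return 10  # adult fare
    return 7       # senior fare

# One representative per equivalence class, plus class boundaries.
cases = [(-1, ValueError), (0, 5), (17, 5), (18, 10), (64, 10), (65, 7)]
for age, expected in cases:
    if isinstance(expected, type) and issubclass(expected, Exception):
        try:
            ticket_price(age)
            raise AssertionError(f"expected {expected.__name__} for age {age}")
        except expected:
            pass
    else:
        assert ticket_price(age) == expected
print("all equivalence-class cases pass")
```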

pfdietz 3 days ago | parent | next [-]

Here is the quote (from the 3rd ed., page 41):

"In general, the least effective methodology of all is random-input testing—the process of testing a program by selecting, at random, some subset of all possible input values. In terms of the likelihood of detecting the most errors, a randomly selected collection of test cases has little chance of being an optimal, or even close to optimal, subset. Therefore, in this chapter, we want to develop a set of thought processes that enable you to select test data more intelligently."

You can immediately see the problem here. It's optimizing for number of tests run, not for the overall cost of creating and running the tests. It's an attitude suited to when running a program was an expensive thing using precious resources. It was very wrong in 2012 when this edition came out and even more wrong today.

pfdietz 3 days ago | parent | prev [-]

I'd say in any sufficiently complex program, random testing is not only useful, it's essential, in that it will quickly find bugs no other approach would.

Even better, it subsumes many other testing paradigms. For example, there used to be all sorts of talk about things like "pairwise testing": be sure to test all pairwise combinations of features. Well, given enough runs, randomly generated tests will cover those combinations automatically.
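A quick sketch of that subsumption claim, using a made-up three-feature configuration space: enumerate every pairwise (feature, value) combination, then count how many purely random configurations it takes to hit them all.

```python
import itertools
import random

# Hypothetical feature space for illustration.
features = {
    "compression": [True, False],
    "tls": [True, False],
    "cache": ["lru", "fifo", "none"],
}
names = list(features)

# Every pairwise (feature, value) combination a pairwise suite must cover.
wanted = set()
for fa, fb in itertools.combinations(names, 2):
    for va in features[fa]:
        for vb in features[fb]:
            wanted.add(((fa, va), (fb, vb)))

# Generate random configurations until all pairs have been seen.
rng = random.Random(42)
covered, trials = set(), 0
while covered != wanted:
    cfg = {name: rng.choice(vals) for name, vals in features.items()}
    trials += 1
    for fa, fb in itertools.combinations(names, 2):
        covered.add(((fa, cfg[fa]), (fb, cfg[fb])))

print(f"all {len(wanted)} pairs covered after {trials} random configs")
```

The random suite typically needs more cases than a hand-built pairwise array would, but each case is free to generate, which is exactly the cheap-cycles trade-off being argued for.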

I view random testing as another example of the Bitter Lesson, that raw compute dominates manually curated knowledge.