wjrb 4 days ago

Are there any resources out there that anyone can recommend for learning testing in the way the author describes?

In-the-trenches experience (especially "good" or "doing it right" experience) can be hard to come by; and why not stand on the shoulders of giants when learning it the first time?

Jtsummers 4 days ago | parent | next [-]

Working Effectively with Legacy Code by Michael Feathers. It spends a lot of time on how to introduce testability into existing software systems that were not designed for testing.

Property-Based Testing with PropEr, Erlang, and Elixir by Fred Hebert. While a book about a particular tool (PropEr) and pair of languages (Erlang and Elixir), it's a solid introduction to property-based testing. The techniques described transfer well to other PBT systems and other languages.
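The core idea transfers even without PropEr or Hypothesis. Here's a hand-rolled sketch in Python using only the standard library (real PBT tools add input generators and automatic shrinking on top of this): generate lots of random inputs and assert that stated properties hold for every one.

```python
import random
from collections import Counter

def random_int_list(rng, max_len=20):
    """Generate a random list of integers (a crude 'generator')."""
    return [rng.randint(-1000, 1000) for _ in range(rng.randint(0, max_len))]

def check_sort_properties(xs):
    out = sorted(xs)
    # Property 1: the result is ordered.
    assert all(a <= b for a, b in zip(out, out[1:]))
    # Property 2: the result is a permutation of the input.
    assert Counter(out) == Counter(xs)

def run_property_test(trials=500, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        check_sort_properties(random_int_list(rng))
    return trials

run_property_test()
```

A real PBT framework would also shrink a failing input down to a minimal counterexample, which is most of the practical value.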

Test-Driven Development by Kent Beck.

https://www.fuzzingbook.org/ by Zeller et al. and https://www.debuggingbook.org/ by Andreas Zeller. The latter is technically about debugging, but it has some specific techniques that you can incorporate into how you test software. Like Delta Debugging, also described in a paper by Zeller et al. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=988....
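Delta Debugging itself is compact enough to sketch. Below is a simplified `ddmin` in Python (the failure predicate and input are invented for illustration): it repeatedly removes chunks of a failing input, keeping any reduction that still reproduces the failure.

```python
def ddmin(failing, inp):
    """Shrink the list `inp` while failing(inp) stays true (simplified ddmin)."""
    assert failing(inp)
    n = 2  # number of chunks to split the input into
    while len(inp) >= 2:
        subset_len = len(inp) // n
        reduced = False
        start = 0
        while start < len(inp):
            # Try the complement: the input with one chunk removed.
            complement = inp[:start] + inp[start + subset_len:]
            if failing(complement):
                inp = complement          # keep the smaller failing input
                n = max(n - 1, 2)
                reduced = True
                break
            start += subset_len
        if not reduced:
            if n >= len(inp):             # already at single-element granularity
                break
            n = min(n * 2, len(inp))      # refine: smaller chunks
    return inp

# Toy failure: the program "crashes" whenever the input has both < and >.
crashes = lambda s: "<" in s and ">" in s
minimal = ddmin(crashes, list("foo<bar>baz"))
# minimal is now ['<', '>'] -- the 1-minimal failing input
```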

I'm not sure of other books to recommend; the rest of what I know comes from learning on the job or studying specific tooling and techniques.

GrumpyYoungMan 4 days ago | parent [-]

TDD is a development methodology, not a testing methodology. The main thing it does is check whether the developer implemented what they thought they should be implementing, which is not necessarily what the spec actually says to implement or what the end user expects.

Jtsummers 4 days ago | parent [-]

It's still a useful technique and a way to apply testing to development. But yes, it's not the best resource for telling you what tests to write; it's more about how tests can be applied effectively, which is a skill that seems absent in many professionals.

fuzztester 4 days ago | parent | prev | next [-]

The Art of Software Testing, by Glenford Myers. New York: Wiley, 1979.

The Art of Software Testing, Second Edition, by Glenford Myers with Tom Badgett and Todd M. Thomas. New York: Wiley, 2004.

https://en.m.wikipedia.org/wiki/Glenford_Myers

From the top of that page:

[ Glenford Myers (born December 12, 1946) is an American computer scientist, entrepreneur, and author. He founded two successful high-tech companies (RadiSys and IP Fabrics), authored eight textbooks in the computer sciences, and made important contributions in microprocessor architecture. He holds a number of patents, including the original patent on "register scoreboarding" in microprocessor chips.[1] He has a BS in electrical engineering from Clarkson University, an MS in computer science from Syracuse University, and a PhD in computer science from the Polytechnic Institute of New York University. ]

I got to read it early in my career and, when I could, applied it in commercial software projects I was part of or led.

Very good book, IMO.

There is a nice small testing-related question at the start of the book that many people don't answer well or fully.

pfdietz 4 days ago | parent [-]

As I recall, this was a book that included the orthodoxy of the time: that random testing was the worst kind of testing, to be avoided if possible.

That turned out to be bullshit. Today, with computers many orders of magnitude faster, randomly generated tests are a very cost-effective way of testing compared to carefully handcrafted ones. Use extremely cheap machine cycles to save increasingly expensive human time.

fuzztester 4 days ago | parent [-]

Interesting. Don't remember that from the book, but then, I read it long ago.

I agree that random testing can be useful. For example, one kind of fuzzing is using tons of randomly generated test data against a program to try to find unexpected bugs.

But I think both kinds have their place.

Also, I think the author might have meant that random testing is bad when used with a small amount of test data. In that case I'd agree with him, because an equally small amount of carefully crafted test data would be the better option, e.g. some test data in each equivalence class of the input.
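To make the equivalence-class idea concrete, here's a small Python sketch (the `shipping_cost` function and its partition are made up for illustration): one representative test per class, plus the boundaries between classes.

```python
# Hypothetical function whose input space partitions cleanly.
def shipping_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5
    if weight_kg <= 10:
        return 9
    return 20

# One representative per valid equivalence class, plus boundaries.
cases = [
    (0.5, 5),    # class: 0 < w <= 1
    (1.0, 5),    # upper boundary of that class
    (5.0, 9),    # class: 1 < w <= 10
    (10.0, 9),   # upper boundary
    (11.0, 20),  # class: w > 10
]
for weight, expected in cases:
    assert shipping_cost(weight) == expected

# The invalid class (w <= 0) gets its own representative.
try:
    shipping_cost(-1)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

Six hand-picked inputs cover every behavior of the function; six random inputs very likely would not.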

pfdietz 3 days ago | parent | next [-]

Here is the quote (from the 3rd ed., page 41):

"In general, the least effective methodology of all is random-input testing—the process of testing a program by selecting, at random, some subset of all possible input values. In terms of the likelihood of detecting the most errors, a randomly selected collection of test cases has little chance of being an optimal, or even close to optimal, subset. Therefore, in this chapter, we want to develop a set of thought processes that enable you to select test data more intelligently."

You can immediately see the problem here: it's optimizing for the number of tests run, not for the overall cost of creating and running the tests. That attitude suited an era when running a program was expensive and consumed precious resources. It was very wrong in 2012, when this edition came out, and is even more wrong today.

pfdietz 3 days ago | parent | prev [-]

I'd say in any sufficiently complex program, random testing is not only useful, it's essential, in that it will quickly find bugs no other approach would.

Even better, it subsumes many other testing paradigms. For example, there was all sorts of talk about things like "pairwise testing": be sure to test all pairwise combinations of features. Well, randomly generated tests will do that automatically.
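The pairwise claim is easy to check empirically. This Python sketch (the four-feature configuration space is made up) generates purely random test configurations and measures what fraction of all feature-value pairs they happen to cover, with no pairwise planning at all.

```python
import itertools
import random

# Hypothetical configuration space: four features, three values each.
features = {f"f{i}": [0, 1, 2] for i in range(4)}
names = list(features)

def all_pairs():
    """Every (feature, value, feature, value) combination a pairwise suite must cover."""
    pairs = set()
    for a, b in itertools.combinations(names, 2):
        for va in features[a]:
            for vb in features[b]:
                pairs.add((a, va, b, vb))
    return pairs

def pairs_covered(configs):
    covered = set()
    for cfg in configs:
        for a, b in itertools.combinations(names, 2):
            covered.add((a, cfg[a], b, cfg[b]))
    return covered

# 100 purely random configurations, generated with no coverage goal.
rng = random.Random(1)
configs = [{n: rng.choice(features[n]) for n in names} for _ in range(100)]
coverage = len(pairs_covered(configs)) / len(all_pairs())
```

With 54 pairs to cover and 100 random configurations, coverage lands at or near 100%; a hand-constructed pairwise array would be smaller, but the random suite costs nothing to design.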

I view random testing as another example of the Bitter Lesson, that raw compute dominates manually curated knowledge.

cogman10 4 days ago | parent | prev [-]

Resources, none that I'm aware of. I generally think this is an OK way to look at testing [1], though I think it goes too far if you completely adopt their framework.

To boil down the tests I like to see: structure them with "given/when/then" steps. You don't need a framework for this; just make method calls with whatever unit test framework you are using. Keep the methods small, and don't pile up "then"s in one test; split those into multiple tests. Structure your code so that you aren't testing too deep. Ideally, you don't need to stand up your entire environment to run a test. But do write some of those tests too; they are important for catching issues that can hide between unit tests.
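A sketch of that structure using Python's built-in unittest (the `Cart` class and the helper names are invented for illustration): the given/when/then steps are just plain methods, no BDD framework involved.

```python
import unittest

# Made-up system under test.
class Cart:
    def __init__(self):
        self.items = []
    def add(self, name, price):
        self.items.append((name, price))
    def total(self):
        return sum(price for _, price in self.items)

class CartTest(unittest.TestCase):
    # The given/when/then structure is just ordinary method calls.
    def given_a_cart_with_one_item(self):
        cart = Cart()
        cart.add("book", 10)
        return cart

    def when_another_item_is_added(self, cart):
        cart.add("pen", 2)

    def then_the_total_reflects_both_items(self, cart):
        self.assertEqual(cart.total(), 12)

    def test_adding_an_item_updates_the_total(self):
        cart = self.given_a_cart_with_one_item()
        self.when_another_item_is_added(cart)
        self.then_the_total_reflects_both_items(cart)

# Run the single test programmatically.
result = CartTest("test_adding_an_item_updates_the_total").run()
```

One "then" per test keeps a failure message pointing at exactly one broken behavior.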

[1] https://cucumber.io/docs/bdd/