IanCal 8 hours ago

I'm a huge fan of property-based testing; I've built some runners before, and I think it can be great for UI things too, so I'm very happy to see this coming around more.

Something I couldn't see was how those examples actually work; there are no actions specified. Do they watch a user, default to randomly hitting the keyboard, or neither, meaning you need to specify some actions to take?

What about rerunning things?

Is there shrinking?

edit - a suggestion for examples: host a basic UI on a static page that is broken in a way the test can find, like a button that triggers notifications but doesn't actually enforce a limit of 5 notifications.
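For concreteness, the property such a demo page would violate might look something like this (the URL and selectors are made up, using Playwright's Python API):

    from playwright.sync_api import sync_playwright

    def test_notification_limit():
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("http://localhost:8000")  # the hypothetical broken demo
            for _ in range(10):
                page.click("#notify")  # trigger more than 5 notifications
            # the property the page should satisfy but doesn't:
            assert page.locator(".notification").count() <= 5
            browser.close()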

owickstrom 8 hours ago | parent | next [-]

Hey, yeah the default specification includes a set of action generators that are picked from randomly. If you write a custom spec you can define your own action generators and their weights.
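Not Bombadil's actual API, but the idea of weighted action generators is roughly this sketch (all names are made up):

    import random

    # weights bias which kinds of actions get generated
    ACTION_WEIGHTS = {
        "click": 5,    # clicks are most common
        "type": 3,
        "scroll": 1,
    }

    def next_action():
        names = list(ACTION_WEIGHTS)
        weights = list(ACTION_WEIGHTS.values())
        return random.choices(names, weights=weights, k=1)[0]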

Rerunning things: nothing built for that yet, but I do have some design ideas. Repros are notoriously shaky in testing like this (unless run against a deterministic app, or inside Antithesis), but I think Bombadil should offer best-effort repros if it can at least detect and warn when things diverge.

Shrinking: also nothing there yet. I'm experimenting with a state machine inference model as an aid to shrinking. It connects to the prior point about shaky repros, but I'm cautiously optimistic. Because the speed of browser testing isn't great, shrinking is also hard to do within reasonable time bounds.
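To sketch what shrinking an action sequence could look like, here's a generic greedy chunk-removal (not necessarily what Bombadil will do):

    def shrink(actions, still_fails):
        """Greedily drop chunks of actions while the failure still reproduces."""
        chunk = max(1, len(actions) // 2)
        while chunk >= 1:
            i = 0
            while i < len(actions):
                candidate = actions[:i] + actions[i + chunk:]
                if candidate and still_fails(candidate):
                    actions = candidate  # keep the smaller failing sequence
                else:
                    i += chunk  # this chunk is needed; move on
            chunk //= 2
        return actions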

Thanks for the questions and feedback!

theptip 6 hours ago | parent [-]

For re-running, I assume you want to do this all on a review app with a snapshot of the DB, so you start with a clean app state.

Should be pretty easy to make it deterministic if you follow that precondition.

(The way I had my review apps wired up, I dumped the staging DB nightly and containerized it; I believe Neon etc. make it easy to do this kind of thing.)

Ages ago I wired up something much more basic than this for a Python API using Hypothesis, and made the state machine explicit as part of the action generator (with the transitions library). What do you think about modeling state machines in your tests? (I suppose one risk is that you don't want to copy the state machine implementation from inside the app, but a nice fluent builder for simple state machines in tests could be a win.)
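To show what I mean, roughly (the states and triggers are made up; transitions attaches the `state` attribute and provides `get_triggers`):

    import random
    from transitions import Machine

    class Session:
        pass

    session = Session()
    machine = Machine(
        model=session,
        states=["logged_out", "logged_in"],
        transitions=[
            {"trigger": "log_in", "source": "logged_out", "dest": "logged_in"},
            {"trigger": "log_out", "source": "logged_in", "dest": "logged_out"},
            {"trigger": "view_profile", "source": "logged_in", "dest": "logged_in"},
        ],
        initial="logged_out",
    )

    def next_action():
        # only generate actions that are legal in the current state;
        # drop the auto-generated "to_<state>" transitions
        legal = [t for t in machine.get_triggers(session.state)
                 if not t.startswith("to_")]
        return random.choice(legal)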

owickstrom 6 hours ago | parent [-]

That's true; clean app state gets you far. And it's something I'm going to add to Bombadil once it gains the ability to run many tests (broad exploration, reruns, shrinking), i.e. something in the spec where you can supply reset hooks, maybe just bash commands.
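Something like this shape, hypothetically (this is a sketch, not an actual Bombadil spec option):

    import subprocess

    # hypothetical reset hook: a shell command that restores a clean snapshot
    RESET_HOOK = "docker compose restart db"

    def run_one_test(test):
        subprocess.run(RESET_HOOK, shell=True, check=True)  # clean state first
        test()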

Regarding state machines: yeah, the model can often become a mirror that's as complex as the system you're testing, if the system has a large, complicated surface. If, on the other hand, the API is simple and encapsulates a lot of complexity (like Ousterhout's "deep modules"), state machine specs and model-based testing make more sense. Testing a key-value store is a great example of this.
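For example, a Hypothesis-style model-based test of a key-value store, where a plain dict is the whole model (`Store` here is just a stand-in for the real system under test):

    from hypothesis import strategies as st
    from hypothesis.stateful import RuleBasedStateMachine, rule

    class Store:
        """Stand-in for the system under test, e.g. a KV store client."""
        def __init__(self):
            self._data = {}
        def put(self, key, value):
            self._data[key] = value
        def get(self, key):
            return self._data.get(key)

    class StoreMachine(RuleBasedStateMachine):
        def __init__(self):
            super().__init__()
            self.real = Store()
            self.model = {}  # the spec: a plain dict

        @rule(k=st.text(), v=st.integers())
        def put(self, k, v):
            self.real.put(k, v)
            self.model[k] = v

        @rule(k=st.text())
        def get(self, k):
            assert self.real.get(k) == self.model.get(k)

    TestStore = StoreMachine.TestCase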

If you're curious about it, here's a very detailed spec for TodoMVC in Bombadil: https://github.com/owickstrom/bombadil-playground/blob/maste... It's still work-in-progress but pretty close to the original Quickstrom-flavored spec.

danbruc 7 hours ago | parent | prev [-]

How effective is property-based testing in practice? I would assume it has no trouble uncovering things like missing null checks or an inverted condition, because you can cover edge cases like null, -1, 0, 1, and 2^n - 1 with relatively few test cases and exhaustively test booleans. But beyond that, if I have a handful of integers, dates, or strings, the state space is just enormous, and it seems all but impossible to me that blindly trying random inputs will ever find any interesting input. If I have a condition like (state == "disallowed") or (limit == 4096) when it should have been 4095, what are the odds that a random input will ever satisfy this condition and test the code behind it?

Microsoft had a remotely similar tool named Pex [1], but instead of randomly generating inputs, it instrumented the code so it could also be executed symbolically, then used their Z3 theorem prover to systematically find inputs that make each encountered condition either true or false, incrementally exploring all possible execution paths. If I remember correctly, it then generated a unit test for each discovered input with the corresponding output, and you could then judge whether the output was what you expected.

[1] https://www.microsoft.com/en-us/research/publication/pex-whi...
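The core trick, sketched with Z3's Python bindings: instead of guessing inputs, ask the solver for one that takes the interesting branch.

    from z3 import Int, Solver, sat

    limit = Int("limit")
    solver = Solver()
    solver.add(limit == 4096)  # path condition collected during symbolic execution
    if solver.check() == sat:
        print(solver.model()[limit])  # -> 4096, a concrete input reaching that path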

IanCal 2 hours ago | parent | next [-]

In practice I've found that property-based testing has a very high ratio of value to effort for the tests you write.

UI tests like:

* if there is one or more items on the page one has focus

* if there is more than one then hitting tab changes focus

* if there is at least one, focusing on element x, hitting tab n times and then shift tab n times puts me back on the original element

* if there are n elements, n>0, hitting tab n times visits n unique elements

These are pretty clear and yet cover a remarkable range of issues. I had them for a UI library, along with the start of “given a UI built with arbitrary calls to the API, those things remain true”. The third one, for example, could look roughly like the sketch below.
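A hedged sketch with Hypothesis driving Playwright (the page URL and the focused selector are made up):

    from hypothesis import given, settings, strategies as st
    from playwright.sync_api import sync_playwright

    @given(n=st.integers(min_value=1, max_value=10))
    @settings(max_examples=15, deadline=None)  # browser steps are slow
    def test_tab_then_shift_tab_roundtrips(n):
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("http://localhost:8000")
            page.focus("#first-input")  # "element x"
            start = page.evaluate("document.activeElement.outerHTML")
            for _ in range(n):
                page.keyboard.press("Tab")
            for _ in range(n):
                page.keyboard.press("Shift+Tab")
            assert page.evaluate("document.activeElement.outerHTML") == start
            browser.close()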

Now, it's rare that they'd catch very specific edge cases, but it was hard to accidentally write something wrong and still pass the tests. They actually found a bug in the specification itself, which was inconsistent.

I think they can often be easier to write than specific tests and clearer to read, because they say what you are actually testing (a generic property, rather than a few explicit examples).

What you could add, though, is code coverage: if the tests never go through your extremely specific branch, that's a sign there may be a bug hiding there.

spooneybarger 6 hours ago | parent | prev | next [-]

An important step with property-based testing and similar techniques is writing your own generators for your domain objects. I have used it to incredible effect for many years in projects.
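For instance, with Hypothesis, a composite generator for a made-up Order type can produce values that are valid by construction (shipped never precedes placed):

    from dataclasses import dataclass
    from datetime import date, timedelta
    from hypothesis import strategies as st

    @dataclass
    class Order:
        items: list
        placed: date
        shipped: date  # domain invariant: shipped >= placed

    @st.composite
    def orders(draw):
        items = draw(st.lists(st.text(min_size=1), min_size=1, max_size=5))
        placed = draw(st.dates(min_value=date(2020, 1, 1),
                               max_value=date(2024, 12, 31)))
        delay = draw(st.integers(min_value=0, max_value=30))
        return Order(items, placed, placed + timedelta(days=delay))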

I work at Antithesis now, so you can take this with a grain of salt, but everything changed for me over a decade ago when I started applying PBT techniques broadly. I have found so many bugs that I wouldn't otherwise have found until production.

kqr 5 hours ago | parent | prev | next [-]

"Exhaustively covering the search space" or "hitting specific edge cases" is the wrong way to think about property tests, in my experience. I find them most valuable as insanity checks, i.e. they can verify that basic invariants hold under conditions even I wouldn't think of testing manually. I'd check for empty strings, short strings, long strings, strings without spaces, strings with spaces, strings with weird characters, etc. But I might not think of testing with a string that's only spaces. The generator will.

kwillets 4 hours ago | parent | prev | next [-]

One of the founders of Antithesis gave a talk about this problem last week; diversity in test cases is definitely an issue they're trying to tackle. The example he gave was Spanner tests failing to fill the cache because random inputs jittered near zero. Avoiding that kind of degenerate randomness appears to be a company goal.

https://github.com/papers-we-love/san-francisco/blob/master/...

wwilson 4 hours ago | parent [-]

Glad you enjoyed the talk! Making Bombadil able to take advantage of the intelligence in the Antithesis platform is definitely a goal, but we wanted to get a great open source tool into people's hands ASAP first.

skybrian 7 hours ago | parent | prev | next [-]

One thing you can find pretty quickly with just basic fuzzing on strings is Unicode-related bugs.
