allcentury 4 days ago

Manual testing as the first step… not very productive imo.

Outside-in testing is great, but I typically do automated outside-in testing and only test manually at the end. The testing loop needs to be repeatable and fast; manual is too slow.

simonw 4 days ago | parent [-]

Yeah that's fair - the manual testing doesn't strictly have to come first, but it does have to get done.

I've lost count of the number of times I've skipped it because the automated test passed and then found there was some dumb but obvious bug that I missed, instantly exposed when I actually exercised the feature myself.

9rx 4 days ago | parent | next [-]

Maybe a bit pedantic, but does manual testing really need to be done, or is the intent here closer to a usability review? I can't think of a time obvious unintended behaviour showed up that wasn't caught by the contract encoded in tests (there's no reason to write code that doesn't serve a contractual purpose). But after trying something out, discovering that what you've created has an awful UX is something I have encountered, and that is much harder to encode in tests[1].

[1] As far as I can tell. If there are good solutions for this too, I'd love to learn.

RaftPeople 4 days ago | parent [-]

> I can't think of any time obvious unintended behaviour showed up not caught by the contract encoded in tests

Unit testing, whether manual or automated, typically catches about 30% of bugs.

End to end testing and visual inspection of code are both closer to 70% of bugs.

9rx 4 days ago | parent [-]

Automated testing (there aren't different kinds; to try and draw a distinction misunderstands what it is) doesn't catch bugs, it defines a contract. Code is then written to conform to that contract. Bugs cannot be introduced to be caught as they would violate the contract.

Of course that is not a panacea. What can happen in the real world is not truly understanding what the software needs to do. That can result in the contract not being aligned with what the software actually needs. It is quite reasonable to call the outcome of that "bugs", but tests cannot catch that either. In that case, the tests are where the problem lies!

Most aspects of software are pretty clear cut, though. You can reasonably define a full contract without needing to see it. UX is a particular area where I've struggled to find a way to determine what the software needs before seeing it. There is seemingly no objective measure that can be applied in determining if a UX is going to spark joy in order to encode that in a contract ahead of time. Although, as before, I'm quite interested to learn about how others are solving that problem as leaving it up to "I'll know it when I see it" is a rather horrible approach.

robryk 4 days ago | parent | prev [-]

Would automated tests that produce a transcript of what they've done allow perusing that transcript to substitute for manual testing?

pjc50 4 days ago | parent | next [-]

That sounds harder?

There's a lot of pedantry here trying to argue that there exists some feature which doesn't need to be "manually" tested, and I think the definition of "manual" can be pushed around a lot. Is running a program that prints "OK" a manual test or not? Is running the program and seeing that it now outputs "grue" rather than "bleen" manual? Does verifying the arithmetic against an Excel spreadsheet count?

There are programs that almost can't be tested manually, and programs that almost have to be. I remember when working on PIN pad integration we looked into getting a robot to push the buttons on the pad - for security reasons there's no way of injecting input automatically.

What really matters is getting as close to a realistic end user scenario as possible.

simonw 4 days ago | parent | prev | next [-]

No. I've fallen for that trap in the past. Something inevitably catches you out in the end.

bluGill 4 days ago | parent | prev [-]

The value of manual tests is when you "see something" that you didn't even think of.