cdavid | 8 hours ago
A/B testing does not have to involve micro-optimization. Done well, it can reduce the risk and cost of trying things. For example, you can A/B test something before investing in a full production build. When pushing for ML-based improvements (e.g. a new ranking algo), you also want to use it. This is why the cover of the reference A/B testing book for product dev has a hippo: A/B testing is helpful against just following the Highest Paid Person's Opinion (HiPPO). The practice is of course more complicated, but that's more organizational/politics.
simonw | 3 hours ago | parent
In my own career I've only ever seen it increase the cost of development. The vast majority of A/B test results I've seen showed no significant win in one direction or the other, in which case why did we just add six weeks of delay and twice the development work to the feature? Usually it was because the Highest Paid Person insisted on an A/B test because they weren't confident enough to move on without that safety blanket.

There are other, much cheaper things you can do to de-risk a new feature. Build a quick prototype and run a usability test with 2-3 participants - you get more information for a fraction of the time and cost of an A/B test.
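The "no significant win in either direction" outcome is easy to see for yourself: a conversion A/B test is commonly evaluated with a two-proportion z-test, and realistic lifts at realistic sample sizes often land nowhere near significance. A minimal sketch (the conversion numbers below are hypothetical, chosen only to illustrate an inconclusive result):

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (absolute lift, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# 2.00% vs 2.12% conversion on 10k users per arm
lift, p = ab_significance(200, 10_000, 212, 10_000)
# p comes out well above 0.05: the test is inconclusive despite a real-looking lift
```

With numbers like these you would need several times more traffic (or a much bigger effect) to reach significance, which is exactly when the weeks of delay and doubled build cost start to dominate.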