simonw | 8 hours ago
In my own career I've only ever seen it increase the cost of development. The vast majority of A/B test results I've seen showed no significant win in one direction or the other, in which case why did we just add six weeks of delay and twice the development work to the feature? Usually it was because the Highest Paid Person insisted on an A/B test because they weren't confident enough to move on without that safety blanket. There are other, much cheaper things you can do to de-risk a new feature. Build a quick prototype and run a usability test with 2-3 participants: you get more information for a fraction of the time and cost of an A/B test.
cdavid | 5 hours ago | parent
There are cases where A/B testing does not make sense (not enough users to measure anything sensible, etc.). But if the A/B test results were inconclusive, assuming the test was done correctly, then what was the point of launching the underlying feature? As for the HIPPO pushing for an A/B test out of a lack of confidence, all I can say is that we've had very different experiences: I've almost always seen the opposite, be it in marketing, search/recommendation, etc.
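The "not enough users" point can be made concrete with a back-of-envelope power calculation. This is an illustrative sketch, not from either comment; the function name and the baseline/lift numbers are assumptions, and it uses the standard normal approximation for comparing two proportions.

```python
import math

def required_sample_size(p_base, p_variant, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per arm to detect a shift from p_base
    to p_variant, at alpha=0.05 (two-sided) and 80% power.
    z values are hardcoded standard-normal quantiles."""
    p_bar = (p_base + p_variant) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_variant * (1 - p_variant))) ** 2
    return math.ceil(numerator / (p_base - p_variant) ** 2)

# Hypothetical example: detecting a 10% relative lift on a 5% baseline
# conversion rate needs roughly 31k users *per arm*.
n = required_sample_size(0.05, 0.055)
print(n)
```

If a site can't route tens of thousands of users through each variant in a reasonable window, the test is underpowered before it starts, which is exactly the case where A/B testing stops making sense.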