epgui 17 hours ago

It does, if you assume you care about the validity of the results or about making changes that improve your outcomes.

The degree of care can be different in less critical contexts, but then you shouldn’t lie to yourself about how much you care.

renjimen 16 hours ago | parent [-]

But there’s an opportunity cost that needs to be factored in when waiting for a stronger signal.

Nevermark 16 hours ago | parent | next [-]

One solution is to gradually move instances to your most likely solution.

But continue a percentage of A/B/n testing as well.

This allows for balancing speed vs. certainty.
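
Roughly, in code (just a sketch; the names and the 10% exploration fraction are arbitrary, not taken from any particular tool):

    import random

    # Route most traffic to the current best variant, but keep a fixed
    # fraction of A/B/n exploration running so the other variants still
    # collect data.
    def choose_variant(conversions, impressions, explore_fraction=0.1):
        """conversions/impressions: dicts mapping variant -> counts."""
        if random.random() < explore_fraction:
            return random.choice(list(impressions))
        rates = {
            v: conversions[v] / impressions[v] if impressions[v] else 0.0
            for v in impressions
        }
        # Gradually shift instances toward the most likely winner.
        return max(rates, key=rates.get)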

imachine1980_ 15 hours ago | parent [-]

Do you use any tool for this, or do you simply crank up the dial slightly each day?

hruk 2 hours ago | parent | next [-]

We've used this Python package to do this: https://github.com/bayesianbandits/bayesianbandits

travisjungroth 15 hours ago | parent | prev [-]

There are multi-armed bandit algorithms for this. I don't know the names of the public tools.

This is especially useful when the value of the choice is front-loaded, like headlines.
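
For illustration (not any specific library's API), Thompson sampling over binary outcomes can be this small:

    import random

    # Each variant keeps a Beta(successes + 1, failures + 1) posterior.
    # Sampling from the posteriors and picking the highest draw sends most
    # traffic to the likely winner early on, which is what you want when
    # the value of the choice is front-loaded (e.g. headlines).
    def pick(stats):
        """stats: dict of variant -> (successes, failures)."""
        draws = {v: random.betavariate(s + 1, f + 1) for v, (s, f) in stats.items()}
        return max(draws, key=draws.get)

    def update(stats, variant, clicked):
        s, f = stats[variant]
        stats[variant] = (s + 1, f) if clicked else (s, f + 1)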

scott_w 10 hours ago | parent | prev | next [-]

There is, but you can decide that up front. There are tools that will show you how long it'll take to reach statistical significance. You can then decide whether you want to wait that long or accept a softer p-value.
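
For a rough sense of the timeline, the standard two-proportion sample size formula is only a few lines (the 4% -> 5% lift and the 0.05/0.8 defaults below are just example numbers):

    from statistics import NormalDist

    # Approximate per-variant sample size to detect a lift from p1 to p2
    # with a two-sided test at significance alpha and the given power.
    def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)
        z_power = z.inv_cdf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2

    # e.g. a 4% -> 5% conversion lift needs roughly 6,700 users per variant.
    # Divide by your daily traffic and you know how long you'd be waiting.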

epgui 14 hours ago | parent | prev [-]

Even if you have to be honest with yourself about how much you care about being right, there’s still a place for balancing priorities. Two things can be true at once.

Sometimes someone just has to make imperfect decisions based on incomplete information, or make arbitrary judgment calls. And that’s totally fine… But it shouldn’t be confused with data-driven decisions.

The two kinds of decisions need to happen. They can both happen honestly.