b0a04gl | 8 hours ago
even the proposed fix (preregistration, or sticking to single-metric tests) assumes your metric design is clean to begin with. in practice, i've seen product metrics get messy, nested, and full of indirect effects. i might preregister a metric like activation rate, but it's influenced by onboarding UX, latency, cohort timing, and external traffic spikes. so even if i avoid p-hacking structurally, i'm still overfitting to a proxy i don't fully control. that's the blind spot: how do you preregister a test when the metric itself isn't stable across runs? doesn't that just overcomplicate the process? i'm new to this, but context seems to play a bigger role than the test structure itself.
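to make the "unstable metric" point concrete, here's a minimal simulation sketch. all numbers and confounder magnitudes are made up for illustration (they are not from the comment): it just shows how run-to-run noise from onboarding, latency, and traffic spikes can dwarf a small preregistered treatment lift in activation rate.

```python
import random

random.seed(0)

def simulated_activation_rate(treatment_lift=0.01):
    # Hypothetical confounders, invented for this sketch: each run draws
    # different onboarding quality, latency conditions, and traffic mix.
    base = 0.30
    onboarding = random.gauss(0, 0.02)          # UX drift between runs
    latency = random.gauss(0, 0.015)            # infra variance
    traffic_spike = random.choice([0.0, 0.04])  # occasional external spike
    return base + treatment_lift + onboarding + latency + traffic_spike

# Simulate the same preregistered experiment repeated many times.
runs = [simulated_activation_rate() for _ in range(1000)]
mean = sum(runs) / len(runs)
var = sum((r - mean) ** 2 for r in runs) / len(runs)
sd = var ** 0.5
print(f"mean={mean:.3f}, sd={sd:.3f}")
```

under these (made-up) noise levels, the run-to-run standard deviation comes out around 0.03, roughly 3x the 0.01 lift you preregistered for, which is the instability the comment is pointing at.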