▲ | apetresc 17 hours ago |
Can someone ELI5 what Statsig actually is? Their landing page is full of gems like "Turn action into insights and insights into action" and "Scale your experimentation culture with the world's leading experimentation platform" so I have no clue. It appears to be another analytics + A/B testing platform, but surely that can't be worth $1.1B to OpenAI?
▲ | chambers 11 hours ago | parent | next [-]
Statsig's core value is their experimentation platform: the automation of Data Science. Big Tech teams want to ship features fast, but measuring impact is messy. It usually requires experiments, and traditionally every experiment needed a Data Scientist (DS) to ensure statistical validity, i.e., "can we trust these numbers?" Ensuring validity means the DS has to perform multiple repetitive but specialized tasks throughout the experiment process: debugging bad experiment setups, navigating legacy infra, generating and emailing graphs, compensating for errors and biases in post-analysis, etc. It's a slog for everyone involved. Even then, cases still arise where Team A reports wonderful results and ships their feature while unknowingly tanking Team B's revenue, a situation discovered only months later when a DS is tasked to trace the cause.

Experimentation platforms like Statsig exist to lower the high cost of experimenting: to show a feature's potential impact before shipping while reducing frustration along the way. Most platforms eliminate common statistical errors or issues at each stage of the experiment process, with appropriate controls for each user role. Engineers set up experiments via SDK/UI, with nudges and warnings for misconfigurations. DS can focus on higher-value work like metric design. PMs view shared dashboards and get automatic coordination emails with other teams if their feature appears to be breaking something. People still fight, but earlier on and in the same "room," with fewer questions about what's real versus what's noise.

Separating real results from random noise is the meaning of "statsig" / "statistically significant." I think it's similar to how companies define their own metrics (their sense of reality) while the platform manages the underlying statistical and data complexity. The ideal outcome is fewer DS needed, less crufty tooling to work around, less statistics to learn, and crucially, more trust and shared oversight. But it comes at considerable, unsaid cost as well.

Is Statsig worth $1B to OpenAI? Maybe. There's an art and a science to product development, and Facebook's experimentation platform was central to their science. But it could be premature. I personally think experimentation as an ideology best fits optimization spaces that achieved strong product-market fit ages ago.

However, it's been years since I've worked in the "Experimentation" domain. I've glossed over a few key details in my answer, and anyone is welcome to correct me.
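(To make the name concrete: below is a minimal sketch of the kind of significance check such platforms run behind the dashboards. It's my own illustration with made-up numbers, not Statsig's actual implementation; it's a standard two-proportion z-test comparing conversion rates between control and treatment.)

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return (z, p_value) for H0: both groups convert at the same rate."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided test
        return z, p_value

    # Made-up numbers: control converts 500/10000, treatment 570/10000.
    z, p = two_proportion_z_test(500, 10_000, 570, 10_000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # z = 2.20, p = 0.028

By convention, p < 0.05 gets called "statistically significant": the observed lift is unlikely to be pure noise. The platform's job is running checks like this continuously, at scale, across every team's metrics, without a DS doing it by hand.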
▲ | jijapiopq 16 hours ago | parent | prev | next [-]
A buzzword-driven company with a product meant to track users, their mouse movements, and their keyboard usage across the internet. Of course, it's to help make the world a better place... for shoving advertisements.
▲ | Brajeshwar 15 hours ago | parent | prev | next [-]
I'm of the opinion that when a product with gimmicky marketing ends up getting acquired for a big sum, or landing those big elusive contracts, the vague messaging was deliberate, meant to steer the general onlooker toward something else. Internally, or when a customer actually talks to, say, the founders, they narrate and demo things that are 100x better than what we see in the open.
▲ | ygouzerh 11 hours ago | parent | prev [-]
It seems to be a mix of an analytics + session-replay platform (e.g., Mixpanel) and a feature-flags platform (e.g., GrowthBook).
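(For the unfamiliar, the feature-flag half mostly boils down to deterministic bucketing. A minimal sketch with hypothetical names, not GrowthBook's or Statsig's actual API:)

    import hashlib

    def in_rollout(user_id: str, flag_name: str, rollout_pct: float) -> bool:
        """Assign user_id to a stable pseudo-random bucket in [0, 100)."""
        digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) % 10_000 / 100.0  # same user, same bucket
        return bucket < rollout_pct

    # e.g., serve a hypothetical new checkout flow to 10% of users:
    show_treatment = in_rollout("user-42", "new_checkout_flow", 10.0)

Hashing on flag name plus user id keeps assignments stable across sessions and independent across flags, which is also how experiment groups stay consistent between the flagging side and the analytics side.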