refactor_master 4 days ago

There’s a common misconception that high-throughput methods = large n.

For example, I’ve encountered the belief that simply recording something at ultra-high temporal resolution gives you “millions of datapoints”. This then (seemingly) has all sorts of effects on how the statistics and hypothesis testing play out.

In reality, those datapoints all share the same setup, the same day, the same person doing it, etc., so the n for the day is probably closer to 1. To ensure replicability you’d have to at least repeat the experiment on separate days, with separately prepared samples. Otherwise, how can you rule out that your ultra-finicky sample just happened to vibe with that day’s temperature and humidity?
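
To put a rough number on that, here’s a toy Python sketch (my own illustration, not from anyone in this thread; the ICC value is made up) of the standard design-effect correction n_eff = n / (1 + (m - 1) * ICC). With strongly correlated within-day measurements, a million points from one day behaves much more like n = 1 than n = 1,000,000:

    def effective_n(n_days, points_per_day, icc):
        """Effective sample size for n_days clusters of points_per_day
        measurements sharing an intraclass correlation icc."""
        n_total = n_days * points_per_day
        design_effect = 1 + (points_per_day - 1) * icc
        return n_total / design_effect

    # One day, a million datapoints, strong within-day correlation (icc is illustrative):
    print(effective_n(1, 1_000_000, 0.9))   # ~1.1, nowhere near a million
    # Five separately prepared days:
    print(effective_n(5, 1_000_000, 0.9))   # ~5.6, i.e. roughly "n = number of days"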

But statistics courses don’t really teach you what exactly “n” means, probably because a hundred years ago it was much more literal: 100 samples meant you had counted 100 mice, 100 peas, or 100 surveys.

clickety_clack 3 days ago

I learned about experiment design in statistics, so I wouldn’t blame statisticians for this.

There are a lot of folks out there, though, who learned the mechanics of linear regression in a bootcamp or something without gaining an appreciation for the underlying theory; those folks are just looking for a low p-value, and as long as they get one, it’s good enough.

I saw this link yesterday and could barely believe it, but I guess these folks really live among us.

https://stats.stackexchange.com/questions/185507/what-happen...

ImageXav 3 days ago

This is an interesting point. I've been trying to think about something similar recently but don't have much of an idea how to proceed. I'm gathering periodic time series data and am wondering how to factor the sampling frequency into the statistical tests. I'm not sure how to assess the effect of sampling at 50 Hz versus 100 Hz on the outcome, given that my periods are significantly longer. Would you have an idea of how to proceed? The person I'm working with currently just bins everything into hour-long buckets and compares the time series by their means, but this seems flawed to me.

refactor_master a day ago

I don't know if you'll still be reading this, but my first intuition would be to work out the effective sampling rate and check whether the samples are comparable at all in the first place.

For example, if your phenomenon is observable at 50 Hz, maybe even at 10 Hz, then any higher temporal resolution doesn't give you new information, because any two adjacent datapoints in the time series are extremely correlated. Going the other way, at a very low sampling frequency you'd just get the mean, which might not reveal anything of interest.
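
One rough way to quantify "adjacent datapoints are extremely correlated" is to shrink n by the lag-1 autocorrelation, n_eff = n * (1 - r1) / (1 + r1). This is a sketch that assumes an AR(1)-like process (which may or may not fit your data), and the signal below is made up:

    import numpy as np

    def effective_sample_size(x):
        """AR(1)-style correction: n * (1 - r1) / (1 + r1)."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)  # lag-1 autocorrelation estimate
        return len(x) * (1 - r1) / (1 + r1)

    # Heavily oversampled slow oscillation: huge n, tiny effective n.
    t = np.linspace(0, 10, 100_000)
    x = np.sin(2 * np.pi * 0.5 * t) + 0.01 * np.random.randn(t.size)
    print(len(x), effective_sample_size(x))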

If you bin 100 Hz data down to 50 Hz, do you get the same signal? Is the Fourier spectrum the same? If your samples have different resolutions, you have to compare them at the lowest common denominator for the statistical comparison to be fair. Otherwise a recording from a potato and one from an advanced instrument would always come out "statistically different", which doesn't make sense.
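
A minimal sketch of that lowest-common-denominator comparison, using hypothetical signals in place of real recordings: decimate the 100 Hz data to 50 Hz (with anti-alias filtering) and compare Welch power spectra on the same frequency grid before running any test.

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)

    # Hypothetical 60 s recordings of the same slow oscillation at two rates.
    t100 = np.arange(0, 60, 1 / 100)
    t50 = np.arange(0, 60, 1 / 50)
    x100 = np.sin(2 * np.pi * 1.3 * t100) + 0.3 * rng.standard_normal(t100.size)
    x50 = np.sin(2 * np.pi * 1.3 * t50) + 0.3 * rng.standard_normal(t50.size)

    # Bring the 100 Hz recording down to the common 50 Hz rate (anti-aliased).
    x100_at_50 = signal.decimate(x100, 2)

    # Both spectra now live on the same frequency grid and can be compared fairly.
    f_a, p_a = signal.welch(x100_at_50, fs=50, nperseg=1024)
    f_b, p_b = signal.welch(x50, fs=50, nperseg=1024)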

If you don't find "anything", the old adage applies: "absence of evidence is not evidence of absence". Statistics don't really fail here; you can only conclude that your method is not sensitive enough.