chaps 6 hours ago
Exactly. This is a big problem with "open data". A lot goes into cleaning it up to make it publishable, which often includes removing data so that the public "doesn't get confused". Now I have to spend months and months fighting FOIA battles to get the original raw, messy data, because someone, somewhere had opinions on what "clean data" is. I'll pass -- give me the raw, messy data.
freehorse 5 hours ago
I do not disagree with that, but I am not sure what "raw data" means in cases like the ones the article talks about. The 1.700.000 is no more or less raw than 1.700,000. Most probably somebody messed up some decimals somewhere, or somebody imported a CSV into Excel and it misinterpreted the numbers because of different locale settings (a sketch of that failure mode follows below). Similar story with swapped longitude/latitude. That sounds different to me than, let's say, noisy temperature data from sensors. Rather, it looks like the kind of issue that arises at the point of merging datasets together, which is already far from the data being raw. The issue imo is that a person closer to the point where the data was collected or merged is probably better equipped to understand what may be wrong with it than a random person looking at the dataset. So I do not think it is unreasonable to have people in organisations take a second look at the datasets they publish.
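A minimal Python sketch of that CSV/Excel separator failure mode, for concreteness (pandas assumed available; the column name and value here are hypothetical, not taken from the article's dataset):

    import io
    import pandas as pd

    # European-style formatting: "." as the thousands separator (hypothetical value).
    csv_text = "population\n1.700.000\n"

    # Parsed with the default (US-style) convention: "1.700.000" is not a
    # valid number, so pandas silently keeps it as a string (object dtype).
    us = pd.read_csv(io.StringIO(csv_text))

    # Parsed with European separators declared: the same text becomes 1700000.
    eu = pd.read_csv(io.StringIO(csv_text), thousands=".", decimal=",")

    print(us["population"].iloc[0], us["population"].dtype)  # 1.700.000 object
    print(eu["population"].iloc[0], eu["population"].dtype)  # 1700000 int64

The same bytes yield a string under one separator convention and an integer under the other, with no error raised either way, which is why this kind of damage tends to surface only after datasets have been merged and published.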
| ||||||||