| ▲ | wlesieutre 5 hours ago |
| I miss the pre-LLM days when you could make a decent argument that having any unnecessary data was just a liability. Now all anybody thinks is “more data for the AI!” |
|
| ▲ | hdndjsbbs 3 hours ago | parent | next [-] |
| 10+ years ago companies were hoovering up data for ML - trying to find correlations in high-dimensional data. Mostly the results were garbage, but occasionally you hit on a real, unexpected phenomenon. Nowadays you just throw all the data into a black box and believe whatever it says blindly. |
|
| ▲ | CincinnatiMan 5 hours ago | parent | prev | next [-] |
| Were you not around for the Big Data heyday a decade ago? |
| |
| ▲ | varispeed 5 hours ago | parent | next [-] | | Once thumb drives became large enough to fit most datasets, it stopped being Big Data. Just normal data. | | |
| ▲ | jmalicki 3 hours ago | parent | next [-] | | To some degree, IMO, big data is still a mindset: it's when a normal SQL query might take a day to process your data. Some tech doesn't scale to the data size for all use cases, and you need different solutions. | |
| ▲ | ffsm8 4 hours ago | parent | prev [-] | | We have thumb drives that can store petabytes of data? Or did you mean the "big data" crowd which thought 500GB was noteworthy? I don't think anyone took those seriously, neither in the 2010s nor now. That was always "small" data | | |
| ▲ | 0x457 an hour ago | parent | next [-] | | My rule of thumb was "can it fit in RAM on a server?" If it can, then it's not big data. 500GB is in the "fits" category. | |
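| (The rule of thumb above can be sketched as a quick check. This is a minimal sketch, not anyone's actual tooling: it assumes a Linux/POSIX host where `os.sysconf` exposes the page-size and physical-page-count keys, and the function name `fits_in_ram` is made up for illustration.)

```python
import os

def fits_in_ram(dataset_bytes: int) -> bool:
    """Crude 'is it big data?' heuristic: does the dataset fit in physical RAM?

    Assumes a POSIX system exposing SC_PAGE_SIZE and SC_PHYS_PAGES.
    """
    total_ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    return dataset_bytes <= total_ram

# By this rule, 500 GB is "small" on a server with, say, 1 TB of RAM,
# while a multi-petabyte dataset fails the check on any single machine.
```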
| ▲ | butlike 4 hours ago | parent | prev | next [-] | | > We have thumb drives that can store petabytes of data We do? | | |
| ▲ | dylan604 2 hours ago | parent | next [-] | | It was a question; you've just edited out the punctuation when quoting it. You're asking the exact same thing as the person you replied to. | |
| ▲ | ffsm8 3 hours ago | parent | prev [-] | | Please provide a link. | | |
| |
| ▲ | varispeed 4 hours ago | parent | prev [-] | | Most companies using the term "big data" had datasets in the TB region. One company I had a gig at had a full Hadoop cluster set up, and their whole dataset was 40GB. Their marketing had all the big-data-adjacent keywords all over the brochures for clients. |
|
| |
| ▲ | ToucanLoucan 4 hours ago | parent | prev [-] | | Hell, what do you mean a decade ago? I still see businesses running losses left, right, and center saying that they're gonna monetize user data, any day now. Relatedly, "monetizing user data" seems to just mean ads. Ads on everything, forever, until the userbase gets fed up and moves to a new service that definitely won't do that, and the cycle repeats about every 3 years. |
|
|
| ▲ | citrin_ru 5 hours ago | parent | prev [-] |
| Data hoarding predates LLMs. There were other machine learning methods which also needed data for training. |
| |
| ▲ | Forgeties79 4 hours ago | parent [-] | | “Before LLMs there was_____” I see this whenever an LLM’s impact is assessed. We know. The issue is scale, and the ability of smaller and smaller groups (down to individuals) to execute at scale. Fake news always existed. Now one dude in India can flood multiple sock puppet media accounts with right wing content/images (actual example) at a scale previously unimaginable. | | |
| ▲ | dpoloncsak 4 hours ago | parent | next [-] | | Do LLMs require that much more data than the traditional ML approaches we've seen over the years? | | |
| ▲ | sigmoid10 4 hours ago | parent | next [-] | | Yes. This is pretty well established. Neural networks in general are considerably less sample-efficient than traditional ML methods. The reason they became so successful is that they scale better as you increase training data and model size. But only with modern compute power did they become useful outside of academic toy model applications. | |
| ▲ | Forgeties79 2 hours ago | parent | prev [-] | | That’s not the issue I’m hitting here primarily, but yes. My concern is that I can open up ChatGPT, and even with a free, “anonymous” account, run an assembly line generating tens of thousands of words a day to pump to Twitter that are good enough to prop up multiple fake accounts and cause mayhem. Now make it thousands of people like me doing it. Now add funding and political orgs. Add company leadership that turns a blind eye so long as it drives engagement. This scale and pipeline wasn’t possible 5 years ago, even if we clearly see the throughline. I’m not even getting into fake images, either. Those used to require some know-how. Now there are basically no hurdles, and even if most people learn it’s fake, millions likely won’t. If you’re a little lucky, less scrupulous “news” outlets will amplify it for you as well, for free. |
| |
| ▲ | b00ty4breakfast 4 hours ago | parent | prev | next [-] | | I really hate this when it's something negative that humans also do. It's like, yeah, people do do that, but why are we automating {negativeTrait}? | |
| ▲ | ToucanLoucan 4 hours ago | parent | prev [-] | | > Now one dude in India can flood multiple sock puppet media accounts with right wing content/images (actual example) at a scale previously unimaginable. I have the faintest possible hope that such things are going to be the death knell of social media. Yeah, a lot of credulous idiots are happily giving AI thirst traps their money for stroking their confirmation bias, but that's just who's left at this point. It feels like every social media app I use is gradually bleeding the users who aren't hopelessly addicted to the dopamine treadmill, because what's left is just plain unappealing to them. That selects for the people who are most vulnerable to AI slop, which is far from ideal, but it also means those platforms are made up ever more of that vulnerable population and nobody else. And the problem with all these businesses going through that is that without a diverse, growing audience, you just become InfoWars: slinging the same slop to the same people every day, where every ounce of said slop is great for what's left of your audience but absolute garbage for getting anyone new into it. And it just goes on that way until you sputter out and die (or harass the wrong group of parents, I guess). I wish all social media sites a very haha die in a fire. | | |
| ▲ | dpoloncsak 3 hours ago | parent [-] | | Mate, you're on a social media site right now that often has AI-generated content displayed at the top of what's "trending". Sure, the general user base does a better job here flagging that sort of stuff, as AI seems to be a shared interest in much of the community, but it still sneaks its way by |
|
|
|