PlatoIsADisease 8 hours ago

In 10-20 years all this AI disclaimer stuff is going to be like 'don't use wikipedia, it could lie!'

Status Quo Bias is a real thing, and we are seeing those people in meltdown with the world changing around them. They think avoiding AI, putting disclaimers on it, etc... will matter. But they aren't being rational, they are being emotional.

The economic value is too high to stop and the cat is out of the bag with 400B models on local computers.

jacquesm 8 hours ago | parent | next [-]

I don't think that's true. The 'this battle is already over' attitude is the most defeatist strategy possible. It's effectively complying in advance, rolling over before you've attempted to create the best possible outcome.

With that attitude we would not have voting, human rights (for what they're worth these days), unions, a prohibition on slavery and tons of other things we take for granted every day.

I'm sure AI has its place, but to see it assume the guise of human output without any kind of differentiating factor has so many downsides that it is worth trying to curb the excesses. And news articles in particular should be free from hallucinations, because they will in turn cause others to pass those hallucinations on. Obviously, given the quality of some publications, you could argue that that would be an improvement, but it wasn't always so, and a free and capable press is a precious thing.

mikkupikku 4 hours ago | parent | next [-]

> With that attitude we would not have voting, human rights (for what they're worth these days), unions, a prohibition on slavery and tons of other things we take for granted every day.

None of these things were rolling back a technology. History shows that technology is a ratchet: the only way to get rid of a technology is social collapse, or supplanting it with something even more useful, or at the very least approximately as useful but safer.

Once a technology has proliferated, it's a fait accompli. You can regulate the technology, but turning the clock back isn't going to happen.

jacquesm 3 hours ago | parent [-]

We have plenty of examples of regulated technology.

And usually the general public does not have a direct stake in the outcome (ok, maybe broadcast spectrum regulation should be mentioned there), but this time they do, and given what's at stake it may well be worth trying to define what a good set of possible outcomes would be and how to get there.

As I mentioned above and which TFA is all about, the press for instance could be held to a standard that they have shown they can easily meet in the past.

terminalshort an hour ago | parent | prev [-]

Well, they aren't free from hallucinations with human authors either. Not too long ago there was an outbreak of articles in the "reputable" mainstream press claiming that a terrorist plot against the UN had been foiled, when it was actually (and obviously) a garden-variety SMS fraud operation. Why should I care if it's AI lying to me next time rather than the constant deluge of humans lying to me?

Llamamoe 8 hours ago | parent | prev | next [-]

AI-written articles tend to be far more regurgitative, lower in value, and easier to ghostwrite with intent to manipulate the narrative.

Economic value or not, AI-generated content should be labeled, and trying to pass it off as human-written should be illegal, regardless of how accustomed to AI content people do or don't become.

RobotToaster 8 hours ago | parent | next [-]

My theory is that AI writes the way it does because it was trained on a lot of modern (organic) journalism.

So many words to say so little, just so they can put ads between every paragraph.

charcircuit 7 hours ago | parent | prev [-]

That describes low-quality articles in general. Have you never seen how hundreds of news sites will regurgitate the same story from one another? This was happening long before AI. High-quality AI-written articles will still be high value.

orwin 7 hours ago | parent [-]

Did you go on Grokipedia at release? I still sometimes lose myself reading stuff on Wikipedia; I guarantee you that can't happen on Grok, there is so much noise between the facts that it's hard to enjoy.

charcircuit 7 hours ago | parent [-]

Yes I did go immediately on release. I was finally able to correct articles that have been inaccurate on Wikipedia for years.

orwin 2 hours ago | parent [-]

So you noticed how poor the prose was? Really unbearable to read.

charcircuit an hour ago | parent [-]

I found it fine to read and it handled controversial subjects much better than Wikipedia.

duskdozer 7 hours ago | parent | prev | next [-]

Current AI use is heavily subsidized; we will see how much value there actually is when it comes time to monetize.

simion314 8 hours ago | parent | prev | next [-]

Emotional my ass. Just have websites and social media give me a filter to hide AI stuff; I can't enjoy a video, post, or story anymore since I always doubt it is real. If I am part of a minority, this filter should not hit the budget of companies, and it would encourage content generated by real people if we are more than a dozen.

wiseowise 8 hours ago | parent | prev [-]

> But they aren't being rational, they are being emotional.

When your mind is so fried on slop that you start to write like it.

> The economic value is too high to stop and the cat is out of the bag with 400B models on local computers.

Look at all this value created, like *checks notes* scam ads, apps that undress women and teenage girls, tech bros jerking each other off on Twitter, flooding open source with a tsunami of low-quality slop, inflating chip prices, thousands laid off in the name of cost savings, and dozens more.

Cat is out of the bag for sure.

mikkupikku 7 hours ago | parent [-]

You may not like it, but this is what peak economic performance looks like.

blibble 4 hours ago | parent [-]

no, this is what massive subsidy looks like