bonesss 3 hours ago

I’m cautiously optimistic that LLMs have a role in addressing that asymmetry in favor of good-faith actors.

Gish-galloping bad-faith trolls aren’t new. LLMs shape their BS into fluffy BS that isn’t particularly more effective. But now that We Have The Technology, refuting a pile of poo semi-accurately should be cheap (or at least getting cheaper).

I don’t need an LLM on my phone that can do tax law in Georgia the country. But an “AI Assistant” that could highlight logical fallacies, shifting goalposts, non-responsive dialog, rhetorical obfuscation, etc., would be useful online, at the bar, and at work (i.e. when HR tries to “HR” you while also lying and obfuscating about it).

We already have models and people that bullshit. Maybe refutation models are the cure… Chinese needle snakes to catch the lizards, Gorillas to catch the snakes…

applfanboysbgon 2 hours ago

This is a horrendous take. The only thing this is going to do / is already doing is increasing people's creation of their own reality bubble. LLMs are not some source of objective truth; they will inevitably lean towards reinforcing either (1) the prompter's position, (2) the model trainer's position, or (3) the statistically average position, none of which are guaranteed to be logically correct. But people do take them as objective truth, so now we have a bunch of fucking morons going around saying "see, ChatGPT says so, I'm right!".

Pay08 2 hours ago

> LLMs shape their BS into fluffy BS that isn’t particularly more effective.

But it does take longer to disprove.