nerdjon 2 days ago

How we keep getting articles like this showing that LLMs will flat-out lie, and yet we keep pushing them and the general public keeps eating it up... is beyond me.

They even "lie" about their actions. My absolute favorite, which I still see happen, is when you ask one of these models to write a script. Something is wrong, so it says something along the lines of "let me just check the documentation real quick", followed a second later by something like "now I got it"... since, you know, it didn't actually check anything, but of course the predictive engine wants to "say" that.

chownie a day ago | parent | next [-]

From the LLM's perspective, "let me check the docs" is the invocation you say before you come back with an answer, because that phrase almost certainly appears in the corpus many times naturally.
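To see why the phrase comes out without any docs ever being read, here is a minimal sketch (a toy bigram model over a made-up corpus, not any real LLM or training data): the model reproduces "let me check the docs" purely because those token transitions are frequent in its corpus, with no retrieval step anywhere in the loop.

```python
from collections import defaultdict

# Toy "corpus": transcripts where the phrase precedes an answer.
corpus = (
    "let me check the docs ok found it "
    "let me check the docs ok found it "
    "let me check the docs ok found it"
).split()

# Build a bigram next-token table -- pure text statistics, no retrieval.
nxt = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nxt[a].append(b)

# "Generate" greedily from the first word: the model claims it checked
# the docs and then answers, but no documentation was ever opened.
tok, out = "let", ["let"]
for _ in range(6):
    tok = max(set(nxt[tok]), key=nxt[tok].count)  # most frequent follower
    out.append(tok)

print(" ".join(out))  # -> "let me check the docs ok found"
```

Nothing in the loop touches a filesystem or network; the "checking" is just the statistically likely continuation. Agent frameworks fix this by wiring the phrase to an actual tool call, which the bare model has no equivalent of.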

gmm1990 2 days ago | parent | prev [-]

How are there not agents that are "instruct trained" differently? Is this behavior baked into the base model? From my limited knowledge I'd think it comes more from the post-training steps, but so many people dislike it that I'd figure there would be an interface that doesn't talk like that.