The "are you sure?" Problem: Why AI keeps changing its mind(randalolson.com)
13 points by turoczy 20 hours ago | 13 comments
trusche 2 hours ago | parent | next [-]

This is real, but (at least in a coding context) easily preventable. Just append "don't assume you're wrong - investigate" or something to that effect. Annoying, but usually effective.
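The tip above amounts to wrapping every request with a standing instruction. A minimal sketch, assuming a plain prompt-string workflow (the function and constant names here are hypothetical placeholders, not any real API):

```python
# Hypothetical sketch: append a "stick to your guns" instruction to each
# prompt so the model investigates before capitulating.
HOLD_GROUND = "Don't assume you're wrong - investigate before revising."

def build_prompt(user_message: str) -> str:
    """Append the anti-capitulation instruction to every request."""
    return f"{user_message}\n\n{HOLD_GROUND}"

prompt = build_prompt("Why does this test fail intermittently?")
```

The same text could equally live in a system prompt or agent config file; appending per-message is just the simplest place to put it.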

RugnirViking 2 hours ago | parent | prev | next [-]

The article's main idea is that, for an AI, sycophantic and adversarial are the only two available modes, because it doesn't have enough context to make defensible decisions. You need to include a bunch of fuzzy stuff around the situation, far more than it strictly "needs", to help it stick to its guns and actually make decisions confidently.

I think this is interesting as an idea. I do find that when I give really detailed context about my team, other teams, our and their OKRs, goals, and things I know people like or are passionate about, it gives better answers and is more confident. But it's also often wrong, or over-indexes on the things I have written. In practice, it's very difficult to get enough of this on paper without (a) holding a frankly worrying level of sensitive information (is it a good idea to write down what I really think of various people's weaknesses and strengths?) and (b) spending hours each day merely establishing ongoing context: what I heard at lunch, who's off sick today, and so on. Plus, research shows longer context can degrade performance, so in theory you want to somehow cut it down to only what truly matters for the task at hand. Goodness gracious, it's all very time consuming, and I'm not sure it's worth the squeeze.

agentultra 2 hours ago | parent | prev | next [-]

There isn’t a mind to change. Unfortunately the article is slop. Too bad, won’t read the rest.

I wish there were a tag or something we could put on headlines to avoid giving views to slop.

sunir 2 hours ago | parent [-]

There is a mind; the model plus the accumulated text plus the tool inputs is the full entity, one that can remember, take in sensory information, set objectives, decide, and learn: the Observe, Orient, Decide, Act (OODA) loop.
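The loop this comment describes can be sketched as a toy agent cycle, with memory standing in for the accumulated text and a stub for the model call. All names below are illustrative, not any real agent framework:

```python
# Toy Observe-Orient-Decide-Act loop: the "entity" is the combination of
# a model (stubbed here), its accumulated text (memory), and tool inputs
# (the environment list).

def observe(environment: list) -> str:
    """Take in the most recent sensory input from the environment."""
    return environment[-1] if environment else ""

def orient(memory: list, observation: str) -> list:
    """Fold the new observation into the accumulated context."""
    memory.append(observation)
    return memory

def decide(memory: list) -> str:
    """Stand-in for a model call conditioned on the full context."""
    return f"act on: {memory[-1]}"

def act(decision: str, environment: list) -> None:
    """Execute the decision; the result becomes the next observation."""
    environment.append(f"result of ({decision})")

memory, environment = [], ["user asks a question"]
for _ in range(2):  # two turns of the loop
    observation = observe(environment)
    memory = orient(memory, observation)
    act(decide(memory), environment)
```

After each turn, the action's result re-enters the environment, which is what lets the combined system "remember" and adapt rather than the bare model alone.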

philipp-gayret 2 hours ago | parent | prev | next [-]

I am seriously tired of every other paragraph I read ending in an "It isn't just X, it's Y." I'm sure there is something insightful in between this slop, but to the author: please write in your own voice. If I wanted ChatGPT's take on it, I would ask.

tyleo 2 hours ago | parent | next [-]

Agreed. I don't even necessarily have anything against AI-edited text, but there's a way to sharpen your own writing and there's a way to let its voice dominate. There are a lot of idioms it tends to fall back on (em dashes being the most well known). I'm surprised that folks don't notice these and aggressively reassert their voice.

I use LLMs in my own writing because they help with conciseness, but it tends to be a fairly laborious process: putting my text into the LLM for shortening and grammar, getting something more generic out, putting my soul back in, putting it back into the LLM for another shortening pass, and so on. I tend to do this at the paragraph level rather than the page level.

jofzar 2 hours ago | parent | prev | next [-]

I miss people having their own voice. I can't keep reading slop.

I wish Hacker News banned slop, or at least required disclosure.

srean 2 hours ago | parent | prev | next [-]

I think HN might need a downvote button for stories if this continues.

jagged-chisel 2 hours ago | parent [-]

We have "flag." Flag 'em.

robertlagrant 2 hours ago | parent | prev [-]

Exactly. It's not just nauseating—it's sickening.

catigula 2 hours ago | parent | prev | next [-]

An AI can only be tuned to either be sycophantic or adversarial.

It isn't possible to tune an AI to have some sort of 'correct answer' orientation because that would be full AGI.

gmerc 2 hours ago | parent | prev | next [-]

AI slop, unfortunately.

josefritzishere 2 hours ago | parent | prev [-]

AI slop about AI slop. The internet is dead.