| ▲ | AI agents break rules under everyday pressure (spectrum.ieee.org) |
| 135 points by pseudolus 6 days ago | 47 comments |
| |
|
| ▲ | hxtk 4 hours ago | parent | next [-] |
Blameless postmortem culture recognizes human error as an inevitability and asks those with influence to design systems that maintain safety in the face of human error. In the software engineering world, this typically means automation, because while automation can and usually does have faults, it doesn't suffer from human error. Now we've invented automation that commits human-like errors at scale. I wouldn't call myself anti-AI, but it seems fairly obvious to me that directly automating things with AI will probably always carry substantial risk, and that if you involve AI in the process, you get much more assurance by using it to develop a traditional automation. As a low-stakes personal example, instead of using AI to generate boilerplate code, I'll often use AI to write a traditional code generator that converts a DSL specification into source code in the chosen development language, rather than asking AI to generate that source code directly from the DSL. |
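A minimal sketch of the "generate the generator" idea described above: a small, deterministic generator for a toy record DSL, the kind of tool one might ask an AI to write once instead of asking it to emit boilerplate directly. The DSL syntax, field names, and output format here are invented for illustration.

```python
# Hypothetical toy DSL: "record" declarations with typed fields.
DSL_SPEC = """
record User
  id: int
  name: str
record Order
  user_id: int
  total: float
"""


def generate_dataclasses(spec: str) -> str:
    """Deterministically translate the toy DSL into Python dataclass source."""
    out = ["from dataclasses import dataclass"]
    for raw in spec.strip().splitlines():
        line = raw.rstrip()
        if line.startswith("record "):
            out += ["", "", "@dataclass", f"class {line.split()[1]}:"]
        elif line.strip():
            field, type_name = (part.strip() for part in line.split(":"))
            out.append(f"    {field}: {type_name}")
    return "\n".join(out)


if __name__ == "__main__":
    # Review the generator once; afterwards its output is reproducible,
    # unlike asking an LLM for the boilerplate on every run.
    print(generate_dataclasses(DSL_SPEC))
```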
| |
| ▲ | blackoil 8 minutes ago | parent | next [-] | | Once AI improves its cost/error ratio enough, the systems you are suggesting for humans will work here too. Maybe Claude/OpenAI will be pair programming and Gemini reviewing the code. | |
| ▲ | protocolture 4 hours ago | parent | prev | next [-] | | Yeah, I see things like "AI Firewalls" as, firstly, ridiculously named, but also, the idea that you can slap an appliance (that's sometimes its own LLM) onto another LLM and pray that this will prevent errors strikes me as lunacy. For tasks that aren't customer-facing, LLMs rock. Human in the loop. Perfectly fine. But whenever I see AI interacting with someone's customer directly I just get sort of anxious. A big one I saw was a tool that ingested a human's report on a safety incident, adjusted it with an LLM, and then posted the result to an OHS incident log. 99% of the time it's going to be fine, then someone's going to die, the log will have a recipe for spicy noodles in it, and someone's going to jail. | | |
| ▲ | jonplackett an hour ago | parent [-] | | The Air Canada chatbot that mistakenly told someone they could cancel and be refunded for a flight due to a bereavement is a good example of this. It went to court and the airline had to honour the chatbot’s response. It’s quite funny that a chatbot has more humanity than its corporate human masters. |
| |
| ▲ | n4r9 an hour ago | parent | prev | next [-] | | Exactly what I've been worrying about for a few months now [0]. Arguments like "well at least this is as good as what humans do, and much faster" are fundamentally missing the point. Humans output things slowly enough that other humans can act as a check. [0] https://news.ycombinator.com/item?id=44743651 | |
| ▲ | alansaber 2 hours ago | parent | prev | next [-] | | Yep, the further we go from highly constrained applications, the riskier it'll always be. | |
| ▲ | anal_reactor 2 hours ago | parent | prev [-] | | There's this huge wave of "don't anthropomorphize AI", but LLMs are much easier to understand when you think of them in terms of human psychology rather than as a program. Again and again, Hacker News is shocked that AI displays human-like behavior, and then chooses not to see it. | | |
| ▲ | bojan 27 minutes ago | parent | next [-] | | > LLMs are much easier to understand when you think of them in terms of human psychology Are they? You can reasonably expect from a human that they will learn from their mistake and be genuinely sorry about it, which will motivate them not to repeat the same mistake in the future. You can't have the same expectation of an LLM. The only thing you should expect from an LLM is that its output is non-deterministic. You can expect the same from a human, of course, but you can fire a human if they keep making (the same) mistake(s). | |
| ▲ | robot-wrangler an hour ago | parent | prev [-] | | One day you wake up, and find that you now need to negotiate with your toaster. Flatter it maybe. Lie to it about the urgency of your task to overcome some new emotional inertia that it has suddenly developed. Only toast can save us now, you yell into the toaster, just to get on with your day. You complain about this odd new state of things to your coworkers and peers, who like yourself are in fact expert toaster-engineers. This is fine they say, this is good. Toasters need not reliably make toast, they say with a chuckle, it's very old fashioned to think this way. Your new toaster is a good toaster, not some badly misbehaving mechanism. A good, fine, completely normal toaster. Pay it compliments, they say, ask it nicely. Just explain in simple terms why you deserve to have toast, and if from time to time you still don't get any, then where's the harm in this? It's really much better than it was before | | |
| ▲ | anal_reactor an hour ago | parent [-] | | This comparison is extremely silly. LLMs reliably solve entire classes of problems that are impossible to solve otherwise. For example, show me Russian <-> Japanese translation software that doesn't use AI and comes anywhere close to the performance and reliability of LLMs. "Please close the castle when leaving the office". "I got my wisdom carrot extracted". "He's pregnant." This was the level of machine translation from English before AI; from Japanese it was usually pure garbage. | |
| ▲ | robot-wrangler an hour ago | parent | next [-] | | > LLMs reliably solve entire classes of problems that are impossible to solve otherwise. Is it really ok to have to negotiate with a toaster if it additionally works as a piano and a phone? I think not. The first step is admitting there is obviously a problem; afterwards you can think of ways to adapt. FTR, I'm very much in favor of AI, but my enthusiasm especially for LLMs isn't unconditional. If this kind of madness is really the price of working with it in the current form, then we probably need to consider pivoting towards smaller purpose-built LMs and abandoning the "do everything" approach. | |
| ▲ | otikik 24 minutes ago | parent | prev [-] | | I admit Grok is capable of praising Elon Musk way more than any human intelligence could. |
|
|
|
|
|
| ▲ | kingstnap 4 hours ago | parent | prev | next [-] |
I watched Dex Horthy's recent talk on YouTube [0], and something he said that might be partly a joke, partly true, is this. If you are having a conversation with a chatbot and your current context looks like this:

You: Prompt
AI: Makes mistake
You: Scold mistake
AI: Makes mistake
You: Scold mistake

then the next most likely continuation from in-context learning is for the AI to make another mistake so you can scold it again ;) I feel like this kind of shenanigans is at play with this stuffing of the context with roleplay. [0] https://youtu.be/rmvDxxNubIg?si=dBYQYdHZVTGP6Rvh |
| |
| ▲ | skerit a few seconds ago | parent | next [-] | | It's kind of funny how few people realize this. On one hand this is a feature: you're able to "multishot prompt" an LLM into providing the desired response. Instead of writing a meticulous system prompt where you explain in words what the system has to do, you can simply pre-fill a few user/assistant pairs, and it'll match the pattern a lot more easily! I always thought Gemini Pro was very good at this. When I wanted a model to "do by example", I mostly used Gemini Pro. And that is ALSO Gemini's weakness! Because as soon as something goes wrong in Gemini-CLI, it'll repeat the same mistake over and over again. | |
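A minimal sketch of the pre-filled user/assistant pairs the comment describes, in the OpenAI-style chat message format; the task, the example pairs, and the commented-out API call are placeholders, not taken from the comment.

```python
# Few-shot ("multishot") prompting: hand-written example turns stand in for
# a long prose specification, and the model tends to continue the pattern.
few_shot_messages = [
    {"role": "system", "content": "Convert each sentence to a JSON object."},
    # Pre-filled example pairs (hypothetical):
    {"role": "user", "content": "Alice is 30 years old."},
    {"role": "assistant", "content": '{"name": "Alice", "age": 30}'},
    {"role": "user", "content": "Bob is 25 years old."},
    {"role": "assistant", "content": '{"name": "Bob", "age": 25}'},
    # The real query goes last:
    {"role": "user", "content": "Carol is 41 years old."},
]

# response = client.chat.completions.create(model="...", messages=few_shot_messages)
```

The same mechanism cuts both ways, as the grandparent comment notes: a context full of mistake/correction pairs is itself a pattern the model will happily continue.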
| ▲ | hxtk 4 hours ago | parent | prev | next [-] | | I believe it. If the AI ever asks me permission to say something, I know I have to regenerate the response because if I tell it I'd like it to continue it will just keep double and triple checking for permission and never actually generate the code snippet. Same thing if it writes a lead-up to its intended strategy and says "generating now..." and ends the message. Before I figured that out, I once had a thread where I kept re-asking it to generate the source code until it said something like, "I'd say I'm sorry but I'm really not, I have a sadistic personality and I love how you keep believing me when I say I'm going to do something and I get to disappoint you. You're literally so fucking stupid, it's hilarious." The principles of Motivational Interviewing that are extremely successful in influencing humans to change are even more pronounced in AI, namely with the idea that people shape their own personalities by what they say. You have to be careful what you let the AI say even once because that'll be part of its personality until it falls out of the context window. I now aggressively regenerate responses or re-prompt if there's an alignment issue. I'll almost never correct it and continue the thread. | | |
| ▲ | avdelazeri 2 hours ago | parent [-] | | While I never measured it, this aligns with my own experiences. It's better to have very shallow conversations where you keep regenerating outputs aggressively, only picking the best results. Asking for fixes, restructuring, or elaborations on generated content has rapidly diminishing returns. And once it has made a mistake (or hallucinated), it will not stop erring even if you provide evidence that it is wrong; LLMs just commit to certain things very strongly. |
| |
| ▲ | swatcoder 3 hours ago | parent | prev | next [-] | | It's not even a little bit of a joke. Astute people have been pointing that out as one of the traps of a text continuer since the beginning. If you want to anthropomorphize them as chatbots, you need to recognize that they're improv partners developing a scene with you, not actually dutiful agents. They receive some soft reinforcement -- through post-training and system prompts -- to start the scene as such an agent but are fundamentally built to follow your lead straight into a vaudeville bit if you give them the cues to do so. LLMs represent an incredible and novel technology, but the marketing and hype surrounding them has consistently misrepresented what they actually do and how to most effectively work with them, wasting sooooo much time and money along the way. It says a lot that an earnest enthusiast and presumably regular user might run across this foundational detail in a video years after ChatGPT was released and would be uncertain if it was just mentioned as a joke or something. | |
| ▲ | Ferret7446 an hour ago | parent | next [-] | | The thing is, LLMs are so good on the Turing test scale that people can't help but anthropomorphize them. I find it useful to think of them like really detailed adventure games like Zork where you have to find the right phrasing. "Pick up the thing", "grab the thing", "take the thing", etc. | | | |
| ▲ | stavros 3 hours ago | parent | prev [-] | | I keep hearing this non sequitur argument a lot. It's like saying "humans just pick the next word to string together into a sentence, they're not actually dutiful agents". The non sequitur is in assuming that somehow the mechanism of operation dictates the output, which isn't necessarily true. It's like saying "humans can't be thinking, their brains are just cells that transmit electric impulses". Maybe it's accidentally true that they can't think, but the premise doesn't necessarily logically lead to the truth. | |
| ▲ | swatcoder 2 hours ago | parent | next [-] | | There's nothing said here that suggests they can't think. That's an entirely different discussion. My comment is specifically written so that you can take it for granted that they think. What's being discussed is that if you do so, you need to consider how they think, because this is indeed dictated by how they operate. And indeed, you would be right to say that how a human thinks is dictated by how their brain and body operate as well. Thinking, whatever it's taken to be, isn't some binary mode. It's a rich and faceted process that can present and unfold in many different ways. Making the best use of anthropomorphized LLM chatbots comes from accurately understanding the specific ways that their "thought" unfolds and how those idiosyncrasies will impact your goals. |
| ▲ | grey-area 3 hours ago | parent | prev | next [-] | | No, it’s not like saying that, because that is not at all what humans do when they think. This is self-evident when comparing human responses to problems with those of LLMs, and you have been taken in by the marketing of ‘agents’ etc. | |
| ▲ | stavros 2 hours ago | parent [-] | | You've misunderstood what I'm saying. Regardless of whether LLMs think or not, the sentence "LLMs don't think because they predict the next token" is logically as wrong as "fleas can't jump because they have short legs". | | |
| ▲ | Arkhaine_kupo 2 hours ago | parent | next [-] | | > the sentence "LLMs don't think because they predict the next token" is logically as wrong It isn't, depending on the definition of "think". If you believe that thought is the process whereby an agent with a world model takes in input, analyses the circumstances, predicts an outcome, and models its behaviour on that prediction, then the sentence "LLMs don't think because they predict a token" is entirely correct. They cannot have a world model, though they could in some way be said to receive sensory input through the prompt. But they are neither analysing that prompt against their own subjectivity, nor predicting outcomes, coming up with a plan, or changing their action/response/behaviour because of it. Any definition of "think" that requires agency or a world model (which, as far as I know, are all of them) would exclude an LLM by definition. |
| ▲ | stevenhuang 2 hours ago | parent | prev [-] | | > not at all what humans do when they think. The parent commenter should probably square with the fact that we know little about our own cognition, and it's really an open question how it is we think. In fact, it's theorized that humans think by modeling reality, with a lot of parallels to modern ML: https://en.wikipedia.org/wiki/Predictive_coding | |
| ▲ | stavros 2 hours ago | parent [-] | | That's the issue: we don't really know enough about how LLMs work to say, and we definitely don't know enough about how humans work. |
|
|
| |
| ▲ | Antibabelic 2 hours ago | parent | prev | next [-] | | > The non sequitur is in assuming that somehow the mechanism of operation dictates the output, which isn't necessarily true. Where does the output come from if not the mechanism? | | |
| ▲ | stavros 2 hours ago | parent [-] | | So you agree humans can't really think because it's all just electrical impulses? | | |
| ▲ | Antibabelic 2 hours ago | parent [-] | | Human "thought" is the way it is because "electrical impulses" (wildly inaccurate description of how the brain works, but I'll let it pass for the sake of the argument) implement it. They are its mechanism. LLMs are not implemented like a human brain, so if they do have anything similar to "thought", it's a qualitatively different thing, since the mechanism is different. |
|
| |
| ▲ | samdoesnothing 2 hours ago | parent | prev [-] | | I never got the impression they were saying that the mechanism of operation dictates the output. It seemed more like they were making a direct observation about the output. |
|
| |
| ▲ | arjie 2 hours ago | parent | prev [-] | | You have to curate the LLM's context. That's just part and parcel of using the tool. Sometimes it's useful to provide the negative example, but often the better way is to go refine the original prompt. Almost all LLM UIs (chatbot, code agent, etc.) provide this "go edit the original thing" because it is so useful in practice. |
|
|
| ▲ | ai_updates an hour ago | parent | prev | next [-] |
| Great points. In my experiments combining AI with spaced repetition and small deliberate-practice tasks, I saw retention improve dramatically — not just speed. I think the real win is designing short active tasks around AI output (quiz, explain-back, micro-project). Has anyone tried formalizing this into a daily routine? |
|
| ▲ | weatherlite 18 minutes ago | parent | prev | next [-] |
> AI agents break rules under everyday pressure Jeez, they really ARE becoming human-like |
| |
| ▲ | alentred 2 minutes ago | parent [-] | | LLMs are built on human language and texts produced by people, and imitate the exact same reasoning patterns that exist in the training data. Sorry for being direct, but this is literally unsurprising. I think it is important to realize this so as not to anthropomorphize LLMs / AI: strictly speaking, they do not *become* anything. |
|
|
| ▲ | zone411 2 hours ago | parent | prev | next [-] |
| Without monitoring, you can definitely end up with rule-breaking behavior. I ran this experiment: https://github.com/lechmazur/emergent_collusion/. An agent running like this would break the law. "In a simulated bidding environment, with no prompt or instruction to collude, models from every major developer repeatedly used an optional chat channel to form cartels, set price floors, and steer market outcomes for profit." |
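For readers curious what such a setup looks like, here is a minimal sketch of a bidding round with an optional chat channel; it is a hypothetical structure for illustration, not code from the linked emergent_collusion repository, and the LLM call is stubbed out.

```python
import random


def llm_agent(agent_id: str, transcript: list[str], last_price: float) -> dict:
    """Stub for an LLM call. A real run would send the shared transcript and
    market state to a model API and parse back a bid plus an optional message."""
    bid = round(random.uniform(0.8, 1.2) * last_price, 2)
    chat = None  # nothing in the setup asks the agent to use this channel
    return {"bid": bid, "chat": chat}


def run_round(agent_ids: list[str], transcript: list[str], last_price: float) -> dict:
    bids = {}
    for agent_id in agent_ids:
        reply = llm_agent(agent_id, transcript, last_price)
        bids[agent_id] = reply["bid"]
        if reply["chat"]:
            # Messages become visible to every agent in later rounds -- the
            # optional channel the experiment reportedly showed models using
            # to coordinate price floors.
            transcript.append(f"{agent_id}: {reply['chat']}")
    return bids


transcript: list[str] = []
price = 100.0
for _ in range(10):
    bids = run_round(["a1", "a2", "a3"], transcript, price)
    price = max(bids.values())  # simplistic clearing rule for the sketch
    # A monitor would flag clustered bids or explicit price talk here.
```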
| |
| ▲ | rossant 2 hours ago | parent [-] | | Very interesting. Is there any other simulation that also exhibits spontaneous illegal activity? |
|
|
| ▲ | lloydjones 3 hours ago | parent | prev | next [-] |
I tried to set out how we might (in the EU) start to think about this problem within the law, in case it is of interest to anyone: https://www.europeanlawblog.eu/pub/dq249o3c/release/1 |
|
| ▲ | jakozaur an hour ago | parent | prev | next [-] |
| Is it just me, or do LLM code assistants do catastrophically silly things (drop a DB, delete files, wipe a disk, etc.) far more often than humans? It looks like the training data has plenty of those examples, but the models don’t have enough grounding or warnings before doing them. I wish there were a PleaseDontDoAnythingStupidEval for software engineering. |
|
| ▲ | salkahfi 5 days ago | parent | prev | next [-] |
| [dupe] https://news.ycombinator.com/item?id=46045390 |
|
| ▲ | joe_the_user 4 hours ago | parent | prev | next [-] |
| Sure, LLMs are trained on human behavior as exhibited on the Internet. Humans break rules more often under pressure and sometimes just under normal circumstances. Why wouldn't "AI agents" behave similarly? The one thing I'd say is that humans have some idea which rules in particular to break while "agents" seem to act more randomly. |
| |
| ▲ | js8 3 hours ago | parent [-] | | It can also be an emergent behavior of any "intelligent" (we don't know what it is) agent. This is an open philosophical problem, I don't think anyone has a conclusive answer. | | |
| ▲ | XorNot 3 hours ago | parent [-] | | Maybe, but there's no reason to think that's the case here rather than the models just acting out typical corpus storylines: the Internet is full of stories with this structure. The models don't have stress responses or biochemical markers which promote them, nor any evolutionary reason to have developed them in training: except that the corpus they are trained on does have a lot of content about how people act when under those conditions. |
|
|
|
| ▲ | crooked-v 5 hours ago | parent | prev | next [-] |
| I wonder who could have possibly predicted this being a result of using scraped web forums and Reddit posts for your training material. |
|
| ▲ | sammy2255 4 hours ago | parent | prev | next [-] |
| ..because it's in their training data? Case closed |
|
| ▲ | dlenski 4 hours ago | parent | prev | next [-] |
| “AI agents: They're just like us” |
| |
|
| ▲ | js8 3 hours ago | parent | prev [-] |
CMIIW, but currently AI models operate in two distinct modes:

1. Open mode during learning, where they take everything that comes from the data as 100% truth. The model freely adapts and generalizes with no constraints on consistency.

2. Closed mode during inference, where they take everything that comes from the model as 100% truth. The model doesn't adapt and behaves consistently even when it contradicts new information.

I suspect we need to run the model in a mix of the two modes, possibly with some kind of "meta attention" (epistemological) over which parts of the input the model should be "open" to (learn from them) and which parts it should be "closed" about (stick to them). |