| ▲ | liendolucas 3 hours ago |
| I love how a number crunching program can be deeply, humanly "horrified" and "sorry" for wiping out a drive. Those are still feelings reserved only for real human beings, not for computer programs emitting garbage. This vibe is insulting to anyone who doesn't understand how "AI" works. I'm sorry for the person who lost their stuff, but this is a reminder that in 2025 you STILL need to know what you are doing, and if you don't, keep your hands off the keyboard whenever valuable data is at stake. You simply don't vibe command a computer. |
|
| ▲ | baxtr 2 hours ago | parent | next [-] |
| Vibe command and get vibe deleted. |
| |
|
| ▲ | camillomiller 2 hours ago | parent | prev | next [-] |
| Now, with this realization, assess the narrative that every AI company is pushing down our throats and tell me how in the world we got here.
The reckoning can’t come soon enough. |
| |
| ▲ | qustrolabe 2 hours ago | parent [-] | | What narrative? I'm too deep in it all to understand what narrative is being pushed onto me. | | |
| ▲ | camillomiller an hour ago | parent [-] | | No, it wasn't directed at anyone in particular. More of an impersonal "you". It was just a comment against the AI inevitabilism that has profoundly polluted the tech discourse. |
|
|
|
| ▲ | Kirth 3 hours ago | parent | prev [-] |
| This is akin to a psychopath telling you they're "sorry" (or "sorry you feel that way" :v) when they feel that's what they should be telling you. As with anything LLM, there may or may not be any real truth backing whatever is communicated back to the user. |
| |
| ▲ | marmalade2413 3 hours ago | parent | next [-] | | It's not akin to a psychopath telling you they're sorry. In the space of intelligent minds, if neurotypical and psychopathic minds are two grains of sand next to each other on a beach, then an artificially intelligent mind is more like a piece of space dust on the other side of the galaxy. | | |
| ▲ | Eisenstein 2 hours ago | parent [-] | | According to what, exactly? How did you come up with that analogy? | | |
| ▲ | baq 2 hours ago | parent | next [-] | | Start with "LLMs are not humans, but they're obviously not 'not intelligent' in some sense" and pick the wildest difference that comes to mind. Not OP, but it makes perfect sense to me. | | |
| ▲ | nosianu an hour ago | parent [-] | | I think a good reminder for many users is that LLMs are not based on analyzing or copying human thought (#), but on analyzing written human text communication.

(#) Human thought is grounded first of all in real-world sensory data. Human words have invisible depth behind them, based on the accumulated life experience of the person. So two people using the same words may have very different thoughts underneath them. Somebody with only textbook knowledge and somebody who has done a thing in practice for a long time may use the same words, but underneath there is a lot more going on for the latter person.

We can see this expressed in the common bell curve meme -- https://www.hopefulmons.com/p/the-iq-bell-curve-meme -- which seems to be about IQ, but is really about experience. Experience in turn is mostly physical, based on our physical senses and physical actions. Even when we just "think", it is grounded in those underlying physical experiences. That is why many of our internal metaphors, even for purely abstract ideas, are still based on physical concepts, such as space. |
| |
| ▲ | oskarkk 2 hours ago | parent | prev [-] | | Isn't it obvious that the way AI works and "thinks" is completely different from how humans think? Not sure what particular source could be given for that claim. | | |
| ▲ | seanhunter an hour ago | parent [-] | | No source could be given because it's total nonsense. What happened is not in any way akin to a psychopath doing anything. It is a machine learning function that has been trained on a corpus of documents to optimise performance on two tasks: first a sentence completion task, then an instruction following task. | | |
| ▲ | oskarkk 43 minutes ago | parent [-] | | I think that's more or less what marmalade2413 was saying and I agree with that. AI is not comparable to humans, especially today's AI, but I think future actual AI won't be either. |
|
|
|
| |
| ▲ | lazide an hour ago | parent | prev | next [-] | | It's just a computer outputting the next plausible series of text from its training corpus based on the input and context at the time. What you're saying is so far from what is happening, it isn't even wrong. | |
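For readers who want to see concretely what "outputting the next series of plausible text" means, here is a minimal sketch of an autoregressive sampling loop. It assumes the Hugging Face transformers library and the small "gpt2" checkpoint purely for illustration; real assistants add instruction tuning, sampling heuristics, and tool use on top of this same basic mechanism.

    # Minimal sketch of next-token sampling (assumes torch and the Hugging Face
    # "transformers" library are installed; "gpt2" is used only as a small example model).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "I accidentally wiped the drive."
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    for _ in range(20):                          # generate 20 tokens, one at a time
        logits = model(ids).logits[:, -1, :]     # scores for the next token only
        probs = torch.softmax(logits, dim=-1)    # turn scores into probabilities
        next_id = torch.multinomial(probs, 1)    # sample one plausible continuation
        ids = torch.cat([ids, next_id], dim=-1)  # append it and repeat

    print(tokenizer.decode(ids[0]))              # the "plausible text" the loop produced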
| ▲ | BoredPositron 3 hours ago | parent | prev [-] | | So if you make a mistake and say sorry you are also a psychopath? | | |
| ▲ | ludwik 2 hours ago | parent | next [-] | | I think the point of comparison (whether I agree with it or not) is someone (or something) that is unable to feel remorse saying “I’m sorry” because they recognize that’s what you’re supposed to do in that situation, regardless of their internal feelings. That doesn’t mean everyone who says “sorry” is a psychopath. | | |
| ▲ | BoredPositron 2 hours ago | parent [-] | | We are talking about an LLM; it does what it has learned. Giving it human tics or characteristics when the response makes sense (i.e. saying sorry) is a user problem. | | |
| ▲ | ludwik 2 hours ago | parent [-] | | Okay? I specifically responded to your comment that the parent comment implied "if you make a mistake and say sorry you are also a psychopath", which clearly wasn’t the case. I don’t get what your response has to do with that. |
|
| |
| ▲ | pyrale an hour ago | parent | prev | next [-] | | No, the point is that saying sorry because you're genuinely sorry is different from saying sorry because you expect that's what the other person wants to hear. Everybody does that sometimes, but doing it every time is an issue. In the case of LLMs, they are basically trained to output what they predict a human would say; there is no further meaning to the program outputting "sorry" than that. I don't think the comparison with people with psychopathy should be pushed further than this specific aspect. | | |
| ▲ | BoredPositron an hour ago | parent [-] | | You provided the logical explanation for why the model acts the way it does. At the moment it's nothing more and nothing less. Expected behavior. | | |
| ▲ | lazide 15 minutes ago | parent [-] | | Notably, if we look at this abstractly/mechanically, psychopaths (and to some extent sociopaths) do study and mimic "normal" human behavior (and even the appearance of specific emotions) both to fit in and to get what they want. So while the internals are very different (LLM model weights vs. human thinking), the mechanical output can actually appear or be similar in some ways. Which is a bit scary, now that I think about it. |
|
| |
| ▲ | camillomiller 2 hours ago | parent | prev [-] | | Are you smart people all suddenly imbeciles when it comes to AI, or is this purposeful gaslighting because you're invested in the Ponzi scheme?
This is a purely logical problem. Comments like this completely disregard the fallacy of comparing humans to AI as if complete parity had been achieved. Also, the way these comments disregard human nature is so profoundly misanthropic that it sickens me. | |
| ▲ | BoredPositron 2 hours ago | parent [-] | | No, but the conclusions in this thread are hilarious. We know why it says sorry: because that's what it learned to do in a situation like that. People who feel mocked, or who call an LLM a psychopath in a case like this, don't seem to understand the technology either. | | |
| ▲ | camillomiller an hour ago | parent [-] | | I agree, "psychopath" is the wrong term. It refers to an entity with a psyche, which the illness affects. That said, I do believe the people who decided to have it behave like this for the purpose of its commercial success are indeed the pathological individuals. I do believe there is currently a wave of collective psychopathology that has taken over Silicon Valley, with the reinforcement that only a successful community backed by a lot of money can give you. |
|
|
|
|