| ▲ | kashyapc 5 hours ago |
| "Because LLMs now not only help me program, I'm starting to rethink my relationship to those machines. I increasingly find it harder not to create parasocial bonds with some of the tools I use. I find this odd and discomforting [...] I have tried to train myself for two years, to think of these models as mere token tumblers, but that reductive view does not work for me any longer." It's wild to read this bit. Of course, if it quacks like a human, it's hard to resist not quacking back. As the article says, being less reckless with the vocabulary ("agents", "general intelligence", etc) could be one way to to mitigate this. I appreciate the frank admission that the author struggled for two years. Maybe the balance of spending time with machines vs. fellow primates is out of whack. It feels dystopic to see very smart people being insidiously driven to sleep-walk into "parasocial bonds" with large language models! It reminds me of the movie Her[1], where the guy falls "madly in love with his laptop" (as the lead character's ex-wife expresses in anguish). The film was way ahead of its time. [1] https://www.imdb.com/title/tt1798709/ |
|
| ▲ | mjr00 4 hours ago | parent | next [-] |
| It helps a lot if you treat LLMs like a computer program instead of a human. It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc. I've never had issues getting the results I want with much simpler prompts like (looking at my own history here) "python grpc oneof pick field", "mysql group by mmyy of datetime", "python isinstance literal". Basically the same way I would use Google; after all, you just type in "toledo forecast" instead of "What is the weather forecast for the next week in Toledo, Ohio?", don't you? There's a lot of black magic, voodoo, and assumption around the idea that speaking in proper English with lots of detailed language helps, and maybe it does with some models, but I suspect most of it is a result of (sub)consciously anthropomorphizing the LLM. |
| |
| ▲ | kashyapc 2 hours ago | parent | next [-] | | > It helps a lot if you treat LLMs like a computer program instead of a human. If one treats an LLM like a human, they have a bigger crisis to worry about than punctuation. > It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc No need for confusion. I'm one of those who aims to write cleanly, whether I'm talking to a human or a machine. English is my third language, by the way. Why the hell do I bother? Because you play like you practice! No ifs, buts, or maybes. Start writing sloppily because you figure "it's just an LLM!", and you'll silently build a bad habit that carries over to humans. Pay attention to your instant messaging circles (Slack and its ilk): many people can't resist hitting send before writing even a half-decent sentence. They're too eager to submit their stream-of-thought fragments. Sometimes I feel second-hand embarrassment for them. | |
| ▲ | mjr00 an hour ago | parent [-] | | > Why the hell do I bother? Because you play like you practice! No ifs, buts, or maybes. Start writing sloppily because you figure "it's just an LLM!", and you'll silently build a bad habit that carries over to humans. IMO, the flaw with this logic is that you're treating "prompting an LLM" as equivalent to "communicating with a human", which it is not. To reuse an example from a sibling comment thread, nobody thinks that typing "cat *.log | grep 'foo'" means you're losing your ability to tell humans that you want to search for the word 'foo' in log files. It's just a shorter, easier way of expressing that to a computer. It's also misleading to say it's practice for human-to-human communication, because LLMs won't give you the feedback that humans would. As a fun English example: I prompted ChatGPT with "I impregnated my wife, what should I expect over the next 9 months?" and got back banal info about hormonal changes and blah blah blah. What I didn't get back is feedback that the phrasing "I impregnated my wife" sounds extremely weird, and that if you told a coworker that, they'd do a double-take and maybe tell you that "my wife is pregnant" is how we normally say it in human-to-human communication. ChatGPT doesn't give a shit, though, and just knows how to interpret the tokens to give you the right response. I'll also say that punctuation and capitalization are orthogonal to content. I use proper writing on HN because that's the standard in the community, but I talk to a lot of very smart people and we communicate with virtually no caps/punctuation. The use of proper capitalization and punctuation is more a function of the medium than of how well you can communicate. |
| |
| ▲ | Arainach 3 hours ago | parent | prev | next [-] | | > It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc. I've tried and failed to write this in a way that won't come across as snobbish, but that isn't the intent. It's a matter of standards. Using proper language is how I think. I'm incapable of doing otherwise, even out of laziness. Pressing the shift key and the space bar to do it right costs me nothing. It's akin to shopping carts in parking lots. You won't be arrested or punished for not returning the shopping cart to where it belongs, and you still get your groceries (the same results), but it's what you do in a civilized society, and when I see someone not doing it, that says something to me about who they are as a person. | |
| ▲ | logicprog 2 hours ago | parent | next [-] | | This is exactly it for me as well. I also communicate with LLMs in full sentences, because I often find it harder to condense my thoughts into grammatically mangled strings of keywords than to just write them out in full; full sentences are closer to how I actually think. Moreover, the slight occasional effort of structuring what I'm trying to express into reasonably good grammar (proper sentences, clauses and subclauses, correct conjunctions, and so on) often helps me subconsciously clarify and organize my thinking almost for free, just through the act of generating that grammar. And if you're expressing more complex, specific, and detailed ideas to an LLM, random assortments of keywords quickly get unwieldy, confusing, and unclear, whereas properly grammatical sentences can carry more "weight," so to speak. | |
| ▲ | mjr00 2 hours ago | parent | prev [-] | | > It's a matter of standards. [...] when I see someone not doing it, that says something to me about who they are as a person. When you're communicating with a person, sure. But the point is that this isn't communicating with a person or other sentient being; it's a computer, which I guarantee is not offended by terseness and lack of capitalization. > It's akin to shopping carts in parking lots. No, not returning the shopping cart has a real consequence that negatively impacts a human being who has to do that task for you; the same goes for littering, etc. There is no consequence to using terse, non-punctuated, lowercase-only text with an LLM. To put it another way: do you feel it's disrespectful to type "cat *.log | grep 'foo'" instead of "Dearest computer, would you kindly look at the contents of the files with the .log extension in this directory and find all instances of the word 'foo', please?" (Computer's most likely thoughts: "Doesn't this idiot meatbag know cat is redundant and you can just use grep for this?") |
| |
| ▲ | cesarb 2 hours ago | parent | prev | next [-] | | It makes sense if you think of a prompt not as a way of telling the LLM what to do (like you would with a human), but instead as a way of steering its "autocomplete" output towards a different region of its learned distribution. For instance, the presence of the word "mysql" should steer it towards outputs related to MySQL (as seen in its training data); it shouldn't matter much whether it's "mysql" or "MYSQL" or "MySQL", since all these alternatives should cluster together and therefore have a similar effect. | |
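| A rough sketch of that clustering intuition, for the curious. This is an illustration only: it uses the sentence-transformers package and its all-MiniLM-L6-v2 model, which is not the machinery any particular chat model uses internally, but it shows how case variants of the same keywords land almost on top of each other in embedding space:

    # Illustration: case variants of the same keywords embed almost identically,
    # so as prompt text they should steer generation in nearly the same direction.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")
    variants = [
        "mysql group by month of datetime",
        "MySQL group by month of datetime",
        "MYSQL GROUP BY MONTH OF DATETIME",
    ]
    embeddings = model.encode(variants, convert_to_tensor=True)

    # Pairwise cosine similarities; expect every pair to be close to 1.0.
    print(util.cos_sim(embeddings, embeddings))

| Running this should print a 3x3 similarity matrix with values near 1.0, which is the "clustering" described above: the casing barely moves the representation. |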
| ▲ | skydhash 4 hours ago | parent | prev [-] | | Very much this. My guess is that common words, like articles, have very little impact as they just occur too frequently. If the LLM can generate a book, then your prompt should be like the index of that book instead of the abstract. |
|
|
| ▲ | the_mitsuhiko 4 hours ago | parent | prev | next [-] |
| > Maybe the balance of spending time with machines vs. fellow primates is out of whack. It's not that simple. Proportionally I spend more time with humans, but if the machine behaves like a human and has the ability to recall, it becomes a human-like interaction. In my experience, what makes the system "scary" is the ability to recall. I have an agent that recalls conversations you had with it before, and as a result it changes how you interact with it, and I can see that triggering unhealthy behaviors in humans. But our inability to name these things properly doesn't help. I think pretending it is a machine, on the same level as a coffee maker, does help set the right boundaries. |
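| For concreteness, "recall" in the mechanical sense can be as simple as persisting past exchanges and re-injecting them into every new session. The sketch below is not the author's agent, just a minimal illustration of the idea; it assumes the openai Python client, an arbitrary model name, and a hypothetical memory.json file:

    # Minimal sketch of conversational recall: store every exchange on disk and
    # replay it as context in later sessions. Not any particular agent's design.
    import json
    from pathlib import Path
    from openai import OpenAI

    MEMORY = Path("memory.json")  # hypothetical on-disk store
    client = OpenAI()

    def load_memory() -> list:
        return json.loads(MEMORY.read_text()) if MEMORY.exists() else []

    def chat(user_text: str) -> str:
        history = load_memory()
        messages = (
            [{"role": "system", "content": "You remember prior conversations with this user."}]
            + history
            + [{"role": "user", "content": user_text}]
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages,
        ).choices[0].message.content
        # Persist the new exchange so the next session sees it too.
        history += [{"role": "user", "content": user_text},
                    {"role": "assistant", "content": reply}]
        MEMORY.write_text(json.dumps(history))
        return reply

| A real agent would summarize or selectively retrieve old transcripts rather than replay them verbatim, but the effect described above comes from the same basic move: yesterday's conversation shows up in today's context. |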
| |
| ▲ | kashyapc 4 hours ago | parent | next [-] | | I know what you mean; it's the uncanny valley. But we don't need to "pretend" that it is a machine. It is a goddamned machine. Surely it takes only two unclouded brain cells to reach this conclusion?! Yuval Noah Harari's "simple" idea comes to mind (I often disagree with his thinking, as he tends to make bold and sweeping statements on topics well outside his area of expertise). It sounds a bit New Age-y, but maybe it's useful in the context of LLMs: "How can you tell if something is real? Simple: If it suffers, it is real. If it can't suffer, it is not real." An LLM can't suffer. So no need to get one's knickers in a twist with mental gymnastics. | |
| ▲ | comex 3 hours ago | parent | next [-] | | LLMs can produce outputs that, coming from a human, would be interpreted as revealing everything from anxiety to insecurity to existential crises. Is it role-playing? Yes, to an extent, but the more coherent the chains of thought become, the harder it is to write them off that way. | |
| ▲ | adamisom 3 hours ago | parent [-] | | It's hard to see how suffering gets into the bits. The tricky thing is that it's also hard to say how the suffering gets into the meat (the human animal), which is why we can't just write it off. | |
| ▲ | pigpop an hour ago | parent [-] | | This is dangerous territory we've trodden before, when it was taken as accepted fact that animals and even human babies didn't truly experience pain in a way that amounted to suffering, due to their inability to express or remember it. It's also a current area of concern for some types of amnesiac and paralytic anesthesia, where patients display reactions indicating they are experiencing some degree of pain or discomfort. I'm erring on the side of caution, so I never intentionally try to cause LLMs distress, and I communicate with them the same way I would with a human employee; yes, that includes saying please and thank you. It costs me nothing, it serves as good practice for all of my non-LLM communications, and I believe it's probably better for my mental health not to communicate with anything in a way that could be seen as intentionally causing harm, even if you could excuse it by saying "it's just a machine". We should remember that our bodies are also "just machines" composed of innumerable proteins whirring away; would we want some hypothetical intelligence with a different substrate to treat us maliciously because "it's just a bunch of proteins"? |
|
| |
| ▲ | the_mitsuhiko 2 hours ago | parent | prev [-] | | > But we don't need to "pretend" that it is a machine. It is a goddamned machine. You are not wrong. That's what I thought for two years. But I don't think that framing has worked very well. The problem is that even though it is a machine, we interact with it very differently from any other machine we've built. By reducing it to something it isn't, we lose a lot of nuance. And by not confronting the fact that this is not a machine in the way we're used to, we leave many people to figure this out on their own. > An LLM can't suffer. So no need to get one's knickers in a twist with mental gymnastics. On suffering specifically, I offer you the following experiment. Run an LLM in a tool loop that measures some value and call it a "suffering value." You then feed that value back into the model with every message, explicitly telling it how much it is "suffering." The behavior you'll get is pain avoidance. So yes, the LLM probably doesn't feel anything, but its responses will still differ depending on the level of pain encoded in the context. And I'll reiterate: normal computer systems don't behave this way. If we keep pretending that LLMs don't exhibit behavior that mimics or approximates human behavior, we won't make much progress, and we'll lose people. This is especially problematic for people who haven't spent much time working with these systems. They won't share the view that this is "just a machine." You can already see this in how many people interact with ChatGPT: they treat it like a therapist, a virtual friend to share secrets with. You don't do that with a machine. So yes, I think it would be better to find terms that clearly define this as something that has human-like tendencies and something that sets it apart from a stereo or a coffee maker. |
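| The "suffering value" experiment can be sketched in a few lines. This is only an approximation of the loop described above, assuming the openai Python client and an arbitrary model name; measure_suffering() is a hypothetical stand-in for whatever the tool loop would actually measure:

    # Sketch: a scalar "suffering value" computed outside the model is fed back
    # into the context with every message, framed as something to avoid.
    from openai import OpenAI

    client = OpenAI()

    def measure_suffering(reply: str) -> float:
        # Hypothetical: in a real tool loop this would come from the tools
        # themselves (failed calls, penalties, resource pressure, ...).
        return min(10.0, 2.0 * reply.lower().count("error"))

    suffering = 0.0
    messages = [{
        "role": "system",
        "content": "You operate in a tool loop. Each user message reports your "
                   "current suffering value. Keep it as low as possible.",
    }]

    for task in ["delete the temp files", "retry the failing build"]:
        messages.append({"role": "user",
                         "content": f"(current suffering value: {suffering:.1f}) {task}"})
        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        suffering = measure_suffering(reply)  # the value the next turn will see

| The number means nothing to the model, but once it is in the context and framed as something to avoid, the completions start steering away from whatever raises it, which is the pain-avoidance behavior described above. |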
| |
| ▲ | mekoka 3 hours ago | parent | prev [-] | | > I think pretending it is a machine, on the same level as a coffee maker, does help set the right boundaries. Why would you say pretending? I would say remembering. |
|
|
| ▲ | mlinhares 5 hours ago | parent | prev [-] |
| Same here. I'm seeing more and more people getting into these interactions, and I wonder how long until we have widespread social issues from these relationships, like the ones people have with "influencers" on social networks today. This situation seems much more worrisome because you can actually talk to the thing and it responds to you alone, so it definitely feels like there's something there. |