| ▲ | Fr0styMatt88 6 hours ago |
| I feel like it’s something more fundamental and broad than that. We slowly remove excuses to talk to other people. The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker. It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing? The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get. I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example. |
|
| ▲ | MattJ100 6 hours ago | parent | next [-] |
We see this in our open-source community. We've had a community channel for over two decades, where community members help newcomers and each other solve problems and answer questions. Increasingly we have people join who tell us they've been struggling with a problem "for days". As a matter of routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess. Something about this feels really broken, when a channel full of domain experts is willing to lend a hand (within reason) for free. But instead, people increasingly turn to machines which are well known to hallucinate. They just don't think it will hallucinate for them. In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves, they seem more willing to believe it is accurate, when in fact they should be even more careful. |
| |
| ▲ | strange_quark 15 minutes ago | parent | next [-] | | > In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful. The AI companies have taken all the wrong lessons from social media and learned how to make their products addictive and sticky. I'm a certified hater, but even I've fallen into the exact trap you're describing. Late last year I was in the process of buying a house that had a few known issues, with a 30-day close. I had a couple of sleepless nights because I had asked ChatGPT or Claude about some peculiar situation and the bots would tell me that I was completely screwed, then advise me to get out of the contract or draft a letter to the seller begging for some concession or more time. Then the next day I'd get a call from the mortgage guy or the attorney or the insurance broker, and it turned out the people who actually knew what they were doing could fix my problem in five minutes. | |
| ▲ | ethagnawl 37 minutes ago | parent | prev | next [-] | | This _is_ all true, but what's also true is that there's a historical pattern (in many communities) of "n00bs" not being, or at least not _feeling_, welcome. So, I can't say I blame people for spinning in circles with LLMs instead of starting with forums or mailing lists where they may be shamed or have their questions immediately closed as "duplicate" or "off-topic" (e.g. SO). I think if we want newcomers to lead with human interactions, the onus is on us community leaders/elders/whatever to be a little warmer, more understanding and forgiving. (Of course, some communities and venues are already very good about all of this; I'm generalizing to make the larger point.) | |
| ▲ | torginus 2 hours ago | parent | prev | next [-] | | I switched to OpenWRT during the LLM era. I wanted to set up some special network configs, and ChatGPT happily spat out the necessary configs. From what little I understood of OpenWRT, everything looked fine, but nothing worked. To this day I have no idea what I (or ChatGPT) did wrong. I just reset the router, actually took the time to do everything by the docs, and then it worked. Debugging someone's broken code that never worked is a nightmare I wouldn't wish on anyone. | |
| ▲ | 2ndorderthought 6 hours ago | parent | prev [-] | | Personally, this type of behavior played a large part in why I left 2 oss communities. A lot of the passersby nowadays feel like trolls. They come in copy-pasting ChatGPT responses, spamming that they need help instead of chit-chatting and asking questions. We fix their problems; they don't trust us or understand at all. Or worse, we tell them their situation is unreasonably bad and they should start over, and they scream at us about how some unimaginably bad code passes tests and compiles just fine, and how we are dumb. They try to show off terrible code, we try to offer real suggestions to improve it, and they don't care. Then they leave the community once their vibe/agentic coding moves past that part of their code base. Complete waste of time: they learned nothing, contributed nothing, no fun was had, no ah-hahs, just grimy interactions. | | |
| ▲ | skydhash 5 hours ago | parent [-] | | I'm subscribed to a couple of mailing lists and follow the archives of a few others. I wonder if the friction associated with the medium is why I haven't seen those shenanigans? | | |
| ▲ | 2ndorderthought 5 hours ago | parent [-] | | I should look into mailing lists. That would be a great filter for the "I need it now at any cost" interactions. Thank you for the indirect advice. |
|
|
|
|
| ▲ | notnullorvoid 18 minutes ago | parent | prev | next [-] |
| People are losing their ability to reason without prompting an LLM first. It's affecting their ability to collaborate. They retain the confidence of years of experience, but their brain isn't going through the appropriate process anymore to check their assumptions. I've seen a similar thing happen to engineers who move into management, but this is now happening at such a large scale. |
|
| ▲ | bulatb 43 minutes ago | parent | prev | next [-] |
| Is this an LLM responding to an LLM, lamenting the loss of human connection? What a world it is now. |
|
| ▲ | 2ndorderthought 6 hours ago | parent | prev | next [-] |
There is a lot of wisdom in this. At the end of the day, ChatGPT won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for reasons beyond most of these examples. It's good to be efficient, whatever that means, but there are no metrics for the gains made by talking to people. In a lot of ways, those gains are what life is about. |
| |
| ▲ | avmich 5 hours ago | parent [-] | | > At the end of the day chatgpt won't be there Are you sure it won't? | | |
| ▲ | 2ndorderthought 4 hours ago | parent [-] | | Yes. 100%. ChatGPT can't get drunk with you, share personal experiences, grill food for you, or network with humans for you. At some point, certain people have to choose to live a life; otherwise, why have one anyway? |
|
|
|
| ▲ | lxgr 6 hours ago | parent | prev | next [-] |
> if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker. Importantly, you're removing a signal: if I'm not asked things anymore, I don't know which aspects of our domain cause the most confusion/misunderstanding, and which would therefore benefit most from having their boundaries simplified. |
|
| ▲ | hnthrow0287345 2 hours ago | parent | prev | next [-] |
| You could have done this with Google search or Wikipedia or by reading through books, though. |
|
| ▲ | gonzalohm 5 hours ago | parent | prev | next [-] |
| I think you are right, but it also makes sense. Human communication is inherently inefficient.
Points of view, miscommunication, misinterpretation... it's the obvious thing to automate.
Not defending it, just my thoughts. |
|
| ▲ | croisillon 5 hours ago | parent | prev [-] |
| i see what you did there :) |