| ▲ | ramon156 7 hours ago |
| The last comment is a person pretending to be a maintainer at Microsoft. I have a gut feeling that these kinds of people will only increase, and we'll have vibe engineers scouring popular repositories to "contribute" (note that the suggested fix is vague). I completely understand why some projects are in whitelist-contributors-only mode. It's becoming a mess. |
|
| ▲ | albert_e 7 hours ago | parent | next [-] |
| On the other hand ... I recently had to deal with official Microsoft Support for an Azure service degradation / silent failure. Their email responses were broadly all like this -- fully drafted by GPT. The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included point to a service degradation and failure on Microsoft's side. A purely human mind would not have so readily conceded the point without some hedging or dilly-dallying or keeping some options open to avoid accepting blame. |
| |
▲ | datsci_est_2015 5 hours ago | parent | next [-] | | > The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included point to a service degradation and failure on Microsoft's side. Reminds me of an interaction I was forced to have with a chatbot over the phone for “customer service”. It kept apologizing, saying “I’m sorry to hear that” in response to my issues. The thing is, it wasn’t sorry to hear that. AI is incapable of feeling “sorry” about anything. It’s anthropomorphizing itself and aping politeness. I might as well have a “Sorry” button on my desk that I smash every time a corporation worth $TRILL wrongs me. Insert South Park “We’re sorry” meme. Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer? | | |
▲ | shiandow 43 minutes ago | parent | next [-] | | > Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer? The company can't have it both ways. Either they have to admit the AI "support" is bollocks, or they are culpable. Either way they are in the wrong. | |
| ▲ | wat10000 5 hours ago | parent | prev [-] | | Better than actual human customer agents who give an obviously scripted “I’m sorry about that” when you explain a problem. At least the computer isn’t being forced to lie to me. We need a law that forces management to be regularly exposed to their own customer service. | | |
▲ | datsci_est_2015 4 hours ago | parent [-] | | I knew someone would respond with this. HN is rampant with this sort of contrarian defeatism, and I just responded the other day to a nearly identical comment on a different topic, so: No, it is not better. I have spent $AGE years of my life developing the ability to determine whether someone is authentically offering me sympathy, and when they are, I actually appreciate it. When they aren’t, I realize that that person is probably being mistreated by some corporate monstrosity or having a shit day, and I give them the benefit of the doubt. > At least the computer isn’t being forced to lie to me. Isn’t it though? > We need a law that forces management to be regularly exposed to their own customer service. Yeah, we need something. I joke with my friends about creating an AI concierge service that deals with these chatbots and alerts you when a human is finally somehow involved in the chain of communication. What a beautiful world, where we’ll be burning absurd amounts of carbon in some sort of antisocial AI arms race to try to maximize shareholder profit. | | |
▲ | bondarchuk 4 hours ago | parent | next [-] | | The world would not actually be improved by having thousands of customer service reps genuinely, authentically feel sorry. You're literally demanding that real people experience real negative emotions over some IT problem you have. | | |
▲ | consp 3 hours ago | parent | next [-] | | They don't have to be, but they can at least try to help. When dealing with automated response units the outcome is always the same: much talk, no solution. With a rep you can at least see what's available within their means, and if you are nice to them they might actually be able to help you, or at least make you feel less bad about it. | |
| ▲ | wat10000 3 hours ago | parent | prev [-] | | But it would be improved by having them be honest and not say they’re sorry when they’re not. |
| |
| ▲ | yencabulator 2 hours ago | parent | prev | next [-] | | It's an Americanism. You might enjoy e.g. a Northern European culture more? | |
| ▲ | wat10000 4 hours ago | parent | prev [-] | | Lying means to make a statement that you believe to be untrue. LLMs don’t believe things, so they can’t lie. I haven’t had the pleasure of one of these phone systems yet. I think I’d still be more irritated by a human fake apology because the company is abusing two people for that. At any rate, I didn’t mean for it to be some sort of contest, more of a lament that modern customer service is a garbage fire in many ways and I dream of forcing the sociopaths who design these systems to suffer their own handiwork. |
|
|
| |
| ▲ | szundi 5 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | Cyphus 6 hours ago | parent | prev | next [-] |
| I wholly agree, the response screams “copied from ChatGPT” to me. “Contributions” like these comments and drive by PRs are a curse on open source and software development in general. As someone who takes pride in being thorough and detail oriented, I cannot stand when people provide the bare minimum of effort in response. Earlier this week I created a bug report for an internal software project on another team. It was a bizarre behavior, so out of curiosity and a desire to be truly helpful, I spent a couple hours whittling the issue down to a small, reproducible test case. I even had someone on my team run through the reproduction steps to confirm it was reproducible on at least one other environment. The next day, the PM of the other team responded with a _screenshot of an AI conversation_ saying the issue was on my end for misusing a standard CLI tool. I was offended on so many levels. For one, I wasn’t using the CLI tool in the way it describes, and even if I was it wouldn’t affect the bug. But the bigger problem is that this person thinks a screenshot of an AI conversation is an acceptable response. Is this what talking to semi technical roles is going to be like from now on? I get to argue with an LLM by proxy of another human? Fuck that. |
| |
▲ | bmurphy1976 5 hours ago | parent | next [-] | | That's when you use an LLM to respond, pointing out all the ways the PM failed at their job. I know it sucks, but fight fire with fire. Sites like lmgtfy existed long before AI because people will always take shortcuts. | |
▲ | belter 6 hours ago | parent | prev [-] | | >> The next day, the PM of the other team responded with a _screenshot of an AI conversation_ saying the issue was on my end for misusing a standard CLI tool. You still have time to coach a model into creating a reply saying they are completely wrong, and send back a screenshot of that reply :-)) Bonus points for having the model include disparaging comments... |
|
|
| ▲ | cedws 37 minutes ago | parent | prev | next [-] |
| Etiquette on GitHub has completely gone out the window, many issues I look at these days resemble reddit threads more than any serious technical discussion. My inbox is frequently polluted by "bump" comments. This is going to get worse as LLMs lower the bar. |
|
| ▲ | iib 7 hours ago | parent | prev | next [-] |
Some projects were already like that, and even more restrictive, for other reasons: the Cathedral model, described in "The Cathedral and the Bazaar". |
| |
| ▲ | ForOldHack 5 hours ago | parent [-] | | I come to YCombinator, specifically because for some reason, some of the very brightest minds are here. |
|
|
| ▲ | markstos 7 hours ago | parent | prev | next [-] |
Nowhere in the comment do they assert that they work for Microsoft. This is a peer review. |
| |
▲ | cmeacham98 7 hours ago | parent | next [-] | | It's not a peer review, it's just AI slop. I do agree they don't seem to be intentionally posing as an MS employee. | |
▲ | PKop 7 hours ago | parent | prev | next [-] | | Let's just say they are pretending to be helpful, how about that? > "Peer review" No, not unless your "peers" are bots who regurgitate LLM slop. | | |
| ▲ | markstos 7 hours ago | parent [-] | | You think they lied about reproducing the issue? It’s useful to know if a bug can be reproduced. | | |
▲ | cmeacham98 6 hours ago | parent | next [-] | | We cannot know for sure, but I think it's reasonably likely (say 50/50). Regurgitating an LLM for 90% of your comment does not inspire trust. | |
| ▲ | PKop 5 hours ago | parent | prev [-] | | Yes, of course I think they lied, because a trustworthy person would never consider 0-effort regurgitated LLM boilerplate as a useful contribution to an issue thread. It's that simple. |
|
| |
| ▲ | usefulposter 6 hours ago | parent | prev [-] | | It's performative garbage: authority roleplay edition. Let me slop an affirmative comment on this HIGH TRAFFIC issue so I get ENGAGEMENT on it and EYEBALLS on my vibed GitHub PROFILE and get STARS on my repos. |
|
|
| ▲ | falloutx 6 hours ago | parent | prev | next [-] |
Exactly. I have seen these know-it-all comments on my own repos, and also on tldraw's issues. They add nothing to the conversation; they just paste the thread into some coding tool and spit out the output. |
|
| ▲ | RobotToaster 7 hours ago | parent | prev | next [-] |
| > I completely understand why some projects are in whitelist-contributors-only mode. It's becoming a mess. That repo alone has 1.1k open pull requests, madness. |
| |
▲ | embedding-shape 7 hours ago | parent [-] | | > That repo alone has 1.1k open pull requests, madness. The UI can't even be bothered to show the number of open issues, 5K+ :) Then they "fix it" by making issues auto-close after 1 week of inactivity, meanwhile PRs submitted 10 years ago remain open. | | |
▲ | PKop 7 hours ago | parent [-] | | > issues auto-close after 1 week of inactivity, meanwhile PRs submitted 10 years ago remain open. It's definitely a mess, but given the massive decline in signal vs noise of public comments and issues on open source recently, that's not a bad heuristic for filtering quality. |
|
|
|
| ▲ | ForOldHack 5 hours ago | parent | prev [-] |
Everyone is a maintainer of Microsoft. Everyone is testing their buggy products as they leak information like a wire-only umbrella. It is sad that more people who use Copilot don't know that they are training it at a cost of millions of gallons of fresh drinking water. It was a mess before, and it will only get worse, but at least I can get some work done 4 times a day. |