| ▲ | ericmcer 3 days ago |
| Agreed, daily Cursor user here. Just got out of a 15-minute huddle with someone, trying to understand what they were doing in a PR, before they admitted Claude had generated everything: it worked, but they weren't sure why... We ended up ripping about 200 LoC out because what Claude "fixed" wasn't even broken. So never let it generate code, but the autocomplete is absolutely killer. If you understand how to code in 2+ languages, you can make assumptions about how to do things in many others and let the AI autofill the syntax. I have been able to swap to languages I have almost no experience in and work fairly well because memorizing syntax is irrelevant. |
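To make the cross-language claim concrete, here's a minimal sketch (all names are made up for illustration): the "filter, then map" pattern is the same one you'd write in JS or Python; only the Java stream syntax is new, and that's exactly the part autocomplete fills in.

```java
import java.util.List;

public class ActiveUsers {
    // Hypothetical record, purely for illustration.
    record User(String name, boolean active) {}

    // The "filter, then map" idea transfers from any language you know;
    // only the stream and method-reference syntax is Java-specific.
    static List<String> activeUserNames(List<User> users) {
        return users.stream()
                .filter(User::active)  // keep only active users
                .map(User::name)       // project each user to its name
                .toList();             // Java 16+
    }
}
```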
|
| ▲ | daymanstep 3 days ago | parent | next [-] |
| > I have been able to swap to languages I have almost no experience in and work fairly well because memorizing syntax is irrelevant. I do wonder whether your code does what you think it does. Similar-sounding keywords in different languages can have completely different meanings, e.g. the `volatile` keyword in Java vs C++. You don't know what you don't know, right? How do you know that the AI-generated code does what you think it does? |
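A minimal sketch of that `volatile` difference (hypothetical `StopFlag` class; Java shown, C++ contrast in the comments): in Java the keyword is a cross-thread visibility guarantee, while in C++ it only prevents the compiler from optimizing away the access and provides no thread safety, so code written by analogy can be subtly wrong.

```java
// Java: `volatile` guarantees a write in one thread is visible to
// subsequent reads in other threads. C++'s `volatile` gives no such
// guarantee; it merely stops the compiler from eliding the access.
public class StopFlag {
    private volatile boolean stopped = false;

    void worker() {
        // Without `volatile`, the JIT may hoist `stopped` out of the
        // loop and this thread could spin forever after stop() runs.
        while (!stopped) {
            // ... do work ...
        }
    }

    void stop() {
        stopped = true; // guaranteed visible to the worker's next read
    }
}
```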
| |
| ▲ | jacobr1 3 days ago | parent | next [-] | | Beyond code-gen, I think some techniques are very underutilized. One can generate tests, generate docs, or explain things line by line. Explicitly explaining alternative approaches and tradeoffs is helpful too. While, as with everything in this space, there are imperfections, I find a ton of value in looking beyond the code and thinking through the use cases, alternative approaches, and different ways to structure the same thing. | | |
| ▲ | pornel 3 days ago | parent [-] | | I've wasted time debugging phantom issues due to LLM-generated tests that were misusing an API. Brainstorming/explanations can be helpful, but also watch out for Gell-Mann amnesia. It's annoying that LLMs always sound smart whether they are saying something smart or not. | | |
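A hypothetical illustration of that failure mode (made-up test, not the actual case): the test itself misuses an API, fails for its own reasons, and sends you hunting for a bug that doesn't exist.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class VersionTest {
    @Test
    void parsesMajorVersion() {
        // Misuse: String.split takes a regex, and an unescaped "."
        // matches every character, so this returns an empty array.
        String[] parts = "1.2.3".split(".");
        // Correct usage: "1.2.3".split("\\.")

        // Throws ArrayIndexOutOfBoundsException, so the red test points
        // at a phantom bug in production code that was never broken.
        assertEquals("1", parts[0]);
    }
}
```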
| ▲ | Miraste 3 days ago | parent [-] | | Yes, you can't use any of the heuristics you develop for human writing to decide if the LLM is saying something stupid, because its best insights and its worst hallucinations all have the same formatting, diction, and style. Instead, you need to engage your frontal cortex and rationally evaluate every single piece of information it presents, and that's tiring. | | |
| ▲ | valenterry 3 days ago | parent [-] | | It's like listening to a politician or lawyer, who might talk absolute bullshit in the most persuasive words. =) |
|
|
| |
| ▲ | spanishgum 3 days ago | parent | prev [-] | | The same way I would with any of my own code - I would test it! The key here is to spend less time searching, and more time understanding the search result. I do think the vibe factor is going to bite companies in the long run. I see a lot of vibe code pushed by junior and senior devs alike, where it's clear not enough time was spent reviewing the product. This behavior is being actively rewarded now, but I do think the attitude of building code as fast as possible will change if the impact on production systems is ever recognized as a net negative. Time will tell. |
|
|
| ▲ | qingcharles 3 days ago | parent | prev | next [-] |
| The other day I caught it changing the grammar and spelling in a bunch of static strings in a totally different part of a project, for no sane reason. |
| |
| ▲ | bdamm 3 days ago | parent | next [-] | | I've seen it do this as well. Odd things like swapping the severity level on log statements that had nothing to do with the task. Very careful review of my commits is the only way forward, and will be for a long time. | |
| ▲ | ericmcer 2 days ago | parent | prev [-] | | That sounds similar to what it was doing here. It basically took code like `thing = getThing(); id = thing.id`, created `id = getThingId()` backed by a new API endpoint, and replaced hundreds of lines. Not a huge deal because it works, but it seems like you would have 100,000 extra lines if you let Claude do whatever it wanted for a few months. |
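Roughly the shape of that rewrite, as a sketch with hypothetical names (not the actual diff):

```java
// Hypothetical sketch of the rewrite described above.
class ThingClient {
    record Thing(long id, String name) {}

    // Existing pattern: fetch the whole object, read the field.
    Thing getThing() { /* existing endpoint */ return new Thing(42, "example"); }

    long idTheOldWay() {
        return getThing().id();
    }

    // Claude's version: a brand-new endpoint plus a wrapper that returns
    // only the id. Each instance works, but repeated across hundreds of
    // call sites the codebase quietly grows by lines nobody asked for.
    long getThingId() { /* new, redundant endpoint */ return 42; }
}
```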
|
|
| ▲ | senko 3 days ago | parent | prev | next [-] |
| > Just got out of a 15-minute huddle with someone, trying to understand what they were doing in a PR, before they admitted Claude had generated everything: it worked, but they weren't sure why... But... that's not the AI's fault. If people submit any PRs (including AI-generated or AI-assisted) without completely understanding them, I'd treat it as a serious breach of professional conduct and (gently, for first-timers) stress that this is not acceptable. As someone hitting the "Create PR" (or equivalent) button, you accept responsibility for the code in question. If you submit slop, it's 100% on you, not on any tool used. |
| |
| ▲ | whatever1 3 days ago | parent | next [-] | | If you have to review what the LLM wrote then there is no productivity gain. Leadership asks for vibe coding. | | |
| ▲ | senko 3 days ago | parent | next [-] | | > If you have to review what the LLM wrote then there is no productivity gain. I do not agree with that statement. > Leadership asks for vibe coding Leadership always asks for more, better, faster. | | |
| ▲ | mangamadaiyan 3 days ago | parent [-] | | > Leadership always asks for more, better, faster. More and faster, yes. Almost never better. |
| |
| ▲ | swat535 3 days ago | parent | prev | next [-] | | > If you have to review what the LLM wrote then there is no productivity gain. You always have to review the code, whether it's written by another person, yourself, or an AI. I'm not sure how this translates into a loss of productivity. Did you mean to say that the code AI generates is difficult to review? In those cases, it's the fault of the code author and not the AI. Using AI, like any other tool, requires experience and skill. | |
| ▲ | fooster 3 days ago | parent | prev | next [-] | | I suggest you upgrade your code review skills. I find it vastly quicker in most cases to review code than to write it in the first place. | | | |
| ▲ | epolanski 3 days ago | parent | prev | next [-] | | > If you have to review what the LLM wrote then there is no productivity gain Stating something with confidence does not make it automatically true. | |
| ▲ | WolfeReader 3 days ago | parent | prev [-] | | I've seen AI create incorrect solutions and deceptive variable names. Reviewing the code is absolutely necessary. |
| |
| ▲ | draxil 3 days ago | parent | prev [-] | | But it's pretty much a given at this point that if you use agents to code for any length of time, it starts to atrophy your ability to understand what's going on. So, yeah, it's a bit of a devil's chalice. |
|
|
| ▲ | epolanski 3 days ago | parent | prev [-] |
| You're blaming the tool and not the tool user. |