▲ The Human in the Loop (adventures.nodeland.dev)
42 points by artur-gawlik 4 days ago | 24 comments

▲ andai 3 hours ago | parent | next [-]
> who's responsible when that clone has a bug that causes someone to make a bad trade? Who understands the edge cases? Who can debug it when it breaks in production at 3 AM?

"A computer can never be held accountable. Therefore a computer must never make a management decision." —IBM training manual, 1979

▲ piker 3 hours ago | parent | prev | next [-]
> Mike asks: "If an idiot like me can clone a [Bloomberg terminal] that costs $30k per month in two hours, what even is software development?"

So that's the baseline intellectual rigor we're dealing with here.

▲ mpalmer 5 hours ago | parent | prev | next [-]
Why would I want to take advice about keeping humans in the loop from someone who let an LLM write 90% of their blog post?

▲ yohguy 5 hours ago | parent | next [-]
I don't like reading AI text because I feel each word matters a lot less; however, the message the author is conveying can be preserved. I read an article like this for the quality of the message, not the craftsmanship of the medium.

▲ mpalmer 4 hours ago | parent [-]
If the author didn't have the good taste and decency to edit the painfully obvious generated text, I just assume the message is low quality.

▲ scandox 4 hours ago | parent | prev | next [-]
On what basis did you make this judgement? I found the article to be reasonable and not excessively padded.

▲ MrJohz 2 hours ago | parent | next [-]
Other people might point to more specific tells, but instead I'll reference https://zanlib.dev/blog/reliable-signals-of-honest-intent/, which says that you can tell mainly because of the subconscious uncanny valley effect, and then you start noticing the tells afterwards.

Here, there's a handful of specific phrases or patterns, but mostly it's just that the writing feels very AI-written (or at least AI-edited). It's all just slightly too perfect, like someone trying to write the perfect LinkedIn post who is slightly too good at it. It's purely gut feeling, but I don't think that means it's wrong (although equally it doesn't mean it's proven beyond reasonable doubt either, so I'm not going to start any witch hunts about it).

▲ insin 3 hours ago | parent | prev [-]
But here's the thing. The LLM house writing style isn't just annoying, it's become unreadable through repeated exposure. This really gets to the heart of why human minds are starting to slide off it.

▲ ericyd 3 hours ago | parent [-]
Not trying to be rude, but your very short reply is hard to understand. "Unreadable", "starting to slide off": I honestly don't know what you're saying here.

▲ blenderob 3 hours ago | parent [-]
Pretty sure they are mocking LLM outputs by making their own comment look as if it came from an LLM. It's sarcasm.

▲ actionfromafar 5 hours ago | parent | prev [-]
The human pressed the red button. :)

▲ scroot 3 hours ago | parent | prev | next [-]
These posts claiming that "we will review the output," and that software engineers will still need to apply their expertise and wisdom to generated outputs, never seem to think this all the way through. Those who write such articles might indeed have enough experience and deep knowledge to evaluate AI outputs. But what of subsequent generations of engineers? What about the forthcoming wave of people who may never attain the required deep knowledge, because they've been dependent on these generation tools throughout their own education?

The structure of our culture, combined with what generative AI necessarily is, means that expertise will fade generationally. I don't see a way around that, and I see almost no discussion of ameliorating the issue.

▲ mpalmer an hour ago | parent | next [-]
The solution is to use these tools in a way that saves us huge amounts of time but still forces us to think and document our decisions. Then, teach these methods in school. Self-directed, individual use of LLMs for generating code is not the way forward for industrial software production.

▲ 8organicbits 2 hours ago | parent | prev | next [-]
Another thing I keep thinking about is that review is harder than writing code. A casual LGTM may be suitable for peer review, but applying deep context and checking for logic issues requires more thought. When I write code, I usually learn something about the software or the context. "Writing is thinking" in a way that reading isn't.

▲ candiddevmike 3 hours ago | parent | prev | next [-]
This is why you aren't seeing GenAI used more in law firms. Lawyers can be disbarred over erroneous hallucinations, so they're all extremely cautious about using these tools. Imagine if there were that kind of accountability in our profession.

▲ id 3 hours ago | parent | prev | next [-]
> software engineers will still need to apply their expertise and wisdom to generated outputs

And in my experience they don't really do that. They trust that it'll be good enough.

▲ dfxm12 2 hours ago | parent | prev | next [-]
I don't understand how this is a new or unique problem. Regardless of when or where (or if!) my coworkers got their degrees, before or after access to AI tools, some of them are intellectually curious. Some do their job well. Some are in over their heads & are improving. Some are probably better suited for other lines of work. It has always been an organizational function to identify & retain folks who are willing and able to grow into the experience and knowledge required for their current role and for future roles where they may be needed.

Academically, this is a non-factor as well. You still learned your multiplication tables even though calculators existed, right?

▲ echelon 3 hours ago | parent | prev [-]
The invention of calculators did not cause society to collapse. Smart and industrious people will focus energy on economically important problems. That has always been the case. Everything will work out just fine.

▲ yohguy 5 hours ago | parent | prev | next [-]
There will always be a human in the loop; the question is at what level. It was a very short while ago, within the last couple of months in my case, that it went from having to work at the function level to what the posts describe (still not at the level the Death of SWE article claims). It is hard for me to imagine that LLMs can go one level higher anytime soon. Progress is not guaranteed. Regardless of whether it improves or not, I think it is best to assume that it won't and to build on that assumption.

The shortcomings and failings of the current (new) system are what end up creating the new patterns for work and the industry. I think that is the more interesting conversation: not how quickly we can ship code, but what that means for organizations, which skills become the most valuable, and what actually rises to the top.

▲ kilroy123 4 hours ago | parent [-]
> LLMs can go one level higher anytime soon. Progress is not guaranteed.

I tend to agree, but I do think we'll get there in the next 5-10 years.

▲ chrisjj 4 days ago | parent | prev | next [-]
> When I fix a security vulnerability, I'm not just checking if the tests pass. I'm asking: does this actually close the attack vector?

If you have to ask, then you'd be better off putting that effort into fixing the test coverage.

▲ kardianos 4 hours ago | parent | prev | next [-]
> My worry isn't that software development is dying. It's that we'll build a culture where "I didn't review it, the AI wrote it" becomes an acceptable excuse.

I try to review 100% of my dependencies. My criticism of the npm ecosystem is that they say "I didn't review it, someone else wrote it" and everyone thinks that is an acceptable excuse.

▲ movedx01 5 hours ago | parent | prev | next [-]
An AI-derived piece arguing with another AI-derived piece about AI. It's slop all the way down.

▲ TZubiri 2 hours ago | parent | prev [-]
What is the Bloomberg terminal thing? Did someone vibecode a competitor?