| ▲ | TrackerFF an hour ago |
| I get that some people want to be intellectually "pure". Artisans crafting high-quality software, made with love, and all that stuff. But one emerging reality for everyone should be that businesses are swallowing the AI-hype raw. You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. Non-coders are churning out small apps in record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever. If your org is blindly data/metric driven, it is probably just a mater of time until managers start asking why everyone else is producing so much, while you're slow? |
|
| ▲ | Aurornis 22 minutes ago | parent | next [-] |
| > Non-coders are churning out small apps in record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever. Honestly I think you’re swallowing some of the hype here. I think the biggest advantages of LLMs go to the experienced coders who know how to leverage them in their workflows. That may not even include having the LLM write the code directly. The non-coders producing apps meme is all over social media, but the real world results aren’t there. All over Twitter there were “build in public” indie non-tech developers using LLMs to write their apps and the hype didn’t match reality. Some people could get minimal apps out the door that kind of talked to a back end, but even those people were running into issues not breaking everything on update or managing software lifecycle. The top complaint in all of the social circles I have about LLMs is with juniors submitting LLM junk PRs and then blaming the LLM. It’s just not true that juniors are expertly solving tasks with LLMs faster than seniors. I think LLMs are helpful and anyone senior isn’t learning how to use them to their advantage (which doesn’t mean telling the LLM what to write and hoping for the best) is missing out. I think people swallowing the hype about non-tech people and juniors doing senior work is getting misled about the actual ways to use these tools effectively. |
|
| ▲ | davidmurdoch 41 minutes ago | parent | prev | next [-] |
| This just happened to me this week. I work on the platform everyone builds on top of. A change here can subtlety break any feature, no matter how distant. AI just can't cope with this yet. So my team has been told that we are too slow. Meanwhile, earlier this week we halted a roll out because if a bug introduced by AI, as it worked around a privacy feature by just allow listing the behavior it wanted, instead of changing the code to address to policy. It wasn't caught in review because the file that was changed didn't require my teams review (because we ship more slowly, they removed us as code owners for many files recently). |
| |
|
| ▲ | syllogism 31 minutes ago | parent | prev | next [-] |
| I think LLMs are net helpful if used well, but there's also a big problem with them in workplaces that needs to be called out. It's really easy to use LLMs to shift work onto other people. If all your coworkers use LLMs and you don't you're gonna get eaten alive. LLMs are unreasonably effective at generating large volumes of stuff that resembles diligent work on the surface. The other thing is, tools change trade-offs. If you're in a team that's decided to lean into static analysis, and you don't use type checking in your editor, you're getting all the costs and less of the benefits. Or if you're in a team that's decided to go dynamic, writing good types for just your module is mostly a waste of time. LLMs are like this too. If you're using a very different workflow from everyone else on your team, you're going to end up constantly arguing for different trade-offs, and ultimately you're going to cause a bunch of pointless friction. If you don't want to work the same way as the rest of the team just join a different team, it's really better for everyone. |
| |
| ▲ | arscan 9 minutes ago | parent [-] | | > It's really easy to use LLMs to shift work onto other people. This is my biggest gripe with LLM use in practice. |
|
|
| ▲ | atleastoptimal 4 minutes ago | parent | prev | next [-] |
| Many people actually are becoming more productive. I know you're using quotes around productive to insulate yourself from the indignity of admitting that AI actually is useful in specific domains. |
|
| ▲ | fileeditview an hour ago | parent | prev | next [-] |
| The era of software mass production has begun. With many "devs" just being workers in a production line, pushing buttons, repeating the same task over and over. The produced products however do not compare in quality to other industry's mass production lines. I wonder how long it takes until this comes all crashing down. Software mostly already is not a high quality product.. with Claude & co it just gets worse. edit: sentence fixed. |
| |
| ▲ | afro88 28 minutes ago | parent | next [-] | | I think you'll be waiting a while for the "crashing down". I was a kid when manufacturing went off shore and mass production went into overdrive. I remember my parents complaining about how low quality a lot of mass produced things were. Yet for decades most of what we buy is mass produced, comparatively low quality goods. We got used to it, the benefits outweighed the negatives. What we thought mattered didn't in the face of a lot of previously unaffordable goods now broadly available and affordable. You can still buy high goods made with care when it matters to you, but that's the exception. It will be the same with software. A lot of what we use will be mass produced with AI, and even produced in realtime on the fly (in 5 years maybe?). There will be some things where we'll pay a premium for software crafted with care, but for most it won't matter because of the benefits of rapidly produced software. We've got a glimpse of this with things like Claude Artifacts. I now have a piece of software quite unique to my needs that simply wouldn't have existed otherwise. I don't care that it's one big js file. It works and it's what I need and I got it pretty much for free. The capability of things like Artifacts will continue to grow and we'll care less and less that it wasn't human produced with care. | | |
| ▲ | kiba 12 minutes ago | parent [-] | | Poor quality is not synonymous with mass production. It's just cheap crap made with little care. |
| |
| ▲ | lxgr an hour ago | parent | prev [-] | | > The era of software mass production has begun. We've been in that era for at least two decades now. We just only now invented the steam engine. > I wonder how long it takes until this comes all crashing down. At least one such artifact of craft and beauty already literally crashed two airplanes. Bad engineering is possible with and without LLMs. | | |
| ▲ | pacifika 16 minutes ago | parent | next [-] | | Yeah it’s interesting to see if blaming LLMs becomes as acceptable as “caused by a technical fault” to deflect responsibility from what is a programmer’s output. Perhaps that’s what lead to a decline in accountability and quality. | |
| ▲ | knollimar 40 minutes ago | parent | prev | next [-] | | There's a buge difference between possible and likely. Maybe I'm pessimistic but I at least feel like there's a world of difference between a practice that encourages bugs and one that allows them through when there is negligence. The accountability problem needs to be addressed before we say it's like self driving cars outperforming humans. On a errors per line basis, I don't think LLMs are on par with humans yet | | |
| ▲ | lxgr 36 minutes ago | parent [-] | | Knowing your system components’ various error rates and compensating for them has always been the job. This includes both the software itself and the engineers working on it. The only difference is that there is now a new high-throughput, high-error (at least for now) component editing the software. |
| |
| ▲ | goldeneas 39 minutes ago | parent | prev | next [-] | | > Bad engineering is possible with and without LLMs That's obvious. It's a matter of which makes it more likely | |
| ▲ | 40 minutes ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | AndrewKemendo 34 minutes ago | parent | prev | next [-] |
| > If your org is blindly data/metric driven Are there for profit companies (not non profits, research institutes etc…) that are not metric driven? |
| |
| ▲ | intothemild 7 minutes ago | parent [-] | | Most early stage startups I've been in weren't metric driven. It's impossible when everyone is just working as hard as they can to get it built, to suddenly slow down and start measuring everyone's output. It's not until later. When it's gotten to a larger size, do you have the resources to be metric driven. |
|
|
| ▲ | zwnow an hour ago | parent | prev [-] |
| > You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. I am actually less productive when using LLMs because now I have to read another entities code and be able to judge wether this fits my current business problem or not. If it doesn't, yay refactoring prompts instead of tackling the actual problem.
Also I can write code for free, LLMs coding assistants aren't free.
I can fit business problems amd edge cases into my brain given some time, a LLM is unaware about edge cases, legal requirements, decoupled dependencies, potential refactors or the occasional call of boss asking for something to be sneaked into the code right now.
If my job forced me to use these tools, congrats, I'll update my address to some hut in a forrest eating cold canned ravioli for the rest of my life because I for sure dont wanna work in a world where I am forced to use dystopian big tech machines I cant look into. |
| |
| ▲ | Aurornis 35 minutes ago | parent [-] | | > I am actually less productive when using LLMs because now I have to read another entities code and be able to judge wether this fits my current business problem or not. You don’t have to let the LLM write code for you. They’re very useful as a smart search engine for your code base, a smart refactoring tool, a suggestion generator, and many other ways. I rarely have LLMs write code for me from scratch that I have to review, but I do give them specific instructions to do what I want to the codebase. They can do it much faster than I can search around the codebase and type out myself. There are so many ways to make LLMs useful without having them do all the work while you sit back and judge. I think some people are determined to get no value out of the LLM because they feel compelled to be anti-hype, so they’re missing out on all the different little ways they can be used to help. Even just using it as a smarter search engine (in the modes where they can search and find the right sections of right articles or even GitHub issues for you) has been very helpful. But you have to actually learn how to use them. > If my job forced me to use these tools, congrats, I'll update my address to some hut in a forrest eating cold canned ravioli for the rest of my life because I for sure dont wanna work in a world where I am forced to use dystopian big tech machines I cant look into. Okay, good luck with your hut in the forest. The rest of us will move on using these tools how we see fit, which for many of us doesn’t actually include this idea where the LLM is the author of the code and you just ask nicely and reject edits until it produces the exact code you want. The tools are useful in many ways and you don’t have to stop writing your own code. In fact, anyone who believes they can have the LLM do all the coding is in for a bad surprise when they realize that specific hype is a lie. | | |
| ▲ | bgwalter 20 minutes ago | parent | next [-] | | Is that why open source progress has generally slowed down since 2023? We keep hearing these promises, and reality shows the opposite. | | |
| ▲ | Aurornis 14 minutes ago | parent [-] | | > Is that why open source progress has generally slowed down since 2023? Citation needed for a clam of that magnitude. |
| |
| ▲ | zwnow 28 minutes ago | parent | prev [-] | | > But you have to actually learn how to use them. This probably is the issue for me, I am simply not willing to do so. To me the whole AI thing is extremely dystopian so even on a professional level I feel repulsed by it. We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point. I want to write software that works, preferably even offline. I want tools that do not spy on me (referring to that new Google editor, forgot the name). Call me once these tools work offline on my 8GB RAM laptop with a crusty CPU and I might put in the effort to learn them. | | |
| ▲ | Aurornis 15 minutes ago | parent [-] | | > This probably is the issue for me, I am simply not willing to do so. Thanks for being honest at least. So many HN arguments start as a desire to hate something and then try to bridge that into something that feels like a takedown of the merits of that thing. I think a lot of the HN LLM hate comes from people who simply want to hate LLMs. > We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point. For an experienced dev using LLMs as another tool, an LLM outage isn’t a problem. You just continue coding. It’s on the level of Google going down so you have to use another search engine or try to remember the URL for something yourself. The main LLM players are also easy to switch between. I jump between Anthropic, Google, and OpenAI almost month to month to try things out. I could have subscriptions to all 3 at the same time and it would still be cheap. I think this point is overblown. It’s not a true team dependency like when GitHub stop working a few days back. |
|
|
|