| ▲ | heytakeiteasy 3 days ago |
| Feels like a false equivalency. It's just my experience, but I've completely ignored crypto and the metaverse, and I don't get the sense I'm missing out on much.
In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life. Transformative for the better? Time will tell I suppose, but I'm really enjoying it so far. |
|
| ▲ | xondono 3 days ago | parent | next [-] |
| This depends very much on your line of work. As a freelancer I do a bit of everything, and I’ve seen places where an LLM breezes through and gets me what I want quickly, and times where using one was a complete waste of time. |
| |
| ▲ | pennomi 3 days ago | parent | next [-] | | For sure. The more specialized or obscure the things you have to do, the less LLMs help you. Building a simple marketing website? Don’t waste your time doing it by hand - an LLM will probably be faster. Designing a new SLAM algorithm? LLMs will probably spin around in circles helplessly. That being said, that was my experience several years ago… maybe the state of the art has changed in the computer vision space. | | |
| ▲ | heytakeiteasy 3 days ago | parent | next [-] | | > The more specialized or obscure the things you have to do, the less LLMs help you. I've been impressed by how this isn't quite true. A lot of my coding life is spent in the popular languages, which the LLMs obviously excel at. But a random robotics language that dates to the '80s (Karel)? I unfortunately have to use it sometimes, and Claude ingested a PDF manual for the language that is hundreds of pages long, and now it's better at it than I am. It doesn't even have a compiler to test against, and it still rarely makes mistakes. I think the trick with a lot of these LLMs is just figuring out the best techniques for using them. Fortunately a lot of people are working all the time to figure this out. | | |
| ▲ | jatora 3 days ago | parent [-] | | Agreed. The sentiment you are replying to is a common one, and it's just people self-aggrandizing. No, almost nobody is working on code novel enough to be difficult for an LLM. All code projects build on things LLMs understand very well. Even if your architectural idea is completely unique... a never-before-seen magnum opus, the building blocks are still Legos. | | |
| |
| ▲ | monsieurbanana 3 days ago | parent | prev | next [-] | | Specialized is probably not the word I'd use, because LLMs are generally useful for understanding more specialized / obscure topics. For example, I've never randomly heard people talking about the DICOM standard, yet LLMs have no trouble with it. | | |
| ▲ | phil21 3 days ago | parent | next [-] | | I think there is a sweet spot in the training data for these LLMs where there is basically only "professional" level documentation and chatter, without the layman stuff being picked up from Reddit and GitHub/etc. I was trying to remember/figure out some obscure hardware communication protocol in order to work out enumeration of a hardware bus on some servers. Feeding Codex a few RFC URLs and other such information, plus telling it to search the internet, resulted in extremely rapid progress vs. having to wade through 500 pages of technical jargon and specification documents. I'm sure if I were extending the spec to a 3.0 version in hardware or something it would not be useful, but for someone who just needs to understand the basics to get some quick tooling stood up, it was close to magic. | |
| ▲ | i_cannot_hack 3 days ago | parent | prev | next [-] | | The standard for obscurity is different for LLMs; something can be very widespread and public without the average person knowing about it. DICOM is used at practically every hospital in the world, there are whole websites dedicated to browsing the documentation, companies employ people solely for DICOM work, there are popular, maintained libraries for several different languages, etc., so the LLM has an enormous amount of it in its training data. The question relevant for LLMs is "how many high quality results would I get if I googled something related to this", and for DICOM the answer is "many". As long as that is the case, LLMs will not have trouble answering questions about it either. | |
| ▲ | aleph_minus_one 3 days ago | parent | prev [-] | | > llms are generally useful to understand more specialized / obscure topics A very simple kind of query that in my experience causes problems for many current LLMs is: "Write {something obscure} in the Wolfram programming language." | | |
| ▲ | AlotOfReading 3 days ago | parent [-] | | One tendency I've noticed is that LLMs struggle with creativity. If you give them a language with extremely powerful and expressive features, they'll often fail to use them to simplify other problems the way a good programmer does. Wolfram is a language essentially designed around that. I wasn't able to replicate this in my own testing, though. Do you know if it also fails for "Mathematica" code? There's much more text online about that. | | |
| ▲ | aleph_minus_one 3 days ago | parent [-] | | > Do you know if it also fails for "mathematica" code? My experience is similar when using "Mathematica" instead of "Wolfram" in AI tasks. |
|
|
| |
| ▲ | someguynamedq 3 days ago | parent | prev | next [-] | | Several years ago is ancient history, given the rate of advancement LLMs have had recently. | |
| ▲ | archagon 3 days ago | parent | prev [-] | | > Building a simple marketing website? Probably don’t waste your time - an LLM will probably be faster. This is actually where I would be most reluctant to use an LLM. Your website represents your product, and you probably don’t want to give it the scent of homogenized AI slop. People can tell. | | |
| ▲ | pennomi 2 days ago | parent [-] | | They can tell if you let it use whatever CSS it wants (Claude will nearly always make a purple or blue website with gross rainbow gradients). They can also tell if you let it write your marketing copy. If you decide on your own brand colors and wording, there’s very little left about the code that can’t be done instantly by an LLM (at least on a marketing website). | | |
| ▲ | Starman_Jones 18 hours ago | parent [-] | | I just read Claude's front-end design instructions, and it now explicitly bans purple gradients. Curious to see what new pattern it will latch on to. |
|
|
| |
| ▲ | muskstinks 3 days ago | parent | prev | next [-] | | But this learning is also valuable. Without playing around with it, you wouldn't know when to use an LLM and when not to. | | |
| ▲ | daveguy 3 days ago | parent [-] | | But the models change every 3-6 months. What's the use of learning what they can and can't do when what they can and can't do changes so frequently? | | |
| ▲ | christofosho 2 days ago | parent | next [-] | | Some subscriptions offer "unlimited tokens" for certain models. E.g., GitHub Copilot can be unlimited for GPT-4o and GPT-4.1 (and, actually, GPT-5 mini!). So I spent some time with those models to see what level of scaffolding and breaking things down (hand-holding) was required to get them to complete a task. Why would I do that? Well, I wanted to understand more deeply how differences in my prompting might impact the outcomes of the model. I also wanted to get generally better at writing prompts, and of course at controlling context and seeing how models can go off the rails. Just by being better at understanding these patterns, I feel more confident in general about when and how to use LLMs in my daily work. I think, in general, understanding not only that earlier models are weaker, but also _how_ they are weaker, is useful in its own right. It gives you an extra tool to use. I will say, the biggest weaknesses I've found are in training data. If you're keeping your libraries up-to-date, and you're using newer methods or functionality from those libraries, AI will constantly fail to pick up those new things. For example, Zod v4 came out recently, and the older models absolutely fail to understand that it uses some different syntax and methods under the hood. Jest now supports `using` syntax for its spyOn method, and models just can't figure it out. Even with system prompts and telling them directly, the existing training data is just too overpowering. | |
| ▲ | muskstinks 2 days ago | parent | prev | next [-] | | I would say they are not changing but evolving, and you evolve with them. For example, Gemini became a lot better at a lot more tasks. How do I know? Because I also keep very basic benchmarks, or let's say "things which haven't worked" are my benchmark. | |
| ▲ | joquarky 3 days ago | parent | prev [-] | | If you had to hold on for dear life during the ~2014-2017 JavaScript framework chaos, then 3-6 months is peanuts. This is an industry that requires continuous learning. |
|
| |
| ▲ | peacebeard 3 days ago | parent | prev [-] | | Honestly, I think this is the primary explanation for why there is so much disagreement about whether LLMs are useful, at least if you leave out the more motivated arguments. |
|
|
| ▲ | datsci_est_2015 3 days ago | parent | prev | next [-] |
| > In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life. Feels like a false dichotomy. Have I become faster with LLMs? Yes, maybe. Is it 10x or 1000x or 10,000x? Definitely not. I think actually in the past I would have leaned more on senior developers, books, Stack Overflow, etc., but now I can be much more independent and proactive. LLM-based tools are a wide spectrum, and to argue that the whole spectrum is worth exploring because one sliver of it has definite utility is a bit wonky. Kind of like saying $SHITCOIN is worth investing in because $BITCOIN mooned as a speculative asset: - I’m bullish on LLM chat interfaces replacing Stack Overflow and O’Reilly
- I could not be more bearish on agents automating software engineering
Feels like we’re back at the Adobe Dreamweaver release and everyone is claiming that web development jobs are dead. |
| |
| ▲ | hungryhobbit 3 days ago | parent | next [-] | | > Feels like we're back at the Adobe Dreamweaver release and everyone is claiming that web development jobs are dead I truly believe so much of the anti-AI sentiment is the same as the Luddites'. They're often used as a meme now, but they were very real people, faced with a real and present risk to their livelihoods. They acted out of fear, but not just irrational fear. AI is the same: it's unquestionably (to anyone evaluating it fairly) a huge boost to productivity ... and also, unquestionably, a threat to programmer jobs. Maybe the OP is right about waiting, but to me, whenever new tech is disrupting jobs, that seems like the best time to learn it. If you don't, it's not just FOMO as the author suggests ... it's failing to keep up on the skills that keep you employed. | | |
| ▲ | mekoka 3 days ago | parent | next [-] | | > it's failing to keep up on the skills that keep you employed. I judge "failing to keep up" by my ability to "catch up". Right now, if I search for paid courses on AI-assisted coding, I get a royal bunch for anything between $3 and about $25. These are distilled and converging observations by people who have had more time playing around with these toys than me. Most are less than 10 hours (usually 3 to 5). I also find countless free ones popping up on YouTube every week that can catch me up on a decent bouquet of current practices in an hour or two. They all also more or less need to be updated to stay relevant after a few months (e.g. I've recently deleted my numerous bookmarks on MCP). Don't get me wrong, LLM-assisted coding is disruptive, but when practice becomes obsolete after a few months, it's not really what's keeping you employed. If, after you've spent much time and effort to live near that edge, the gap that truly separates you from me in any meaningful way can be covered in a few hours, you're not really leaving me behind. | |
| ▲ | whazor 3 days ago | parent | prev | next [-] | | I have found that maximising AI coding is a skill of its own. There is a lot of context switching. There is making sure agents are running in loops. Keeping the quality high is also important, as they often take shortcuts. And finally, you need somewhat of an architectural vision to ensure agents don’t just work in a single file. This is all very tiring and difficult. You can be significantly better than other people at this skill. | | |
| ▲ | datsci_est_2015 2 days ago | parent [-] | | This is not an argument for its revolutionary utility. Balancing rocks on the beach is very tiring and difficult for some people, and you can be significantly better at it. Not really bringing anything to the immediate conversation with that insight. |
| |
| ▲ | datsci_est_2015 3 days ago | parent | prev | next [-] | | The burden of proof lies with the one making grand claims. My counterargument in the face of your lack of evidence is: “Where are all the improvements to my daily life? Where are the disrupting geniuses who go to market 100x faster than their Luddite counterparts?” To paraphrase another analogy that I enjoyed, it’s a bit like when 3D printing became a thing and hype con artists claimed that no one would buy anything anymore; you could just 3D print it. | | |
| ▲ | therealdrag0 3 days ago | parent [-] | | You don’t need 100x productivity to be disruptive. In business, a 10% gain can be quite enormous. My senior engineers are estimating 25-50% gains. That is a far cry from the 10,000x you mention, but very real and meaningful. | | |
| ▲ | tastyface 3 days ago | parent | next [-] | | The last study that came out on this showed that engineers were significantly overestimating their own productivity gains. If a stat like that is not accurately measured, it's useless. | |
| ▲ | datsci_est_2015 2 days ago | parent | prev [-] | | This is a completely different claim than the commenter made that I was responding to |
|
| |
▲ | toraway 3 days ago | parent | prev [-] | | > AI is the same: it's unquestionably (to anyone evaluating it fairly) a huge boost to productivity
And yet, the only research that tries to evaluate this in a controlled, scientific way does not actually show this. Critics then say those studies aren’t valid because of X, Y, or Z, but don’t provide anything stronger than anecdotes in rebuttal. It’s a ridiculous double standard, and it poisons any reasonable discussion to assert something is a fact and that anyone who disagrees is a hysterical Luddite, based on no actual evidence. |
| |
| ▲ | JumpCrisscross 3 days ago | parent | prev [-] | | > Have I become faster with LLMs? Yes, maybe. The question isn’t whether you’ve improved. It’s whether the path you took to your current improvement could have been shortcut with the benefit of hindsight. Given the number of dead ends we’ve traversed, the answer almost certainly is yes. |
|
|
| ▲ | spelunker 3 days ago | parent | prev | next [-] |
| Crypto and the Metaverse were solutions in search of a problem. LLMs kind of felt like that until tooling arrived that enabled doing a lot more than copying and pasting chat conversations. Sure, maybe crypto changed some lives, but an entire industry? I think ALL of software dev is undergoing a transformation, and I think we're past the point of "wait it out" IMO. Or I'm wrong, but right now I'm being paid to develop a new skill professionally. Maybe the skill ends up not being useful - ok, back to writing code the old way then. |
|
| ▲ | locknitpicker 3 days ago | parent | prev | next [-] |
| > Feels like a false equivalency. It's clearly a textbook example of survivorship bias. In the 90s the same argument was directed at this new thing called the internet, and those who bet on it being a fad ended up being forgotten by history. It's rather obvious that this AI thing is a transformative event in world history, perhaps more critical than the advent of the internet. Take a look at the traffic to established sites such as Stack Overflow to get a glimpse of the radical impact. Even on social media we have started to see the dead internet theory put into practice in real time. And coding is the lowest of low-hanging fruits. |
| |
| ▲ | ThrowawayR2 3 days ago | parent | next [-] | | The 90s also saw the dotcom boom, and the vast majority of those who placed an all-in bet on it being everything lost it all in the dotcom bust and also "ended up being forgotten by history". Some of those bets were prescient but too early; many others never made any sense. The dotcom bust was worse than the software industry crash we're experiencing now. "It's rather obvious that this AI thing is a transformative event in world history", perhaps, but it's not at all obvious how it's going to shake out or which bets are sensible. | | |
| ▲ | locknitpicker 3 days ago | parent [-] | | > In the 90s was also the dotcom boom, and the vast majority of those who placed an all-in bet on it being everything lost it all in the dotcom bust and also "ended up being forgotten by history". I think you are missing the point, and also the very site you're posting on. Look at the top 50 list of most valuable companies in the world. Over half of the total market value reported today is attributed to companies which were either dotcom startups or whose growth was driven by the dotcom growth period. Dismissing the advent of the internet as anything short of revolutionary is disingenuous, no matter how many zombo.com companies failed. LLMs have the exact same transformative impact on humanity. | | |
| ▲ | danaris 3 days ago | parent [-] | | > LLMs have the exact same transformative impact on humanity. But this is begging the question. Yes, we can see that the internet was radically transformative. But you are arguing that this somehow proves that LLMs are too, when there's wildly insufficient evidence—either on where LLMs are going in themselves, or in the comparison—to credibly make that claim. |
|
| |
| ▲ | disgruntledphd2 3 days ago | parent | prev | next [-] | | > It's rather obvious that this AI thing is a transformative event in world history, perhaps more critical than the advent of the internet. Take a look at traffic to established sites such as Stack Overflow to get a glimpse of the radical impact. Even in social media we started to see the dead internet theory put to practice in real time. It's worth noting that SO was declining well before ChatGPT launched. It seems more likely that the decline of SO was more driven by Google ranking changes to prioritise websites that served Google ads. Certainly I remember having to go down a few results to get SO results for a while, even when the top results were just copypasta from SO. | | |
| ▲ | locknitpicker 3 days ago | parent [-] | | > It's worth noting that SO was declining well before ChatGPT launched. It seems more likely that the decline of SO was more driven by Google ranking changes to prioritise websites that served Google ads. I don't think that's it. SO was the go-to page for troubleshooting, and its traffic did not exactly originate from web search. Also, the LLM-correlated drop in traffic is reported by search engines as well. Stack Overflow just so happens to be a specialized service with a very specialized audience whose demand is now dominated by LLM chatbots. | |
| |
| ▲ | kjkjadksj 3 days ago | parent | prev | next [-] | | The internet was something new. By definition, LLM coding isn't doing anything you couldn't have done already. Once the agents aren't writing a human-syntax-based language but are spitting out opaque functions in binary machine code, then they are doing something new and compelling IMO, because there are real performance gains in that. | | |
| ▲ | bitwize 3 days ago | parent [-] | | No, this is wrong. AI has drastically shortened the time and effort between idea and implementation. The upshot is that not only do you get things done faster, but things you wouldn't otherwise countenance doing are now within reach. | | |
| ▲ | kjkjadksj 3 days ago | parent [-] | | So where is all the new tech, years into AI? Turns out that wasn't the limiting factor. The limiting factor is still what it was 10,000 years ago in business: accumulating capital to start and finding a fit in the market to last. | | |
| ▲ | ctoth 2 days ago | parent [-] | | Wait. Which one is it? Is Show HN overwhelmed with too much new stuff because of AI? Or is there no new stuff? I am so confused! | | |
| ▲ | ThrowawayR2 2 days ago | parent [-] | | As someone who does monitor /new, Show HN is overwhelmed with copycat low effort AI rubbish, not innovative AI generated tech that is interesting or useful. How many AI powered coloring book generators and AI powered anime-ification sites does the world really need? |
|
|
|
| |
| ▲ | irishcoffee 3 days ago | parent | prev | next [-] | | > In the 90s the same argument was directed at this new thing called the internet, and those who placed a bet on it being a fad ended up being forgotten by history. Allow me to introduce you to the dot-com boom, where everyone who bet on the internet went broke. | | |
| ▲ | aworks 3 days ago | parent [-] | | The Google founders did ok. | | |
| ▲ | irishcoffee 2 days ago | parent [-] | | Google raised only a single large round of institutional funding before going public, with the investors contributing $12.5 million. I can be much more specific about “everyone” if you’d like. |
|
| |
| ▲ | lapcat 3 days ago | parent | prev [-] | | > In the 90s the same argument was directed at this new thing called the internet, and those who placed a bet on it being a fad ended up being forgotten by history. Almost all people are "forgotten" by history. In any case, people who were not even born yet in the 1990s are using the internet today, very successfully, so clearly you can wait. |
|
|
| ▲ | ErroneousBosh 3 days ago | parent | prev | next [-] |
| > In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life. I can't really agree. I've never seen anything from an LLM that I would consider even helpful, never mind transformative. How are you supposed to use them? |
| |
| ▲ | kaffekaka 3 days ago | parent [-] | | Is this seriously so? Have you never seen anything helpful from an LLM? That's such a black-and-white statement that I get confused. I am conservative regarding AI-driven coding, but I still see tremendous value. It makes me want to ask you: do you ever see helpful things from your colleagues at all? | | |
| ▲ | ErroneousBosh 3 days ago | parent [-] | | > Is this seriously so? Have you never seen anything helpful from an LLM? No, not at all. I may be using it wrong. I put in "write me a library that decodes network packets in <format I'm working with>" and it had no idea where to start. What part of it is it supposed to do? I don't want to do any more typing than I have to. | | |
| ▲ | colonCapitalDee 3 days ago | parent | next [-] | | You're right, you are using it wrong. An LLM can read code faster than you can, write code faster than you can, and knows more things than you do. By "you" I mean you, me, and anyone with a biological brain. Where LLMs are behind humans is depth of insight. Doing anything non-trivial requires insight. The key to effectively using LLMs is to provide the insight yourself, then let the LLM do the grunt work. Kind of like paint by numbers. In your case, I would recommend some combination of defining the API of the library you want yourself manually, thinking through how you would implement it and writing down the broad strokes of the process for the LLM, and collecting reference materials like a format spec, any docs, the code that's creating these packets, and so on. | | |
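As a sketch of the "provide the insight yourself" workflow applied to the packet-decoder request upthread: the human defines the API surface and the broad strokes, and leaves the grunt work for the LLM to fill in. All names below are hypothetical, invented purely for illustration.

```typescript
// Hypothetical API surface a human might define up front, with the
// detailed field parsing delegated to the LLM.
interface DecodedPacket {
  version: number;
  payload: Uint8Array;
}

function decodePacket(raw: Uint8Array): DecodedPacket {
  // Broad strokes sketched by the human; the model fills in the rest.
  if (raw.length < 2) {
    throw new Error("packet too short");
  }
  const version = raw[0];
  // TODO (LLM): parse the remaining fields per the format spec.
  return { version, payload: raw.slice(1) };
}

console.log(decodePacket(new Uint8Array([1, 0xde, 0xad])).version); // 1
```

Handing the model a skeleton like this, plus the format spec and sample packets as reference material, is the "paint by numbers" setup the comment above describes: the insight (the API shape, the invariants) is yours; the typing is the model's.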
| ▲ | ErroneousBosh 2 days ago | parent [-] | | > An LLM can read code faster than you can, write code faster than you can, and knows more things than you do. I don't agree. It can't write code at all, it can only copy things it's already seen. But, if that is true, why can't it solve my problem? > The key to effectively using LLMs is to provide the insight yourself, then let the LLM do the grunt work Okay, so how do I do that? Remember, I want to do ZERO TYPING. I do not want to type a single character that is not code. I already know what I want the code to do, I just want it typed in. I just don't think AI can ever solve a problem I have. |
| |
| ▲ | linhns 3 days ago | parent | prev [-] | | Well, if you ask it to write a whole library at the start, it likely will not do that well. Start small and spoon-feed it some examples. | |
| ▲ | xigoi 3 days ago | parent | next [-] | | If you have to put in this much effort, why not just write it yourself? | | |
| ▲ | HDThoreaun 3 days ago | parent | next [-] | | When you write a library, the first step is always designing it. LLMs don't get rid of that step; they get rid of the next step, where you implement your design. | | |
| ▲ | xigoi 3 days ago | parent [-] | | They also added an additional step where you have to explain your design using vague natural language. | | |
| ▲ | irishcule 2 days ago | parent [-] | | Is this really "additional"? Do you not do design docs/ADRs/RFCs etc. and talk about them with your team? Do you take any notes or write out your design/plan in some way, even for yourself? Why can't you just pass any of those to an AI? | | |
| ▲ | whateveracct 2 days ago | parent | next [-] | | If I'm writing a library to work with a binary format, there is very little English in my head required, let alone written English. That is a heavily symbolic exercise. I will "read" the spec, but I will not pronounce it in literal audible English in my head (I'm a better reader than that.) I write Haskell tho so maybe I'm biased. I do not have an inner narrative when programming ever. | |
| ▲ | xigoi 2 days ago | parent | prev [-] | | I’m not part of any team, I work on my projects alone. I rarely write long-form design documents; usually I either just start coding or write very vague notes that only make sense when combined with what’s in my head. |
|
|
| |
| ▲ | whateveracct 2 days ago | parent | prev [-] | | some people suck ass at programming so they'd rather use English |
| |
| ▲ | ErroneousBosh 3 days ago | parent | prev [-] | | So I have to do a lot of typing? Because the typing is the bit I don't want to do. Actually writing code is the fun and easy bit. |
|
|
|
|
|
| ▲ | newsoftheday 3 days ago | parent | prev | next [-] |
| > In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation It may have reduced the time to an implementation, but based on my experience I sincerely doubt that the adjective "working" applies. |
|
| ▲ | lapcat 3 days ago | parent | prev [-] |
| > Transformative for the better? Time will tell I suppose That's the point of the blog post. If you can't even say right now whether it's for the better, then there's no reason to rush in. |
| |
| ▲ | colejohnson66 3 days ago | parent | next [-] | | I read OP as saying it is transformative, at least for them. Whether it's transformative for society is left to be decided. | |
| ▲ | nehal3m 3 days ago | parent | prev [-] | | And conversely, if it is, then there is no point in getting in early, since the whole point is to externalize knowledge and experience. |
|