| |
| ▲ | baxtr 5 hours ago | parent | next [-] | | Isn’t most standard software these days a permutation of things already done before? | | |
| ▲ | Gabriel439 4 hours ago | parent | next [-] | | Author here: it's not even clear that agents can reliably permute their training data (I'm not saying that it's impossible or never happens but that it's not something we can take for granted as a reliable feature of agentic coding). As I mentioned in one of the footnotes in the post: > People often tell me "you would get better results if you generated code in a more mainstream language rather than Haskell" to which I reply: if the agent has difficulty generating Haskell code then that suggests agents aren't capable of reliably generalizing beyond their training data. If an agent can't consistently apply concepts learned in one language to generate code in another language, then that calls into question how good they are at reliably permuting the training dataset in the way you just suggested. | | |
| ▲ | rytis 4 hours ago | parent | next [-] | | > if the agent has difficulty generating Haskell code then that suggests agents aren't capable of reliably generalizing beyond their training data. Doesn't that apply to flesh-and-bone developers? Ask someone who's only worked in Python to implement their current project in Haskell and I'm not so sure you'll get very satisfying results. | | |
| ▲ | Frieren 3 hours ago | parent | next [-] | | > doesn't that apply to flesh-and-bone developers? No, it does not. If you have a developer that knows C++, Java, Haskell, etc. and you ask that developer to re-implement something from one language to another the result will be good. That is because a developer knows how to generalize from one language (e.g. C++) and then write something concrete in the other (e.g. Haskell). | |
| ▲ | ozlikethewizard 3 hours ago | parent | prev | next [-] | | The hard bit of programming has never been knowing the symbols to tell the computer what to do. It is more difficult to use a completely unknown language, sure, but the paradigms and problem-solving approaches are identical, and that's the actual work, not writing the correct words. | | |
| ▲ | lukevp 2 hours ago | parent [-] | | Saying that the paradigms of Python and Haskell are the same makes it sound like you don't know one or both of those languages. They are not just syntactically different. The paradigms literally are different: Python is a high-level, duck-typed, OO scripting language and Haskell is a non-OO, strongly typed functional programming language. They're extremely far apart. |
| |
| ▲ | cassianoleal 2 hours ago | parent | prev | next [-] | | Your argument fails where it equates someone who only codes in one language to an LLM that is usually trained on many languages. In my experience, a software engineer knows how to program and has experience in multiple languages. Someone with that level of experience tends to pick up new languages very quickly, because they can apply the same abstract concepts and algorithms. If an LLM with a similar (or broader) set of languages in its training data cannot generalise to an unknown language, then it stands to reason that it is indeed only capable of reproducing what's already in its training data. | |
| ▲ | debugnik 3 hours ago | parent | prev [-] | | But the model has seen pretty much all the public Haskell code around, and possibly been trained to write it in different settings. |
| |
| ▲ | mike_hearn 2 hours ago | parent | prev | next [-] | | Your argument is far too dependent on observations about the model's ability with Haskell, which is irrelevant. The concepts in Haskell are totally different from those in almost any other language - you can't easily "generalize" from an imperative, strict language like basically everything people really use to a lazy, pure FP language that uses monads for IO like Haskell. The underlying concepts themselves are different, and Haskell has never been mainstream enough for models to get good at it. Pick a good model, let it choose its own tools, and then re-evaluate. | |
| ▲ | graemep 3 hours ago | parent | prev | next [-] | | I am very sceptical that mainstream languages will do better. I have seen plenty of bad Python from LLMs, even with simple CRUD apps and when provided with detailed instructions. | |
| ▲ | lukan 4 hours ago | parent | prev | next [-] | | "that suggests agents aren't capable of reliably generalizing beyond their training data." Yes? If they could, we would have strong general intelligence by now, and few people are claiming that. | |
| ▲ | ChrisGreenHeur 4 hours ago | parent | prev [-] | | It can also mean that the other programming language is above the cognitive abilities of the LLM. |
| |
| ▲ | roarcher 4 hours ago | parent | prev | next [-] | | I'd say that's pretty much the definition of standard, yeah. And it's why you can't make a profit selling a simple ToDo app. If you expect people to pay for what you build, you have to build something that doesn't have a thousand free clones on the app store. | | |
| ▲ | baxtr 4 hours ago | parent [-] | | I politely disagree. I think you’re conflating software and product. A product can be a recombination of standard software components and yet be something completely new. |
| |
| ▲ | loveparade 4 hours ago | parent | prev | next [-] | | But what's the point of re-building "standard software" if it is so standard that it already exists 100 times in the training data with slight variations? | | |
| ▲ | baxtr an hour ago | parent | next [-] | | See here: https://news.ycombinator.com/item?id=47435808 | |
| ▲ | ChrisGreenHeur 4 hours ago | parent | prev | next [-] | | The point is the small variations | |
| ▲ | lynx97 4 hours ago | parent | prev [-] | | I read this attitude very often on HN: "If someone else has already built it before, your effort is a waste of time." To me, it has a ring of "Someone else already makes money from it, go somewhere else where you don't have competition." Well, I get the drift... But... Not everyone is into getting rich. You know, some of us just have fun building things and learning while doing so. It really doesn't matter if the path has been walked before. Not everything has to be pure novelty to count. | | |
| ▲ | loveparade 3 hours ago | parent [-] | | If you do it for fun then why do you care whether an LLM can do it well or not, which was the original argument? Shouldn't matter to you in that case. |
|
| |
| ▲ | layer8 4 hours ago | parent | prev | next [-] | | That isn't saying much. Every piece of software is a permutation of zeros and ones. The novelty or ingenuity, or just quality and fitness for purpose, can lie in the permutation you come up with. And an LLM is limited by its training in the permutations it is likely to come up with, unless you give it heaps of specific guidance on what to do. | |
| ▲ | mfabbri77 5 hours ago | parent | prev | next [-] | | In my experience, the further you move away from the user and toward the hardware and fundamental theoretical algorithms, the less true this becomes. This is very true for an email client, but very untrue for an innovative 3D rendering engine technology (just an example). | | |
| ▲ | layer8 4 hours ago | parent | next [-] | | An email client is highly nontrivial, due to the complexities of the underlying standards, and how the real implementations you have to be compatible with don’t strictly follow them. Making an email client that doesn’t suck and is fully interoperable is quite an ambitious endeavor. | | |
| ▲ | mfabbri77 4 hours ago | parent | next [-] | | The point was to answer the question: "Can every piece of software be viewed as a permutation of software that has already been developed?"
In my opinion, an email client is a more favorable example than a 3D engine. In fields where it is necessary to differentiate, improve, or innovate at the algorithmic level, where research and development play a fundamental role, it is not simply a matter of permuting existing software or leveraging existing components by assembling them more effectively. | | |
| ▲ | Archer6621 3 hours ago | parent [-] | | Actually, in the specific case of a 3D program, it's the current generation of LLMs' complete lack of spatial reasoning that prevents them from "understanding" what you want when you ask for e.g. "make a camera that flies in the direction you are looking at". It necessarily has to derive this from the examples of flying cameras it knows about, without understanding the exact mathematical underpinnings that let you rotate a 3D perspective camera and move along its local coordinate system, let alone knowing how to verify whether its implementation functions as desired, often resulting in dysfunctional garbage. Even with a human in the loop providing feedback and grounding it (I tried), it can't figure this out, and that's just a tiny example. Math is precise, and an LLM's fuzzy approach is therefore a bad fit for it. It would need an obscene number of examples to reliably "parrot" mathematical constructs. | | |
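For reference, the math being described is small. Under one common convention (chosen here for illustration: yaw around the vertical axis, pitch tilting up/down, identity orientation looking down +Z), "fly in the direction you are looking" is just moving along the unit forward vector:

```python
import math

def fly_forward(pos, yaw, pitch, speed):
    """Move a camera `speed` units along its view direction.

    Convention (an assumption of this sketch): yaw rotates around the
    vertical Y axis, pitch tilts up/down, and yaw=0, pitch=0 looks
    down the +Z axis.
    """
    fx = math.cos(pitch) * math.sin(yaw)   # forward vector: +Z rotated
    fy = math.sin(pitch)                   # by yaw, then tilted by pitch
    fz = math.cos(pitch) * math.cos(yaw)
    return (pos[0] + speed * fx,
            pos[1] + speed * fy,
            pos[2] + speed * fz)

# Looking straight ahead, two units forward moves along +Z:
# fly_forward((0, 0, 0), yaw=0.0, pitch=0.0, speed=2.0) -> (0.0, 0.0, 2.0)
```

The function and axis conventions are hypothetical; real engines differ in handedness and rotation order, which is exactly the kind of detail the commenter says the models fumble.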
| ▲ | debugnik 3 hours ago | parent [-] | | > "make a camera that flies in the direction you are looking at" That's not the task of a renderer though, but its client, so you're talking past your parent comment. And given that I've seen peers one-shot tiny Unity prototypes with agents, I don't really believe they're that bad at taking an educated guess at such a simple prompt, as much as I wish it were true. | | |
| ▲ | Archer6621 3 hours ago | parent [-] | | You're right. My point was more that LLMs are bad at (3D) math and spatial reasoning, which applies to renderers. Since Unity neatly abstracts this complexity away behind an API that corresponds well to spoken language, and is quite popular, that same example and similar prototypes should have a higher success rate. I guess the less detailed a spec has to be thanks to the tooling, the more likely it is that the LLM will come up with something usable. But it's unclear to me whether that is because more examples exist due to higher user adoption, or because fewer decisions/predictions have to be made by the LLM. Maybe it is a bit of both. |
|
|
| |
| ▲ | umanwizard an hour ago | parent | prev [-] | | What complexities specifically? Implementing SMTP (from the client's side) in a way that other SMTP servers can understand is not very hard. I have done it. Does it follow every nuance of the standard? I don't know, but it works for me. I haven't implemented IMAP, but I don't see why it should be much harder. Is there a particular example you have in mind? |
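For a sense of how small the client side is, here is a sketch of the core command sequence from RFC 5321. The helper name and hostname are made up for illustration, and a real client would also read and check the server's reply codes and handle dot-stuffing in the body:

```python
def smtp_session(sender, recipient, body):
    """Build the minimal SMTP command sequence a client sends to
    submit one message (per RFC 5321). Reply handling omitted."""
    return [
        "HELO client.example",      # identify ourselves to the server
        f"MAIL FROM:<{sender}>",    # envelope sender
        f"RCPT TO:<{recipient}>",   # envelope recipient
        "DATA",                     # server replies 354, then we send:
        body,
        ".",                        # a lone dot terminates the message
        "QUIT",
    ]
```

The hard parts the parent comment alludes to (TLS, authentication, quirky peer servers) live around this skeleton, not inside it.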
| |
| ▲ | fmbb 4 hours ago | parent | prev [-] | | I would be surprised if there are more working email clients out there than working 3D engines. The gaming market is huge, most people do not pay to use email, hobbyists love creating game engines. | | |
| ▲ | umanwizard 4 hours ago | parent [-] | | Idk, a working basic email client is just not that hard to write though. SMTP and IMAP are simple protocols and the required graphical interface is a very straightforward combination of standard widgets. |
|
| |
| ▲ | 4 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | wongarsu 5 hours ago | parent | prev [-] | | Most software written today (or 10 years ago, or 50 years ago) is not particularly unique. And even in software that is unusual, you usually find a lot of run-of-the-mill code for the more mundane aspects. | | |
| ▲ | smackeyacky 4 hours ago | parent | next [-] | | I don't think this is true. I've been doing this since the 1980s, and while you might think code is fairly generic, most people aren't shipping apps; they're working on quiet little departmental systems, or trying to patch ancient banking systems. Getting a greenfield gig is pretty rare in my experience. So for me the code is mundane, but it's always unique, and rarely do you come across the same problems at different organisations. If you ever got a spec good enough to be the code, I'm sure Claude or whatever could absolutely ace it, but the spec is never good enough. You never get the context of where your code will run, who will deploy it, or what the rollback plan is if it fails. The code isn't the problem and never was. The problem is the environment your code is going into. The proof is bit rot: your code might have been right 5 years ago but isn't any more, because the world shifted around it. I am using Claude pretty heavily, but there are some problems it is awful at. E.g. I had a crusty old classic ASP website to resuscitate this week and it would not start. Claude suggested all the things I half remembered from back in the day, but the real reason was that Microsoft disabled VBScript in Windows 11 24H2, and that wasn't even on its radar. I have to remind myself that it's a fancy Xerox machine, because it does a damn good job of pretending otherwise. | |
| ▲ | nostrademons 4 hours ago | parent | prev [-] | | Most of the economically valuable software written is pretty unique, or at least is one of few competitors in a new and growing niche. This is because software that is not particularly unique is by definition a commodity, with few differentiators. Commodity software gets its margins competed away, because if you try to price high, everybody just uses a competitor. So goes the AI paradox: it's really effective at writing lots and lots of software that is low value and probably never needed to get written anyway. But at least right now (this is changing rapidly), executives are very willing to hire lots of coders to write software that is low value and probably doesn't need to be written, and VCs are willing to fund lots of startups to automate the writing of lots of software that is low value and probably doesn't need to be written. | | |
| ▲ | philipp-gayret 4 hours ago | parent | next [-] | | Could you give some examples? I can only imagine completely proprietary technology like trading or developing medicine. I have worked in software for many years and was always paid well for it. None of it was particularly unique in any way. Some of it better than others, but if you could show that there exists software people pay well for that AI cannot make I would be really impressed. With my limited view as software engineer it seems to me that the data in the product / its users is what makes it valuable. For example Google Maps, Twitter, AirBnB or HN. | | |
| ▲ | Toutouxc 4 hours ago | parent | next [-] | | All it takes is a sufficiently big pile of custom features interacting. I work on a legal tech product that automates documents. Coincidentally, I'm just wrapping up a rewrite of the "engine" that evaluates how the documents will come out. The rewrite took many months, the code uses graph algorithms and contains a huge amount of both domain knowledge and specific product knowledge. Claude Code is having the hardest time making sense of it and not breaking everything every step of the way. It always wants to simplify, handwave, "if we just" and "let's just skip if null", it has zero respect for the amount of knowledge and nuance in the product. (Yes, I do have extensive documentation and my prompts are detailed and rarely shorter than 3 paragraphs.) | |
| ▲ | krethh 4 hours ago | parent | prev | next [-] | | You know how whenever you shuffle a deck of cards you almost certainly create an order that has never existed before in the universe? Most software does something similar. Individual components are pretty simple and well understood, but as you scale your product beyond the simple use cases ("TODO apps"), the interactions between these components create novel challenges. This applies to both functional and non-functional aspects. So if "cannot make with AI" means "the algorithms involved are so novel that AI literally couldn't write one line of them", then no - there isn't a lot of commercial software like that. But that doesn't mean most software systems aren't novel. | |
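The deck analogy is easy to check: a 52-card deck has 52! orderings, roughly 8 × 10^67, far more than any plausible count of shuffles ever performed:

```python
import math

# Number of distinct orderings of a standard 52-card deck.
orderings = math.factorial(52)

# About 8.07e67 -- vastly more than any plausible number of shuffles
# in human history, so a fair shuffle is almost certainly an order
# no deck has ever been in before.
print(f"{orderings:.2e}")
```

The same combinatorial explosion is the commenter's point about systems: simple, well-understood components still compose into effectively novel wholes.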
| ▲ | nostrademons 4 hours ago | parent | prev [-] | | Were you around when any of Google Maps, Twitter, AirBnB, or HN were first released? Aside from AirBnB (whose primary innovation was the business model, and hitting the market right during the global financial crisis when lots of families needed extra cash), they were each architecturally quite different from software that had come before. Before Google Maps nobody had ever pushed a pure-JavaScript AJAX app quite so far; it came out just as the term AJAX was coined, when user expectations were that any major update to the page required a full page refresh. Indeed, that's exactly what competitor MapQuest did: you had to click the buttons on the compass rose to move the map, it moved one step at a time, and it fully reloaded the page with each move. Google Maps's approach, where you could just drag the map and it loaded the new tiles in the background offscreen, then positioned and cropped everything with JavaScript, was revolutionary. Then add that it gained full satellite imagery soon after launch, which people didn't know existed in a consumer app. Twitter's big innovation was the integration of SMS and a webapp. It was the first microblog, where the idea was that you could post to your publicly available timeline just by sending an SMS message. This was in the days before Twilio, when there was no easy API for sending these; you had to interface with each carrier directly. It also faced a lot of challenges around the massive fan-out of messages; indeed, the joke was that Twitter was down more than it was up, because they were always hitting scaling limits. HN has (had?) an idiosyncratic architecture where it stores everything in RAM and then checkpoints it out to disk for persistence. No database, no distribution, everything was in one process. It was also written in a custom dialect of Lisp (Arc) that was very macro-heavy. 
The advantage of this was that it could easily crank out and experiment with new features and new views on the data. The other interesting thing about it was its application of ML to content moderation, and particularly its willingness to kill threads and shadowban users based on purely algorithmic processes. |
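As an aside, the tile scheme those draggable maps popularized is simple to state: project the world with Web Mercator and cut it into a 2^z × 2^z grid at each zoom level. Here is a sketch of the standard lon/lat-to-tile conversion (the common "slippy map" convention, not necessarily Google's exact internal scheme):

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Convert a WGS84 coordinate to slippy-map tile indices
    at the given zoom level (Web Mercator projection)."""
    n = 2 ** zoom                       # tiles per side at this zoom
    x = int((lon + 180.0) / 360.0 * n)  # longitude maps linearly
    lat_rad = math.radians(lat)
    # Mercator: asinh(tan(lat)) maps latitude onto the projected axis.
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# At zoom 0 the whole world is a single tile:
# lonlat_to_tile(-0.1276, 51.5, 0) -> (0, 0)
```

Loading only the handful of tiles adjacent to the viewport is what made background fetching and drag-to-pan cheap enough to feel instant.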
| |
| ▲ | pjmlp 4 hours ago | parent | prev [-] | | Agencies have switched to SaaS products and integrations via serverless or low-code tooling, exactly because there is already too much of the same. |
|
|
|