| ▲ | Uehreka 12 hours ago |
> I fear LLMs have frozen programming language advancement and adoption for anything past 2021.

Why would that be the case? Many models have knowledge cutoffs in this calendar year. Furthermore, I've found that LLMs are generally pretty good at picking up new (or just obscure) languages as long as you have a few examples. As wide and varied as programming languages are, syntactically and ideologically they can only be so different.
|
| ▲ | miki123211 10 hours ago | parent | next [-] |
There's a flywheel where programmers choose languages that LLMs already understand, but LLMs can only learn languages that programmers write a sufficient amount of code in. Because LLMs make it that much faster to develop software, any potential advantage you may get from adopting a very niche language is overshadowed by the fact that you can't use it with an LLM. This makes it that much harder for your new language to gain traction. If your new language doesn't gain enough traction, it'll never end up in LLM datasets, so programmers are never going to pick it up.
| |
| ▲ | crystal_revenge 9 hours ago | parent | next [-] |

> Because LLMs make it that much faster to develop software

I feel as though "facts" such as this are presented to me all the time on HN, but in my everyday job I encounter devs creating piles of slop that even the most die-hard AI enthusiasts in my office can't stand and have started to push back against.

I know, I know, "they just don't know how to use LLMs the right way!!!", but all of the better engineers I know, the ones capable of quickly assessing the output of an LLM, tend to use LLMs much more sparingly in their code. Meanwhile, the ones that never really understood software that well in the first place are the ones building agent-based Rube Goldberg machines that ultimately slow everyone down.

If we continue living in this AI hallucination for 5 more years, I think the only people capable of producing anything of use or value will be devs who continued to devote some of their free time to coding in languages like Gleam, and who continued to maintain and sharpen their ability to understand and reason about code.
| ▲ | Verdex 6 hours ago | parent [-] |

This last week:

* One developer tried to refactor a bunch of GraphQL with an LLM and ended up checking in a bunch of completely broken code. Thankfully there were API tests.

* One developer has an LLM making his PRs. He slurped up my unfinished branch, PRed it, and merged (!) it. One can only guess that the approver was also using an LLM. When I asked him why he did it, he was completely baffled and assured me he would never. Source control tells a different story.

* And I forgot to turn off LLM autocomplete after setting up my new machine. The LLM wouldn't stop hallucinating non-existent constructors for non-existent classes. Bog-standard IntelliSense did in seconds what I needed once LLM autocomplete was turned off.

LLMs sometimes save me some time. But overall I'm sitting on a pretty big pile of time wasted by them that the savings have not yet offset.
| |
| ▲ | treyd 4 hours ago | parent | prev | next [-] |

I don't think this is actually true. LLMs have an impressive amount of ability to do knowledge-transfer between domains, it only makes sense that that would also apply to programming languages, since the basic underlying concepts (functions, data structures, etc.) exist nearly everywhere.

If this does turn out to be a problem, it is not hard to apply to new languages the same RLHF infrastructure that's used to get LLMs effective at writing syntactically correct code that accomplishes sets of goals in existing ones.
| ▲ | troupo 2 hours ago | parent [-] |

> LLMs have an impressive amount of ability to do knowledge-transfer between domains, it only makes sense that that would also apply to programming languages, since the basic underlying concepts (functions, data structures, etc.) exist nearly everywhere.

That would make sense if LLMs understood the domains and the concepts. They don't. They need a lot of training data to "map" the "knowledge transfer".

Personal anecdote: Claude stopped writing Java-like Elixir only some time around summer this year (Elixir is 13 years old), and it is still incapable of writing "modern HEEx", which changed some of the templating syntax in Phoenix almost two years ago.
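(For context on the "modern HEEx" remark: the change being referred to is most likely the newer {...} interpolation that recent Phoenix LiveView releases accept inside tag bodies, where older HEEx templates used EEx-style <%= %> tags. Below is a minimal sketch under that assumption; the module name and the @user / @class assigns are made up for illustration.)

```elixir
# Sketch of the HEEx syntax shift, assuming it is the {...} body
# interpolation added around Phoenix LiveView 1.0. MyAppWeb.UserBadge
# and the assigns are hypothetical.
defmodule MyAppWeb.UserBadge do
  use Phoenix.Component

  # Older HEEx: attributes already used {...}, but interpolation inside
  # the tag body went through EEx-style <%= ... %> tags.
  def badge_old(assigns) do
    ~H"""
    <span class={@class}><%= @user.name %></span>
    """
  end

  # "Modern HEEx": {...} interpolation also works inside tag bodies,
  # which models trained mostly on older templates tend not to emit.
  def badge_new(assigns) do
    ~H"""
    <span class={@class}>{@user.name}</span>
    """
  end
end
```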
| |
| ▲ | croes 6 hours ago | parent | prev [-] |

I bet LLMs create their own version of the Jevons paradox: more trial and error because each trial is cheap, so in the end less typing but hardly faster end results.
|
|
| ▲ | schrodinger 11 hours ago | parent | prev [-] |
The motivation isn't there to create new languages for humans when you're programming at a higher level of abstraction now (AI prompting). It'd be like inventing a new assembly language when everyone is writing code in higher-level languages that compile to assembly. I hope it's not true, but I believe that's what OP meant, and I think the concern is valid!
| |
| ▲ | abound 10 hours ago | parent | next [-] |

I would argue it's more important than ever to make new languages with new ideas as we move towards new programming paradigms. I think the existence of modern LLMs encourages designing a language with all of the following attributes:

- Simple semantics (e.g. easy to understand for developers + LLMs, code is "obviously" correct)

- Very strongly typed, so you can model even very complex domains in a way the compiler can verify

- Really good error messages, to make agent loops more productive

- [Maybe] Easily integrates with existing languages, or at least makes it easy to port from existing languages

We may get to a point where humans don't need to look at the code at all, but we aren't there yet, so making the code easy to vet is important. Plus, there are also a few bajillion lines of legacy code that we need to deal with; wouldn't it be cool if you could port it (or at least extend it) into some standardized, performant, LLM-friendly language for future development?
| ▲ | kevindamm 10 hours ago | parent | next [-] |

I think that LLMs will be complemented best by a declarative language, as inserting new conditions/effects can be done without modifying much (if any!) of the existing code. Especially if the declarative language is a logic and/or constraint-based language.

We're still in early days with LLMs! I don't think we're anywhere near the global optimum yet.
| ▲ | aaronblohowiak 9 hours ago | parent | prev [-] |

This is why I use Rust for everything practicable now. LLMs make the tedious bits go away and I can just enjoy the fun bits.
| |
| ▲ | pxc 11 hours ago | parent | prev | next [-] |

> It'd be like inventing a new assembly language when everyone is writing code in higher-level languages that compile to assembly.

Isn't that what WASM is? Or more or less what is going on when people devise a new intermediate representation for a new virtual machine?

Creating new assembly languages is a useful thing that people continue to do!
| ▲ | merlincorey 10 hours ago | parent | prev | next [-] |

I believe prompting an AI is more like delegation than abstraction, especially considering the non-deterministic nature of the results.
| ▲ | sarchertech 10 hours ago | parent [-] |

It goes further than non-determinism. LLM output is chaotic: two nearly identical prompts with a single minor difference can result in two radically different outputs.
| |
| ▲ | rapind 11 hours ago | parent | prev [-] |

We may end up using AI to create simplified, bespoke subset languages that fit our preferences. Like a DSL of sorts, but with better performance characteristics than a traditional DSL and a small enough surface area.
|