hippo22 14 hours ago
I agree it will hold back new technologies, but, at the same time, I'm not sure what the value add of new technologies will be going forward. Often, as is the case with git vs. jj, the value add of a new technology is mostly ergonomic. As AI becomes more ingrained in the development flow, engineers won't engage with the underlying tech directly, and so ergonomic benefits will be diminished. New technologies that emerge will need to provide benefits to AI agents, not to engineers. Should such a technology emerge, agent developers will likely adopt it. For this reason, programming languages, at least how we understand them today, have reached a terminal state. I could easily make a new language now, especially with the help of Claude Code et al, but there would never be any reason for any other engineer to use it.
desmondwillow 6 hours ago
I'm quite surprised to hear that "programming languages have reached a terminal state": there are (imo) at least four in-progress movements in the industry right now:

1. Memory safety

2. Better metaprogramming capabilities

3. Algebraic effects

4. Solver/prover awareness

Even if LLMs become capable of writing all code, I think there's a good chance that we'd want those LLMs writing code in a language with memory safety, and one amenable to some sort of verification.
surajrmal 12 hours ago
Even if you're not authoring changes as much, change management is likely to remain a very useful activity for a long while. Also note that not everyone is using AI today, and many who do use it only as glorified autocomplete. It will take many more years for its adoption to put us in the situation you're describing, so why halt progress in the meantime? My personal productivity increased greatly by switching to jj, perhaps more than by adding Gemini CLI to my workflow. I can more confidently work on several changes in parallel while waiting on things like code review. This was possible before, but rebasing and dealing with merge conflicts tended to limit me to a handful of commits. Now I can have 20+ outstanding commits (largely with no interdependencies) without feeling like I'm paying much management overhead, and I can get them reviewed in parallel more easily.
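For anyone who hasn't tried jj (Jujutsu), the parallel-change workflow described above looks roughly like this. This is a hedged sketch, not a transcript: the change descriptions are made up, and it assumes jj is installed and initialized in the repo.

```sh
# Start a change on top of main, without creating or checking
# out a branch and without stashing anything first.
jj new main -m "refactor: extract config loader"
# ... edit files for the first change ...

# Start a second, independent change, also on top of main.
jj new main -m "feat: add retry logic to client"
# ... edit files for the second change ...

# Both changes now sit side by side on top of main. jj shows the
# whole stack, and descendants are rebased automatically whenever
# an ancestor commit is rewritten.
jj log
```

The key difference from git is that there is no single "current branch" to juggle: each outstanding change is its own revision, which is what makes keeping 20+ of them in flight feel cheap.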
wenc 8 hours ago
> For this reason, programming languages, at least how we understand them today, have reached a terminal state. I could easily make a new language now, especially with the help of Claude Code et al, but there would never be any reason for any other engineer to use it.

This is an interesting opinion. I feel we are nowhere near the terminal state for programming languages. Just as we didn't stop inventing math after arithmetic, we will always need to invent higher abstractions to help us reason beyond what is concrete (for example, imaginary numbers: a lot of electrical engineering wouldn't be possible to reason about without them). So new, higher abstractions will always be necessary, in my opinion.

That said, your finer point resonates: new languages might not need to be constrained by the limitations of human ergonomics. In fact, this opens up a new space of languages that can transcend human intuition because they are not written for humans to comprehend (yet are provably "correct"). As engineers, we care about human intuition because we are concerned about failure modes. But what if we could evolve along a more theoretical direction, similar to the one Haskell took? Haskell is basically "executable category theory" with some allowances for humans. Even with those tradeoffs, Haskell remains hard for most humans to write, but what if we could create a better Haskell?

Then, farther along, what if we created a Lean-adjacent language, not for mathematical proofs, but for writing programs? We could fold in formal-methods (TLA+-style) thinking. Today formal methods give you a correctness proof, but one disconnected from the implementation (AWS uses TLA+ for modeling distributed systems, but the final code is not generated from TLA+, so there's a gap). What if one day we could write a spec, have it generate a TLA+ proof, and then use that proof to generate code? In this world, the code generator is simply a compiler: from mathematically rigorous spec to machine code.

(That said, I have to check myself: what would this look like in a world full of exceptions, corner cases, and ambiguities that cannot be modeled well? Tukey's warning comes to mind: "Far better an approximate answer to the right question, than an exact answer to the wrong question.")
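To make the "program plus proof in one artifact" idea concrete, here is a toy sketch in Lean 4 (the names are illustrative, and this is far simpler than anything a real spec-to-code pipeline would produce):

```lean
-- A toy "verified program": the implementation and a proof of one
-- of its properties live in the same source file.
def double (n : Nat) : Nat := n + n

-- The file does not compile unless this proof goes through, so the
-- property is checked by the toolchain, not by a human reviewer.
theorem double_is_even (n : Nat) : double n % 2 = 0 := by
  unfold double
  omega
```

A Lean-adjacent *programming* language in the sense above would make this kind of machine-checked guarantee the default shape of everyday code, rather than something bolted on afterwards.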
sunshowers 11 hours ago
Ah yes, the inevitable future where the only way we'll know to interact with the machine is through persuading a capricious LLM. We'll spend our days reciting litanies to the machine spirits like in 40k.