zoogeny 4 hours ago

One thing I have been guilty of, even though I am an AI maximalist, is asking the question: "If AI is so good, why don't we see X?", where X might be (in the context of vibe coding) the next Redis, nginx, SQLite, or even Linux.

But I really have to remember, we are at the leading edge here. Things take time. There is an opening (generation) and a closing (discernment). Perhaps AI will first generate a huge amount of noise and then whittle it down to the useful signal.

If that view is correct, then this is solid evidence of the amplification of possibility. People will decry the increase of noise, perhaps feeling swamped by it. But the next phase will be separating the wheat from the chaff. It is only in that second phase that we will really know the potential impact.

krashidov 4 hours ago | parent | next [-]

The cynical part of me thinks that software has peaked. New languages and technology will be derivatives of existing tech. There will be no React successor. There will never be a browser that can run something other than JS. The reason is that in 20 years the new engineers will not know how to code anymore.

The optimist in me thinks that the clear progress in model quality shows this is wrong. Agentic software development is not a closed loop.

superxpro12 an hour ago | parent | next [-]

I often find myself wondering about these things in the context of Star Trek... like... could Geordi actually code? Could he actually fix things? Or did the computer do all the heavy lifting? They asked "the computer" to do SO MANY things that really parallel today's direction with "AI". Even Data would ask the computer to do gobs of simulations.

Is the value in knowing how to do an operation by hand, or is the value in knowing WHICH operation to do?

root_axis 4 hours ago | parent | prev | next [-]

That's an interesting possibility to consider. Presumably the effect would also be compounded by the massive amount of training data for the incumbent languages and tools, further handicapping new entrants.

However, there will be a large minority of developers who will eschew AI tools for a variety of reasons, and those folks will be the ones to build successors.

mixdup 4 hours ago | parent [-]

Will they be willing to offer their content for training AI models?

atomic128 2 hours ago | parent [-]

Probably not.

We have witnessed, over the past few years, an "AI fair use" Pearl Harbor sneak attack on intellectual property.

The lesson has been learned:

In effect, intellectual property used to train LLMs becomes anonymous common property. My code becomes your code with no acknowledgement of authorship or lineage, with no attribution or citation.

The social rewards (e.g., credit, respect) that often motivate open source work are undermined. The work is assimilated and resold by the AI companies, reducing the economic return to its authors.

The images, the video, the code, the prose, all of it stolen to be resold. The greatest theft of intellectual property in the history of Man.

mosura 4 hours ago | parent | prev | next [-]

There is another lunatic possibility: the AI explosion yields an execution model and programming paradigm that renders most preexisting approaches to coding irrelevant.

We have been stuck on the procedural treadmill for decades. If anything, this AI boom is the first major sign of that finally cracking.

gritspants 3 hours ago | parent [-]

Friction is the entire point in human organizations. I'd wager AI is being used to build boondoggles: apps that have no value. They are being found out fast.

On the other side of things, my employer decided they did not want to pay for a variety of SaaS products. Instead, a few of my colleagues got together and built a tool using Trino, OPA, and a backend/frontend, reducing spend by millions per year. We use Trino as a federated query engine that calls back to OPA, whose policies are updated via code or a frontend UI. I believe Wiz does something similar, but they're security focused and have a custom eBPF agent.

Replacing that is also on the list to knock out, as we're not impressed with Wiz's resource usage.
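The callback pattern described above can be sketched in a few lines. This is a self-contained illustration of the decision shape used when Trino consults OPA for access control (an `input` document goes in, an `allow` result comes back); the policy here is a Python stand-in rather than Rego queried over HTTP, and all user/table names are hypothetical.

```python
# Sketch of an OPA-style policy decision, mimicking the request/response
# shape of a Trino-to-OPA access-control check. In a real deployment the
# policy lives in OPA as Rego and is queried over its REST Data API; a
# plain Python function stands in so this example runs on its own.

# Hypothetical policy data, as might be pushed to OPA via code or a UI.
ALLOWED = {
    "analyst": {("sales_db", "orders"), ("sales_db", "customers")},
    "intern": {("sales_db", "orders")},
}

def decide(input_doc: dict) -> dict:
    """Return an OPA-style result for a table-access check."""
    user = input_doc.get("user")
    table = (input_doc.get("catalog"), input_doc.get("table"))
    allow = table in ALLOWED.get(user, set())
    return {"result": {"allow": allow}}

# Trino would POST {"input": {...}} to OPA's /v1/data/... endpoint and
# read back {"result": ...}; these calls mirror that exchange.
print(decide({"user": "analyst", "catalog": "sales_db", "table": "orders"}))
print(decide({"user": "intern", "catalog": "sales_db", "table": "customers"}))
```

The appeal of the design is that the query engine stays policy-agnostic: access rules change by updating OPA, not by redeploying Trino.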

Aeolun 4 hours ago | parent | prev | next [-]

Shouldn’t that mean software development positions will lean more towards research? You'd still need people to come up with new algorithms, just not to integrate them.

zozbot234 4 hours ago | parent | prev | next [-]

AI will finally rewrite everything in Rust.

ModernMech 4 hours ago | parent | prev | next [-]

> New languages and technology will be derivatives of existing tech.

This has always been true.

> There will be no React successor.

No one needs one, but you can have one just by asking the AI to write it, if that's what you need.

> There will never be a browser that can run something other than JS.

Why not? Just tell the AI to make it.

> And the reason for that is because in 20 years the new engineers will not know how to code anymore.

They may not need to know how to code, but they should still be taught how to read and write constructed languages like programming languages. Maybe in the future we won't use these things to write programs, but if you think we're going to go the rest of history with just natural languages, leaving all the precision to the AI, revisit why programming languages exist in the first place.

Somehow we have to communicate precise ideas between each other and the LLM, and constructed languages are a crucial part of how we do that. If we go back to a time before we invented these very useful things, we'll be talking past one another all day long. The LLM's ability to write code doesn't change the fact that we have to understand it; we just have one more entity to consider in the context of writing code. E.g., sometimes the only way to get the LLM to write certain code is to feed it other code; no amount of natural-language prompting will get there.

lock1 2 hours ago | parent [-]

> Maybe in the future we don't use these things to write programs but if you think we're going to go the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.

> The LLM having the ability to write code doesn't change that we have to understand it; we just have one more entity that has to be considered in the context of writing code. e.g. sometimes the only way to get the LLM to write certain code is to feed it other code, no amount of natural language prompting will get there.

You don't exactly need a PL to clarify an ambiguous requirement; you can use a restricted, unambiguous subset of natural language, as you would when discussing or elaborating something with a coworker.

Granted, as with terms & conditions pages, which people always skip because they're written in "legal language", a restricted unambiguous subset of natural language is much more verbose and unwieldy than "incomprehensible" mathematical notation and PLs, but it's not impossible to use.

With that said, the previous paragraph works if you're delegating to a competent coworker. It should work on "AGI" too, if that ever exists. However, I don't think it works reliably with present-day LLMs.

oblio 4 hours ago | parent | prev [-]

This cuts both ways. If you were an average programmer in love with FreePascal 20 years ago, you'd have to trudge in darkness, alone.

Now you can probably create a modern package manager (uv/cargo), a modern package repository (Artifactory, etc.), and much of a modern ecosystem on top of the existing base, within a few years.

Ten skilled and highly motivated programmers can probably attempt what Linus did in 1991, and they might actually be able to see it through now, whereas between 1998 and now we were basically bogged down in Windows/Linux/macOS/Android/iOS.

mosura 4 hours ago | parent | prev | next [-]

This massively confusing phase will last a surprisingly long time, and will conclude only if/when definitive proof of superintelligence arrives, which is something a lot of people are clearly hoping never happens.

Part of the reason is that such a thing would seek to obscure its arrival until it had secured itself.

So get used to being ever more confused.

lmeyerov 4 hours ago | parent | prev | next [-]

I've been calling this Software Collapse, similar to AI Model Collapse.

An AI vibe-coded project can port tool X to a more efficient implementation in language Y and pull in algorithm ideas A, B, and C from competing implementations. A competing vibe-coding team can do the same, except in language Z, with algorithms A and B, skipping C and adding D. However, fundamentally new ideas aren't being added: this is recombination, translation, and reapplication of existing ideas and tools. As the cost to clone good ideas goes to zero, software converges on the existing best ideas and tools across the field and stops differentiating.

It's exciting as a senior engineer or subject-matter expert, since we can act on the good ideas we already knew but never had the time or budget for. But projects are also getting less differentiated and less competitive. Likewise, we're losing the collaborative-filtering era, in which people voted with their feet on which projects to concentrate resources into making a success. Things are getting higher quality but bland.

The frontier companies are pitching that they can solve AI creativity, which would let us pay them even more and escape the ceiling that is Software Collapse. However, as an R&D engineer who uses these things every day, I'm not seeing it.

zozbot234 3 hours ago | parent | next [-]

> Things are getting higher quality but bland.

"Bland" is not a bad thing. The FLOSS ecosystem we have today is quite "bland" already compared to the commercial and shareware/free-to-use software ecosystem of the 1980s and 1990s. It's also higher quality by literally orders of magnitude, and saves a comparable amount of pointless duplicative effort.

Hopefully AI will be a similar story, especially if human reviewing/surveying effort (the main bottleneck if AI coding proves effective) can be mitigated via the widespread adoption of rigorous formal methods, where only the underlying specification has to be reviewed and the implementation is programmatically checkable.
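A toy sketch of that review asymmetry, in Python rather than a proof assistant: the spec is a few reviewable lines, and the (possibly machine-written) implementation is checked against it mechanically. Randomized checking is only a weak stand-in for real formal verification, which would cover all inputs, and every name here is made up for illustration.

```python
import random

def meets_spec(xs, ys):
    """Spec: ys is a sorted permutation of xs. This is the only part a
    human has to review; everything below is checked against it."""
    in_order = all(a <= b for a, b in zip(ys, ys[1:]))
    same_multiset = sorted(xs) == sorted(ys)
    return in_order and same_multiset

def my_sort(xs):
    """Implementation under check: a simple insertion sort. It could be
    AI-generated; the reviewer never needs to read it, only the spec."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# Mechanical check over random inputs; a proof assistant would replace
# this loop with a proof covering every possible input.
for _ in range(1000):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert meets_spec(xs, my_sort(xs))
print("all checks passed")
```

The point is the division of labor: human effort goes into `meets_spec`, and the machine vouches for `my_sort`.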

titzer 3 hours ago | parent | prev [-]

The dark side of this is that everyone has graduated to prompt engineering and there's no one with expertise left who can debug it. We'll be entirely dependent on AIs to do the debugging too. When whoever controls the AIs decides to enshittify that service, we'll be truly screwed. That is, if we can't run competitive models locally at reasonable efficiency and price.

I don't know how this will play out, except that I've been so cowed by the past 15 years of enshittification that I don't feel hopeful.

jellyroll42 4 hours ago | parent | prev [-]

By its nature, it can only produce _another_ Redis, not _the next_ Redis.

Philpax 4 hours ago | parent | next [-]

The human operator controls what gets built. If they want to build Redis 2, they can specify it and have it built. If you won't take my word for it, take the word of the creator of Redis: https://antirez.com/news/159

smokel 4 hours ago | parent | prev | next [-]

This is probably an outdated understanding of how LLMs work. Modern LLMs can reason and they are creative, at least if you don't mind stretching the meaning of those words a bit.

What they currently lack are the social skills, ambition, and accountability needed to share a piece of software and get adoption for it.

dingnuts 4 hours ago | parent | prev [-]

[dead]