bigbones 2 days ago

I expect pretty much the opposite to happen: it makes sense for languages, stacks and interfaces to become more amenable to interfacing with AI. Whenever simplifying a machine's inputs lets it act more reliably at a fraction of the cost of the equivalent human labour, the system has always adjusted to accommodate the machine.

The most obvious example of this already happening is how function-calling interfaces are defined for existing models. It's not hard to imagine that principle applied more generally, until human intervention to get a desired result is the exception rather than the rule it is today.
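
For context, here is a minimal sketch of what such a function-calling interface tends to look like today: a structured tool description plus the concrete function the model's structured call gets routed to. The tool name and fields below are made up for illustration and aren't tied to any particular provider's API:

    # Hypothetical tool definition in the JSON-schema style several model
    # providers use for function calling; the names and fields are illustrative.
    get_weather_tool = {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }

    def get_weather(city: str, unit: str = "celsius") -> dict:
        # The concrete implementation a structured model call is routed to.
        # A real version would query a weather service; this is a stub.
        return {"city": city, "unit": unit, "temperature": 21}

The interface is already shaped for the machine: a rigid schema the model can target reliably, rather than prose a human has to interpret.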

I spent most of the past 2 years in "AI cope" mode and wouldn't consider myself a maximalist, but from the nascent tooling we already have it's impossible not to see that workflow automation is going to improve at a rapid and steady rate for the foreseeable future.

JumpCrisscross 2 days ago | parent | next [-]

> it makes sense for languages, stacks and interfaces to become more amenable to interfacing with AI

The theoretical advance we're waiting for in LLMs is auditable determinism. Basically, the ability to take a set of prompts and have a model recreate what it did before.

At that point, the utility of human-readable computer languages sort of goes out the door. The AI prompts become the human-readable code, the model becomes the interpreter and it eventually, ideally, speaks directly to the CPUs' control units.

This is still years--possibly decades--away. But I agree that we'll see computer languages evolving towards auditability by non-programmers and reliability in parsing by AI.

SkiFire13 2 days ago | parent | next [-]

> The theoretical advance we're waiting for in LLMs is auditable determinism.

Non-determinism in LLMs is currently a feature and is introduced deliberately. Even if it weren't, you would have to lock yourself to a specific model, since any future update would be a potentially breaking change.
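
To make the first point concrete, here's a rough sketch of where that deliberate randomness comes from: the model emits a probability distribution over next tokens and the decoder samples from it, with temperature controlling how much randomness is kept. The scores below are invented:

    # Toy illustration of temperature sampling; the logits are made-up scores.
    import math, random

    logits = {"cat": 2.1, "dog": 1.9, "fish": 0.3}

    def sample_next_token(logits: dict[str, float], temperature: float) -> str:
        # Scale scores by temperature, softmax them, then sample a token.
        scaled = {t: v / temperature for t, v in logits.items()}
        total = sum(math.exp(v) for v in scaled.values())
        probs = {t: math.exp(v) / total for t, v in scaled.items()}
        return random.choices(list(probs), weights=list(probs.values()))[0]

    print(sample_next_token(logits, temperature=1.0))  # varies from run to run
    print(max(logits, key=logits.get))                 # greedy decoding: always "cat"

Turning the temperature down towards zero collapses sampling towards the greedy choice, though residual non-determinism can still creep in from things like batching and floating-point ordering.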

> At that point, the utility of human-readable computer languages sort of goes out the door.

Its utility is having an unambiguous language in which to describe your solution and which you can audit for correctness. You'll never get this with an LLM because its very premise is using natural language, which is ambiguous.

JumpCrisscross a day ago | parent [-]

> Non-determinism in LLMs is currently a feature and is introduced deliberately. Even if it weren't, you would have to lock yourself to a specific model, since any future update would be a potentially breaking change

What I'm suggesting is a way to lock the model and later revert it to that state so it re-interprets a set of prompts deterministically. When exploring, it can still branch non-deterministically. But once you've found a solution that works, you want the degrees of freedom to be limited.
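
As a rough sketch of what I mean by locking (the client and parameter names here are hypothetical, and real providers differ in what determinism they actually guarantee):

    # Hypothetical sketch: pin an exact model version and sampling settings,
    # and keep the prompt transcript so a run can be replayed later.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class LockedModelConfig:
        model: str = "some-model-2024-06-01"  # exact, immutable version tag
        temperature: float = 0.0              # no sampling randomness
        seed: int = 42                        # fixed seed, where supported

    @dataclass
    class ReplayableSession:
        config: LockedModelConfig
        transcript: list[dict] = field(default_factory=list)

        def ask(self, client, prompt: str) -> str:
            # 'client.complete' is a stand-in for whatever inference call is used.
            reply = client.complete(prompt=prompt, **vars(self.config))
            self.transcript.append({"prompt": prompt, "reply": reply})
            return reply

Exploration happens with the lock off; once a solution works, you freeze the config and the transcript becomes the thing you audit and re-run.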

> You'll never get this with an LLM because its very premise is using natural language, which is ambiguous

That's the point of locking the model. You need the prompts and the interpreter.

SkiFire13 a day ago | parent [-]

> That's the point of locking the model. You need the prompts and the interpreter.

This still doesn't seem to work for me:

- even after locking the LLM state you still need to understand how it processes your input, which is something nobody has managed to do yet. Even worse, this can only happen after locking it, so it needs to be done for every project.

- the prompt is still ambiguous, so either you need to refine it to the point it becomes more similar to a programming language, or you need an unlimited set of rules for how it should be disambiguated, which an auditor then needs to learn. This makes the auditor's job much harder and more error-prone.

bigbones 2 days ago | parent | prev [-]

> The theoretical advance we're waiting for in LLMs is auditable determinism

I think this is a manifestation of machine thinking - the majority of buyers and users of software rarely ask for or need this level of perfection. Noise is everywhere in the natural environment, and I expect it to be everywhere in the future of computing too.

JumpCrisscross 2 days ago | parent [-]

> the majority of buyers and users of software rarely ask for or need this level of perfection

You're right. Maybe just reliable replicability, then.

The core point is that the next step is the LLM talking directly to the control unit. No human-readable code in between. The prompts are the code.

toprerules 2 days ago | parent | prev | next [-]

You're missing the point: there are specific reasons why these stacks have grown in complexity. Even if you introduce "API for AI interface" as a requirement, you still have to balance that with performance, reliability, interfacing with other systems, and providing all of the information necessary to debug when AI gets it wrong. All of the same things that humans need apply to AI - the claim for AI isn't that it will deterministically solve every problem it can comprehend.

So now we're looking at a good several decades of getting our human-facing systems to adapt themselves to AI while still requiring all the complexity they already have. The end result is more complexity, not less.

bigbones 2 days ago | parent | prev [-]

Based on what I've seen so far, I'm thinking a timeline more like 5-10 years in which anything frontend-related, at least, has all but evaporated. What value is there in having a giant app team grind for 2 years on the perfect Android app when a user can simply ask for the display they want, and 5 variants of it until they're happy, all in a couple of seconds while sitting in the back of a car? What happens to all the hundreds of UI frameworks when a system as widespread as Android adopts a technology approach like this?

Backend is significantly murkier; there are many tasks it seems unlikely an AI will accomplish any time soon (my toy example so far is inventing and finalizing the next video compression standard). But a lot of backend complexity derives from supporting human teams with human styles of work, and only exists because of the steady cashflow generated by organizations extracting tremendous premiums to solve problems in their particular style. I have no good way to explain this - what value is a $500 accounting system backend if models get good enough at reliably spitting out bespoke $15 systems with infinite customizations in a few seconds for a non-developer user, and what becomes of all the technologies whose maintenance was supported by the cashflows generated by that $500 system?

tehjoker 2 days ago | parent [-]

These don't sound like the kinds of problems programmers solve. Users don't want to customize their UI (well, some do); they want a UI that is adapted to their needs. They only want to customize it when it doesn't meet their needs (for example, if a corporation uses addictive features or hides things users need in order to increase engagement).

Accounting software has to be validated, and part of the appeal is that it simplifies and consolidates workflows across huge bureaucracies. I don't see how on earth you can just spit one out from a prompt and expect that to replace anything.

I work on a compression algorithm myself, and I've found AI of limited utility. It does help me translate things for interfacing between languages and it can sometimes help me try out ideas, but I have to write almost everything myself.

EDIT: It is true that lower-skilled jobs are going to change or shrink in the short term. To a certain degree there might be a Jevons paradox in the quantity of code that needs management.

Imagine companies churning out tons and tons of code that no one understands and that behaves bizarrely. Maybe it will become a boutique thing for companies to have code that works properly, and people will just accept broken user interfaces or whatever so long as there are workarounds.

dmix 2 days ago | parent | prev [-]

I wonder if AI is going to reduce the number of JS UIs. AI bots can navigate simple HTML forms much more easily than crazy React code with 10 layers of divs for a single input. It's either that, or people create APIs for everything and document how they relate and how to interact with them.

baq 2 days ago | parent [-]

Claude is so good at React that the number of UIs will increase.