JumpCrisscross 2 days ago

> it makes sense for languages, stacks and interfaces to become more amenable to interfacing with AI

The theoretical advance we're waiting for in LLMs is auditable determinism. Basically, the ability to take a set of prompts and have a model recreate what it did before.

At that point, the utility of human-readable computer languages sort of goes out the door. The AI prompts become the human-readable code, the model becomes the interpreter and it eventually, ideally, speaks directly to the CPUs' control units.
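Roughly, a sketch of what I mean, where "local_llm" is a made-up runtime standing in for whatever pinned, seedable inference stack you actually have:

    import hashlib

    # Sketch only: "local_llm" is hypothetical, not a real library.
    import local_llm

    def run_locked(model_path: str, pinned_sha256: str, prompt: str) -> str:
        # Step 1: refuse anything but the exact weights the prompts were
        # written against -- "locking the interpreter".
        with open(model_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != pinned_sha256:
            raise RuntimeError(f"model drift: {digest} != {pinned_sha256}")
        # Step 2: greedy decoding (temperature 0, no sampling), so the same
        # prompt always yields the same output -- "replaying the program".
        model = local_llm.load(model_path)
        return model.generate(prompt, temperature=0.0, do_sample=False)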

This is still years--possibly decades--away. But I agree that we'll see computer languages evolving towards auditability by non-programmers and reliable parsing by AI.

SkiFire13 2 days ago | parent | next [-]

> The theoretical advance we're waiting for in LLMs is auditable determinism.

Non-determinism in LLMs is currently a feature, introduced deliberately. Even if it weren't, you would have to lock yourself to a specific model, since any future update would necessarily be a potentially breaking change.
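To be concrete, that non-determinism is literally a dial. Here's a toy decoder step (illustrative, not any production sampler):

    import math
    import random

    def sample_next_token(logits: dict[str, float], temperature: float,
                          rng: random.Random) -> str:
        """Toy next-token step: temperature > 0 deliberately injects randomness."""
        if temperature == 0.0:
            # Greedy: always the argmax -- deterministic by construction.
            return max(logits, key=logits.get)
        # Softmax with temperature: higher T flattens the distribution.
        scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
        total = sum(scaled.values())
        r = rng.random() * total
        for tok, w in scaled.items():
            r -= w
            if r <= 0:
                return tok
        return tok  # fallback for floating-point edge cases

    logits = {"foo": 2.0, "bar": 1.5, "baz": 0.1}
    rng = random.Random(42)  # even sampling is replayable if you pin the seed
    print(sample_next_token(logits, 0.0, rng))  # always "foo"
    print(sample_next_token(logits, 1.0, rng))  # varies with the seed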

> At that point, the utility of human-readable computer languages sort of goes out the door.

Its utility is having an unambiguous language in which to describe your solution, one you can audit for correctness. You'll never get that with an LLM, because its very premise is natural language, which is ambiguous.

JumpCrisscross 2 days ago | parent [-]

> Non-determinism in LLMs is currently a feature and introduced consciously. Even if it wasn't, you would have to lock yourself on a specific model, since any future update would necessarily be a possibly breaking change

What I'm suggesting is a way to lock the model and later revert it to that state to re-interpret a set of prompts deterministically. When exploring, it can still branch non-deterministically. But once you've found a solution that works, you want the degrees of freedom limited.
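Something like this record-and-replay loop, where "run_model" stands in for a pinned, seedable inference call and "passes_tests" is whatever acceptance check you trust (all names hypothetical):

    import json
    import random

    def explore(run_model, passes_tests, model_digest, prompts, attempts=100):
        # Exploration phase: branch non-deterministically, seed by seed.
        for _ in range(attempts):
            seed = random.getrandbits(64)
            output = run_model(prompts, seed=seed)
            if passes_tests(output):
                # Freeze the degrees of freedom: model + seed + prompts.
                return {"model_digest": model_digest, "seed": seed,
                        "prompts": prompts}
        raise RuntimeError("no passing candidate found")

    def replay(run_model, lockfile_path):
        # Replay phase: same locked state in, same artifact out.
        with open(lockfile_path) as f:
            lock = json.load(f)
        return run_model(lock["prompts"], seed=lock["seed"])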

> You'll never get this with an LLM because its very premise is using natural language, which is ambiguous

That's the point of locking the model. You need the prompts and the interpreter.

SkiFire13 a day ago | parent [-]

> That's the point of locking the model. You need the prompts and the interpreter.

This still doesn't seem to work for me:

- even after locking the LLM's state, you still need to understand how it processes your input, something nobody has managed to do yet. Worse, this can only happen after locking, so it has to be redone for every project.

- the prompt is still ambiguous, so either you refine it to the point where it becomes more like a programming language (see the sketch below), or you need an unlimited set of rules for how it should be disambiguated, which an auditor then has to learn. This makes the auditor's job much harder and more error-prone.
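Concretely, the refinement spiral the second point describes might look like this (made-up example):

    # v1 (ambiguous prompt):  "sort the users by age"
    #   - ascending or descending? what about ties? missing ages?
    # v2 (disambiguated):     "ascending by age; users without an age last;
    #                          ties broken by user id ascending"
    # ...at which point the prompt has effectively become the program:
    def sort_users(users):
        return sorted(users, key=lambda u: (u.age is None,
                                            u.age if u.age is not None else 0,
                                            u.id))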

bigbones 2 days ago | parent | prev [-]

> The theoretical advance we're waiting for in LLMs is auditable determinism

I think this is a manifestation of machine thinking - the majority of buyers and users of software rarely ask for or need this level of perfection. Noise is everywhere in the natural environment, and I expect it to be everywhere in the future of computing too.

JumpCrisscross 2 days ago | parent [-]

> the majority of buyers and users of software rarely ask for or need this level of perfection

You're right. Maybe just reliable replicability, then.

The core point is that the next step is the LLM talking directly to the control unit. No human-readable code in between. The prompts are the code.
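The execution half of that pipeline isn't even speculative: Python can already hand raw bytes straight to the CPU. The only missing piece is a model reliable enough to emit them. A Unix-only sketch (the bytes are x86-64; hardened systems may forbid writable+executable pages):

    import ctypes
    import mmap

    # x86-64 for "mov eax, 42; ret" -- a function returning 42.
    code = b"\xb8\x2a\x00\x00\x00\xc3"

    # Map a page as writable + executable.
    buf = mmap.mmap(-1, mmap.PAGESIZE,
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(code)

    # Call the bytes directly: no compiler, no source, just the CPU.
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
    print(func())  # 42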