bandrami 18 hours ago

We've had deterministic automation of tier one response for over a decade now. What value would indeterminacy add to that?

tayo42 18 hours ago | parent [-]

To deal with problems where there is ambiguity in both the problem and the approach to solving it. Not everything is a basic decision tree. Humans aren't deterministic either; the way each of us would approach a problem is probably different. Is one of us right or wrong? We're generally just focused on end results.

Maybe two years ago AI was doing random stuff and we got all those funny screenshots of dumb Gemini answers. The indeterminism leading to random output isn't really an issue any more.

The way it thinks keeps it on track.

bandrami 16 hours ago | parent [-]

Two weeks ago I asked a frontier model to list five mammals without "e" in their name and number four was "otter"

tayo42 8 hours ago | parent [-]

Is identifying mammals without the letter E part of your ops workflow?

Opus 4.6 didn't have an issue with this question though.

thewebguyd 7 hours ago | parent [-]

> Is identifying mammals without the letter E part of your ops workflow?

No, but it can signal unreliability on adjacent tasks. Identifying a CIDR block in traffic logs is a normal part of an ops workflow. It means the model is more likely to fail when you need it to generate a complex regex to filter PII from a terabyte of logs. If the model has a blind spot for specific characters because it tokenizes words instead of seeing individual characters, it can miss a critical failure path because the service name didn't fit its probabilistic training.
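To make that concrete: the character-level check that trips tokenized models is a deterministic one-liner. This is just a sketch, reusing the mammal example from upthread (the word list is purely illustrative):

```python
# Deterministic character check: code sees individual characters,
# so there is no tokenization blind spot.
mammals = ["lion", "otter", "puma", "lynx", "yak"]
without_e = [m for m in mammals if "e" not in m]
print(without_e)  # "otter" is correctly excluded: it contains an "e"
```

The same property is why a handwritten regex or grep over logs either matches or doesn't, every single run.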

Maybe you need to boilerplate Terraform. If the model can't reliably (reliably, as in 100% deterministic, does this without fail) parse constraints, it's not just a funny mistake, it's a potential five-figure billing error.
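A deterministic guardrail for that kind of billing error doesn't need a model at all. A minimal sketch, assuming the `resource_changes` layout of `terraform show -json` output and a made-up budget cap:

```python
import json

# Hypothetical guardrail: count the resources a plan would create
# (from `terraform show -json plan.out` output) and refuse anything
# over a budget cap. Either it passes or it doesn't, every run.
MAX_CREATES = 10  # hypothetical budget constraint

def within_budget(plan_json: str) -> bool:
    plan = json.loads(plan_json)
    creates = [
        rc for rc in plan.get("resource_changes", [])
        if "create" in rc.get("change", {}).get("actions", [])
    ]
    return len(creates) <= MAX_CREATES
```

Wire a check like this into CI before `terraform apply` and the five-figure surprise becomes a failed pipeline instead.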

Ops can't run on "mostly accurate." That's simply not good enough. We need deterministic precision.

For AI to be useful in this world to the extent others have claimed it is for software engineering, we'll likely need more advanced world models, not just something that predicts the next most likely token.

tayo42 5 hours ago | parent [-]

Your Terraform written by a person already doesn't have deterministic precision. AI isn't messing these things up either.

If your AI workflow is still dumping logs into a chat and saying "search it for some pattern," then you should see how something like Claude Code approaches problems. These agents are building scripts to solve problems, which is your deterministic solution.

thewebguyd 5 hours ago | parent | next [-]

That still just makes it a force multiplier for engineers, like any other tech, not the replacement it's being hyped and sold as.

Claude resorting to writing code for everything, because that's all the model can do without too many hallucinations and context poisoning, is just a higher-speed REPL. Great, that's useful.

But that's not what is being hyped and sold. What's being hyped and sold is "You don't need an Ops guy anymore, just talk to the computer." Well, what happens when the AI decides the "fix" is to just open up 0.0.0.0/0 to the world to make the errors go away? The non-technical, minimum-wage person now just talking to the computer has no idea they just pwned the company.

If AI's answer is "just write a script to solve the prompt," then you still need technical people, and it's vastly overhyped.

I'll be interested when you actually can just dump logs in a chat and analyze them without the model having to resort to writing code to solve the problem. That will be revolutionary. Imagine all the time I'd save by not having to make business reports; I could just tell the business people to point AI at terabytes of CSV exports and ask it questions. That's when it will stop being labor compression for existing engineers and start being a world-changing paradigm shift.

For now, it's just yet another tool in my toolbelt.

tayo42 2 hours ago | parent [-]

Not sure why the implementation matters. The point is that the system is triggered by some text input and completes the task asynchronously on its own.

bandrami 2 hours ago | parent | prev [-]

> Your terraform written by a person already doesn't have deterministic precision

Can you expand on that? Because it sure seems to me like it is in fact deterministic unless the person deliberately made it otherwise

tayo42 2 hours ago | parent [-]

If I give you a task to write Terraform or any code, you won't write what I write; you probably won't even write the same thing twice. You can introduce a bug too; we're not perfect. The output of the task "write some Terraform" already isn't deterministic when dealing with people.