Remix.run Logo
throwmeaway820 a day ago

> it appears to me to be really hard to guard against

I don't want to sound glib, but one could simply not let an LLM execute arbitrary code without reviewing it first, or only let it execute code inside an isolated environment designed to run untrusted code

the idea of letting an LLM execute code it's dreamt up, with no oversight, in an environment you care about, is absolutely bananas to me

blibble a day ago | parent | next [-]

> the idea of letting an LLM execute code it's dreamt up, with no oversight, in an environment you care about, is absolutely bananas to me

but if a skilled human has to check everything it does then "AI" becomes worthless

hence... YOLO

Terr_ a day ago | parent | next [-]

> if a skilled human has to check everything it does then "AI" becomes worthless

Well, perhaps not worthless, but certainly not "a trillion-dollar revolution that will let me fire 90% of my workforce and then execute my Perfect Rich Guy Visionary Ideas without any more pesky back-talk."

That said, the "worth" is brings to the shareholders will likely be a downgrade for everybody else, both workers and consumers, because:

> The market’s bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: “Look, you fire 9/10s of your radiologists [...] and the remaining radiologists’ job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it’s catastrophically wrong.

> “And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”

> This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies [calls] an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.

-- https://doctorow.medium.com/https-pluralistic-net-2025-12-05...

mannanj a day ago | parent [-]

The good ol Reverse-Centaur.

It's also like simultaneously a hybrid-zoan-Elephant in the room the CEOs don't want us to talk about.

Terr_ a day ago | parent [-]

The UPS delivery scenario is also evocative:

> Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.

> The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.

I guess it resonates for me because it strikes at my own justification for my work automating things, as I'm not mercenary or deluded enough to enjoy the idea of putting people out of work or removing the fun parts. I want to make tools that empower individuals, like how I felt the PC of the 1990s was going to give people more autonomy and more (effective, desirable) choices... As opposed to, say, the dystopian 1984 Telescreen.

mlyle a day ago | parent | prev | next [-]

I have to check what junior engineers do before running it in production. And AI is just really fast junior engineering.

raesene9 a day ago | parent [-]

The really fast part is the challenge though. If we assume that in pre-LLM world, there was enough resource for mid/senior level engineers to review junior engineer code and then in LLM world, lets say we can produce 10x the code, unless we 10x the number of mid/senior level engineering resource dedicated to review, what was once possible is no longer possible...

mlyle a day ago | parent | next [-]

I do feel like I can review 2-3x with a quicker context switching loop. Picking back up and following what the junior engineer did a a couple of weeks after we discussed the scope of work is hard.

hu3 a day ago | parent | prev [-]

We all know what will happen in many apps.

The user will test most of the code.

Just like we did test yesterday when Claude Code broke because CHANGELOG.md had an unexpected date.

ertian a day ago | parent | prev [-]

It could be as useful as a junior dev. You probably shouldn't let a junior dev run arbitrary commands in production without some sort of oversight or rails, either.

Even as a more experienced dev, I like having a second pair of eyes on critical commands...

alexjplant a day ago | parent | prev | next [-]

I think a nice compromise would be to restrict agentic coding workflows to cloud containers and a web interface. Bootstrap a project and new functional foundations locally using traditional autocomplete/chat methods (which you want to anyway to avoid a foundation of StackOverflow-derived slop) then implement additional features using the cloud agents. Don't commit any secrets to SCM and curate the tools that these agents can use. This way your dev laptops are firmly in human control (with IDEs freed up for actual coding) while LLMs are safelt leveraged. Win-win.

sigmonsays a day ago | parent | prev [-]

just wait until the exploit is so heavily obfuscated that you just review and allow it to get the project done.

therobots927 a day ago | parent [-]

You could literally ask the LLM to obfuscate it and I bet it would do a pretty good job. Good luck parsing 1,000 lines of code manually to identify an exploit that you’re not even specifically looking for.

lazide a day ago | parent [-]

Yup, add in some poetic prompt injection…..