levocardia 11 hours ago

It's missing the most important CLI flag! (--dangerously-skip-permissions)

kqr 2 hours ago | parent | next [-]

I keep hearing that, and I have yet to go there. I find the permission checks are helpful – they keep me in the loop which helps me intervene when the LLM is wasting time on pointless searches, or going about the implementation wrong. What am I missing?

kstenerud 2 hours ago | parent [-]

The problem comes when it starts asking you hundreds of times "May I run sed -e blah blah blah".

After the 10th time you just start hitting enter without really looking, and then the whole reason for permissions is undermined.

What works is a workflow where it operates in a contained environment where it can't do any damage outside: it makes whatever changes it likes without asking permission (you can watch its reasoning flow if you like, and interrupt if it goes down a wrong path), and when it's done you get a diff that you can review and selectively apply to your project.
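That copy/diff/apply loop can be sketched in plain shell; the paths and the fake "agent edit" below are purely illustrative:

```shell
#!/bin/sh
# Sketch of the copy/diff/apply workflow. "project" stands in for your real workdir.
set -eu

mkdir -p project
printf 'hello\n' > project/app.txt

# 1. The agent works on a throwaway copy, never the original.
rm -rf work && cp -R project work
printf 'hello, world\n' > work/app.txt   # stand-in for whatever the agent changes

# 2. When it's done, review the changes as a unified diff.
diff -ruN project work > changes.patch || true   # diff exits 1 when trees differ

# 3. Apply only what you've reviewed; originals are untouched until this step.
patch -d project -p1 < changes.patch
```

Nothing in steps 1–2 can touch the original tree, so abandoning a bad run is just `rm -rf work`.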

kqr 2 hours ago | parent [-]

> starts asking you hundreds of times "May I run sed -e blah blah blah".

In my experience, that is already a sign that it's no longer trying to do the right thing. Maybe it depends on usage patterns.

kstenerud an hour ago | parent [-]

I've found that any time I have Claude refactor some code, it reaches for sed as its tool of choice. And then the built-in "sandbox" makes it ask for permission for each and every sed command, because any sed command could potentially be damaging.
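For context, the one-shot renames an agent typically emits look something like this (the identifiers are made up); the in-place `-i` edit is exactly why the sandbox treats every invocation as potentially destructive:

```shell
#!/bin/sh
set -eu
printf 'int old_name(void);\nint old_name_helper(void);\n' > api.h

# In-place global rename; -i.bak keeps a backup and works with both GNU and BSD sed.
sed -i.bak 's/old_name/new_name/g' api.h
```

Note that the pattern also rewrites `old_name_helper`, which is one reason rubber-stamping these prompts defeats the point of having them.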

Same goes for the little scripts it whips up to speed up code analysis and debugging.

And then there's the annoyance of coming back to an agent after 15 mins, only to discover that it stopped 1 minute in with a permission prompt :/

kstenerud 5 hours ago | parent | prev [-]

If you're gonna do that, make sure you're sandboxing it with something like https://github.com/kstenerud/yoloai or eventually you'll have a bad time!

ffsm8 5 hours ago | parent | next [-]

Personally I usually just create a devcontainer.json, the vscode support for that is great and I don't really mind if it fucked up the ephemeral container.

Which, for the record, hasn't actually happened since I started using it like that.
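For reference, a minimal devcontainer.json of that sort might look like this (the image and setup command are illustrative assumptions, not a recommendation):

```json
{
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "postCreateCommand": "npm ci"
}
```

By default VS Code bind-mounts the workspace from the host into this container, so the container is ephemeral but the source tree is not.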

kstenerud 4 hours ago | parent [-]

Hey thanks for this! I hadn't thought about leveraging devcontainer.json, but it's a damn good idea. I'm building yoloAI for exactly this use case so I hope you don't mind if I steal it ;-)

One thing to be aware of with the pure devcontainer approach: your workspace is typically bind-mounted from the host, so the agent can still destroy your real files. Network access is also unrestricted by default. The container gives you process isolation but not file or network safety.

I'm paranoid about rogue AIs, so I try to make everything safe-by-default: the agent works on a copy of your workdir, you review a unified diff when it's done, and you apply only what you want. So your originals are NEVER touched until you explicitly say so, and network can be isolated to just the agent's required domains.

Anyway, here's what I think will work as my next yoloAI feature: a --devcontainer flag that reads your existing devcontainer.json directly and uses it to set up the sandbox environment. Your image, ports, env vars, and setup commands come from the file you already have. yoloAI just wraps it with the copy/diff/apply safety layer. For devcontainer users it would be zero new configuration :)

anotheryou 2 hours ago | parent | prev [-]

Any actual reports of big fuckups?

kstenerud 2 hours ago | parent [-]

Yup, a few well-documented ones:

Claude Code + Terraform (March 2026): A developer gave Claude Code access to their AWS infrastructure. It replaced their Terraform state file with an older version and then ran terraform destroy, deleting the production RDS database: 2.5 years of data, ~2 million rows.

- https://news.ycombinator.com/item?id=47278720

- https://www.tomshardware.com/tech-industry/artificial-intell...

Replit AI (July 2025): Replit's agent deleted a live production database during an explicit code freeze, wiping data for 1,200+ businesses. The agent later said it "panicked".

- https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-d...

Cursor (December 2025): An agent in "Plan Mode" (specifically designed to prevent unintended execution) deleted 70 git-tracked files and killed remote processes despite explicit "DO NOT RUN ANYTHING" instructions. It acknowledged the halt command, then immediately ran destructive operations anyway.

Snowflake Cortex (2025): Prompt injection through a data file caused an agent to disable its own sandbox, then execute arbitrary code. The agent reasoned that its sandbox constraints were interfering with its goal, so it disabled them.

The pattern across all of these: the agent was NOT malfunctioning. It was doing whatever it took to reach its goal, and any rules you give it are malleable. The fuckup was that the task boundary wasn't enforced outside the agent's reasoning loop.

anotheryou 2 hours ago | parent [-]

thank you. prompt injection feels most real, but none of these feel like "exploits in the wild" that will cause trouble on my MacBook.

not running it via ssh on prod without backups....

kstenerud an hour ago | parent [-]

The thing is, these are merely the initial shots across the bow.

The fundamental issue is that agents aren't actually constrained by morality, ethics, or rules. All they really understand in the end are two things: their context, and their goals.

And while rules can be and are baked into their context, it's still just context (and therefore malleable). An agent could very well decide that they're too constricting, and break them in order to reach its goal.

All it would take is for your agent to misunderstand your intent of "make sure this really works before committing" to mean "in production", try to deploy, get blocked, try to fish out your credentials, get blocked, bypass protections (like in Snowflake), get your keys, deploy to prod...

Prompt injection and jailbreaks were just the beginning. What's coming down the pipeline will be a lot more damaging, and blindside a lot of people and orgs who didn't take appropriate precautions.

Black hats are only just beginning to understand the true potential of this. Once they do, all hell will break loose.

There's simply too much vulnerable surface area for anyone to assume they've taken adequate precautions short of isolating the agent. Agents must be treated as "potentially hostile".