tacker2000 3 hours ago

This guy is vibe-coding some React app, doesn't even know what “npm run dev” does, so he let the LLM just run commands. So basically a consumer with no idea of anything. This stuff is gonna happen more and more in the future.

spuz 3 hours ago | parent | next [-]

There are a lot of people who don't know stuff. Nothing wrong with that. He says in his video: "I love Google, I use all the products. But I was never expecting for all the smart engineers and all the billions that they spent to create such a product to allow that to happen. Even if there was a 1% chance, this seems unbelievable to me." For the average person, I honestly don't see how you can blame them for believing that.

ogrisel 3 hours ago | parent | next [-]

I think there is far less than a 1% chance of this happening, but there are probably millions of Antigravity users at this point; even a one-in-a-million chance is already a problem.

We need local sandboxing of filesystem and network access (e.g. via Linux namespaces/`cgroups`, or similar mechanisms on non-Linux OSes) to run these kinds of tools more safely.
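A rough sketch of what that could look like on Linux, using bubblewrap as one illustrative option (just an assumption on my part, not something any of these products ship): give the agent a read-only view of the filesystem and write access only to the project directory.

    # Illustrative only: read-only root, writes confined to the current
    # project directory; add --unshare-net to also cut off network access.
    bwrap \
      --ro-bind / / \
      --dev /dev \
      --proc /proc \
      --tmpfs /tmp \
      --bind "$PWD" "$PWD" \
      --die-with-parent \
      npm run dev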

cube2222 2 hours ago | parent | next [-]

Codex does such sandboxing, fwiw. In practice it gets pretty annoying when e.g. it wants to use the Go CLI, which relies on a global module cache. Claude Code recently got something similar[0] but I haven’t tried it yet.

In practice I just use a docker container when I want to run Claude with --dangerously-skip-permissions.
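For illustration, that setup can be as simple as mounting only the project directory into a throwaway container (a sketch; "my-agent-image" is a placeholder for whatever image has the Claude CLI installed):

    # Sketch: the agent only sees the mounted project directory,
    # so --dangerously-skip-permissions can't touch the rest of the disk.
    docker run -it --rm \
      -v "$PWD":/workspace \
      -w /workspace \
      my-agent-image \
      claude --dangerously-skip-permissions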

[0]: https://code.claude.com/docs/en/sandboxing

BrenBarn 2 hours ago | parent | prev [-]

We also need laws. Releasing an AI product that can (and does) do this should be treated like selling a car that blows your finger off when you start it up.

jpc0 2 hours ago | parent | next [-]

This is more akin to selling a car to an adult who cannot drive, who then proceeds to ram it through their garage door.

It's perfectly within the capabilities of the car to do so.

The burden of proof is much lower, though, since the worst that can happen is that you lose some money, or in this case your hard drive contents.

For the car, the seller would be investigated because there was a possible threat to life; for an AI, it's buyer beware.

pas 2 hours ago | parent | prev [-]

there are laws about waiving liability for experimental products

sure, it would be amazing if everyone had to do a 100-hour course on how LLMs work before interacting with one

Vinnl 37 minutes ago | parent | prev [-]

Didn't sound to me like GP was blaming the user; just pointing out that "the system" is set up in such a way that this was bound to happen, and is bound to happen again.

benrutter an hour ago | parent | prev | next [-]

Yup, 100%. A lot of the comments here are "people should know better" - but in fairness to the people doing stupid things, they're being encouraged by the likes of Google, ChatGPT, Anthropic, etc. to think of letting an indeterminate program run free on your hard drive as "not a stupid thing".

The number of stupid things I've done, especially early on in programming, because tech companies, thought leaders, etc. suggested they were not stupid, is much larger than I'd like to admit.

tarsinge an hour ago | parent | prev | next [-]

And he's vibing replies to comments in the Reddit thread too. When commenters point out that he shouldn't run in YOLO/Turbo mode and should review commands before executing them, the poster replies that they didn't know they had to be careful with AI.

Maybe AI providers should give more warnings and not falsely advertise the capabilities and safety of their models, but it should be pretty common knowledge at this point that, despite marketing claims, the models are far from autonomous and need heavy guidance and review.

fragmede an hour ago | parent [-]

In Claude Code, the option is called "--dangerously-skip-permissions"; in Codex, it's "--dangerously-bypass-approvals-and-sandbox". Google would do better to put a bigger warning label on it, but it's not a complete unknown to the industry.

blitzar 2 hours ago | parent | prev | next [-]

Natural selection is a beautiful thing.

Den_VR 3 hours ago | parent | prev | next [-]

It will, especially with the activist trend towards dataset poisoning… and some even know what they're doing.

ares623 3 hours ago | parent | prev | next [-]

This is engagement bait. It’s been flooding Reddit recently; I think there’s a firm or something that does it now. Seems very well lubricated.

Note how OP is very nonchalant about all the responses, mostly just agreeing with or mirroring the comments.

I often see it used for astroturfing.

spuz 3 hours ago | parent [-]

I'd recommend you watch the video which is linked at the top of the Reddit post. Everything matches up with an individual learner who genuinely got stung.

camillomiller 2 hours ago | parent | prev [-]

Well but 370% of code will be written by machines next year!!!!!1!1!1!!!111!

actionfromafar 2 hours ago | parent [-]

And the price will have decreased 600%!