snowmobile 14 hours ago

Bit of a wider discussion, but how do you all feel about the fact that you're letting a program use your computer to do whatever it wants without you knowing? I know right now LLMs aren't overly capable, but if you'd apply this same mindset to an AGI, you'd probably very quickly have some paperclip-maximizing issues where it starts hacking into other systems or similar. It's sort of akin to running experiments on contagious bacteria in your backyard, not really something your neighbors would appreciate.

devolving-dev 14 hours ago | parent | next [-]

Don't you have the same issue when you hire an employee and give them access to your systems? If the AI seems capable of avoiding harm and motivated to avoid it, then the risk of giving it access is probably not greater than the expected benefit. Employees are also trying to maximize paperclips in a sense: they want to make as much money as possible. So in that sense it seems that AI is actually more aligned with my goals than a potential employee.

johndough 14 hours ago | parent | next [-]

I do not believe that LLMs fear punishment like human employees do.

devolving-dev 13 hours ago | parent [-]

Whether driven by fear or by their model weights or whatever, I don't think the likelihood of an AI agent, at least a current one like Claude or Codex, acting maliciously to harm my systems is much different from the risk of a human employee doing so. And I think this is the philosophical difference: those who embrace the agents view them as akin to humans, while those who sandbox them view them as akin to computer viruses to be studied in a sandbox. It seems to me that the human analogy is more accurate, but I can see arguments for the other position.

snowmobile 11 hours ago | parent [-]

Sure, current agents are harmless, but that's due to their low capability, not due to their alignment with human goals. Can you explain why you'd view them as more similar to humans than to computer viruses?

devolving-dev 9 hours ago | parent [-]

It's just my personal experience: I ask AI to help me and it seems to do its best. Sometimes it fails because it's incapable. It's similar to an employee in that regard. Whereas when I install a computer virus, it instantly tries to do malicious things to my computer, like steal my money or lock my files or whatever, and it certainly doesn't try to help me with my tasks. So that's the angle I'm looking at it from. Maybe another good example would be to compare it to some other type of useful software, like a web browser. The browser might contain malicious code and stuff, but I'm not going to read through all of the source code. I haven't even checked whether other people have audited it. I just feel like the risk of Chrome or Firefox messing with my computer is pretty low based on my experience and what people are telling me, so I install it on my computer and give it the necessary permissions.

snowmobile 8 hours ago | parent [-]

Sure, it's certainly closer to a browser than a virus. But it's pretty far from a human, and comparing it to one is dangerous in my opinion. Maybe it's similar to a dog. Not in the sense of moral value, but in the sense of being an entity (or something resembling one, at least) with its own unknowable motivations. I think that analogy fits my viewpoint at least: members of the public would be justifiably upset if you let your untrained dog walk around without a leash.

snowmobile 11 hours ago | parent | prev [-]

An AI has no concept of human life nor any morals. Sure, it may "act" like it does, but trying to reason about its "motivations" is like reasoning about the motivations of smallpox. Humans want to make money, but most people only want that in order to provide a stable life for their family. And they certainly wouldn't commit mass murder for a billion dollars, while an AGI is capable of that.

> So in that sense it seems that AI is actually more aligned with my goals than a potential employee.

It may seem like that, but I recommend reading up on the different kinds of misalignment in AI safety.

andai 13 hours ago | parent | prev | next [-]

Try asking the latest Claude models about self-replicating software and see what happens...

(GPT recently changed its attitude on this subject too, which is very interesting.)

The most interesting part is that you will be given the option to downgrade the conversation to an older model, implying that there was a step change in capability on this front in recent months.

snowmobile 7 hours ago | parent [-]

I suppose that returns some guardrail text about how it's not allowed to talk about it? Meanwhile we see examples of it accidentally deleting files, writing insecure code and whatnot. I'm more worried about a supposedly "well-meaning" model doing something bad simply because it has no real way to judge the morality of its actions. Playing whack-a-mole with the flavor-of-the-day "unsafe" text string will not change that.

wilsonnb3 6 hours ago | parent | prev | next [-]

Programs can't want things; it's no different from running any other program as your user.

snowmobile 6 hours ago | parent [-]

It's different in the sense that LLMs are unpredictable: if you let them take arbitrary actions, you don't know what's gonna happen, unlike with most other programs. A random number generator doesn't "want" anything either, but putting it in control of steering my car is a bad idea.

theptip 13 hours ago | parent | prev | next [-]

The point of TFA is that you are not letting it do whatever it wants, you are restricting it to just the subset of files and capabilities that you mount on the VM.

snowmobile 11 hours ago | parent [-]

Sure, and right now they aren't very capable, so it's fine. But I'm interested in the mindset going forward. I've read a few stories about people handling radioactive materials at home; they usually explain the precautions they take, but many would still condemn them for taking an unnecessary risk. Compare it to road racing, whose advocates usually claim it poses no danger to the general public.

deegles 14 hours ago | parent | prev [-]

I run mine in a Docker container and they get read-only access to most things.
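
Something like this, roughly (the image name and mount path are placeholders, and you'd probably drop --network=none if the agent needs to reach an API):

    docker run --rm -it --read-only --network=none --tmpfs /tmp -v "$PWD":/work:ro your-agent-image

The :ro suffix makes the bind mount read-only, --read-only does the same for the container's own filesystem, and --tmpfs gives it a scratch space it's allowed to write to.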