| ▲ | I don't want AI agents controlling my laptop (sophiebits.com) |
| 75 points by Bogdanp 2 days ago | 42 comments |
| |
|
| ▲ | LudwigNagasena 2 days ago | parent | next [-] |
I want AI agents controlling my laptop. And not only AI: there is lots of other cool software I want on my laptop too. The problem is not AI, the problem is the awful security model that is the foundation of every modern operating system. |
| |
| ▲ | yupyupyups 2 days ago | parent | next [-] | | I think most people would find automatic feeding of arbitrary private information from their devices to an external server to be problematic. If the AI is running offline and is non-destructive/safe then that's a different story. | |
| ▲ | DauntingPear7 a day ago | parent | prev [-] | | What would be a better security foundation? | | |
▲ | LudwigNagasena a day ago | parent | next [-] | | Mandatory access controls, fine-grained capabilities, temporary capability grants, risk scoring, progressive disclosure, high-level intents that encapsulate permissions, and most importantly, an audit system. We have decades of hindsight from building highly distributed, low-trust systems. We can do better than `curl -fsSL https://remote_install_script/ | sudo sh`. | |
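As a minimal sketch of the "audit before you run it" point (the URL and checksum below are placeholders, not anything from the comment): instead of piping a remote script straight into a root shell, fetch it, verify it against a published hash, read it, and only then run it with the least privilege it needs.

```
# placeholders: install.example.com and <published-sha256>
curl -fsSLo install.sh https://install.example.com/install.sh
echo "<published-sha256>  install.sh" | sha256sum -c -
less install.sh      # actually read what you are about to run
sh ./install.sh      # unprivileged; escalate only the steps that need it
```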
▲ | beefnugs a day ago | parent | prev [-] | | Qubes OS is too complicated for most people |
|
|
|
| ▲ | spaceman_2020 2 days ago | parent | prev | next [-] |
I don't understand the author's complaint - are the AI agents forcibly installing themselves on your computer? Are they shipping these agents without settings to change permission-seeking behavior? This is just a rant about something you absolutely don't have to do. |
| |
| ▲ | resonious 2 days ago | parent | next [-] | | Windows is heading in that direction. | | |
| ▲ | WD-42 2 days ago | parent | next [-] | | Yup and once it arrives there the MS fans will be in here telling everyone to “just use the server edition” like they do now to anyone who says they don’t like ads and spyware in their OS today. | |
| ▲ | 2 days ago | parent | prev [-] | | [deleted] |
| |
▲ | 000ooo000 a day ago | parent | prev | next [-] | | >Microsoft re-enables disabled abc after xyz >Microsoft launches shitfoo; users livid they can't disable it If I had a dollar for every time one of these headlines has scrolled across my screen... Just recently, Copilot drew the ire of devs who don't want it involved with their repos. https://www.theregister.com/2025/09/05/github_copilot_compla... If you think MS isn't interested in shoving all this down your throat on Windows at the earliest opportunity, then I dunno what to tell you. But people are allowed to have opinions on things and TFA is just that. | |
| ▲ | realz a day ago | parent | prev [-] | | They say during the gold rush, the people selling shovels made more money than the miners themselves. AI has a similar pattern. Those profiting from AI hype it relentlessly. Meanwhile the butthurts with nothing to sell become the loudest critics, just to stay relevant. We get it guys. AI sucks and you don't like it. You need not turn yourself into a parrot. Nobody's in the market for your outrage. |
|
|
| ▲ | yeputons 2 days ago | parent | prev | next [-] |
> Sure, there are some protections like “you can’t record the screen without the user granting explicit permission” Are there? Any app on Windows can screenshot and access the camera, microphone, whatever. Aren't permissions only for Windows Store-style apps? |
|
| ▲ | theden 2 days ago | parent | prev | next [-] |
I must be out of the loop; I didn't know people were actually doing this in their workflow. When I do use LLMs, it's in a separate app, where I can cherry-pick what I input and output at my own pace. Maybe I'm naive, but the ever-increasing tradeoffs for even more velocity do not seem worth it. |
| |
| ▲ | WD-42 2 days ago | parent | next [-] | | Don’t worry, the only people that are doing this are creating absolute dumpster fires. | |
▲ | pikseladam a day ago | parent | prev [-] | | Same. I think this is the way for most good engineers. |
|
|
| ▲ | manofmanysmiles 2 days ago | parent | prev | next [-] |
What I've been doing is running agents inside a locked-down k8s environment. Agents are spun up by an operator and have access to a single namespace. It's not perfect, as container escape is not entirely unlikely. I am working on a future version where all agents run inside Firecracker VMs, with all actions logged externally. With Kubernetes it's like having a bunch of virtual employees making git commits, firing up namespaced ephemeral resources and collaborating like "remote" employees. It's certainly fun, but I haven't quite polished it to the point where I'd recommend this architecture to anyone. |
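For concreteness, a rough sketch of the namespace-per-agent idea described above (the namespace, account, and role names here are made up, and the verb/resource list is illustrative): each agent gets a service account whose RBAC role only exists inside its own namespace.

```
kubectl create namespace agent-7
kubectl create serviceaccount agent -n agent-7
kubectl create role agent-role -n agent-7 \
  --verb=get,list,watch,create,delete --resource=pods,jobs,configmaps
kubectl create rolebinding agent-rb -n agent-7 \
  --role=agent-role --serviceaccount=agent-7:agent
```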
| |
▲ | throwaway6977 a day ago | parent [-] | | I just spent a lot of yesterday tweaking a Docker image with xfce and VS Code so I can let codex go full-access mode, without too much worry, in a throwaway sandbox. The agent runs similarly namespace-constrained and without sudo. I think it's a relatively safe middle ground: do you really think container escape is still a big deal here? Finally getting this setup also allowed me to very quickly troubleshoot what was breaking my build in the codex cloud-hosted container, which obviously has even less risk attached. Now I'm juggling and strategizing branches like coding is an RTS game... and it feels like a superpower. It's almost like unlocking an undiscovered tech tree. |
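A sketch of that kind of throwaway-sandbox invocation, assuming a locally built image (the image name, user id, and volume name are illustrative, not from the comment):

```
# run as a non-root user, drop capabilities, keep work on a named volume,
# and discard the container on exit
docker run --rm -it \
  --user 1000:1000 \
  --cap-drop=ALL \
  --security-opt no-new-privileges:true \
  -v codex-work:/home/dev/work \
  my-codex-sandbox:latest
```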
|
|
| ▲ | throwmeaway222 2 days ago | parent | prev | next [-] |
I think we're at 3rd-wave AI and it's got an RPM of maybe 20-40. In a year or two we'll be at 900 RPM and having the user give permissions won't really be feasible. So perhaps a secondary AI prompt will "validate" the request for you - we're only going in this direction. Sure, comment on the time we're at, but it won't be relevant for a while. |
|
| ▲ | cadamsdotcom 2 days ago | parent | prev | next [-] |
| It's really a question of whether you want a deterministic machine or a probabilistic one. Depends on the use case, really. |
| |
| ▲ | jckahn 2 days ago | parent [-] | | What's the use case for a probabilistically controlled computer? | | |
|
|
| ▲ | hoppp 2 days ago | parent | prev | next [-] |
Me neither. I would maybe run them in a VM, but I don't use them at all right now. Would be cool to run them in FreeBSD jails. |
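For what it's worth, a minimal sketch of spinning up a throwaway FreeBSD jail for something like this (the path and names are made up, and it assumes a base system is already installed under /jails/agent):

```
# create a jail with no IPv4, its own hostname, and drop into a shell inside it
jail -c name=agent path=/jails/agent mount.devfs \
     host.hostname=agent ip4=disable command=/bin/sh
```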
|
| ▲ | dankwizard 2 days ago | parent | prev | next [-] |
I do, it's called embracing the future. Either get with it or get out of the game. If you aren't giving your AI untethered sudo access then honestly it's more of a reflection on you and your inability to accept change in the workplace. |
| |
▲ | grugagag 2 days ago | parent | next [-] | | Yeah, relinquish all control because... embracing the future. | |
| ▲ | 2 days ago | parent | next [-] | | [deleted] | |
| ▲ | LtWorf 2 days ago | parent | prev | next [-] | | I'm pretty sure he's sarcastic. | | |
| ▲ | thomasdziedzic 2 days ago | parent [-] | | What makes you pretty sure he's sarcastic? | | |
| ▲ | saagarjha 2 days ago | parent [-] | | I mean it's possibly Poe's Law but this is the amalgamated rhetoric of the most crazy people pushing this kind of thing |
|
| |
| ▲ | throwmeaway222 2 days ago | parent | prev [-] | | we already have - every time you get in a car you're elevating the chance of death considerably from before you hopped in. (seriously it's like 1000000x more dangerous than just sitting on a sofa) Granted it's a low chance, but it's also similarly low that your bank account will be drained to zero because you codex --yolo'd it. If that DOES happen to someone then yeah, I'd consider changing my behavior. For example there's no fucking way I would FSD in a Tesla. | | |
| |
| ▲ | sys_64738 2 days ago | parent | prev | next [-] | | Surely all your laptop's data should be in the cloud so giving AI access to that data is the way to go. | |
| ▲ | akomtu 2 days ago | parent | prev | next [-] | | I can imagine the same conversation 10 years later: "The productivity boost of AI implants is obvious by now, it gives at least +50 IQ points. Those stubborn employees should just yield and grant full control to their brains if they want to stay relevant." | |
| ▲ | pessimizer 2 days ago | parent | prev [-] | | It's even worse: it's a sign of insecurity and the lack of the ability to just trust and let go of control. Often related to malignant narcissism. I recommend SSRIs and inpatient therapy. You should probably give up custody of your children, too, unless you want them to grow up with the same weaknesses. | | |
| ▲ | autoexec 2 days ago | parent [-] | | Everyone should give up custody of their children to the state. Refusal to give your children to the state is a sign of insecurity and the lack of the ability to just trust and let go of control. You should really just give up all of your freedom. Refusal to give up your freedom is a sign of insecurity and the lack of the ability to just trust and let go of control. |
|
|
|
| ▲ | evgpbfhnr 2 days ago | parent | prev | next [-] |
bwrap. I don't run AI, but anything I don't fully trust 200% runs without access to my home, and, if it doesn't really need the internet, without internet access either.
bwrap commands can be a mouthful, so I suggest making a script for the things you commonly do, e.g. "run with this directory as $HOME" or "run with an empty home, keeping just this directory as is", with a couple of flags to enable networking or wayland/sound... (see the sketch below). Once you have this there really is no benefit to not sandboxing.
It's probably not as good as running in a full VM, but it's good enough for me. |
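As a rough sketch of such a wrapper, assuming a merged-/usr distro where /bin and /lib are symlinks into /usr (this is not the commenter's actual script):

```
#!/bin/sh
# run a command with a throwaway $HOME, a private /tmp, and no network
# (add --share-net after --unshare-all to allow networking)
exec bwrap \
  --ro-bind /usr /usr \
  --ro-bind /etc /etc \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --symlink usr/lib64 /lib64 \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --tmpfs "$HOME" \
  --unshare-all \
  --die-with-parent \
  "$@"
```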
|
| ▲ | dr_win 2 days ago | parent | prev | next [-] |
What about buying a dedicated machine for running agents? One MacBook for agents and one for personal/private work, plus maybe a good KVM switch or remote desktop. |
| |
▲ | statguy 2 days ago | parent [-] | | It doesn't work that way. The dedicated machine for running agents will have very limited utility because it will not have access to anything it needs, like your credit card to automatically purchase stuff on your behalf, etc. | |
| ▲ | tonypapousek 2 days ago | parent [-] | | > credit card to automatically purchase stuff on your behalf Why would anyone _want_ that? Or, let’s pretend for a moment they did, wouldn’t it make more sense to grant access to a purchasing account (e.g. Amazon) with payment info pre-linked? Especially given the “record absolutely everything for evidence” approach companies are taking, giving them auto access to payment info isn’t very smart. |
|
|
|
| ▲ | wallopinski 2 days ago | parent | prev | next [-] |
2004 me and my friends: "I don't want all my public information online." GenZ: publishes every possible detail on TikTok. In 20 years we've done a cultural 180 on privacy. I bet in 20 years Gen5 (three generations from now?) will be fine with AI agents running their lives. Meanwhile I'll be 80 and still not on social media, just message boards like HN. Using new frequent accounts and changing my writing style to defeat stylometrics (sorry dang). |
| |
▲ | autoexec 2 days ago | parent [-] | | > GenZ: publishes every possible detail on TikTok. In 20 years we've done a cultural 180 on privacy. The results of that have only proved you were right. I'll go on record now that the people who don't want corporate-controlled AI in their personal lives today are also going to be proven right when the next generation of suckers comes along and gives up what they had because a corporation told them to. |
|
|
| ▲ | perryizgr8 a day ago | parent | prev | next [-] |
I want AI agents controlling my laptop, my desktop and my phone. I'm tired of doing everything manually. These personal devices should be brimming with intelligence, anticipating my every move, offering to complete my tasks, touching up photos and videos automatically, having a perfect memory and awareness of my online and offline life. I think the real barrier right now is cost. But I can't wait to get to that future. Example: sometimes I start working on a thing on my laptop in the living room, then realise I would rather finish it on the desktop. My laptop has a camera, the desktop has a webcam, my phone has multiple cameras. An AI agent should be monitoring all these sensors and more, plus my laptop screen, and be able to deduce that I want to continue on the desktop. By the time I reach the desktop it should be awake, and in the same state I left things in on the laptop. |
|
| ▲ | jmclnx 2 days ago | parent | prev [-] |
>modern desktop operating systems are not really designed for strong security boundaries I agree, I do not want AI anywhere near my laptop. But there are operating systems that are not, and probably never will be, controlled by "AI". The quote above is curious; there are OSes with strong security. OpenBSD is touted as one, plus there are Linux and the other BSDs, which can be configured to be far more secure than the operating systems the article is referring to. |