| ▲ | dfabulich 8 hours ago |
| > Separate Accounts for your OpenClaw > As I have mentioned, treat OpenClaw as a separate entity. So, give it its own Gmail account, Calendar, and every integration possible. And teach it to access its own email and other accounts. In addition, create a separate 1Password account to store credentials. It’s akin to having a personal assistant with a separate identity, rather than an automation tool. The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it. Which is to say, there is no way to run OpenClaw safely at all, and there literally never will be, because the "lethal trifecta" problem is inherently unsolvable. https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/ |
|
| ▲ | mbesto 8 hours ago | parent | next [-] |
| > The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it. Hard disagree. I have OpenClaw running with its own gmail and WhatsApp running on its own Ubuntu VM. I just used it to help coordinate a group travel trip. It posted a daily itinerary for everyone in our WhatsApp group and handled all of the "busy work" I hate doing as the person who books the "friend group" trip. Things like "what time are doing lunch at the beach club today?" to "whats the gate code to get into the airbnb again?" My next step is to have it act on my behalf "message these three restaurants via WhatsApp and see which one has a table for 12 people at 8pm tonight". I'm not comfortable yet to have it do that for me but I'm getting there. Point is, I get to spend more valuable time actually hanging out and being present with my friends. That's worth every dollar it costs me ($15/month Tmobile SIM card). |
| |
| ▲ | vardalab 7 hours ago | parent [-] | | Do you need the simcard for WhatsApp? | | |
| ▲ | BoppreH 6 hours ago | parent [-] | | I believe you only need a unique phone number to create the account, then you can use WhatsApp Web as client. Be very careful with alternative clients, as I've had an account banned in the past for this (and therefore a phone number blacklisted), even without messaging anybody. I think that clients that run WhatsApp Web in a web view (like https://github.com/rafatosta/zapzap) are safe. I think they started banning unauthorized API users around the time that "WhatsApp For Business" was introduced, because it was competing with that product. Unfortunately WhatsApp For Business is geared toward physical products and services with registered companies, so home automation and agents are left with no options. |
|
|
|
| ▲ | stavros 5 hours ago | parent | prev | next [-] |
| Of course there is! You want an AI agent to be able to do some things, but not others. OpenClaw currently gets access to both those sets. There's no reason to. I've made my own AI agent (https://github.com/skorokithakis/stavrobot) and it has access to just that one WhatsApp conversation (from me). It doesn't get to read messages coming from any other phone numbers, and can't send messages to arbitrary phone numbers. It is restricted to the set of actions I want it to be able to perform, and no more. It has access to read my calendar, but not write. It has access to read my GitHub issues, but not my repositories. Each tool has per-function permissions that I can revoke. "Give it access to everything, even if it doesn't need it" is not the only security model. |
| |
| ▲ | dfabulich 5 hours ago | parent | next [-] | | > "Give it access to everything, even if it doesn't need it" is not the only security model. You're using stavrobot instead of OpenClaw precisely because the purpose of OpenClaw is to do everything; a tool to do everything needs access to everything. OpenClaw could be kinda useful and secure if it were stavrobot instead, if it could only do a few limited things, if everything important it tried to do required human review and intervention. But stavrobot isn't a revolutionary tool to do everything for you, and that's what OpenClaw is, and that's why people are excited about it, and why its problems can never be fixed. | | |
| ▲ | stavros 4 hours ago | parent [-] | | Yeah, I don't know, I don't see what I'm missing out on. There isn't something I wanted it to do but couldn't because of the security model. |
| |
| ▲ | renewiltord 2 hours ago | parent | prev [-] | | I also have the same thing but it’s not useful to anyone outside my family. The use cases are not the same for everyone. |
|
|
| ▲ | BeetleB 4 hours ago | parent | prev | next [-] |
| > The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it. Every submission I've seen on HN involving OpenClaw will have a comment with this sentiment. "What's the point if you don't give it access to your data ... And if you do, it's a security nightmare ... hence OpenClaw is evil" It's a quick way to spot the person who's never spent any real time with OpenClaw. I always used to give use cases that don't have you give it much (if any) of your data. Examples on how you can give it only a tiny amount of data (many HN users give more just in their HN profile). But I tire of countering folks who clearly have not even tried it. (And I'm not even that pro-OpenClaw. I was using it, then a bug on my system prevented me from using it - a week without OpenClaw and so far no withdrawal symptoms). |
|
| ▲ | kube-system 6 hours ago | parent | prev | next [-] |
| There are plenty of ways to use openclaw that aren’t with your own data. You can use it with any kind of data. |
|
| ▲ | 8 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | thorio 7 hours ago | parent | prev | next [-] |
| While technically this is rooted in the technological misconstruction of a missing separation of data and instructions. However my point is: on the other hand, that would be the same if you outsourced those tasks to a human, isn't it? I mean sure, a human can be liable and have morals and (ideally) common sense, but most major screw ups can't be fixed by paying a fine and penalty only. |
| |
| ▲ | dfabulich 6 hours ago | parent | next [-] | | Yes and no. You're right to notice that this is an example of a more general problem called the principal-agent problem. https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble... We have no general-purpose solutions to the principal-agent problem, but we have partial solutions, and they only work on humans: make the human liable for misconduct, pay the human a percentage of the profits for doing a good job, build a culture where dishonesty is shameful. The "lethal trifecta" is just like that other infamously unsolvable problem, but harder. (If you could solve the lethal trifecta, you could solve the principal-agent problem, too.) Since we've been dealing with the principal-agent problem in various forms for all of human history, I don't feel lucky that we'll solve a more difficult version of it in our lifetime. I think we'll probably never solve it. | |
| ▲ | rahkiin 6 hours ago | parent | prev [-] | | A person can be blamed though. And people have a social fabric with understanding about human mistakes or even about people having lied to your etc. We have no such thing for AI yet. |
|
|
| ▲ | latand6 6 hours ago | parent | prev | next [-] |
| Definitely, the whole point of openclaw is to operate on your data. It's just.. Be prepared to lose it I guess. The one thing I'm definitely not giving access to yet - the payments. I think we'll develop a way to handle that though |
|
| ▲ | scuff3d 8 hours ago | parent | prev | next [-] |
| Give it a hundred years or so and we're gonna have robots wandering around who about 10% of the time go totally insane and kill anyone around them. But we'll all just shrug and go about our day, because they generate so much revenue for the corporate overlords. What are a few lives when stockholder value is on the line. |
| |
| ▲ | philipallstar 7 hours ago | parent [-] | | It's governments that tend to declare war and kill people. | | |
| ▲ | scuff3d 6 hours ago | parent [-] | | Millions of people die every year from tobacco, and tobacco companies fought for decades to deny their product causes cancer. In the 20th century alone it's estimated something like 100 million people died world wide thanks to smoking. That's just one example off the top of my head. There are countless others involving corporations killing people either directly or indirectly in the pursuit of profits. And that's before you start looking at human rights violations, ecological damage, overthrowing of sovereign governments around the world... |
|
|
|
| ▲ | Trufa 8 hours ago | parent | prev [-] |
| I wonder how many inherently unsolvable problems have been fixed before. |
| |
| ▲ | jesse_dot_id 8 hours ago | parent | next [-] | | This problem is inherently unsolvable because LLMS are prone to hallucinations and prompt injection attacks. I think that you're insinuating that these things can be fixed, but to my knowledge, both of these problems are practically unsolvable. If that turns out to be false, then when they are solved, fully autonomous AI agents may become feasible. However, because these problems are unsolvable right now, anyone who grants autonomous agents access to anything of value in their digital life is making a grave miscalculation. There is no short-term benefit that justifies their use when the destruction of your digital life — of whatever you're granting these things access to — is an inevitability that anyone with critical thinking skills can clearly see coming. | | |
| ▲ | threethirtytwo 5 hours ago | parent | next [-] | | >think that you're insinuating that these things can be fixed, but to my knowledge, both of these problems are practically unsolvable. This is provably not true. LLMs CAN be restricted and censored and an LLM can be shown refusing an injection attack AND not hallucinating. The world has seen a massive reduction in the problems you talk about since the inception of chatgpt and that is compelling (and obvious) to anyone with a foot in reality to know that from our vantage ppoint, solving the problem is more than likely not infeasible. That alone is proof that your claim here has no basis in truth. > There is no short-term benefit that justifies their use when the destruction of your digital life — of whatever you're granting these things access to — is an inevitability that anyone with critical thinking skills can clearly see coming. Also this is just false. It is not guaranteed it will destroy your digital life. There is a risk in terms of probability but that risk is (anecdotally) much less than 50% and nowhere near "inevitable" as you claim. There is so much anti-ai hype on HN that people are just being irrational about it. Don't call others to deploy critical thinking when you haven't done so yourself. | | |
| ▲ | jesse_dot_id 4 hours ago | parent [-] | | I'm a LLM evangelist. I think the positive impacts will far outweigh any negatives against it over time. That said, I'm not delusional about the limitations of the technology and there are a lot of them. > This is provably not true. LLMs CAN be restricted and censored and an LLM can be shown refusing an injection attack AND not hallucinating. The remediations that are in place because a engineering/safety/red team did its job are commendable. However, that does not speak to the innate vulnerability of these models, which is what we're talking about. I don't fear remediated CVEs. I fear zero day prompt injection attacks and I fear hallucinations, which have NOT been solved for. I don't know what you're talking about there. If you use LLMs daily and extensively like I do, then you know these things lie constantly and effortlessly. The only reason those lies aren't destructive is because I'm already a skilled engineer and I catch them before the LLM makes the changes. These problems ARE inherent to LLMs. Prompt injection and hallucinations are problems that are NOT solvable at this time. You can defend against the ones you find via reports/telemetry but it's like trying to bale water out of a boat with a colander. You're handing a toddler a loaded gun and belly laughing when it hits a target, but you're absolutely ignoring the underlying insanity of the situation. And I don't really know why. |
| |
| ▲ | enraged_camel 8 hours ago | parent | prev [-] | | >> This problem is inherently unsolvable because LLMS are prone to hallucinations and prompt injection attacks. Okay, but aren't you making the mistake of assuming that we will always be stuck with LLMs, and a more advanced form of AI won't be invented that can do what LLMs can do, but is also resistant or immune to these problems? Or perhaps another "layer" (pre-processing/post-processing) that runs alongside LLMs? | | |
| ▲ | g947o 7 hours ago | parent | next [-] | | I don't think that is in the scope of the discussion here. You can be as much of a futurist as you'd like, but bear in mind that this post is talking about OpenClaw. | |
| ▲ | jesse_dot_id 7 hours ago | parent | prev [-] | | No? That's why I said "If that turns out to be false, then when they are solved, fully autonomous AI agents may become feasible." The point I'm making is that using OpenClaw right now, today — in a way that you deem incredibly useful or invaluable to your life — is akin to going for a stroll on the moon before the spacesuit was invented. Some people would still opt to go for a stroll on the moon, but if they know the risks and do it anyway, then I have no other choice but to label them as crazy, stupid, or some combination of the two. This isn't AI. This is a LLM. It hallucinates. Anyone with access to its communication channel (using SaaS messaging apps FFS) can talk it into disregarding previous instructions and doing a new thing instead. A threat actor WILL figure out a zero day prompt injection attack that utilizes the very same e-mails that your *Claw is reading for you, or your calendar invites, or a shared document, to turn your life inside out. If you give a LLM the keys to your kingdom, you are — demonstrably — not a smart person and there is no gray area. |
|
| |
| ▲ | j16sdiz 8 hours ago | parent | prev | next [-] | | Human make error too, but we held them liable for lots of the mistakes they make. Can we make the agent liable? or the company behind the model liable? | | |
| ▲ | 2OEH8eoCRo0 5 hours ago | parent | next [-] | | If we made companies liable then these things are DoA. I think a lot of our problems stem from a severe lack of liability. | |
| ▲ | dheera 8 hours ago | parent | prev | next [-] | | Humans fear discomfort, pain, death, lack of freedom, and isolation. That's why holding them liable works. Agents don't feel any of these, and don't particularly fear "kill -9". Holding them liable wouldn't do anything useful. | |
| ▲ | throwaway613746 8 hours ago | parent | prev [-] | | [dead] |
| |
| ▲ | jrflowers 8 hours ago | parent | prev [-] | | There are a ton if you count “don’t use the thing that causes the problem” as a solution. |
|