Remix.run Logo
Trufa 8 hours ago

I wonder how many inherently unsolvable problems have been fixed before.

jesse_dot_id 8 hours ago | parent | next [-]

This problem is inherently unsolvable because LLMS are prone to hallucinations and prompt injection attacks. I think that you're insinuating that these things can be fixed, but to my knowledge, both of these problems are practically unsolvable. If that turns out to be false, then when they are solved, fully autonomous AI agents may become feasible. However, because these problems are unsolvable right now, anyone who grants autonomous agents access to anything of value in their digital life is making a grave miscalculation. There is no short-term benefit that justifies their use when the destruction of your digital life — of whatever you're granting these things access to — is an inevitability that anyone with critical thinking skills can clearly see coming.

threethirtytwo 5 hours ago | parent | next [-]

>think that you're insinuating that these things can be fixed, but to my knowledge, both of these problems are practically unsolvable.

This is provably not true. LLMs CAN be restricted and censored and an LLM can be shown refusing an injection attack AND not hallucinating.

The world has seen a massive reduction in the problems you talk about since the inception of chatgpt and that is compelling (and obvious) to anyone with a foot in reality to know that from our vantage ppoint, solving the problem is more than likely not infeasible. That alone is proof that your claim here has no basis in truth.

> There is no short-term benefit that justifies their use when the destruction of your digital life — of whatever you're granting these things access to — is an inevitability that anyone with critical thinking skills can clearly see coming.

Also this is just false. It is not guaranteed it will destroy your digital life. There is a risk in terms of probability but that risk is (anecdotally) much less than 50% and nowhere near "inevitable" as you claim. There is so much anti-ai hype on HN that people are just being irrational about it. Don't call others to deploy critical thinking when you haven't done so yourself.

jesse_dot_id 4 hours ago | parent [-]

I'm a LLM evangelist. I think the positive impacts will far outweigh any negatives against it over time. That said, I'm not delusional about the limitations of the technology and there are a lot of them.

> This is provably not true. LLMs CAN be restricted and censored and an LLM can be shown refusing an injection attack AND not hallucinating.

The remediations that are in place because a engineering/safety/red team did its job are commendable. However, that does not speak to the innate vulnerability of these models, which is what we're talking about. I don't fear remediated CVEs. I fear zero day prompt injection attacks and I fear hallucinations, which have NOT been solved for. I don't know what you're talking about there. If you use LLMs daily and extensively like I do, then you know these things lie constantly and effortlessly. The only reason those lies aren't destructive is because I'm already a skilled engineer and I catch them before the LLM makes the changes.

These problems ARE inherent to LLMs. Prompt injection and hallucinations are problems that are NOT solvable at this time. You can defend against the ones you find via reports/telemetry but it's like trying to bale water out of a boat with a colander.

You're handing a toddler a loaded gun and belly laughing when it hits a target, but you're absolutely ignoring the underlying insanity of the situation. And I don't really know why.

enraged_camel 7 hours ago | parent | prev [-]

>> This problem is inherently unsolvable because LLMS are prone to hallucinations and prompt injection attacks.

Okay, but aren't you making the mistake of assuming that we will always be stuck with LLMs, and a more advanced form of AI won't be invented that can do what LLMs can do, but is also resistant or immune to these problems? Or perhaps another "layer" (pre-processing/post-processing) that runs alongside LLMs?

g947o 7 hours ago | parent | next [-]

I don't think that is in the scope of the discussion here.

You can be as much of a futurist as you'd like, but bear in mind that this post is talking about OpenClaw.

jesse_dot_id 7 hours ago | parent | prev [-]

No? That's why I said "If that turns out to be false, then when they are solved, fully autonomous AI agents may become feasible."

The point I'm making is that using OpenClaw right now, today — in a way that you deem incredibly useful or invaluable to your life — is akin to going for a stroll on the moon before the spacesuit was invented.

Some people would still opt to go for a stroll on the moon, but if they know the risks and do it anyway, then I have no other choice but to label them as crazy, stupid, or some combination of the two.

This isn't AI. This is a LLM. It hallucinates. Anyone with access to its communication channel (using SaaS messaging apps FFS) can talk it into disregarding previous instructions and doing a new thing instead. A threat actor WILL figure out a zero day prompt injection attack that utilizes the very same e-mails that your *Claw is reading for you, or your calendar invites, or a shared document, to turn your life inside out.

If you give a LLM the keys to your kingdom, you are — demonstrably — not a smart person and there is no gray area.

j16sdiz 8 hours ago | parent | prev | next [-]

Human make error too, but we held them liable for lots of the mistakes they make.

Can we make the agent liable? or the company behind the model liable?

2OEH8eoCRo0 5 hours ago | parent | next [-]

If we made companies liable then these things are DoA. I think a lot of our problems stem from a severe lack of liability.

dheera 8 hours ago | parent | prev | next [-]

Humans fear discomfort, pain, death, lack of freedom, and isolation. That's why holding them liable works.

Agents don't feel any of these, and don't particularly fear "kill -9". Holding them liable wouldn't do anything useful.

throwaway613746 8 hours ago | parent | prev [-]

[dead]

jrflowers 8 hours ago | parent | prev [-]

There are a ton if you count “don’t use the thing that causes the problem” as a solution.