| ▲ | throwaway613746 14 hours ago |
| [flagged] |
|
| ▲ | bee_rider 14 hours ago | parent | next [-] |
| It is an absurd enough project that nobody really expects it to be secure, right? It is some wild niche thing for people who like to play with new kinds of programs. This is not a train that Apple has missed; this is a bunch of people who've tied, nailed, tacked, and taped their unicycles and skateboards together. Of course, every cool project starts like that, but nobody is selling tickets for that ride. |
| |
| ▲ | DrewADesign 13 hours ago | parent [-] |
| I think a lot of people have been spoiled (beneficially) by using large, professionally run SaaS services where your only serious security concerns were keeping your credentials secret and mitigating the downstream effects of data breaches. I could see someone having a fundamentally different understanding of security after only experiencing that. What people are talking about doing with OpenClaw, I find absolutely insane. |
| ▲ | dmix 13 hours ago | parent [-] |
| > What people are talking about doing with OpenClaw I find absolutely insane. Based on their homepage, the project is two months old, and the guy described it as something he "hacked together over a weekend" [1] and published on GitHub. So this is very much the Raspberry Pi crowd coming up with crazy ideas; most of them probably don't work well, but the potential excites them enough to dabble in risky areas. [1] https://openclaw.ai/blog/introducing-openclaw |
|
|
|
| ▲ | elictronic 14 hours ago | parent | prev | next [-] |
| Apple had problems with just the chatbot side of LLMs because they couldn't fully control the messaging. Add in a small helping of losing your customers' entire net worth, and yeah. These other posters have no idea what they are talking about. |
| |
| ▲ | joshstrange 14 hours ago | parent [-] |
| Exactly. Apple is entirely too conservative to shine with LLMs because of their uncontrollability. Apple likes its control and its version of "protecting people" (which I don't fully agree with), which amounts to "We are way too scared to expose our customers to something we can't control or stop from doing/saying something bad!" That may end up being prudent. They won't come close to doing something like OpenClaw for at least a few more years, when the tech is (hopefully) safer and/or the Overton window has shifted. |
| ▲ | FireBeyond 14 hours ago | parent [-] |
| And yet they'll push out AI-driven "message summaries" that are horrifically bad and inaccurate, often summarizing the intent of a message as the complete opposite of the full text, up to and including "wants to end relationship; will see you later"? |
| ▲ | fennecbutt 13 hours ago | parent [-] |
| Was about to point out the same thing. Apple's desperate rush to market meant summarizing news headlines badly and sometimes just plain hallucinating, causing many public figures to react when they ended up the targets of such mishaps. |
|
|
|
|
| ▲ | gordonhart 14 hours ago | parent | prev [-] |
| Clawdbot/Moltbot/OpenClaw is so far from figuring out the "trust" element for agents that it's baffling the OP even chose to bring it up in his argument. |