Remix.run Logo
IcyWindows 14 hours ago

According to https://1password.com/blog/from-magic-to-malware-how-opencla..., The top skill is/was malware.

It's obviously broken, so no, Apple Intelligence should not have been this.

yoyohello13 13 hours ago | parent | next [-]

I feel like I’m watching group psychosis where people are just following each other off a cliff. I think the promise of AI and the potential money involved override all self preservation instincts in some people.

It would be fine if I could just ignore it, but they are infecting the entire industry.

SCdF 8 hours ago | parent | next [-]

You need to take every comment about AI and mentally put a little bracketed note beside each one noting technical competence.

AI is basically an software development eternal september: it is by definition allowing a bunch of people who are not competent enough to build software without AI to build it. This is, in many ways, a good thing!

The bad thing is that there are a lot of comments and hype that superficially sound like they are coming from your experienced peers being turned to the light, but are actually from people who are not historically your peers, who are now coming into your spaces with enthusiasm for how they got here.

Like on the topic of this article[0], it would be deranged for Apple (or any company with a registered entity that could be sued) to ship an OpenClaw equivalent. It is, and forever will be[1] a massive footgun that you would not want to be legally responsible for people using safely. Apple especially: a company who proudly cares about your privacy and data safety? Anyone with the kind of technical knowledge you'd expect around HN would know that them moving first on this would be bonkers.

But here we are :-)

[0] OP's article is written by someone who wrote code for a few years nearly 20 years ago.

[1] while LLMs are the underlying technology https://simonwillison.net/tags/lethal-trifecta/

an hour ago | parent [-]
[deleted]
dbbk 5 hours ago | parent | prev | next [-]

I don’t think it’s a group psychosis. I think it’s just the natural evolution of junior engineers. They’ve always lacked critical thinking and just jumped on whatever’s hyped on Twitter.

acdha 2 hours ago | parent | next [-]

It’s a group psychosis fueled by enormous financial pressure: every big tech company has been telling people that they’re getting fired as soon as possible unless they’re one of the few people who can operate these tools. Of course that’s going to have a bunch of people saying “Pick me! Pick me!” — especially since SV has become increasingly untethered from questions like whether something is profitably benefiting customers. With the focus on juicing share prices before moving to the distilled fiat pricing of cryptocurrency, we have at least two generations of tech workers being told that the path to phenomenal wealth comes from talking up your project until you find a rich buyer.

Sharlin 2 hours ago | parent | prev [-]

I’d really love to see some data on the age and/or experience distribution of these breathless "AI everywhere" folks. Are they mostly just young and easily influenced? Not analytic enough? Not critical-thinking enough? Not cynical enough?

csomar 12 hours ago | parent | prev [-]

Just like crypto this will also pass.

ksynwa 6 hours ago | parent [-]

Crypto hasn't really passed. It's just not talked about on HN anymore. It is still a massive industry but they have dropped the rhetoric of democratising banking and instead let you use cryptocurrency to do things like betting on US invading Venezuela and so on.

Sharlin 2 hours ago | parent [-]

By "passing" the GP presumably meant that the fad phase has passed. The hype cycle has reached the natural plateau of "I guess this has some use cases" (though in this case mostly less-than-scrupulous ones).

KaiserPro 6 hours ago | parent | prev | next [-]

This is the thing that winds me the fuck up.

The reason why Apple intelligence is shit is not because Apple's AI is particularly bad (Hello CoPilot) its because AI gives a really bad user experience.

When we go and talk to openAI/claude we know its going to fuck up, and we either make our peace with that, or just not care.

But, when I open my phone to take a picture, I don't want a 1/12 chance of it just refusing to do that and phoning my wife instead.

Forcing AI into thing where we are used to a specific predictable action is bad for UX.

Sure you can argue "oh but the summaries were bad" Yes, of course they are. its a tiny model that runs on your phone with fuck all context.

Its pretty impressive that they were as good as they were. Its even more impressive that they let them out the door knowing that it would fuckup like that.

andix 4 hours ago | parent | prev | next [-]

OpenClaw is not broken, it is just not designed to be secure in the first place.

It's more like a tech demo to show what's possible. But also to show where the limits are. Look at it as modern art, like an episode of Black Mirror. It's a window to the future. But it also highlights all the security issues associated with AI.

And that's why you probably shouldn't use OpenClaw on your data or your PC.

janalsncm 13 hours ago | parent | prev [-]

I had a dark thought today, that AI agents are going to make scam factory jobs obsolete. I don’t think this will decrease the number of forced labor kidnappings though, since there are many things AI agents will not be good at.