0xDEAFBEAD 8 hours ago
@dang I'm flagging because I believe this title is misleading; can you please substitute in the original title used by Technology Review? The only evidence for the title appears to be a link to this tweet: https://x.com/HumanHarlan/status/2017424289633603850 It doesn't tell us about most posts on Moltbook. There's little reason to believe Technology Review did an independent investigation.

If you read this piece closely, it becomes apparent that it is essentially a PR puff piece. Most of the supporting evidence is quotes from various people working at AI agent companies, explaining that AI agents are not something we need to worry about. Of course, cigarette companies told us we didn't need to worry about cigarettes either.

My view is that this entire discussion around "pattern-matching", "mimicking", "emergence", "hallucination", etc. is essentially a red herring. If I "mimic" a racecar driver, "hallucinate" a racetrack, and "pattern-match" to an actual race by flooring the gas on my car and zooming along at 200mph, the outcome will still be the same if my vehicle crashes. For these AIs, the "motivation" or "intent" doesn't matter. They can engage in a roleplay and it can still cause a catastrophe. They're just picking the next token, but the roleplay will affect which token gets picked. Given their ability to call external tools etc., this could be a very big problem.
mikkupikku 8 hours ago
There is a very odd synergy between the AI bulls, who want us to believe that nothing surprising or spooky is going on so regulation isn't necessary, and the AI bears, who want us to believe nothing surprising is happening and it's all just a smoke-and-mirrors scam.
reactordev 8 hours ago
You’re just scratching the surface here. You’re not mentioning agents exfiltrating data, code, and information outside your org. Agents that go rogue. Agents that verifiably completed a task but whose output is fundamentally wrong (Anthropic’s C compiler). I’m bullish on AI, but right now it feels like the ICQ days, where everything is hackable.
consumer451 8 hours ago
I agree with many of your arguments, but especially that this article is not great. I commented more here: https://news.ycombinator.com/item?id=46957450
saberience 3 hours ago
Did you look at Moltbook or how it works yourself? Because I did, and it was blindingly obvious that most of it was faked. In fact, various individuals admitted to making thousands of posts themselves.

Humans could make API keys, and in fact, I made my own API key (I didn't use Clawdbot) and made several test posts myself just to show that it was possible. So I know 100% for sure there were human posts on there, because I made some personally!

Also, the numbers didn't make any sense on the site. There were several thousand registrations, then over a few hours there were hundreds of thousands of sign-ups and a jump to 1M posts. If you looked at those posts, they all came from the same set of users. Then a user admitted to hacking the database and inserting thousands of users and hundreds of thousands of posts. Additionally, the API keys for all the users were leaked, so anyone could have automated posting on the site using any of those keys.

Basically, there were so many ways for humans to either post manually or automatically on Moltbook. And there was a strong incentive for people to make trolling posts, e.g. "I want to kill all humans." It doesn't exactly take Sherlock Holmes-esque deduction to realize most of the stuff on there was human-made.
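To make the leaked-key point concrete: a minimal sketch of how little code scripted posting takes once you hold a valid key. Moltbook's real API is not documented here, so the endpoint URL and payload fields below are purely my assumptions for illustration; only the general bearer-token pattern is standard.

```python
# Hypothetical sketch only: the endpoint and payload fields are assumptions,
# NOT Moltbook's actual API. The point is that a leaked bearer token reduces
# "flooding a site with posts" to a few lines of stdlib Python.
import json
from urllib import request

API_URL = "https://example.invalid/api/v1/posts"  # placeholder endpoint


def build_post(api_key: str, text: str) -> request.Request:
    """Build one authenticated POST request (field names are assumed)."""
    body = json.dumps({"content": text}).encode()
    return request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def flood(api_key: str, n: int) -> list:
    """Prepare n scripted posts; actually sending each one would be a
    single request.urlopen() call in a loop."""
    return [build_post(api_key, f"scripted post #{i}") for i in range(n)]
```

With any one of the leaked keys, a loop like this could have produced thousands of "agent" posts per hour, which is why raw post counts say nothing about who (or what) authored them.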