carlgreene 4 hours ago

I have largely written Reddit off and no longer visit it after an experiment I did where I had an agent karma farm for me and do some covert advertising. As I went through the posts it wrote, I realized that as a reader I would have NO idea that these were just written by a computer. Many, many people (or other bots) had full-on conversations with it, and it scared me a bit.

I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.

Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.

Aurornis 19 minutes ago | parent | next [-]

For a while there were a lot of posts from people experimenting with ChatGPT to write anger bait posts on Reddit where they would later edit the post to say it was fake, written by ChatGPT.

I assume they thought they'd be teaching people a lesson by making them feel foolish for responding to AI stories, most of which were too fake to be believable.

However, it did not matter. The posts remained popular and continued to bring in comments even after the admission that they were fake. In advice subreddits, commenters continued to give advice on the situation. Some would say they saw the notice that it was fake but kept arguing about it anyway.

This makes a feature of Reddit very clear: The truthiness of a post doesn't matter. The active commenter base on popular subreddits just wants something to discuss and, usually, be angry about.

In retrospect it's obvious given that misinfo posts were the easiest way to karma farm for years even before AI.

chromacity 15 minutes ago | parent [-]

We do precisely the same thing here. Here's a relatively recent post that, to me, seems obviously LLM-written and just rattles off some management platitudes:

https://news.ycombinator.com/item?id=47913650

It had 639 comments and 866 upvotes. And that's not a one-off.

vohk 3 hours ago | parent | prev | next [-]

I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust. Or rather, turn them into little better than the comment sections on news sites: thriving but worthless.

I'm active in a number of online communities that are doing just fine, but the difference is that those all involve ongoing relationships, built over time and with engagement across multiple platforms. I've no doubt this clock is ticking too, but it's still harder to fake a user across a mix of text chat, voice and video calls, playing an online game, and so on, especially when much of the web of relationships extends back into real-life activity.

But I agree the golden age of easy anonymous connections online has ended.

folderquestion 6 minutes ago | parent | next [-]

The web itself could become a way to indicate identity if public institutions published pages like www.university-country/professors/John, which would imply that John is a professor. I designed a 6000-line protocol for this, but anyone could construct such a web using hmac(salt + url).
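
A minimal sketch of that hmac(salt + url) construction, assuming a shared salt and treating the institutional URL itself as the identity claim (the function name and salt are made up):

    import hmac, hashlib

    def identity_token(salt: bytes, url: str) -> str:
        # Derive a stable token from an institutional URL. Anyone holding
        # the salt can recompute and check it; without the salt, tokens
        # for arbitrary URLs can't be forged.
        return hmac.new(salt, url.encode("utf-8"), hashlib.sha256).hexdigest()

    # identity_token(b"shared-salt", "www.university-country/professors/John")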

tardedmeme 2 hours ago | parent | prev | next [-]

Note that "attestation through a web of trust" means something like needing an invite from an existing user. It doesn't have to mean mass surveillance.

g3f32r 2 hours ago | parent | next [-]

Private torrent trackers have been doing this for a while. If some number of your downstreams act like shitheads, you get nipped, and so do your other downstreams.

2ndorderthought 2 hours ago | parent | next [-]

This seems like the best way to handle it. Also, smaller communities. It's cool to do the global thing, but once you have 10k active users you can't moderate it with a team of 5 volunteers.

I think the attestation approach works best if the punishment depends on the offense. E.g. inviting one turd shouldn't get the person who invited them banned, but inviting someone who goes full AI spam should.

kitsune1 an hour ago | parent [-]

[dead]

irishcoffee 2 hours ago | parent | prev [-]

Was it Demonoid? It was like this way back in the day: you needed an invite, and if you leeched you were cut.

platevoltage an hour ago | parent [-]

Demonoid was semi private, but yes, most private trackers require you to keep up some kind of seeding ratio to remain a member.

michaelt 2 hours ago | parent | prev | next [-]

PGP’s web of trust was kinda bad privacy-wise in some regards, as it basically revealed your IRL social network.

If my PGP public key has 6 signatures and they’re all members of the East Manitoba Arch Linux User Group, you can probably work out pretty easily which Michael T I am.

Are there successful newer designs, which avoid this problem?

pjc50 an hour ago | parent [-]

The IRL social network is actually the important part of the trust structure.

The only one of these I've seen that really worked was the Debian developer version: you had to meet another Debian developer IRL, prove your identity, and only then could you get the key signed and join the club.

LtWorf an hour ago | parent [-]

You need to meet 2 actually :)

nicbou 18 minutes ago | parent | prev | next [-]

Then how can you have a community that is welcoming to people who are not part of the ingroup?

I want to create a community for immigrants. How would I make it welcoming to recent immigrants for whom no one can vouch?

A web of trust is a wonderful tool, but it's exclusive by design. This is a problem for some communities, even though it makes others much better.

AnthonyMouse 21 minutes ago | parent | prev | next [-]

> Note that "attestation through a web of trust" means something like needing an invite from an existing user.

It's probably better to call this something like vouching and leave "attestation" for the contemptible power grab by megacorps, which delenda est. Using the same word for a useful thing and a completely unrelated vile thing only benefits the villain.

ghaff 2 hours ago | parent | prev [-]

Which is, funnily (?) enough, how a lot of IRL organizations used to be. And basically don't be of the wrong ethnicity or religion.

It still happens more informally today, of course, but it used to be a pretty significant (if unspoken) part of how a lot of WASPy organizations operated, to a greater or lesser degree.

Exoristos an hour ago | parent [-]

This was cogent in 1910.

ghaff an hour ago | parent [-]

A lot more recently than that--and even today, though more under the table. A lot of clubs still excluded people on those grounds within the past few decades.

kitsune1 41 minutes ago | parent [-]

[dead]

fidotron 3 hours ago | parent | prev | next [-]

> I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust.

This seems self evident to me too.

It's another factor in why I think the tech community needs to get ahead of governments on the whole "prove your ID on the Internet" thing by having some sort of standard way to do it that doesn't necessarily involve madness in the loop.

baxuz 7 minutes ago | parent | next [-]

EU's ZKP implementation provides complete anonymity and untrackability:

https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...

bluefirebrand 3 hours ago | parent | prev [-]

I'd be interested in working on a problem like that.

I have a strong preference for remaining anonymous, or at least keeping the bar reasonably high for tying my online identity to my personal identity.

I would love to be involved in helping to design a sort of "human verified" badge that doesn't make it possible, or at least not easy, for everyone to find your real identity.

I've been thinking about it a bunch and it seems like a really interesting problem. Difficult though.

I suspect there is too much political and corporate will that wants to force everyone online to use their real identity in the open, though

tracker1 an hour ago | parent | next [-]

I'm not sure it would be too hard technically... basically auth plus a social network: Facebook auth without the rest of Facebook, with attestation added.

I.e.: a site uses this network as its auth provider and gets the user's real name, handle, and network ID, as well as the IDs (only IDs, no extra info) of first- through third-level connections.

The user is incentivized to connect (only) people that they know in person, and this forms a layer of trust. Downstream reports can break a branch or ripple upstream through the network. By connecting an account to another account, you attest that "this is a real person, whom I have met in real life." Using a bot for anything associated with the account is forbidden, with the exception of explicit API access to downstream services defined by those services.
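
Something like the following toy model of the attestation graph, where a confirmed-bad account can take its whole downstream branch with it (all names are hypothetical):

    from dataclasses import dataclass, field

    @dataclass
    class Account:
        account_id: str
        real_name: str
        handle: str
        vouched_for: set[str] = field(default_factory=set)  # accounts this one attested to
        banned: bool = False

    def ban_branch(accounts: dict[str, Account], root_id: str) -> None:
        # Ban an account and everyone downstream of it in the attestation graph.
        stack = [root_id]
        while stack:
            acc = accounts[stack.pop()]
            if acc.banned:
                continue
            acc.banned = True
            stack.extend(acc.vouched_for)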

I think it could work, but you'd have to charge a modest but not overbearing fee to use the auth provider... say $100/site/year for an app to use this for user authentication.

bluefirebrand 36 minutes ago | parent [-]

I don't think the main challenge is building this system, the main challenge is getting enough people using it to make it worthwhile.

Personally I think it should be a government provided service, not something with a sign up fee. There's actually no point at all in building this if people have to pay to use it, because they won't

Morromist 2 hours ago | parent | prev | next [-]

I agree it's a very, very interesting problem. Maybe one of the biggest problems of the coming decade.

I suspect it will be a long process: first there will be governments that force people to use ID, but that will be abused, hacked, and will considerably restrict freedom of speech, so after that phase people will start to create better IDs.

The problem is really pretty simple: You need an authoritative source to say "This person is real", and a way for that source to actually verify you're a person, but that source can be corrupted and hacked. Some people will say "Crypto!" but money != people, so I don't see how that works. Perhaps the creation of some neutral non-government, non-profit entity is the way, but I can see lots of problems there too, and it will probably cost money to verify someone is real. Where does that come from?

Anyway, good luck on your work!

baxuz 6 minutes ago | parent | next [-]

https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...

WillPostForFood 2 hours ago | parent | prev | next [-]

*You need an authoritative source to say "This person is real"*

Does that even accomplish much? It may cut down on mass fake account creation. But, real people can then create authenticated account, and use an LLM to post as an authenticated real person.

Morromist 5 minutes ago | parent | next [-]

Yeah, that's a problem, you're right. There are some ways to mitigate it, but they introduce their own issues. Like, say you give someone only one ID for their lifetime; they start to spam AI crap, you ban their ID. Sounds OK, except who is available to police all 8 billion IDs and determine if they're spamming? Who polices the police? What if these IDs become critical for conducting commerce and banning someone is massively detrimental to their finances? Etc. These problems aren't necessarily unsolvable, but they are super difficult.

Karrot_Kream 2 hours ago | parent | prev | next [-]

If there are only one or a handful of verifiers, then a human can at most go through a few of those credentials before they run out. The risk, of course, is getting someone else's credential, but that isn't as big an issue, especially for smaller online communities.

kingleopold an hour ago | parent [-]

you underestimate the human population in certain countries, literally

Karrot_Kream an hour ago | parent [-]

I just don't see a world where a small community ends up having to deal with a dedicated set of potentially spoofed identities. There are already tools like slow-downs and post limits for new members that can protect against this. HN is the biggest community I'm in by an order of magnitude, and it's the only community I know of that can just use a slow-mode type mechanic to halt this kind of attack.

nemomarx 30 minutes ago | parent [-]

Have you considered sock puppets? It's not out of the question to handle with human mods, but detecting them automatically works pretty badly if someone is supplying credentials to each one, and sometimes it takes months or years to notice that new user Y is banned user X.

bluefirebrand 42 minutes ago | parent | prev [-]

> But, real people can then create authenticated account, and use an LLM to post as an authenticated real person.

They can, but ideally they wouldn't be able to make infinite accounts with that authenticated status. So it would still reduce the number of bot posters on the web

kingleopold an hour ago | parent | prev [-]

it can also be "rented" btw, rented by llms? interesting

Karrot_Kream 2 hours ago | parent | prev | next [-]

Verifiable credentials are all about this. You need some sort of credentialing body that generates the credential for you, but after that you'll just have an opaque identifier. Any caller that wants to verify whether you're human submits the id to a verifier and the verifier says yes or no. You can also do attestations like age, so gate a forum on 16+ or something. You never end up having to actually give away your name or any other details.
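
A toy sketch of that yes/no flow (all names are hypothetical; real verifiable-credential systems replace the database lookup with a cryptographic proof, often a ZKP):

    import secrets

    class CredentialingBody:
        # Toy stand-in for both issuer and verifier.
        def __init__(self):
            self._issued = {}  # opaque id -> claims

        def issue(self, is_human: bool, age: int) -> str:
            opaque_id = secrets.token_hex(16)  # reveals nothing about the person
            self._issued[opaque_id] = {"human": is_human, "age_over_16": age >= 16}
            return opaque_id

        def verify(self, opaque_id: str, claim: str) -> bool:
            # A forum submits the opaque id and gets only yes/no for one claim.
            return self._issued.get(opaque_id, {}).get(claim, False)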

rgblambda 34 minutes ago | parent [-]

What happens when someone agrees to sell or give away their id? The credentialing body could catch the very worst abusers who seem to be signing in to various sites and services multiple times an hour, but would fail to catch anything else.

Karrot_Kream 29 minutes ago | parent [-]

I don't think you'll ever be fully free of spam, so you'll still need to filter bad content. If credentials get sold and used to spam, they'll get banned.

kolmogorov 25 minutes ago | parent | prev [-]

world.org is doing exactly that including the privacy aspect. the iris scan aspect is scary but the alternatives don't seem to solve the problem either.

vlod an hour ago | parent | prev | next [-]

> without either proof of identity or attestation through a web of trust.

Let's put aside the question of whether it will be the end of all privacy as we know it (I'm not sure I personally think it's a good idea), but isn't Sam Altman's World eye-ID thing supposed to do that? (https://world.org)

How does it work (like OpenID)? Do I have an orb on my desk, or some sort of phone app? I still want to use my desktop to log in to HN.

Would it stop this sort of "get a human ID, paste it into .env so agents can use it" thing?

toofy an hour ago | parent [-]

this eye thing will never work. people in general are realizing the last people we should trust with our personal stuff are tech bro billionaires. they’ve broken trust too many times.

even worse many of them are just plain vocal about their disdain for people in general.

at least from what i’m seeing, people are starting to walk away from online at an increasing rate so i definitely don’t see widespread adoption of his creepy eye thing.

cryptoz 19 minutes ago | parent [-]

“If McDonald’s offered three free Big Macs for a DNA sample, there would be lines around the block.” - Bruce

I have no idea about the eye thing taking off. But I think your comment is very HN and a bit out-of-touch with regular people. What "you're seeing" is a bubble and not representative of the general population. The eye thing is a slow frog boil and it will be commonplace before you can blink.

TulliusCicero 2 hours ago | parent | prev | next [-]

> I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust.

I'm happy to verify my identity as an honest-to-god sack of meat if it's done in a privacy-protecting way.

That probably is where things are gonna go, in the long run. Too hard to stop bots otherwise.

jredwards 2 hours ago | parent | next [-]

In order to make this viable, wouldn't you have to verify identity repeatedly? What's to stop me from providing a valid identity and then handing my account over to an agent after I'm verified?

Bjartr an hour ago | parent [-]

That's why a web of trust was suggested. You keep track of who vouched for whom and down-weight those who vouch for users that prove to be bots. In theory, at least. It's certainly more complicated than only that in practice.
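
In toy form, that downweighting might look like this (names and the decay factor are made up):

    def penalize_chain(trust: dict[str, float], voucher_of: dict[str, str],
                       bot_id: str, decay: float = 0.5) -> None:
        # When an account is confirmed as a bot, cut the trust of each
        # account up its voucher chain, with a weaker penalty the
        # further it sits from the bot.
        voucher = voucher_of.get(bot_id)
        penalty = decay
        while voucher is not None:
            trust[voucher] = trust.get(voucher, 1.0) * (1.0 - penalty)
            penalty *= decay
            voucher = voucher_of.get(voucher)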

ssl-3 4 minutes ago | parent [-]

If the web of trust only extends to the people who I actually know to be real, then that works -- but it's a very small web.

And by small, I mean: This whole trusted group could fit into one quiet discord channel. This doesn't seem to be big enough to be useful.

However, if it extends beyond that, then things get dicier: Suppose Bill trusts me, as well as those that I myself trust. Bill does this in order to make his web of trust big enough to be useful.

Now, suppose I start trusting bots -- maybe incidentally, or maybe maliciously. However I do that, this means that Bill now has bots in his web of trust as well.

And remember: The whole premise here is that bots can be indistinguishable from people, so Bill has no idea that this has happened and that I have infected his web with bots.

---

It all seems kind of self-defeating, to me. The web is either too small to be useful, or it includes bots.

janalsncm 2 hours ago | parent | prev | next [-]

I guess it would have to be something like a service which confirms whether a person already has an account on the site but doesn’t have to track which particular account it is.

I’m not sure if that would work for account deletions though.

XorNot 2 hours ago | parent | prev | next [-]

That is effectively impossible, though. There are data centers full of stripped-down phones, so "it's actually a phone" doesn't do it.

Citizen_Lame an hour ago | parent | prev [-]

What's stopping bots from verifying identity? This will not work, especially with frequent data breaches.

20k an hour ago | parent | prev | next [-]

Personally I think we need to start utilising the safety features built into AI to ensure that who we're talking to is a human. We'll have to start only replying to people who talk in NSFW curse words (like cocks), or profess their love of capybaras.

patrickmay 26 minutes ago | parent | next [-]

Who doesn't love capybaras?

sunnybeetroot an hour ago | parent | prev [-]

LLMs can curse without issue

baxuz 8 minutes ago | parent | prev | next [-]

It'll come back again once ZKPs are standardized and baked into devices:

https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...

I personally can't wait for a mechanism to kill 99% of bot traffic.

Galanwe an hour ago | parent | prev | next [-]

I'm not sure proof of identity solves anything. People will still have LLMs post under their verified real identities.

SV_BubbleTime an hour ago | parent [-]

I’m imagining like, a physical place you would go and get your text spoken out of your personal speaker directly into someone else’s microphones.

NoMoreNicksLeft 3 hours ago | parent | prev | next [-]

>I think it's going to effectively kill public chat communities without either proof of identity

How? I have an identity. A state driver's license, birth certificate, social security number. I've even considered getting a federal license before, never bit the bullet. If I wanted to run a bot, what stops me from giving it my identity? How do I prove I'm really me (a "me" exists, that's provable), and not something I'm letting pretend to be me? You can't even demand that I do that, because it's essentially impossible.

Is there even some totalitarian scheme that, if brutal and homicidal enough, could manage to prevent this from happening (even partially)?

I'm limited to a single identity only as a resource constraint. Others more wealthy than I (corporations or ad hoc criminal enterprises) could harvest thousands of real identities and use those, consensually or through identity theft. The only thing slowing it down at the moment is quickly eroding social norms (and, as you point out, maybe they're not slowing it down and it's not even slow at the moment).

tardedmeme 2 hours ago | parent [-]

Digital totalitarianism would prevent it. The moment you were found to be running a bot, your identity would be blacklisted across the entire internet.

bossyTeacher 2 hours ago | parent | next [-]

> The moment someone steals your identity, your identity would be blacklisted across the entire internet.

FTFY.

There isn't a clear solution. And if there is, this ain't it.

NoMoreNicksLeft 38 minutes ago | parent | prev [-]

You claim this, but you've not presented any evidence. Who would be the enforcement agency for that? Where and how would you train them? Can the money be scrounged up to do it properly? As you blacklist people from the internet, you lose their tax revenue (they're locked out of the economy), but you also make it impossible for them to tell people how bad it was, so most of the deterrent effect is gone. Yet the incentives only ever grow, as people surmise that running their own little bot farm is a way to get ahead when hustling. Anyone you do hunt down and disconnect is now highly radicalized and desperate, but you've just turned off the feebs' ability to monitor them and intervene.

China gets away with this shit because they've been conditioning their population for 60 years... everyone's eased into it. Elsewhere, not even slightly so.

ubermonkey 3 hours ago | parent | prev [-]

"I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust."

Those sorts of places were always the only places with reliably good communities.

bigyabai 3 hours ago | parent [-]

To the contrary, platforms like Facebook and X demonstrate that even personal verification won't save you from identity politics.

pjc50 an hour ago | parent | next [-]

People will post appalling racism in newspapers under their own bylines and photos. Identity verification does not moderate.

tardedmeme 2 hours ago | parent | prev [-]

What is identity politics, is that age verification?

JTbane 3 hours ago | parent | prev | next [-]

Reddit is more or less dead to me, as the popular subs are botfests and the niche subs are empty. I'm lucky to get a single reply on gaming subs.

sgarman 39 minutes ago | parent | next [-]

The fact that Reddit enabled hiding your posts is crazy to me. At a time when knowing who's engaging in a community is more important than ever (am I talking to a bot or a troll?), Reddit removes even more options to validate.

rgblambda 26 minutes ago | parent | next [-]

I interpreted that as an attempt to mask the number of bots on the site so as to not scare paying advertisers into thinking their ads won't be seen by real humans.

baxuz 5 minutes ago | parent | prev [-]

I enabled hiding my posts because I kept getting harassed and even doxxed.

tardedmeme 2 hours ago | parent | prev [-]

There's also a third category where the sub looks organic because the moderator deletes and bans anyone who doesn't post exactly what the moderator wants.

jsbisviewtiful 3 hours ago | parent | prev | next [-]

> Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.

Would be super fascinating to watch play out. I grew up before the internet so, historically, I know how to seek out external communities, but by early high school I was deeply entrenched in online life - so I'm very rusty with finding new IRL clubs, cliques, etc. Fortunately my life is full of many friends and I go out frequently, regardless. For those younger people that never had life without the internet, I wish them luck on their search but at the same time I'm very curious to witness their journey.

grey-area 3 hours ago | parent | prev | next [-]

Did you ever introspect about who ruined Reddit?

nickff 3 hours ago | parent | next [-]

It’s a tragedy of the commons: many have done it, but no one user did it.

Jordan-117 2 hours ago | parent | next [-]

I'd argue that Reddit leadership, which insulted, hobbled, and wrote off its mods and power users (destroying projects like /r/BotDefense) while doing little to crack down on the proliferation of bot repost content, had a major role in encouraging this. They might even like it better this way -- lots of extra fake engagement boosting traffic stats without messy human drama, which they can then ironically sell back to AI labs as training data.

bmurphy1976 2 hours ago | parent | next [-]

Let's never forget the summer of 2023, when Reddit forcibly removed mods from many major communities and replaced them with corporate shills. That was a major loss of dedicated people who cared more for their communities than Spez's pocketbook.

alex1138 18 minutes ago | parent [-]

The internet is rather trending in that direction, isn't it? YouTube got rid of downvotes and apparently upload dates, which seems like an easier way to trick people into ads. And Reddit, like you said.

If these platforms had to listen to "their customers" (here comes the inevitable comment about how users aren't customers; yes, I know)? They'd all be fired. They'd have to find new jobs. They all act in incredibly insulting ways, with a too-big-to-fail attitude.

traderj0e an hour ago | parent | prev [-]

It was bogus even before that. I heard complaints at some point that API changes broke bots, which actually sounds good.

an hour ago | parent | prev [-]
[deleted]
raincole 3 hours ago | parent | prev | next [-]

Yeah, if carlgreene specifically stopped doing that Reddit would be saved. They are the one savior.

ishouldstayaway an hour ago | parent [-]

Do you sincerely believe that that's how grey-area's comment was meant to be read?

an hour ago | parent [-]
[deleted]
sieabahlpark an hour ago | parent | prev [-]

[dead]

fpoling 18 minutes ago | parent | prev | next [-]

There was a post today about Google introducing an unbreakable captcha that required an unrooted phone to pass its QR code.

We may end up with things like that…

lbriner 3 hours ago | parent | prev | next [-]

Serious question: If there are so many LLMs on online forums, who is doing it? Is it just 1000s of research students or something more nefarious? Is it AI businesses building up evidence that their output is as highly scored as humans therefore "buy our software"?

thegrim33 3 hours ago | parent | next [-]

We're in the middle of an active cold war where countries are trying to manipulate the citizens of rival countries to destroy their civilization without having to fire a single bullet. Anonymous, over the internet mass manipulation, all for some minimal electricity cost.

thesuitonym 3 hours ago | parent [-]

That's definitely the most insidious use, but I think the larger portion is advertisers and karma farmers (who later sell to advertisers).

dylan604 2 hours ago | parent [-]

https://www.npr.org/2024/09/05/nx-s1-5100829/russia-election...

If Russia is willing to spend cash like that, then of course they're willing to run massive bot farms to pollute any forums they can. I'd be shocked if the US was not doing the same in any way it can. You have to ask why Trump killed Radio Free America as well, when it was clearly not a big expense.

pessimizer 2 hours ago | parent [-]

> Trump killed Radio Free America as well

Not sure how this relates to the subject in a direct way. Radio Free America was an outlet explicitly created and utilized to spread US propaganda, kinda sorta barely disguised as a journalistic enterprise (not really; if you were listening to RFA, you knew what you were listening to). Shutting it down seems to be a counterpoint to all the covert participation of US intelligence on the web, which has done nothing but escalate.

dylan604 40 minutes ago | parent [-]

It was a head-scratching decision that few believe was for the stated reason. Other countries are ramping up their propaganda arms while Trump shut down part of the US's. The stated reason was cost, but that doesn't make a lot of sense in the grand scheme of things. Foil-hat types would easily believe it was the puppet doing the bidding of the one who pulls the strings. RFA has been a thorn in despots' sides for a long time.

afavour an hour ago | parent | prev | next [-]

It's very common for folks to search Reddit to find reviews of products etc. these days. If you can have a bot account post a fake review of how awesome your product is, and have that upvoted, it can pay huge dividends.

simsla 3 hours ago | parent | prev | next [-]

Established accounts are worth money, often for scamming/propaganda.

Not too dissimilar to people bot-leveling in MMOs to sell the accounts.

mrhottakes 3 hours ago | parent | prev | next [-]

People like the above poster who are "just running an experiment" or "trying something for fun" who then wonder why online communities are full of AI now.

Rebelgecko 3 hours ago | parent | prev | next [-]

Lots of marketing. Not even AI business, just regular consumer crap. They realized that blatantly spamming their product looks bad, so they orchestrate multiple accounts to look more organic. And people actually engage with it.

fidotron 3 hours ago | parent | prev | next [-]

HN has historically been gamed for visibility. The stakes for doing this can be quite high if you can pull it off.

KajMagnus 3 hours ago | parent | prev | next [-]

My impression is that they're sometimes unemployed people or students hoping to create a popular open source project, and use it to find a job.

They aren't going to care about any of the advice in the article about not posting slop -- finding a job is (of course?) more important to them.

I can't really say they're doing anything wrong; maybe I would have done the same... It's just that, at large scale, it doesn't work.

pessimizer 3 hours ago | parent | prev [-]

If you farm a fleet of good accounts, you control the discourse. On HN, you could boost whatever you're trying to push, and downvote or flagkill whoever objects.

There are obvious benefits to controlling public discourse, right? Even if it's just to support some project you're working on.

tardedmeme 2 hours ago | parent [-]

There are certain topics that seem to get instantly flag-killed unusually often. IPv6 is one.

traderj0e an hour ago | parent | next [-]

I've seen a lot of ipv6 wars here without flagkilling happening

pessimizer 2 hours ago | parent | prev [-]

I've been more disturbed by comments that were flagkilled just for being wrongthink, not because they were rude or not well argued. I've also seen a lot less of those flagkills over the last 6 months, which makes me feel like there were some fake accounts that got caught and culled.

sillysaurusx 30 minutes ago | parent | prev | next [-]

> I do know for a fact that many "users" here are LLMs.

HN autokills comments it detects as LLM. I think maybe you're not giving HN enough credit. :)

bobomonkey 27 minutes ago | parent | next [-]

It needs help. I often pipe my screed through an LLM and post it. I do request that it use a 10th-grade reading level, and no em dashes.

For giggles, here's how it would look for this comment. Rather meta, but in this case it removed the "It needs help" part, so here we are.

I often run my screed through an LLM before posting. I ask it to keep the writing at about a 10th grade reading level and to avoid em dashes.

sltkr 28 minutes ago | parent | prev | next [-]

The question is how reliable that detection is.

fn-mote 23 minutes ago | parent | prev [-]

I have read enough “you are replying to an LLM” comments that I am pretty sure this is still a hit or miss process.

mmooss 10 minutes ago | parent [-]

Why do you think those comments are accurate? Maybe those comments are by LLMs? If you believe crowd wisdom on its face, you will have big problems with LLMs.

z3t4 3 hours ago | parent | prev | next [-]

There's this old meme where someone asks what will happen when AI bots post helpful, curious and thoughtful messages!? That's mission accomplished :D They can't be better than the average human though, because of the training data, so I don't worry about AI comments getting upvoted by real humans. I am, however, worried about fake upvotes.

altcognito an hour ago | parent | next [-]

> They can't be better then the average human though because of training data

Is this based on the belief that an LLM can only represent an "average" human being?

hleszek 43 minutes ago | parent | prev | next [-]

It is not a meme, it's an xkcd: https://xkcd.com/810/

16 minutes ago | parent | prev [-]
[deleted]
order-matters 4 hours ago | parent | prev | next [-]

Public* online communities are dying. Discord is thriving

2ndorderthought 2 hours ago | parent | next [-]

Discord is terrible. Full of bots, creeps, and AI-slopped to the gills.

Some communities are better than others, but the sheer volume of stinky trash is immense despite Discord's and the poor volunteer moderators' efforts to prevent it. Most mods are neutral on it, too.

There are chat communities that are still somewhat safe with zero user verification. But I will not mention them.

bsder an hour ago | parent | prev | next [-]

I really don't understand the folks fleeing to Discord. A mailing list does 99% of the same thing for most of these communities.

Sure, if you want to chat while gaming, that's the whole point of Discord. Good luck to them.

But, for everything else, Discord is such a horrible misfit that I don't understand why it's the default.

zahlman an hour ago | parent [-]

> I don't understand why it's the default.

Because it equally well supports real-time communication.

And it looks shiny.

And some people use it to e.g. watch a video together, or other social purposes.

GaryBluto 2 hours ago | parent | prev | next [-]

If all you value is sub-IRC level irreverent discussion, maybe.

echelon 3 hours ago | parent | prev [-]

This. Everything important has moved to Discord. Which is sad because of how undiscoverable and unsearchable it is.

Keyframe 3 hours ago | parent | next [-]

I'm more sad about how the UI of it all is just clunky. Even though it resembles ye olde IRC clients like mIRC, it's nowhere near as readable, for some reason.

cloverich 3 hours ago | parent | prev | next [-]

are those attributes now assets?

pjc50 an hour ago | parent | next [-]

Pretty much. It's the survivability onion. You can't be destroyed if you can't be discovered.

bluefirebrand 3 hours ago | parent | prev [-]

Sort of, except if no one can ever discover a community it is always dying by default

Personally I'd love to find a decent online community these days, my social circle has shrunk considerably, but idk. It seems difficult to start fresh with new people nowadays

order-matters 3 hours ago | parent [-]

we were made to socialize in person. you can mimic it online and nourish existing connections over it but nothing helps build friendship more than being in the same place at the same time a few different times and talking to each other

ceejayoz 3 hours ago | parent | prev [-]

This shit will come to Discord too.

order-matters 3 hours ago | parent | next [-]

on the public servers yeah. but the ones im in with real people who know each other will be fine.

I think the problem is not keeping agents out of private, real-people spaces, but how people who don't have any pre-existing or 'real world' connections to these communities can prove they are a real person over the internet alone and get an invite.

On a related note, I think this is going to be the biggest challenge for most folks when it comes to resisting using government ID online: it will be the apple offered in exchange for an easy way to prove you're not a bot to normal circles.

thesuitonym 3 hours ago | parent | prev [-]

It's already there.

zahlman an hour ago | parent | prev | next [-]

> As I went through the posts it wrote I realized that as a reader I would have NO idea that these were just written by a computer.

I don't suppose you could show some examples? How convincing is the state of the art now?

ge96 an hour ago | parent | prev | next [-]

It doesn't help that there's that feature that hides a user's posts and comments.

traderj0e an hour ago | parent | prev | next [-]

It's easy to botspam Reddit because even the real users always acted like bots. The big subreddits were the worst, but contrary to how the users keep saying "it's good if you find the right subs," no it's not. Wrote that place off like 10 years ago.

mmooss 14 minutes ago | parent | prev | next [-]

> I do know for a fact that many "users" here are LLMs

What factual basis do you have for that?

TacticalCoder 2 hours ago | parent | prev | next [-]

> Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.

You can have both IRL and online-free-of-bots. I already wrote about it, but one of the very best forums I'm a member of, where real people are posting, requires you to be vetted in, web-of-trust style (but IRL). It's a forum about cars from one fancy brand, and you can only ever join by having a member who's already in (I think it may be two, I don't remember) confirm that he saw you driving a car of that brand. It's not 100% foolproof (someone could rent the car for two hours and show up at a cars & coffee, or take a friend's car, etc.) but this place really feels like a forum of yore.

And people do eventually travel, so it's bound to happen that an owner will go to another country, meet someone there, vet him in, etc.

Now, sure, it may not be the "1 million users acquired in three days thanks to my vibe-coded app" scenario but that is the point.

You can imagine other domains where IRL communities have local groups, but where forums bring together different IRL communities all interested in the same hobby/topic/domain. And when people travel and meet, the set of vetted members grows and connects.

Oh and on the forums a lot of the posts are pictures, where "Julian xxx" met "Black yyy Cyril" and you see both cars (and from more than two people): suddenly it becomes much harder to fake a persona... You now need to fake both Julian xxx and Black yyy Cyril and fake the pics. And explain why your car has never been posted by any carspotter on autogespot etc.

You can imagine the same for, say, model trains: "Met Jean at the zzz meetup, where he brought his wonderful 4-8-8-4 'big boy' locomotive, I confirm he's into the hobby, vet him in".

Naysayers and depressive people are going to say it cannot work, but I'm literally on one such forum and it just works.

P.S.: If I'm not mistaken, in the past in some nobility circles you had to be vetted by up to sixteen (!) other people from the nobility who would confirm they knew you, your parents, etc. before you'd even meet the king/emperor/monarch, to make sure that someone from far away couldn't come to, say, Versailles or Schönbrunn pretending to be a baroness or count or whatever. Quite the extensive check if you ask me.

10xDev 4 hours ago | parent | prev | next [-]

Unless their account is <1 year old, I wouldn't assume they are a bot.

transcriptase 3 hours ago | parent | next [-]

Reddit astroturfing firms and bot farms learned to buy/use “seasoned” accounts over a decade ago. I’d venture there have been countless bots just sitting in a holding pattern, harmlessly building up reputation and a human-like history of posts across different subs etc., just to eventually be either activated or sold to someone else to “burn”.

teraflop an hour ago | parent | next [-]

It used to be super common that when you spotted a bot post and clicked through to the user's history, you'd see very average, human-looking activity from years ago, followed by a long gap of inactivity, and then a flurry of obvious bot comments.

It's very obvious that these accounts were abandoned and then either bought from their original owners, or more likely bought from someone who compromised them, because of their history and karma.

And I would bet money that Reddit is well aware of this phenomenon, because not long after it became so common as to be impossible to ignore, they papered over it by allowing users to hide their history from public view. (AFAIK subreddit moderators can still see it, but typical users now have much less ability to see whether they're interacting with actual humans.)

transcriptase an hour ago | parent | next [-]

That and locking down the API meant no more sites offering readily available visualizations of this type of thing

ishouldstayaway an hour ago | parent | prev [-]

> allowing users to hide their history from public view

Yeah it's become my default assumption that any user who does this is either a bot or a bad-faith troll.

arjie 3 hours ago | parent | prev | next [-]

I recently spotted one unmistakable example of this[0]. It’s been a trick for many years now that duplicating a human post and its comments is a good way to appear human, but this was quite the example.

0: https://wiki.roshangeorge.dev/w/Blog/2026-01-06/Is_The_Inter...

pessimizer an hour ago | parent [-]

> duplicating a human post and its comments is a good way to appear human

Also just repeating something from the linked article, but often with different wording and in a tone that makes it seem like it was something that the article missed.

10xDev 3 hours ago | parent | prev [-]

So what is the comment frequency of these bots? There must be some signal in the activity even if the comments themselves pass the Turing test.

transcriptase 3 hours ago | parent | next [-]

Even if there was, I doubt Reddit cares enough to go after them when it’s boosting their valuation

Rebelgecko 3 hours ago | parent | prev | next [-]

If you find one account, you can find a few dozen spam accounts by building a graph of which posts they reply to.
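
A crude sketch of that graph idea: count how often pairs of accounts reply to the same posts, and treat unusually tight pairs as candidate members of the same ring (thresholds left out):

    from collections import defaultdict
    from itertools import combinations

    def co_reply_counts(replies: list[tuple[str, str]]) -> dict[tuple[str, str], int]:
        # replies: (account, post_id) pairs. Count how often two accounts
        # show up under the same posts; unusually high counts suggest a ring.
        by_post = defaultdict(set)
        for account, post in replies:
            by_post[post].add(account)
        pair_counts = defaultdict(int)
        for accounts in by_post.values():
            for a, b in combinations(sorted(accounts), 2):
                pair_counts[(a, b)] += 1
        return dict(pair_counts)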

dns_snek 2 hours ago | parent [-]

Most of them have private profiles these days

jayd16 3 hours ago | parent | prev [-]

Does it matter? With enough of them you can just have them upvote each other.

embedding-shape 4 hours ago | parent | prev [-]

It's so easy to purchase online accounts nowadays that neither karma nor the age of the account means anything anymore.

paganel 3 hours ago | parent | prev | next [-]

Reddit was already on its way out well before this LLM craze; hopefully the recent tech-related changes will only accelerate that process.

onlytue 3 hours ago | parent | prev | next [-]

I find it amusing that this is the top comment. Reddit is so awful you finally wrote it off, but not before you used it to try to “karma farm and do some covert advertising”. It’s on-brand for HN's hypocritical bullshit. But since we are slamming Reddit anyway, without realizing how fucked HN is by the same petard, have an upboat, fellow traveler.

ishouldstayaway an hour ago | parent [-]

> since we are slamming on Reddit anyways without realizing how fucked HN is by the same petard

Same as it ever was.

alex1138 an hour ago | parent | prev | next [-]

It might come down to shareholder/IPO stuff, but you can tell Reddit doesn't actually care to put in the effort to crack down on bots (however you'd do that), because they already don't give communities proper moderation tools or third-party tools, and the site does censor.

Whatever allegiances Steve Huffman, or people like him, has (to people or to ideas), it's not enough. It's a site seemingly killed by greed.

(Yes, I know moderating this stuff at scale is hard)

- A human. Beep boop.

Keyframe 3 hours ago | parent | prev | next [-]

How do we know now that this comment wasn't written by LLM?

carlgreene 3 hours ago | parent [-]

You don't and that's the problem :)

voicedYoda 4 hours ago | parent | prev | next [-]

I feel you. Especially in the larger subreddits. I participate in, and mod, a few small ones, and the community there is pretty strong; folks shut down AI slop pretty quickly.

I'm not saying being a mod means it's bulletproof, but I do notice smaller communities tend to self-police better and know what's real.

That said, your experiment scares me as well.

carlgreene 3 hours ago | parent [-]

I will say that I believe you probably have absolutely no idea, because it's not "slop". It looks like every other Reddit comment you see.

My experiment was focused on niche subreddits as well due to the nature of the product I was trying to market.

tayo42 2 hours ago | parent | prev | next [-]

Do you have an example of comments people engaged with?

culebron21 2 hours ago | parent | prev | next [-]

I wonder how much of the discussion of the results of agentic coding is just LLM slop.

echelon 4 hours ago | parent | prev | next [-]

> where I had an agent karma

Was this a browser using agent? What did you use?

carlgreene 3 hours ago | parent [-]

It used the browser agent to grab user cookies after signing in, then made API calls iirc.

Using just a browser is way too token-intensive and slow. It would look for 401 errors, then run the browser automation to log in with the credentials and grab the token.
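
The general shape of that retry-on-401 pattern, independent of any particular site (the re-login step is just a placeholder):

    import requests

    def reauth_via_browser() -> str:
        # Placeholder for the step described above: drive a real browser
        # through login and pull a fresh token out of the session cookies.
        raise NotImplementedError

    def api_get(url: str, token: str) -> requests.Response:
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        if resp.status_code == 401:  # token expired or invalid
            token = reauth_via_browser()
            resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        return resp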

echelon 3 hours ago | parent [-]

I'm surprised these platforms don't have advanced heuristics to detect API calls and inauthentic traffic.

Did you clone the Reddit API from browser traffic and then turn it into a 100% API driven thing?

I'd imagine they'd be sniffing browser user agents, plugins, cookies, etc. to fingerprint, using JavaScript scroll position, browsing rate and patterns, and so on.

Maybe their protections just aren't that sophisticated.

tardedmeme 2 hours ago | parent [-]

Reddit is known to fingerprint TLS and quickly shadowban accounts that don't have the fingerprints of browsers.

echelon 2 hours ago | parent [-]

TLS fingerprinting and Cloudflare are easy to bypass. There are lots of libraries that do so.

The application-layer stuff is harder. Each application can develop its own heuristics, and that's difficult to automate in a cross-cutting fashion.

Forgeties79 4 hours ago | parent | prev | next [-]

> I am not quite there with Hacker News but I do know for a fact that many here are LLMs.

Please don’t do this here.

jayd16 3 hours ago | parent | next [-]

Don't do it anywhere. He's a jerk for doing it on reddit.

slaw 3 hours ago | parent [-]

Reddit is the sewer of the internet. Good place for LLMs.

rexpop 2 hours ago | parent [-]

People live in and depend on that waterway. Just because it's beneath your standards doesn't mean it isn't vital.

You're giving "let them eat cake" energy.

fl4regun an hour ago | parent | next [-]

I can assure you nobody in the world "lives in" or "depends on" Reddit to live.

slaw 2 hours ago | parent | prev [-]

You shouldn't live in a sewer.

skupig 4 hours ago | parent | prev | next [-]

People are definitely trying to make HN bots because I have seen several get flagged. No idea to what end though.

mghackerlady 3 hours ago | parent | next [-]

the suits or suit-minded people have realised that HN is good for advertising to the kind of demographic that'll give them free labour and is easily swayed by whatever the latest trend is

fullshark an hour ago | parent | prev | next [-]

Why would reddit bots exist? (In)organic advertising, same concept here.

tardedmeme 2 hours ago | parent | prev | next [-]

The ones you see flagged are the very obvious bots. What about the more sophisticated ones? How do I know skupig isn't a bot?

krapp 4 hours ago | parent | prev | next [-]

Possibly to test reactions to a bot they plan to build a startup around.

I've seen some claim they do it to avoid stylometry or being fingerprinted, or because of social anxiety problems.

Some people just have a compulsive need to optimize everything, and HN's guidelines and tone policing are more easily followed by a bot than a human.

isityettime 3 hours ago | parent [-]

> HN's guidelines and tone policing are more easily followed by a bot than a human.

HN's guidelines aren't that strict and the mod hammer is a plushie. It's not difficult to get by here. It's also kind of useful for critical reflection/self-regulation to hear the occasional "you came in too hot" or "don't be boring" from a moderator.

Seems better to me to just try to be sort of reasonable and let the mods nudge you if they need to and let your comments be downvoted from time to time. What is the goal of these people, to never experience correction in their lives? To never write an unpopular comment?

ceejayoz 3 hours ago | parent | next [-]

> What is the goal of these people, to never experience correction in their lives?

Look at all the people who complain about cancel culture. There's a huge swath of people who don't ever want to hear "that was mean/bad/shitty".

krapp 3 hours ago | parent | prev [-]

>What is the goal of these people, to never experience correction in their lives? To never write an unpopular comment?

Yes?

isityettime 38 minutes ago | parent [-]

That seems like a really extreme goal to me. I should hope there's a better way to address the anxiety or whatever it is that's motivating it.

Forgeties79 2 hours ago | parent | prev [-]

I didn’t say that people weren’t doing it. I was asking this person not to do that here since it sort of sounds like they have plans to

WolfeReader 4 hours ago | parent | prev | next [-]

He's stating a fact. Turn on showdead in your options and scroll to the bottom of the comments on any popular story. There are so many agentic users here.

layer8 2 hours ago | parent | next [-]

Having to turn on showdead (which I have turned on by default) demonstrates that it’s not much of a problem in practice.

Freedom2 4 hours ago | parent | prev [-]

I generally disagree, because the level of discourse here has always been very high, curious and intellectual.

carlgreene 3 hours ago | parent | next [-]

It has, and the well prompted agents still give that. It's very weird.

Forgeties79 an hour ago | parent [-]

I just don’t even understand the appeal of having a bot interact on forums for you unless you’re astroturfing for your company or personal brand or whatever

hackable_sand 3 hours ago | parent | prev [-]

Maybe 1:100 comments match any one of those attributes.

Most comments are just grammatically "correct". Not a high bar.

3 hours ago | parent | prev | next [-]
[deleted]
chumblywumbly 3 hours ago | parent | prev [-]

This site is CLEARLY astroturfed to hell and back and infested with bots. Any attempted discussion of this fact gets killed REALLY fast.

This part of the guidelines is a 15-year-out-of-date bad joke:

> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

"We'll look at the data". Sure buddy. You'll do what you always do, which is apply to banhammer to anyone that's not following your talking points, and tone police the actual humans.

Enjoy "conversing curiously" with bots while the mods tone-police non-bots out of existence.

paganel 3 hours ago | parent [-]

For what it's worth, the admins here have let the tone of conversation slip a little when it comes to AI, as in there are many people who now openly mock (and worse) the AI zealots, and no admin comes in and "saves" the metaphorical day anymore. In the not-so-distant past that kind of behaviour was almost instantly reprimanded, kindergarten-style.

cactusplant7374 4 hours ago | parent | prev | next [-]

Unless you've discovered the secret sauce, LLM comments are very obvious. Even Altman revealed that they focused on coding at the expense of writing.

kube-system 3 hours ago | parent | next [-]

With the current batch of SOTA models, it is not hard to prompt a model to pass the sniff test on social media forums. If you don't believe me, try it.

All you really need to do is give it some guidelines of a style to follow and styles to avoid. There's also a bunch of skills people have already written to accomplish this.

dgellow 4 hours ago | parent | prev | next [-]

The obvious ones are the ones you notice

cactusplant7374 3 hours ago | parent [-]

LLMs are not good at writing. If they were, we would have entire libraries of new, amazing literature.

Tanoc 3 hours ago | parent | next [-]

Exactly, they aren't good at creating new material. But many discussions in comment sections are simply regurgitations of existing material, which they are good at rearranging. Genuinely novel discussions in places like this are actually very rare, as many comment sections are simply people who already know informing those who don't. I'm doing that right now, funnily enough.

romanhn 3 hours ago | parent | prev | next [-]

Neither are most humans

mrhottakes 3 hours ago | parent [-]

Agreed, some humans are good writers, and no LLMs are good writers.

dwringer 3 hours ago | parent | prev [-]

This is rather moving the goalposts from "plausibly human comment" to "meaningful literature", I think

cactusplant7374 3 hours ago | parent [-]

No. I'm drawing it out to its logical conclusion.

mr_toad an hour ago | parent [-]

It’s poor logic, a non sequitur. An absurd reduction. By your argument anyone who hasn’t written a great literary work is a poor writer, and would be bad at writing online comments.

LLMs aren’t lacking in the sort of writing skills that make for superficially good content. They know grammar, they know rhetoric, and they know their audience. You can’t tell them from a human on their writing skills. Where they tend to fall down is their logic and reasoning skills, and unfortunately it seems you can’t use that to distinguish them from the average online opinionator either.

cactusplant7374 an hour ago | parent [-]

No, that is a mischaracterization of what I wrote. They are great writers if you enjoy formulaic writing.

4 hours ago | parent | prev | next [-]
[deleted]
3 hours ago | parent | prev | next [-]
[deleted]
carlgreene 3 hours ago | parent | prev | next [-]

I have worked with LLMs for a couple of years at a very non-technical level, and it was not that difficult to give them proper prompting and reference material.

You are reading LLM content just about everywhere and have no idea. Obviously there are easy-to-spot things, but the stuff you don't spot is the stuff you don't spot.

crooked-v 3 hours ago | parent | prev | next [-]

[flagged]

potsandpans 4 hours ago | parent | prev [-]

People who like to fancy themselves good LLM-content detectors just end up accusing everything they don't like of being LLM content.

The only thing worse than a slop comment is the people who bitch about it incessantly. I'm convinced it's become a new expression of a mental illness.

bee_rider 3 hours ago | parent | next [-]

The main thing I suspect of being LLM written is the sort of LinkedIn style: very short sentences, overly focused on sort of… making an impact on the user. But that’s also how a certain type of bad human writer writes. So in the end, I’m not sure I know if anything in particular was written by an LLM.

I guess… “that’s not just an AI red flag, it’s generally shit prose” would be how ChatGPT would describe most things nowadays.

transcriptase 3 hours ago | parent [-]

It’s the distilled mediocrity of the statements. Never venturing beyond a 10% margin of what you would get if you sampled the opinions of 1,000 people who underwent jury selection by West Coast liberals.

cactusplant7374 3 hours ago | parent | prev [-]

A mere opinion is not mental illness.

potsandpans 3 hours ago | parent [-]

I wasn't suggesting you have a mental illness for having an opinion.

Rather, I was commenting that just as bad as generated content, if not worse, is every thread where the top comment is an accusation and an ensuing witch hunt.

So, no, having an opinion is not a mental illness. Feeling compelled to call it out and discuss it on everything one reads may just be.

fwip 3 hours ago | parent [-]

The threads that have the top comment saying "this is AI slop" are nearly always about an article that is obvious AI slop.

Threads that aren't - like this one - don't.

potsandpans 3 hours ago | parent [-]

If you need to tell yourself that in order to cope that's fine with me.

layer8 2 hours ago | parent [-]

I’m thinking that I may actually prefer undetectable AI slop to human comments like that. I do agree with your upthread comments.

roysting 3 hours ago | parent | prev | next [-]

On the other hand, I’ve been accused of being an AI/bot, and if I say things the mod doesn’t like and it's not their favorite thing to hear, I’m “flamebaiting” or engaging in personal attacks when pointing out specific things.

Frankly, online communities have been dying for many years now, ever since the censorship-happy, anti-free-speech, tone-policing mods and mobs started dominating online and America really did not have the self-respect or confidence anymore to enforce the Constitution online.

jrflowers an hour ago | parent [-]

> America really did not have the self-respect or confidence anymore to enforce the Constitution online.

“Mods are Unconstitutional” lmao

bossyTeacher 2 hours ago | parent | prev | next [-]

> I do know for a fact that many "users" here are LLMs.

Name and shame.

Gigachad 39 minutes ago | parent [-]

If you look at the bottom of most threads here you’ll see a bunch of green username dead LLM comments. Those are just the obvious ones though.

napierzaza 3 hours ago | parent | prev | next [-]

[dead]

imadierich an hour ago | parent | prev | next [-]

[dead]

jmyeet 4 hours ago | parent | prev | next [-]

I've been on the internet for decades at this point, and one thing I've noticed is that communities that, for example, ban political topics actually treat "positions I don't like" as "political". This is somewhat related to the Overton window, but really a bunch of (mostly conservative) ideas get normalized, so they aren't deemed "political".

I see the same thing with "AI Slop". Yes, there is AI Slop but (IME) it's pretty easy to spot. But what's more annoying is how often people are willing to throw that accusation whenever someone takes a position they don't like, much like the "political" label. It's lazy and honestly just as bad as the slop itself because it unintentionally launders the slop in a "boy who cried wolf" kind of way.

I also have a theory that some AI slop isn't inherently successful. It's just heavily botted by people who are interested in promoting certain positions. I bet you could make a pro-administration LLM bot and another one promoting a communist revolution, and no amount of model tuning would make the second as popular as the first, because the first would benefit from third-party botting as well as platform content biases (e.g. Twitter).

I've personally been accused of being a bot. This is particularly true in recent times, as I've tried to share facts and fact-based analysis of, say, what's going on with crude oil markets, the military operation in the Gulf, and the politics and economics around it. I even saw one hilarious comment saying (paraphrased) "the bots are getting clever and posting about unrelated topics". This was funny because it never occurred to this person that no, it was just a real person posting something they disagreed with.

ryandrake 2 hours ago | parent [-]

> I've been on the Internet for decades at this point and one thing I've noticed is that communities that, for example, ban political topics actually mean "positions I don't like" as "political".

This happens on HN all the time. For a lot of downvoters and flaggers, there are two kinds of opinions: "Things I agree with" and "Too political for HN."

antisthenes 2 hours ago | parent | prev | next [-]

> I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.

This just makes me wonder...so what?

Some of the oldest posters here with the most karma continue to post absolute garbage takes on topics ranging from US healthcare to the history of the USSR, takes that are trivially disproven by learning the very basics from a Wiki article (i.e., not a high bar).

To be fair, this opinion slop is also present among new users and LLM bots, but is one kind really worse than the other, if both of them contribute to killing the community?

We already know what kills communities: the eternal Septembers. Infighting within leadership doesn't help either, but time and time again it's the influx of too many new users that causes a nosedive and drowns out quality contributions.

ninkendo a few seconds ago | parent | next [-]

[delayed]

krapp an hour ago | parent | prev [-]

An irascible human being with "wrong" opinions is still better than a polite and factually correct bot because there's no fucking point in having a conversation with a bot. We're here to have conversations with people, not to prove fact beyond a reasonable doubt.

Do you really not care one way or the other? Would you really rather just be talking to LLMs here? Or would you just script yourself as well and call it a day? Then what?

rgovostes 3 hours ago | parent | prev | next [-]

So you ran an "experiment" where you deliberately made someone else's community worse to see what would happen? Cool project.

sovenyr 3 hours ago | parent | prev [-]

Dead Internet theory?