pyrale 7 days ago

On one hand, I agree with you that there is some fun in experimenting with silly stuff. On the other hand...

> Claude was trying to promote the startup on Hackernews without my sign off. [...] Then I posted its stuff to Hacker News and Reddit.

...I have the feeling that this kind of fun experiment is just setting up an automated firehose of shit to spray the places where fellow humans congregate. And I have the feeling that it stopped being fun a while ago for the fellow humans being sprayed.

the__alchemist 7 days ago | parent | next [-]

This is an excellent point that will immediately go off-topic for this thread. We are, I believe, committed to a mire of computer-generated content enveloping the internet. I believe we will go through a period where internet communications (like HN, Reddit, and pages indexed by search engines) are unviable. Life will go on; we will just be offline more. Then the defense systems will be up to snuff, and we will find a stable balance.

mettamage 7 days ago | parent | next [-]

I hope you're right. I don't think you will be; AI will be too good at impersonating humans.

lukan 7 days ago | parent [-]

"we will just be offline more"

I think it will be quite some time before AI can impersonate humans in real life. Neither the hardware nor the software is there; maybe something could fool humans at first glance, but nothing that would be convincing in a real interaction.

theshrike79 7 days ago | parent | prev | next [-]

My theory (and hope) is the rise of a web of trust system.

Implemented so that if a person in your web vouches for a specific URL (“this is made by a human”), you can see it in your browser.

Analemma_ 7 days ago | parent | next [-]

If your solution to this problem is the web of trust then, to be blunt, you don't have a solution. I am a techie whose social circle is mostly other techies, and I know precisely zero people who have ever used PGP keys or any other WoT-based system, despite 30 years of evangelism. It's just not a thing anybody wants.

theshrike79 6 days ago | parent [-]

It's 99.99% a UI issue.

If Google hadn't let perfect be the enemy of good and had added PGP support to Gmail early on (even just the shittiest signatures, automatically applied and verified), the world would be a completely different place. Scams just wouldn't exist at this scale if signing mails with a known key were the standard.

The tech is there, now we have Matrix and XMPP and PubSub and god knows how many protocols to share keys. Even Keybase.io still kind of exists.

What is lacking is a browser ecosystem for people to use their known identities to vouch for a specific url (with smart hashing so that changing the content would invalidate the trust).

We have the technology. Someone(tm) "just" needs to build it :)
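
The vouching scheme sketched above could look roughly like this in stdlib Python. This is a minimal sketch, not any existing system: the function names are hypothetical, and an HMAC stands in for a real public-key signature (a web of trust would use something like Ed25519 so anyone can verify with the voucher's public key). The key idea is binding the vouch to a hash of the exact content, so changing the page invalidates the trust:

```python
import hashlib
import hmac
import json

def make_vouch(url: str, content: bytes, signing_key: bytes) -> dict:
    """Vouch for the exact bytes currently served at `url`."""
    content_hash = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"url": url, "sha256": content_hash}, sort_keys=True)
    # HMAC stands in for a real public-key signature here.
    sig = hmac.new(signing_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"url": url, "sha256": content_hash, "sig": sig}

def verify_vouch(vouch: dict, content: bytes, signing_key: bytes) -> bool:
    """Check the signature, and that the content hasn't changed since the vouch."""
    if hashlib.sha256(content).hexdigest() != vouch["sha256"]:
        return False  # content edited after vouching: trust is invalidated
    payload = json.dumps({"url": vouch["url"], "sha256": vouch["sha256"]}, sort_keys=True)
    expected = hmac.new(signing_key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, vouch["sig"])
```

A browser extension could then surface "vouched by someone in your web" next to links whose current content still matches a signed vouch.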

gowld 7 days ago | parent | prev [-]

"Web of Trust" has been the proposed answer for, what, 30 years now? But everyone is too lazy to implement and abide by it.

vineyardmike 7 days ago | parent [-]

Don’t worry, it’s coming for real this time. The governments have been proposing a requirement that web companies connect accounts to government IDs.

If that isn’t exciting enough, Sam Altman (yea the one who popularized this LLM slop) will gladly sell you his WorldCoin to store your biometric data on the blockchain!

johnecheck 7 days ago | parent | prev [-]

Indeed. I worry though. We need those defense systems ASAP. The misinformation and garbage engulfing the internet does real damage. We can't just tune it out and wait for it to get better.

epiccoleman 7 days ago | parent | prev | next [-]

I definitely understand the concern - I don't think I'd have hung out on HN for so long if LLM generated postings were common. I definitely recognize this is something you don't want to see happening at scale.

But I still can't help but grin at the thought that the bot knows that the thing to do when you've got a startup is to go put it on HN. It's almost... cute? If you give AI a VPS, of course it will eventually want to post its work on HN.

It's like when you catch your kid listening to Pink Floyd or something, and you have that little moment of triumph - "yes, he's learned something from me!"

7 days ago | parent [-]
[deleted]
sixhobbits 7 days ago | parent | prev | next [-]

(Author here.) I did feel kinda bad about it, as I've always been a 'good' HNer until that point, but honestly it didn't feel that spammy to me compared to some of the human-generated slop I see posted here. And, as expected, it wasn't high quality enough to get any attention, so 99% of people would never have seen it.

I think the processes etc. that HN has in place to deal with human-generated slop are more than adequate to deal with an influx of AI-generated slop, and if something gets through then maybe it means it was good enough and it doesn't matter?

felixgallo 7 days ago | parent | next [-]

That kind of attitude is exactly why we're all about to get overwhelmed by the worst slop any of us could ever have imagined.

The bar is not 'oh well, it's not as bad as some, and I think maybe it's fine.'

taude 7 days ago | parent [-]

Well, he was arguing that it's not worse than 99% of the human slop that gets posted, so where do you draw the line?

* Well crafted, human only?
* Well crafted, whether human or AI?
* Poorly crafted, human?
* Well crafted, AI only?
* Poorly crafted, AI only?
* Just junk?

etc.

I think people will intuitively get a feel for when content is purely AI-generated. If people spend time writing a prompt so the result isn't so wordy, has some personality, and reads OK, then fine.

Also, there's going to be a big opportunity out there for AI-content detection, whether in forums, in mail inboxes, on your corp file share, etc...

AtlasBarfed 7 days ago | parent | prev [-]

Did you?

Spoiler: no he didn't.

But the article is interesting...

It really highlights to me the pickle we are in with AI: because we are already at a historical maximum of "worse is better" with JavaScript, and the last two decades have put out a LOT of JavaScript, AI will work best with....

JavaScript.

Now MAYBE better AI models will be able to translate JavaScript into equivalent "better" languages, and MAYBE AI coding will migrate "good" libraries from obscure languages to other "better" languages...

But I don't think so. It's going to be soooo much JavaScript slop for the next ten years.

I HOPE that large language models, being language models, will figure out language translation/equivalency and enable porting and movement of good concepts between programming models... but that is clearly not what is being invested in.

What's being invested in is slop generation, because the prototype sells the product.

DrSiemer 7 days ago | parent | prev | next [-]

I'm not a fan of this option, but it seems to me the only way forward for online interaction is very strong identification anywhere you can post anything.

postexitus 7 days ago | parent | next [-]

Back in FidoNet days, some BBSs required identification papers to register and only allowed real names to be used. Though not known for their level-headed discussions, it definitely added a certain level of care to online interactions. I remember the shock of seeing the anonymity the Internet provided later, both positive and negative. I wouldn't be surprised if we revert to some central authentication mechanism that has some basic level of checks combined with some anonymity guarantees. For example, a government-owned ID service which creates a new user ID per website, so the website doesn't know you, but once they blacklist that one-off ID, you cannot get a new one.
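
One way such per-website IDs could work, sketched very loosely (this is a toy illustration, not how any real eID scheme is built; real systems use proper credential cryptography, and the name `site_pseudonym` is made up): derive each site's ID from a user-held master secret, so the same user always gets the same pseudonym on a given site (making it blacklistable), while pseudonyms from different sites can't be linked without the secret.

```python
import hashlib
import hmac

def site_pseudonym(master_secret: bytes, domain: str) -> str:
    """Derive a stable, per-site pseudonym from a user-held secret.

    The site can ban this pseudonym, but two sites comparing notes
    can't link their pseudonyms back to the same person.
    """
    return hmac.new(master_secret, domain.encode(), hashlib.sha256).hexdigest()[:16]
```

In a real deployment the ID service, not the user, would hold or attest the secret, so a banned user couldn't simply pick a new one.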

sleepybrett 6 days ago | parent | next [-]

Smaller communities too.

I grew up in... slightly rural America in the '80s and '90s. We had probably a couple of dozen local BBSes, and the community was small enough that after a bit I just knew who everyone was, or could find out very easily.

When the internet came along in the early '90s and I started MUDding and hanging out in newsgroups, I liked them small, where I could get to know most of the userbase, or at least most of the posting userbase. Then mega 'somewhat-anonymous' communities (i.e. posts tied to a username, not 4chan-style madness) like Slashdot and huge forums started popping up, and now we have even bigger mega-communities like Twitter and Reddit. We lost something: you can now throw bombs without consequence.

I now spend most of my online time in a custom-built forum with ~200 people that we started in an invite-only way. It's 'internally public' information who invited whom. It's much easier to have a civil conversation there, though we still get the occasional flame-out. Having a stable identity, even if it's not tied to a government name, is valuable for a thriving and healthy community.

DrSiemer 6 days ago | parent [-]

Sounds good!

A German forum I'm on allows members limited invites based on participation. The catch is, you are responsible for the people you invite. If they get in trouble, you will share a part of the punishment.

benterix 7 days ago | parent | prev | next [-]

Honestly, having seen how it can be used against you, retroactively, I would never ever engage in a discussion under my real name.

(The fact that someone could correlate posts[0] based on writing style, as previously demonstrated on HN and used to doxx some people, makes things even more convoluted - you should think twice what you write and where.)

[0] https://news.ycombinator.com/item?id=33755016

postexitus 6 days ago | parent [-]

This is a subset of the "I don't have anything to hide" argument: if we use our real names, I think we'll take more responsibility for what we say. Of course, that assumes our seemingly democratic governments don't turn authoritarian all of a sudden; as a Turkish citizen, I know that's not a given.

andoando 7 days ago | parent | prev [-]

id.me?

Not government owned, but even irs.gov uses it

xnorswap 7 days ago | parent | prev [-]

That can be automated away too.

People will be more than willing to say, "Claude, impersonate me and act on my behalf".

withinboredom 6 days ago | parent | next [-]

I do this every time I find myself typing something I could get written up over or even fired for.

1. I'm usually too emotional to write out why I feel that way, instead of just saying what I feel.

2. I really don't like the person (or their idea) but I don't want to get fired over it.

Claude is really great at this: "Other person said X, I think it is stupid and they're a moron for suggesting this. Explain to them why this is a terrible idea or tell me I'm being an idiot."

Sometimes it tells me I'm being an idiot; sometimes it gives me nearly copy-paste-ready text that I can use and agree with.

pyrale 7 days ago | parent | prev | next [-]

> People will be more than willing to say, "Claude, impersonate me and act on my behalf".

I'm now imagining a future where actual people's identities are blacklisted just like some IP addresses are dead to email, and a market develops for people to sell their identity to spammers.

simonw 7 days ago | parent [-]

That's always been the biggest flaw in the Worldcoin idea in my opinion: if you have a billion+ humans get their eyeball scanned in exchange for some kind of cryptographic identity, you can guarantee that a VERY sizable portion of those billion people will happily sell that cryptographic identity (which they don't understand the value of) to anyone who offers them some money.

As far as I can tell the owner of the original iris can later invalidate an ID that they've sold, but if you buy an ID from someone who isn't strongly technically literate you can probably extract a bunch of value from it anyway.

zoeysmithe 7 days ago | parent | prev | next [-]

I mean, that's fine I guess, as long as it's respectful and respects the forum.

"Claude, write a summary of the Word doc I wrote about X and post it as a reply comment" is fine. I don't see why it wouldn't be. It's a good-faith effort to post.

"Claude, post to Reddit every 10 seconds to spam people into believing my politics are correct" isn't fine, but that's a different case. It's not a good-faith effort.

The moderation rules for 'human slop' will apply to AI too. Try spamming a well-moderated subreddit and see how far you get, human or AI.

antonvs 6 days ago | parent [-]

The problem is speed and quantity. Humans weren't able to fight off the original email spam; it took automated systems. Forums will have to institute much stronger rate limiting and other such measures.
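
As a rough illustration of the kind of rate limiting forums might lean on, here is a classic token-bucket sketch in stdlib Python (the class and its parameters are illustrative, not any forum's actual implementation): each account accrues posting "tokens" over time up to a burst cap, and a post is rejected when the bucket is empty.

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, refilling at `rate` tokens/second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; return whether the action is allowed."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A human posting a few comments an hour never notices it; a bot firing every few seconds drains the bucket immediately.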

gowld 7 days ago | parent | prev [-]

That's fine, because once someone is banned, the impersonations are also banned.

bookofjoe 7 days ago | parent | prev | next [-]

See also: https://news.ycombinator.com/item?id=44860174 (posted 12 hours ago)

zoeysmithe 7 days ago | parent | prev | next [-]

I mean, I can spam HN right now with a script.

Forums like HN, Reddit, etc. will need to do a better job of detecting this stuff, moderator staffing will need to be upped, AI-resistant captchas will need to be developed, etc.

Spam will always be here in some form, and it's always an arms race. That doesn't really change anything. It's always been this way.

kbar13 7 days ago | parent | prev | next [-]

it's annoying but it'll be corrected by proper moderation on these forums

as an aside i've made it clear that just posting AI-written emoji slop PR review descriptions and letting claude code directly commit without self reviewing is unacceptable at work

bongodongobob 7 days ago | parent | prev [-]

The Internet is already 99% shit and always has been. This doesn't change anything.

zanellato19 7 days ago | parent [-]

It's gotten much worse. Before, it was shit from people; now it's corporate shit. Corporate shit is so much worse.