threethirtytwo 18 hours ago

[flagged]

magnio 18 hours ago | parent | next [-]

Pity that HN's ability to detect sarcasm is as robust as that of a sentiment analysis model using keyword-matching.
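(For readers unfamiliar with the jab: a keyword-matching sentiment classifier scores text by counting positive and negative words, so sarcasm sails right past it. A minimal illustrative sketch, with made-up word lists:)

```python
import re

# Hypothetical lexicons for illustration only.
POSITIVE = {"great", "love", "relief", "honestly"}
NEGATIVE = {"slop", "pity", "worse"}

def naive_sentiment(text: str) -> str:
    """Score text by counting lexicon hits; no notion of tone or context."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A sarcastic compliment reads as sincere praise:
print(naive_sentiment("Oh great, another love letter to LLMs"))  # → positive
```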

furyofantares 17 hours ago | parent | next [-]

The problem is more that it's an LLM-generated comment that's about 20x as long as it needed to be to get the point across.

cubefox 16 hours ago | parent | next [-]

It's obviously not LLM-generated.

kleene_op 16 hours ago | parent [-]

Phew. This is a relief, honestly!

threethirtytwo 17 hours ago | parent | prev [-]

It's not.

Evidence shows otherwise: Despite the "20x" length, many people actually missed the point.

furyofantares 4 hours ago | parent | next [-]

Oh yeah, there is also a problem with people not noticing they're reading LLM output, AND with people missing sarcasm on here. Actually, I'm OK with people missing sarcasm on here - I have plenty of places to go for sarcasm and wit and it's actually kind of nice to have a place where most posts are sincere, even if that sets people up to miss it when posts are sarcastic.

Which is also what makes it problematic that you're lying about your LLM use. I would honestly love to know your prompt and how you iterated on the post, how much you put into it and how much you edited or iterated. Although pretending there was no LLM involved at all is rather disappointing.

Unfortunately I think you might feel backed into a corner now that you've insisted otherwise but it's a genuinely interesting thing here that I wish you'd elaborate on.

eru 17 hours ago | parent | prev | next [-]

Despite or because?

_diyar 17 hours ago | parent | prev [-]

I definitely missed the point because of the length, and only realized after I read replies to your comment.

threethirtytwo 16 hours ago | parent | next [-]

Next time I'll write something shorter, or if you don't believe I wrote it... then I'll tell the AI to write something shorter.

quinnjh 16 hours ago | parent | prev [-]

It's not just verbose—it's almost a novel. Parent either cooked and capped, or has managed to perfectly emulate the patterns this parrot is stochastically known best for. I liked the pro-human vibe if anything.

catlifeonmars 17 hours ago | parent | prev [-]

That’s just the internet. Detecting sarcasm requires a lot of context external to the content of any text. In person some of that is mitigated by intonation, facial expressions, etc. Typically it also requires that the reader is a native speaker of the language, or at least extremely proficient.

16 hours ago | parent | prev | next [-]
[deleted]
catoc 16 hours ago | parent | prev | next [-]

I firmly believe @threethirtytwo’s reply was not produced by an LLM

mkarliner 16 hours ago | parent [-]

Regardless of whether this text was written by an LLM or a human, it is still slop, with a human behind it just trying to wind people up. If there is a valid point to be made, it should be made, briefly.

catoc 15 hours ago | parent [-]

If the point was triggering a reply, the length and sarcasm certainly worked.

I agree brevity is always preferred. Making a good point while keeping it brief is much harder than rambling on.

But length is just a measure, quality determines if I keep reading. If a comment is too long, I won’t finish reading it. If I kept reading, it wasn’t too long.

rixed 16 hours ago | parent | prev | next [-]

Are you expecting people who can't detect self-delusions to be able to detect sarcasm, or are you just being cruel?

eru 17 hours ago | parent | prev | next [-]

> This is a relief, honestly. A prior solution exists now, which means the model didn’t solve anything at all. It just regurgitated it from the internet, which we can retroactively assume contained the solution in spirit, if not in any searchable or known form. Mystery resolved.

Vs

> Interesting that in Terence Tao's words: "though the new proof is still rather different from the literature proof)"

johnfn 18 hours ago | parent | prev | next [-]

I suspect this is AI generated, but it’s quite high quality, and doesn’t have any of the telltale signs that most AI generated content does. How did you generate this? It’s great.

AstroBen 17 hours ago | parent | next [-]

Their comments are full of "it's not x, it's y" over and over. Short pithy sentences. I'm quite confident it's AI-written, maybe with a more detailed prompt than average.

I guess this is the end of the human internet

prussia 16 hours ago | parent | next [-]

To give them the benefit of the doubt, people who talk to AI too much probably start mimicking its style.

4k93n2 17 hours ago | parent | prev [-]

yea, i was suspicious by the second paragraph but was sure once i got to "that’s not engineering, it’s cosplay"

AstroBen 17 hours ago | parent | next [-]

It's also the wording. The weird phrases

"Glorified Google search with worse footnotes" what on earth does that mean?

AI has a distinct feel to it

lxgr 16 hours ago | parent | next [-]

And with enough motivated reasoning, you can find AI vibes in almost every comment you don’t agree with.

For better or worse, I think we might have to settle on “human-written until proven otherwise”, if we don’t want to throw “assume positive intent” out the window entirely on this site.

testdelacc1 16 hours ago | parent | prev [-]

Dude is swearing up and down that they came up with the text on their own. I agree with you though, it reeks of LLMs. The only alternative explanation is that they use LLMs so much that they’ve copied the writing style.

plaguuuuuu 17 hours ago | parent | prev [-]

I've had that exact phrase pop up from an LLM when I asked it for a more negative code review

threethirtytwo 18 hours ago | parent | prev | next [-]

Your intuition on AI is out of date by about 6 months. Those telltale signs no longer exist.

It wasn't AI generated. But if it was, there is currently no way for anyone to tell the difference.

catlifeonmars 16 hours ago | parent | next [-]

I’m confused by this. I still see this kind of phrasing in LLM generated content, even as recent as last week (using Gemini, if that matters). Are you saying that LLMs do not generate text like this, or that it’s now possible to get text that doesn’t contain the telltale “its not X, it’s Y”?

comp_throw7 17 hours ago | parent | prev | next [-]

> But if it was, there is currently no way for anyone to tell the difference.

This is false. There are many human-legible signs, and there do exist fairly reliable AI detection services (like Pangram).

threethirtytwo 17 hours ago | parent | next [-]

I've tested some of those services and they weren't very reliable.

CamperBob2 9 hours ago | parent | prev [-]

If such a thing did exist, it would exist only until people started training models to hide from it.

Negative feedback is the original "all you need."

velox_neb 17 hours ago | parent | prev [-]

> It wasn't AI generated.

You're lying: https://www.pangram.com/history/94678f26-4898-496f-9559-8c4c...

Not that I needed pangram to tell me that, it's obvious slop.

threethirtytwo 17 hours ago | parent | next [-]

I wouldn't know how to prove otherwise other than to tell you that I have seen these tools show incorrect results for both AI-generated text and human-written text.

lxgr 16 hours ago | parent | prev | next [-]

Good thing you had a stochastic model backing up (with “low confidence”, no less) your vague intuition of a comment you didn’t like being AI-written.

XenophileJKO 17 hours ago | parent | prev [-]

I must be a bot because I love existential dread, that's a great phrase. I feel like they trigger a lot on literate prose.

lxgr 16 hours ago | parent [-]

Sad times when the only remaining way to convince LLM luddites of somebody’s humanity is bad writing.

CamperBob2 18 hours ago | parent | prev | next [-]

(edit: removed duplicate comment from above, not sure how that happened)

undeveloper 18 hours ago | parent | next [-]

the poster is in fact being very sarcastic. arguing in favor of emergent reasoning does in fact make sense

threethirtytwo 18 hours ago | parent | prev [-]

It's a formal sarcasm piece.

CamperBob2 18 hours ago | parent | prev [-]

It's bizarre. The same account was previously arguing in favor of emergent reasoning abilities in another thread ( https://news.ycombinator.com/item?id=46453084 ) -- I voted it up, in fact! Turing test failed, I guess.

(edit: fixed link)

threethirtytwo 18 hours ago | parent | next [-]

I thought the mockery and sarcasm in my piece was rather obvious.

CamperBob2 18 hours ago | parent [-]

Poe's Law is the real Bitter Lesson.

habinero 18 hours ago | parent | prev [-]

We need a name for the much more trivial version of the Turing test that replaces "human" with "weird dude with rambling ideas he clearly thinks are very deep"

I'm pretty sure it's like "can it run DOOM" and someone could make an LLM that passes this that runs on a pregnancy test

nurettin 18 hours ago | parent | prev [-]

Why not plan for a future where a lot of non-trivial tasks are automated instead of living on the edge with all this anxiety?

threethirtytwo 18 hours ago | parent [-]

[flagged]

16 hours ago | parent | next [-]
[deleted]
undeveloper 18 hours ago | parent | prev | next [-]

come out of the irony layer for a second -- what do you believe about LLMs?

jorvi 16 hours ago | parent | prev | next [-]

I mean.. LLMs hit a pretty hard wall a while ago, with the only solution being throwing monstrous compute at eking out the remaining few percent improvement (real world, not benchmarks). That's not to mention hallucinations / false paths being a foundational problem.

LLMs will continue to get slightly better in the next few years, but mainly a lot more efficient. Which will also mean better and better local models. And grounding might get better, but that just means fewer wrong answers, not better right answers.

So no need for doomerism. The people saying LLMs are a few years away from eating the world are either in on the con or unaware.

7777332215 18 hours ago | parent | prev | next [-]

If all of it is going away and you should deny reality, what does everything else you wrote even mean?

habinero 18 hours ago | parent | prev [-]

Yes, it is simply impossible that anyone could look at things, do their own evaluations, and come to a different, much more skeptical conclusion.

The only possible explanation is people say things they don't believe out of FUD. Literally the only one.