xeeeeeeeeeeenu 17 hours ago

> no prior solutions found.

This is no longer true: a prior solution has just been found [1], so the LLM proof has been moved to Section 2 of Terence Tao's wiki [2].

[1] - https://www.erdosproblems.com/forum/thread/281#post-3325

[2] - https://github.com/teorth/erdosproblems/wiki/AI-contribution...

nl 17 hours ago | parent | next [-]

Interesting that in Terence Tao's words: "though the new proof is still rather different from the literature proof)"

And even odder that the proof was by Erdős himself and yet he listed it as an open problem!

TZubiri 17 hours ago | parent [-]

Maybe it was in the training set.

magneticnorth 16 hours ago | parent [-]

I think that was Tao's point, that the new proof was not just read out of the training set.

cma 8 hours ago | parent | next [-]

I don't think it is dispositive, just that it likely didn't copy the proof we know was in the training set.

A) It is still possible a proof from someone else with a similar method was in the training set.

B) Something similar to Erdős's proof was in the training set for a different problem, and that problem also had an alternate solution similar to ChatGPT's in the training set, which would be more impressive than A)

CamperBob2 7 hours ago | parent | next [-]

> It is still possible a proof from someone else with a similar method was in the training set.

A proof that Terence Tao and his colleagues have never heard of? If he says the LLM solved the problem with a novel approach, different from what the existing literature describes, I'm certainly not able to argue with him.

mmooss 6 hours ago | parent [-]

> A proof that Terence Tao and his colleagues have never heard of?

Tao et al. didn't know of the literature proof that started this subthread.

CamperBob2 6 hours ago | parent [-]

Right, but someone else did ("colleagues.")

habinero 4 hours ago | parent [-]

No, they searched for it. There's a lot of math literature out there; not even an expert is going to know all of it.

CamperBob2 4 hours ago | parent [-]

Point being, it's not the same proof.

mmooss 2 hours ago | parent [-]

Your point seemed to be that if Tao et al. haven't heard of it, then it must not exist. The now-known literature proof contradicts that claim.

heliumtera 8 hours ago | parent | prev [-]

Does it matter if it copied or not? How the hell would one even define whether it is a copy or original at this point?

At this point the only conclusion is: the original proof was in the training set, and the author and Terence did not care enough to find the publication by Erdős himself.

rzmmm 15 hours ago | parent | prev [-]

The model has multiple layers of mechanisms to prevent carbon copy output of the training data.

glemion43 13 hours ago | parent | next [-]

Do you have a source for this?

A carbon copy would mean overfitting.

fweimer 7 hours ago | parent | next [-]

I saw weird results with Gemini 2.5 Pro when I asked it to provide concrete source code examples matching certain criteria, and to quote the source code it found verbatim. Its response said the sources were quoted verbatim, but that wasn't true at all: they had been rewritten, still in the style of the project it was quoting from, but otherwise quite different, and without a match in the Git history.

It looked a bit like someone at Google subscribed to a legal theory under which you can avoid copyright infringement if you take a derivative work and apply a mechanical obfuscation to it.

Workaccount2 4 hours ago | parent [-]

LLM's are not archives of information.

People seem to have this belief, or perhaps just general intuition, that LLMs are a Google search over a training set with a fancy language engine on the front end. That's not what they are. The models (almost) avoid copyright on their own, because they never copy anything in the first place, which is why the model is a dense web of weight connections rather than an orderly bookshelf of copied training data.

Picture yourself contorting your hands under a spotlight to generate a shadow in the shape of a bird. The bird is not in your fingers, despite the shadow of the bird, and the shadow of your hand, looking very similar. Furthermore, your hand-shadow has no idea what a bird is.

fweimer 3 hours ago | parent [-]

For a task like this, I expect the tool to use web searches and sift through the results, similar to what a human would do. Based on progress indicators shown during the process, this is what happens. It's not an offline synthesis purely from training data, something you would get from running a model locally. (At least if we can believe the progress indicators, but who knows.)

NewsaHackO 3 hours ago | parent | prev | next [-]

It is the classic "He made it up"

Der_Einzige 6 hours ago | parent | prev [-]

The source is just the definition of what "temperature" is.

But honestly source = "a knuckle sandwich" would be appropriate here.
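For the record, a minimal sketch of the mechanism being pointed at (toy logits and token names of my own invention, not anyone's production stack): temperature rescales the logits before sampling, so anything above zero keeps the output stochastic, which is exactly why a byte-for-byte carbon copy of a long training passage is unlikely.

```python
import math, random

def sample(logits: dict[str, float], temperature: float) -> str:
    """Temperature-scaled softmax sampling over next-token logits."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(l - m) for tok, l in scaled.items()}
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok

# Toy next-token logits: "memorized" stands for the verbatim continuation.
logits = {"memorized": 5.0, "paraphrase": 3.0, "novel": 2.0}

for t in (0.2, 1.0, 2.0):
    picks = [sample(logits, t) for _ in range(1000)]
    # The share of verbatim picks drops as temperature rises.
    print(t, picks.count("memorized") / 1000)
```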

TZubiri 15 hours ago | parent | prev | next [-]

Forgive the skepticism, but this translates directly to "we asked the model pretty please not to do it in the system prompt".

ffsm8 14 hours ago | parent | next [-]

It's mind-boggling if you think about the fact that they're essentially "just" statistical models.

It really contextualizes the old wisdom of Pythagoras that everything can be represented as numbers / math is the ultimate truth

glemion43 13 hours ago | parent | next [-]

They are not just statistical models

They create concepts in latent space, which is basically compression, and the compression forces this.

jrmg 7 hours ago | parent | next [-]

You’re describing a complex statistical model.

glemion43 3 hours ago | parent [-]

Debatable, I would argue. It's definitely not 'just a statistical model', and I would argue that the compression into this space handles potential issues differently than plain statistics.

But I'm not a mathematics expert; if that's the real official definition, I'm fine with it. But are you, though?

mmooss 6 hours ago | parent | prev [-]

What is "latent space"? I'm wary of metamagical descriptions of technology that's in a hype cycle.

DoctorOetker 5 hours ago | parent | next [-]

It's a statistical term: a latent variable is one that is either known to exist, or believed to exist, and then estimated.

Consider estimating the position of an object from noisy readings. One presumes that position exists in some sense, and then one can estimate it by combining multiple measurements, increasing positioning resolution.

It's any variable that is postulated or known to exist, and for which you run some fitting procedure.
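A minimal sketch of that example (all numbers made up): the true position is the latent variable, the readings are what you observe, and averaging is the simplest fitting procedure.

```python
import random

random.seed(0)
true_position = 3.7  # latent: never observed directly
# Each reading is the latent value plus Gaussian noise.
readings = [true_position + random.gauss(0, 0.5) for _ in range(100)]

# The sample mean is the simplest fitting procedure: combining many noisy
# measurements shrinks the error roughly with the square root of their count.
estimate = sum(readings) / len(readings)
print(f"estimate: {estimate:.3f} (true value: {true_position})")
```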

AIorNot 5 hours ago | parent | prev | next [-]

See this video

https://youtu.be/D8GOeCFFby4?si=AtqH6cmkOLvqKdr0

glemion43 3 hours ago | parent | prev [-]

I'm disappointed that you had to add 'metamagical' to your question, tbh.

It doesn't matter if AI is in a hype cycle or not; it doesn't change how a technology works.

Check out the YouTube videos from 3Blue1Brown; he explains LLMs quite well. Your first step is the word embedding: this vector space represents the relationships between words. Father and grandfather: the vector which makes a father a grandfather is the same vector as mother to grandmother.

You then use these word vectors in the attention layers to create an n-dimensional space, aka latent space, which basically reflects a 'world' the LLM walks through. This makes the 'magic' of LLMs.

Basically a form of compression, with the higher dimensions reflecting a kind of meaning.
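As a toy illustration of that vector arithmetic (hand-made 4-dimensional vectors, not weights from any real model; real embeddings are learned and have hundreds of dimensions):

```python
import numpy as np

# Made-up embeddings chosen so the family analogy holds exactly.
emb = {
    "father":      np.array([0.9, 0.1, 0.5, 0.0]),
    "grandfather": np.array([0.9, 0.1, 0.9, 0.0]),
    "mother":      np.array([0.1, 0.9, 0.5, 0.0]),
    "grandmother": np.array([0.1, 0.9, 0.9, 0.0]),
}

# The "one generation up" direction, read off the father pair...
shift = emb["grandfather"] - emb["father"]

# ...lands on "grandmother" when applied to "mother".
candidate = emb["mother"] + shift

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(candidate, emb["grandmother"]))  # 1.0 in this toy setup
```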

Your brain does the same thing. It can't store pixels, so when you go back to some childhood environment like your old room, you remember it in some efficient (brain-efficient) way. Like the 'feeling' of it.

That's also the reason why an LLM is not just some statistical parrot.

mmooss 2 hours ago | parent [-]

> It doesn't matter if ai is in a hype cycle or not it doesn't change how a technology works.

It does change what people say about it. Our words are not reality itself; the map is not the territory.

Are you saying people should take everything said about LLMs at face value?

glemion43 23 minutes ago | parent [-]

Being dismissive of technical terms on HN because something seems to be hyped is really weird.

It's the reason I'm here: because we discuss technology more technically.

GrowingSideways 13 hours ago | parent | prev [-]

How so? Truth is naturally an a priori concept; you don't need a chatbot to reach this conclusion.

ComplexSystems 6 hours ago | parent | prev | next [-]

The model doesn't know what its training data is, nor does it know what sequences of tokens appeared verbatim in there, so this kind of thing doesn't work.

mikaraento 14 hours ago | parent | prev | next [-]

That might be somewhat ungenerous unless you have more detail to provide.

I know that at least some LLM products explicitly check output for similarity to training data to prevent direct reproduction.

TZubiri an hour ago | parent | next [-]

So it would be able to reproduce the training data, but with sufficient changes or added magic dust to claim it as one's own.

Legally I think it works, but evidence in a court works differently than in science. It's the same word, but don't let that confuse you, and don't mix the two.

guenthert 6 hours ago | parent | prev [-]

Should they though? If the answer to a question^Wprompt happens to be in the training set, wouldn't it be disingenuous to not provide that?

ttctciyf 5 hours ago | parent [-]

Maybe it's intended to avoid legal liability resulting from reproducing copyright material not licensed for training?

TZubiri an hour ago | parent [-]

Ding!

It's great business to minimally modify valuable stuff and then take credit for it. As was explained to me by bar-certified counsel "if you take a recipe and add, remove or change just one thing, it's now your recipe"

The new trend in this is asking Claude Code to create software of some type, like a browser or a DICOM viewer, and then publishing that it managed to do this very expensive thing (but if you check the source code, which is never published, it probably imports a lot of open source dependencies that actually do the thing).

Now this is especially useful in business, but it seems that some people are repurposing it for proving math theorems. The Terence Tao effort, which later checks for prior material, is great! But the fact that Section 2 (for such cases) is filled to the brim, while Section 1 is mostly documented failed attempts (except for one proof; congratulations to the authors), mostly confirms my hypothesis. Claiming that the model has guards that prevent this is a deus ex machina cope against the evidence.

efskap 14 hours ago | parent | prev [-]

Would it really be infeasible to take a sample and do a search over an indexed training set? Maybe a Bloom filter could be adapted.

hexaga 13 hours ago | parent [-]

It's not the searching that's infeasible. Efficient algorithms for massive scale full text search are available.

The infeasibility is searching for the (unknown) set of translations that the LLM would put that data through. Even if you posit only basic symbolic LUT mappings in the weights (it's not), there's no good way to enumerate them anyway. The model might as well be a learned hash function that maintains semantic identity while utterly eradicating literal symbolic equivalence.
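For what it's worth, the naive version of that scheme is easy to sketch (toy corpus, parameters, and strings, all hypothetical): hash every word 8-gram of the training text into a Bloom filter, then check model output against it. It catches verbatim overlap and, as described above, is blind to even a mild paraphrase.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: set membership with false positives, no storage of items."""
    def __init__(self, size_bits=1 << 20, hashes=4):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def ngrams(text: str, n: int = 8):
    words = text.lower().split()
    return (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))

# Stand-in for the indexed training set.
corpus = "the proof proceeds by choosing a prime p dividing the order of the group"
bf = BloomFilter()
for gram in ngrams(corpus):
    bf.add(gram)

verbatim = "by choosing a prime p dividing the order of the group"
paraphrase = "by picking a prime p that divides the group order"
print(any(g in bf for g in ngrams(verbatim)))    # True: exact overlap is caught
print(any(g in bf for g in ngrams(paraphrase)))  # False: a paraphrase slips past
```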

Den_VR 14 hours ago | parent | prev | next [-]

Unfortunately.

GeoAtreides 8 hours ago | parent | prev [-]

does it?

this is a verbatim quote from Gemini 3 Pro, from a chat a couple of days ago:

"Because I have done this exact project on a hot water tank, I can tell you exactly [...]"

I somehow doubt an LLM did that exact project, what with it not having any ability to do plumbing in real life...

retsibsi 7 hours ago | parent [-]

Isn't that easily explicable as hallucination, rather than regurgitation?

ttctciyf 5 hours ago | parent [-]

Those are not mutually exclusive in this instance, it seems.

davidhs 8 hours ago | parent | prev | next [-]

It looks like these models work pretty well as natural language search engines, and at connecting dots between disparate things that humans haven't connected.

pfdietz 7 hours ago | parent | next [-]

Mathematicians are finding them very effective at literature search, and at autoformalization of human-written proofs.

Pretty soon, this is going to mean the entire historical math literature will be formalized (or, in some cases, found to be in error). Consider the implications of that for training theorem provers.
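To make "formalized" concrete, here is a toy Lean 4 sketch (my illustration, not anything from the Erdős corpus; assumes a recent toolchain where the omega tactic is built in). The informal claim "the sum of two even numbers is even" becomes a statement the compiler checks end to end:

```lean
-- Informal: the sum of two even numbers is even.
-- Formal: if this compiles, the proof is machine-checked.
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ => ⟨a + b, by omega⟩  -- omega closes 2*a + 2*b = 2*(a+b)
```

Autoformalization at scale means doing this translation, statement and proof alike, for arguments vastly harder than this one.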

mlpoknbji 6 hours ago | parent [-]

I think "pretty soon" is a serious overstatement. This does not take into account the difficulty of formalizing definitions and theorem statements. This cannot be done autonomously (or, it can, but there will be serious errors), since there is no way to formalize the "text to Lean" process.

What's more, there's almost surely going to turn out to be a large amount of human generated mathematics that's "basically" correct, in the sense that there exists a formal proof that morally fits the arc of the human proof, but there's informal/vague reasoning used (e.g. diagram arguments, etc) that are hard to really formalize, but an expert can use consistently without making a mistake. This will take a long time to formalize, and I expect will require a large amount of human and AI effort.

pfdietz an hour ago | parent [-]

It's all up for debate, but personally I feel you're being too pessimistic there. The advances are coming faster than I had expected. This is an area where success builds upon and accelerates success, so I expect the rate of advance to keep increasing.

This particular field seems ideal for AI, since verification enables identification of failure at all levels. If the definitions are wrong the theorems won't work and applications elsewhere won't work.

p-e-w 7 hours ago | parent | prev [-]

Every time this topic comes up people compare the LLM to a search engine of some kind.

But as far as we know, the proof it wrote is original. Tao himself noted that it’s very different from the other proof (which was only found now).

That’s so far removed from a “search engine” that the term is essentially nonsense in this context.

theptip 6 hours ago | parent [-]

Hassabis put forth a nice taxonomy of innovation: interpolation, extrapolation, and paradigm shifts.

AI is currently great at interpolation, and in some fields (like biology) there seems to be low-hanging fruit for this kind of connect-the-dots exercise. A human would still be considered smart for connecting these dots IMO.

AI clearly struggles with extrapolation, at least if the new datum is fully outside the training set.

And we will have AGI (if not ASI) if/when AI systems can reliably form new paradigms. It’s a high bar.

cubefox 15 hours ago | parent | prev | next [-]

This illustrates how unimportant this problem is. A prior solution did exist, but apparently nobody knew because people didn't really care about it. If progress can be had by simply searching for old solutions in the literature, then that's good evidence the supposed progress is imaginary. And this is not the first time this has happened with an Erdős problem.

A lot of pure mathematics seems to consist in solving neat logic puzzles without any intrinsic importance. Recreational puzzles for very intelligent people. Or LLMs.

antonvs 4 hours ago | parent | next [-]

> "intrinsic importance"

"Intrinsic" in contexts like this is a word for people who are projecting what they consider important onto the world. You can't define it in any meaningful way that's not entirely subjective.

jojobas 10 hours ago | parent | prev | next [-]

It's hard to predict which math result from 100 years ago surfaces in, say, quantum mechanics or cryptography.

glemion43 13 hours ago | parent | prev | next [-]

It shows that an 'LLM' can now work on issues like this today, and tomorrow it can do even more.

Don't be so ignorant. A few years ago NO ONE could have come up with something as generic as an LLM, which helps you solve these kinds of problems and also creates text adventures and Java code.

danielbln 12 hours ago | parent | next [-]

The goal posts are strapped to skateboards these days, and the WD40 is applied to the wheels generously.

sampullman 10 hours ago | parent | next [-]

Regular WD40 should not be used as bearing lubricant!

danielbln 10 hours ago | parent [-]

Exactly!

glemion43 2 hours ago | parent | prev [-]

I don't get your pessimism...

None of it was even imaginable, and yes, the progress is crazy fast.

How can you be so dismissive?

danielbln 2 hours ago | parent [-]

You misread my comment.

glemion43 2 hours ago | parent [-]

You mean like a small rocket build? Okay :)

BoredPositron 11 hours ago | parent | prev [-]

You can just wait and verify instead of the publish-then-retract cycles of the last year. It's embarrassing.

MattGaiser 14 hours ago | parent | prev [-]

There is still enormous value in cleaning up the long tail of somewhat important stuff. One of the great benefits of Claude Code to me is that smaller issues no longer rot in backlogs, but can be at least attempted immediately.

cubefox 14 hours ago | parent [-]

The difference is that Claude Code actually solves practical problems, but pure (as opposed to applied) mathematics doesn't. Moreover, a lot of pure mathematics seems to be not just useless, but also without intrinsic epistemic value, unlike science. See https://news.ycombinator.com/item?id=46510353

drob518 7 hours ago | parent | next [-]

I’m an engineer, not a mathematician, so I definitely appreciate applied math more than I do abstract math. That said, that’s my personal preference and one of the reasons that I became an engineer and not a mathematician. Working on nothing but theory would bore me to tears. But I appreciate that other people really love that and can approach pure math and see the beauty. And thank God that those people exist because they sometimes find amazing things that we engineers can use during the next turn of the technological crank. Instead of seeing pure math as useless, perhaps shift to seeing it as something wonderful for which we have not YET found a practical use.

Ar-Curunir 2 hours ago | parent [-]

Even if pure math is useless, that’s still okay. We do plenty of things that are useless. Not everything has to have a use.

jstanley 14 hours ago | parent | prev | next [-]

Applications for pure mathematics can't necessarily be known until the underlying mathematics is solved.

Just because we can't imagine applications today doesn't mean there won't be applications in the future which depend on discoveries that are made today.

cubefox 9 hours ago | parent [-]

Well, read the linked comment. The possible future applications of useless science can't be known either. I still argue that it has intrinsic value apart from that, unlike pure mathematics.

Thorrez 8 hours ago | parent [-]

There are many cases where pure mathematics became useful later.

https://www.reddit.com/r/math/comments/dfw3by/is_there_any_e...

cubefox 8 hours ago | parent [-]

So what? There are probably also many cases where seemingly useless science became useful later.

glenstein 7 hours ago | parent [-]

Exactly, you're almost getting it. Hence the value of "pure" research in both science and math.

cubefox 6 hours ago | parent [-]

You are not yet getting it I'm afraid. The point of the linked post was that, even assuming an equal degree of expected uselessness, scientific explanations have intrinsic epistemic value, while proving pure math theorems hasn't.

glenstein 6 hours ago | parent [-]

I think you lost track of what I was replying to. Thorrez noted that "There are many cases where pure mathematics became useful later." You replied by saying "So what? There are probably also many cases where seemingly useless science became useful later." You seemed to be treating the latter as if it negated the former which doesn't follow. The utility of pure math research isn't negated by noting there's also value in pure science research, any more than "hot dogs are tasty" is negated by replying "so what? hamburgers are also tasty". That's the point you made, and that's what I was responding to, and I'm not confused on this point despite your insistence to the contrary.

Instead of addressing any of that you're insisting I'm misunderstanding and pointing me back to a linked comment of yours drawing a distinction between epistemic value of science research vs math research. Epistemic value counts for many things, but one thing it can't do is negate the significance of pure math turning into applied research on account of pure science doing the same.

cubefox 5 hours ago | parent [-]

"You replied by saying "So what? There are probably also many cases where seemingly useless science became useful later." You seemed to be treating the latter as if it negated the former"

No, "so what" doesn't indicate disagreement, just that something isn't relevant.

Anyway, assume hot dogs taste not good at all, except in rare circumstances. It would then be wrong to say "hot dogs taste good", but it would be right to say "hot dogs don't taste good". Now substitute pure math for hot dogs: pure math can be generally useless even if it isn't always useless, just as "men are taller than women" is true on average despite exceptions. That's the difference between applied and pure math. The difference between math and science is something else: even useless science has value, while most useless math (which is mostly pure math) doesn't. (I would say the axiomatization of new theories, like probability theory, can also have inherent value, independent of any usefulness, insofar as it is conceptual progress, but that's different from proving pure math conjectures.)

cwnyth 4 hours ago | parent [-]

It really speaks to the weakness of your original claim that you're applying this level of sophistry to your backpedaling.

cubefox 2 hours ago | parent [-]

There are 1135 Erdős problems. For how many of them do you expect the solution to be practically useless? 99%? More? 100%? Calling something useful merely because it might be in rare exceptions is the real sophistry.

teiferer 13 hours ago | parent | prev | next [-]

It's hard to know beforehand. Like with most foundational research.

My favorite example is number theory. Before cryptography came along, it was pure math, an esoteric branch just for number nerds. Turns out, super applicable later on.

baq 13 hours ago | parent | prev | next [-]

You’re confusing immediately useful with eventually useful. Pure maths has found very practical applications over the millennia - unless you don’t consider it pure anymore, at which point you’re just moving goalposts.

cubefox 13 hours ago | parent [-]

No, I'm not confusing that. Read the linked comment if you're interested.

TheOtherHobbes 12 hours ago | parent [-]

You are confusing that. The biggest advancements in science are the result of applying leading-edge pure math concepts to physical problems. Newtonian physics, relativistic physics, quantum field theory, Boolean computing, Turing's notion of devices for computability, elliptic-curve cryptography, and electromagnetic theory all derived from the practical application of what was originally abstract math play.

Among others.

Of course you never know which math concept will turn out to be physically useful, but clearly enough do that it's worth buying conceptual lottery tickets with the rest.

glenstein 6 hours ago | parent | next [-]

Just to throw in another one, string theory was practically nothing but a basic research/pure research program unearthing new mathematical objects which drove physics research and vice versa. And unfortunately for the haters, string theory has borne real fruit with holography, producing tools for important predictions in plasma physics and black hole physics among other things. I feel like culture hasn't caught up to the fact that holography is now the gold rush frontier that has everyone excited that it might be our next big conceptual revolution in physics.

cubefox 7 hours ago | parent | prev [-]

There is a difference between inventing/axiomatizing new mathematical theories and proving conjectures. Take the Riemann hypothesis (the big daddy among the pure math conjectures), and assume we (or an LLM) prove it tomorrow. How high do you estimate the expected practical usefulness of that proof?

glenstein 6 hours ago | parent [-]

That's an odd choice, because prime numbers routinely show up in important applications in cryptography. To actually solve RH would likely involve developing new mathematical tools which would then be brought to bear on deployment of more sophisticated cryptography. And solving it would be valuable in its own right, a kind of mathematical equivalent to discovering a fundamental law in physics which permanently changes what is known to be true about the structure of numbers.

Ironically this example turns out to be a great object lesson in not underestimating the utility of research based on an eyeball test. But it shouldn't even have to have any intuitively plausible payoff whatsoever in order to justify it. The whole point is that even if a given research paradigm completely failed the eyeball test, our attitude should still be that it very well could have practical utility, and there are so many historical examples to this effect (the other commenter already gave several examples, and the right thing to do would have been to acknowledge them), and besides I would argue they still have the same intrinsic value that any and all knowledge has.

cubefox 5 hours ago | parent [-]

> To actually solve RH would likely involve developing new mathematical tools which would then be brought to bear on deployment of more sophisticated cryptography.

I doubt that this is true.

glenstein 4 hours ago | parent [-]

It already has! The progress made thus far involved developing new ways to probabilistically estimate the density of primes, which have already been used in cryptography for secure key generation, based on a deeper understanding of how to quickly and efficiently find large prime numbers.
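To make the cryptography end concrete, here's a sketch of the standard probabilistic route to large primes (Miller-Rabin testing plus the prime number theorem's density estimate); this illustrates the kind of technique at issue, not any specific RH-derived result.

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin: a composite survives each random round with prob <= 1/4."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness found: n is definitely composite
    return True  # very probably prime

def random_prime(bits: int = 512) -> int:
    """Primes near 2^bits have density ~1/(bits * ln 2) by the prime number
    theorem, so a few hundred random odd draws suffice on average."""
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

print(random_prime(256))
```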

amazingman 14 hours ago | parent | prev [-]

It's unclear to me what point you are making.

threethirtytwo 17 hours ago | parent | prev [-]

[flagged]

magnio 16 hours ago | parent | next [-]

Pity that HN's ability to detect sarcasm is as robust as that of a sentiment analysis model using keyword-matching.

furyofantares 16 hours ago | parent | next [-]

The problem is more that it's an LLM-generated comment that's about 20x as long as it needed to be to get the point across.

cubefox 15 hours ago | parent | next [-]

It's obviously not LLM-generated.

kleene_op 15 hours ago | parent [-]

Phew. This is a relief, honestly!

threethirtytwo 16 hours ago | parent | prev [-]

It's not.

Evidence shows otherwise: Despite the "20x" length, many people actually missed the point.

furyofantares 3 hours ago | parent | next [-]

Oh yeah, there is also a problem with people not noticing they're reading LLM output, AND with people missing sarcasm on here. Actually, I'm OK with people missing sarcasm on here - I have plenty of places to go for sarcasm and wit and it's actually kind of nice to have a place where most posts are sincere, even if that sets people up to miss it when posts are sarcastic.

Which is also what makes it problematic that you're lying about your LLM use. I would honestly love to know your prompt and how you iterated on the post, how much you put into it and how much you edited or iterated. Although pretending there was no LLM involved at all is rather disappointing.

Unfortunately I think you might feel backed into a corner now that you've insisted otherwise but it's a genuinely interesting thing here that I wish you'd elaborate on.

eru 15 hours ago | parent | prev | next [-]

Despite or because?

_diyar 15 hours ago | parent | prev [-]

I definitely missed the point because of the length, and only realized after I read replies to your comment.

threethirtytwo 15 hours ago | parent | next [-]

Next time I'll write something shorter, or if you don't believe I wrote it... then I'll tell the AI to write something shorter.

quinnjh 15 hours ago | parent | prev [-]

It's not just verbose—it's almost a novel. Parent either cooked and capped, or has managed to perfectly emulate the patterns this parrot is stochastically known best for. I liked the pro-human vibe, if anything.

catlifeonmars 15 hours ago | parent | prev [-]

That’s just the internet. Detecting sarcasm requires a lot of context external to the content of any text. In person some of that is mitigated by intonation, facial expressions, etc. Typically it also requires that the reader is a native speaker of the language, or at least extremely proficient.

catoc 15 hours ago | parent | prev | next [-]

I firmly believe @threethirtytwo’s reply was not produced by an LLM

mkarliner 14 hours ago | parent [-]

Regardless of whether this text was written by an LLM or a human, it is still slop, with a human behind it just trying to wind people up. If there is a valid point to be made, it should be made, briefly.

catoc 13 hours ago | parent [-]

If the point was triggering a reply, the length and sarcasm certainly worked.

I agree brevity is always preferred. Making a good point while keeping it brief is much harder than rambling on.

But length is just a measure, quality determines if I keep reading. If a comment is too long, I won’t finish reading it. If I kept reading, it wasn’t too long.

rixed 15 hours ago | parent | prev | next [-]

Are you expecting people who can't detect self-delusions to be able to detect sarcasm, or are you just being cruel?

eru 15 hours ago | parent | prev | next [-]

> This is a relief, honestly. A prior solution exists now, which means the model didn’t solve anything at all. It just regurgitated it from the internet, which we can retroactively assume contained the solution in spirit, if not in any searchable or known form. Mystery resolved.

Vs

> Interesting that in Terence Tao's words: "though the new proof is still rather different from the literature proof)"

johnfn 16 hours ago | parent | prev | next [-]

I suspect this is AI generated, but it’s quite high quality, and doesn’t have any of the telltale signs that most AI generated content does. How did you generate this? It’s great.

AstroBen 16 hours ago | parent | next [-]

Their comments are full of "it's not x, it's y" over and over. Short pithy sentences. I'm quite confident it's AI written, maybe with a more detailed prompt than the average

I guess this is the end of the human internet

prussia 15 hours ago | parent | next [-]

To give them the benefit of the doubt, people who talk to AI too much probably start mimicking its style.

4k93n2 15 hours ago | parent | prev [-]

yea, i was suspicious by the second paragraph but was sure once i got to "that’s not engineering, it’s cosplay"

AstroBen 15 hours ago | parent | next [-]

It's also the wording. The weird phrases

"Glorified Google search with worse footnotes" what on earth does that mean?

AI has a distinct feel to it

lxgr 15 hours ago | parent | next [-]

And with enough motivated reasoning, you can find AI vibes in almost every comment you don’t agree with.

For better or worse, I think we might have to settle on “human-written until proven otherwise”, if we don’t want to throw “assume positive intent” out the window entirely on this site.

testdelacc1 15 hours ago | parent | prev [-]

Dude is swearing up and down that they came up with the text on their own. I agree with you though, it reeks of LLMs. The only alternative explanation is that they use LLMs so much that they’ve copied the writing style.

plaguuuuuu 15 hours ago | parent | prev [-]

I've had that exact phrase pop up from an LLM when I asked it for a more negative code review

threethirtytwo 16 hours ago | parent | prev | next [-]

Your intuition on AI is out of date by about 6 months. Those telltale signs no longer exist.

It wasn't AI generated. But if it was, there is currently no way for anyone to tell the difference.

catlifeonmars 15 hours ago | parent | next [-]

I’m confused by this. I still see this kind of phrasing in LLM generated content, even as recent as last week (using Gemini, if that matters). Are you saying that LLMs do not generate text like this, or that it’s now possible to get text that doesn’t contain the telltale “its not X, it’s Y”?

comp_throw7 16 hours ago | parent | prev | next [-]

> But if it was, there is currently no way for anyone to tell the difference.

This is false. There are many human-legible signs, and there do exist fairly reliable AI detection services (like Pangram).

threethirtytwo 16 hours ago | parent | next [-]

I've tested some of those services and they weren't very reliable.

CamperBob2 7 hours ago | parent | prev [-]

If such a thing did exist, it would exist only until people started training models to hide from it.

Negative feedback is the original "all you need."

velox_neb 16 hours ago | parent | prev [-]

> It wasn't AI generated.

You're lying: https://www.pangram.com/history/94678f26-4898-496f-9559-8c4c...

Not that I needed pangram to tell me that, it's obvious slop.

threethirtytwo 16 hours ago | parent | next [-]

I wouldn't know how to prove otherwise, other than to tell you that I have seen these tools show incorrect results for both AI-generated text and human-written text.

lxgr 15 hours ago | parent | prev | next [-]

Good thing you had a stochastic model backing up (with “low confidence”, no less) your vague intuition of a comment you didn’t like being AI-written.

XenophileJKO 16 hours ago | parent | prev [-]

I must be a bot, because I love existential dread; that's a great phrase. I feel like they trigger a lot on literate prose.

lxgr 15 hours ago | parent [-]

Sad times when the only remaining way to convince LLM luddites of somebody’s humanity is bad writing.

CamperBob2 16 hours ago | parent | prev | next [-]

(edit: removed duplicate comment from above, not sure how that happened)

undeveloper 16 hours ago | parent | next [-]

the poster is in fact being very sarcastic. arguing in favor of emergent reasoning does in fact make sense

threethirtytwo 16 hours ago | parent | prev [-]

It's a formal sarcasm piece.

CamperBob2 16 hours ago | parent | prev [-]

It's bizarre. The same account was previously arguing in favor of emergent reasoning abilities in another thread ( https://news.ycombinator.com/item?id=46453084 ) -- I voted it up, in fact! Turing test failed, I guess.

(edit: fixed link)

threethirtytwo 16 hours ago | parent | next [-]

I thought the mockery and sarcasm in my piece was rather obvious.

CamperBob2 16 hours ago | parent [-]

Poe's Law is the real Bitter Lesson.

habinero 16 hours ago | parent | prev [-]

We need a name for the much more trivial version of the Turing test that replaces "human" with "weird dude with rambling ideas he clearly thinks are very deep"

I'm pretty sure it's like "can it run DOOM" and someone could make an LLM that passes this that runs on a pregnancy test.

nurettin 17 hours ago | parent | prev [-]

Why not plan for a future where a lot of non-trivial tasks are automated instead of living on the edge with all this anxiety?

threethirtytwo 16 hours ago | parent [-]

[flagged]

undeveloper 16 hours ago | parent | prev | next [-]

come out of the irony layer for a second -- what do you believe about LLMs?

jorvi 14 hours ago | parent | prev | next [-]

I mean.. LLMs hit a pretty hard wall a while ago, with the only solution being throwing monstrous compute at eking out the remaining few percent of improvement (real world, not benchmarks). That's not to mention hallucinations / false paths being a foundational problem.

LLMs will continue to get slightly better in the next few years, but mainly a lot more efficient, which will also mean better and better local models. And grounding might get better, but that just means fewer wrong answers, not better right answers.

So no need for doomerism. The people saying LLMs are a few years away from eating the world are either in on the con or unaware.

7777332215 16 hours ago | parent | prev | next [-]

If all of it is going away and you should deny reality, what does everything else you wrote even mean?

habinero 16 hours ago | parent | prev [-]

Yes, it is simply impossible that anyone could look at things, do their own evaluations, and come to a different, much more skeptical conclusion.

The only possible explanation is people say things they don't believe out of FUD. Literally the only one.