thaumasiotes a day ago

> If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.

Really? Regardless of whether it's a good paper?

Aurornis a day ago | parent | next [-]

Citations are a key part of the paper. If the paper isn’t supported by the citations, it’s not a good paper.

withinboredom a day ago | parent [-]

Have you ever followed citations before? In my experience, many don't support what's being cited: some say the opposite, others aren't even related. Probably only 60%-ish actually cite something relevant.

Aurornis 11 hours ago | parent | next [-]

I follow them a lot. I’ve also had cases where they don’t support the paper.

That doesn’t make it okay. Bad practices by human writers and reviewers are also bad.

WWWWH a day ago | parent | prev [-]

Well yes, but just because that’s bad doesn’t mean this isn’t far worse.

zwnow a day ago | parent | prev [-]

How is it a good paper if the info in it can't be trusted lmao

thaumasiotes a day ago | parent [-]

Whether the information in the paper can be trusted is an entirely separate concern.

Old Chinese mathematics texts are difficult to date because they often purport to be older than they are. But the contents are unaffected by this. There is a history-of-math problem, but there's no math problem.

hnfong a day ago | parent | next [-]

You are totally correct that hallucinated citations do not automatically invalidate the paper. The paper sans citations might even be great (the LLM could generate great stuff; it's possible).

But the author(s) of the paper are almost by definition bad scientists (or whatever field they are in). When researchers write a paper for publication, even if they're not expected to write every word themselves, they should at least be responsible for checking the accuracy of the contents, and citations are part of the paper...

alexcdot a day ago | parent | prev | next [-]

Problem is that most ML papers today are not independently verifiable proofs: in most, you have to trust that the scientist didn't fraudulently produce their results.

There is so much BS being submitted to conferences, and reducing the amount of BS reviewers see would result in less skimpy reviews and less apathy.

zwnow a day ago | parent | prev [-]

Not really true nowadays. Stuff in whitepapers needs to be verifiable, which is kinda difficult with hallucinations.

Whether students directly used LLMs, or just read and cited online content that was produced with them, it shows how difficult these tools have made gathering verifiable information.

thaumasiotes a day ago | parent [-]

> Stuff in whitepapers needs to be verifiable which is kinda difficult with hallucinations.

That's... gibberish.

Anything you can do to verify a paper, you can do to verify the same paper with all citations scrubbed.

Whether the citations support the paper, or whether they exist at all, just doesn't have anything to do with what the paper says.

zwnow a day ago | parent [-]

I don't think you know how whitepapers work then