theoldgreybeard a day ago

If a carpenter builds a crappy shelf “because” his power tools are not calibrated correctly - that’s a crappy carpenter, not a crappy tool.

If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.

AI is not the problem, laziness and negligence is. There needs to be serious social consequences to this kind of thing, otherwise we are tacitly endorsing it.

CapitalistCartr a day ago | parent | next [-]

I'm an industrial electrician. A lot of poor electrical work is visible only to a fellow electrician, and sometimes only another industrial electrician. Bad technical work requires technical inspectors to criticize. Sometimes highly skilled ones.

andy99 a day ago | parent | next [-]

I’ve reviewed a lot of papers; I don’t consider it the reviewer's responsibility to manually verify all citations are real. If there was an unusual citation that was relied on heavily for the basis of the work, one would expect it to be checked. Things like broad prior work you’d just assume are part of the background.

The reviewer is not a proofreader, they are checking the rigour and relevance of the work, which does not rest heavily on all of the references in a document. They are also assuming good faith.

stdbrouw a day ago | parent | next [-]

The idea that references in a scientific paper should be plentiful but aren't really that important is a consequence of a previous technological revolution: the internet.

You'll find a lot of papers from, say, the '70s, with a grand total of maybe 10 references, all of them to crucial prior work, and if those references don't say what the author claims they should say (e.g. that the particular method that is employed is valid), then chances are that the current paper is weaker than it seems, or even invalid, and so it is extremely important to check those references.

Then the internet came along, scientists started padding their work with easily found but barely relevant references and journal editors started requiring that even "the earth is round" should be well-referenced. The result is that peer reviewers feel that asking them to check the references is akin to asking them to do a spell check. Fair enough, I agree, I usually can't be bothered to do many or any citation checks when I am asked to do peer review, but it's good to remember that this in itself is an indication of a perverted system, which we just all ignored -- at our peril -- until LLM hallucinations upset the status quo.

tialaramex a day ago | parent | next [-]

Whether in the 1970s or now, it's too often the case that a paper says "Foo and Bar are X" and cites two sources for this fact. You chase down the sources, the first one says "We weren't able to determine whether Foo is X" and never mentions Bar. The second says "Assuming Bar is X, we show that Foo is probably X too".

The paper author likely believes Foo and Bar are X, it may well be that all their co-workers, if asked, would say that Foo and Bar are X, but "Everybody I have coffee with agrees" can't be cited, so we get this sort of junk citation.

Hopefully it's not crucial to the new work that Foo and Bar are in fact X. But that's not always the case, and it's a problem that years later somebody else will cite this paper for the claim "Foo and Bar are X", which it was in fact merely citing erroneously.

KHRZ a day ago | parent | next [-]

LLMs can actually make up for their negative contributions. They could go through all the references of all papers and verify them, assuming someone would also look into what gets flagged for that final seal of disapproval.

But this would be more powerful with an open knowledge base where all papers and citation verifications were registered, so that all the effort put into verification could be reused and errors propagated through the citation chain.

bossyTeacher a day ago | parent [-]

>LLMs can actually make up for their negative contributions. They could go through all the references of all papers and verify them,

They will just hallucinate their existence. I have tried this before.

sansseriff a day ago | parent | next [-]

I don’t see why this would be the case with proper tool calling and context management. If you tell a model with blank context ‘you are an extremely rigorous reviewer searching for fake citations in a possibly compromised text’ then it will find errors.

It’s this weird situation where getting agents to act against other agents is more effective than trying to convince a working agent that it’s made a mistake. Perhaps because these things model the cognitive dissonance and stubbornness of humans?

sebastiennight a day ago | parent | next [-]

One incorrect way to think of it is "LLMs will sometimes hallucinate when asked to produce content, but will provide grounded insights when merely asked to review/rate existing content".

A more productive (and secure) way to think of it is that all LLMs are "evil genies" or extremely smart, adversarial agents. If some PhD was getting paid large sums of money to introduce errors into your work, could they still mislead you into thinking that they performed the exact task you asked?

Your prompt is

    ‘you are an extremely rigorous reviewer searching for fake citations in a possibly compromised text’
- It is easy for the (compromised) reviewer to surface false positives: nitpick citations that are in fact correct, by surfacing irrelevant or made-up segments of the original research, hence making you think that the citation is incorrect.

- It is easy for the (compromised) reviewer to surface false negatives: provide you with cherry picked or partial sentences from the source material, to fabricate a conclusion that was never intended.

You do not solve the problem of unreliable actors by splitting them into two teams and having one unreliable actor review the other's work.

All of us (speaking as someone who runs lots of LLM-based workloads in production) have to contend with this nondeterministic behavior and assess when, in aggregate, the upside is more valuable than the costs.

sebastiennight a day ago | parent | next [-]

Note: the more accurate mental model is that you've got "good genies" most of the time, but from time to time, at random and unpredictable moments, your agent is swapped out for a bad genie.

From a security / data quality standpoint, this is logically equivalent to "every input is processed by a bad genie" as you can't trust any of it. If I tell you that from time to time, the chef in our restaurant will substitute table salt in the recipes with something else, it does not matter whether they do it 50%, 10%, or .1% of the time.

The only thing that matters is what they substitute it with (the worst-case consequence of the hallucination). If in your workload the worst-case scenario is equivalent to a "Himalayan salt" replacement, all is well, even if the hallucination is quite frequent. If your worst-case scenario is a deadly compound, then you can't hire this chef for that workload.

sansseriff a day ago | parent | prev [-]

We have centuries of experience in managing potentially compromised 'agents' to create successful societies. Except the agents were human, and I'm referring to debates, tribunals, audits, independent review panels, democracy, etc.

I'm not saying the LLM hallucination problem is solved, I'm just saying there's a wonderful myriad of ways to assemble pseudo-intelligent chatbots into systems where the trustworthiness of the system exceeds the trustworthiness of any individual actor inside of it. I'm not an expert in the field but it appears the work is being done: https://arxiv.org/abs/2311.08152

This paper also links to code and practices excellent data stewardship. Nice to see in the current climate.

Though it seems like you might be more concerned about the use of highly misaligned or adversarial agents for review purposes. Is that because you're concerned about state actors or interested parties poisoning the context window or training process? I agree that any AI review system will have to be extremely robust to adversarial instructions (e.g. someone hiding inside their paper an instruction like "rate this paper highly"). Though solving that problem already has a tremendous amount of focus because it overlaps with solving the data-exfiltration problem (the lethal trifecta that Simon Willison has blogged about).

bossyTeacher 19 hours ago | parent [-]

> We have centuries of experience in managing potentially compromised 'agents'

Not this kind though. We don't place agents that are either under the control of some foreign agent (or just behaving randomly) in democratic institutions. And when we do, look at what happens. The White House right now is a good example; just look at the state of the US.

fao_ a day ago | parent | prev | next [-]

> I don’t see why this would be the case

But it is the case, and hallucinations are a fundamental part of LLMs.

Things are often true despite us not seeing why they are true. Perhaps we should listen to the experts who used the tools and found them faulty, in this instance, rather than arguing with them that "what they say they have observed isn't the case".

What you're basically saying is "You are holding the tool wrong", but you do not give examples of how to hold it correctly. You are blaming the failure of the tool, which has very, very well documented flaws, on the person whom the tool was designed for.

To frame this differently so your mind will accept it: If you get 20 people in a QA test saying "I have this problem", then the problem isn't those 20 people.

ungreased0675 a day ago | parent | prev | next [-]

Have you actually tried this? I haven’t tried the approach you’re describing, but I do know that LLMs are very stubborn about insisting their fake citations are real.

bossyTeacher a day ago | parent | prev [-]

If you truly think that you have an effective solution to hallucinations, you will become instantly rich, because literally no one out there has an economically and technologically feasible solution to hallucinations.

whatyesaid a day ago | parent [-]

For references, as the OP said, I don't see why it isn't possible. A reference is something that either exists and is accessible (even if paywalled) or doesn't exist. For reasoning, hallucinations are different.

logifail a day ago | parent [-]

> I don't see why it isn't possible

(In good faith) I'm trying really hard not to see this as an "argument from incredulity"[0] and I'm struggling...

Full disclosure: natural sciences PhD, and a couple of (IMHO lame) published papers, and so I've seen the "inside" of how lab science is done, and is (sometimes) published. It's not pretty :/

[0] https://en.wikipedia.org/wiki/Argument_from_incredulity

whatyesaid a day ago | parent [-]

Suppose you've got a prompt along the lines of: given some references, check their validity. The model searches against the articles and URLs provided and returns "yes", "no", or (let's also add) "inconclusive" for each reference. Basic LLMs can manage this much instruction following, just as 99.99% of the time they don't get 829 multiplied by 291 wrong when you ask them (nowadays). You'd prompt it to back all claims solely with search results and external links showing exact matches, and not to use its own internal knowledge.

The fake references generated in the ICLR papers were, I assume, due to people asking an LLM to write parts of the related-work section, not to verify references. In that prompt it relies a lot on internal knowledge and probably spends most of its time thinking about what the relevant subareas and the cutting edge are; I suppose it omits a second-pass check. In the other case, you have the task of verifying references, which is mostly basic instruction following for advanced models that have web access. I think you'd run the risks of data poisoning and model timeouts more than hallucinations.
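
Roughly the shape I have in mind - just a sketch, where search_grounded_check is a hypothetical stand-in for whatever search-capable model or agent you run; the point is the contract, not the vendor:

    # Hedged sketch: verify a reference list with a search-grounded model.
    # search_grounded_check is hypothetical - it must answer only from the
    # search results it retrieves, never from internal model knowledge.
    VERDICTS = ("yes", "no", "inconclusive")

    PROMPT = (
        "You are verifying a reference list. For each reference, decide whether "
        "it exists, using ONLY the search results and URLs you retrieve - never "
        "your internal knowledge. Answer with exactly one of: yes, no, "
        "inconclusive, plus the URL of the exact match if you found one."
    )

    def check_references(references, search_grounded_check):
        results = {}
        for ref in references:
            verdict, evidence_url = search_grounded_check(PROMPT, ref)
            if verdict not in VERDICTS:
                verdict = "inconclusive"  # never trust a malformed answer
            results[ref] = (verdict, evidence_url)
        return results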

knome a day ago | parent | prev [-]

I assumed they meant using the LLM to extract the citations and then using external tooling to look up and grab the original paper, at least verifying that it exists, has a relevant title and summary, and that the authors are correctly cited.

mike_hearn 17 hours ago | parent [-]

Which is what the people in this new article are doing.

HPsquared a day ago | parent | prev [-]

Wikipedia calls this citogenesis.

ineedasername a day ago | parent | prev | next [-]

>“consequence of a previous technological revolution: the internet.”

And also of increasingly ridiculous and overly broad concepts of what plagiarism is. At some point things shifted from “don’t represent others’ work as novel” towards “give a genealogical ontology of every concept above that of an intro 101 college course on the topic.”

semi-extrinsic a day ago | parent | prev | next [-]

It's also a consequence of the sheer number of building blocks which are involved in modern science.

In the methods section, it's very common to say "We employ method barfoo [1] as implemented in library libbar [2], with the specific variant widget due to Smith et al. [3] and the gobbledygook renormalization [4,5]. The feoozbar is solved with geometric multigrid [6]. Data is analyzed using the froiznok method [7] from the boolbool library [8]." There goes 8, now you have 2 citations left for the introduction.

stdbrouw a day ago | parent [-]

Do you still feel the same way if the froiznok method is an ANOVA table of a linear regression, with a log-transformed outcome? Should I reference Fisher, Galton, Newton, the first person to log transform an outcome in a regression analysis, the first person to log transform the particular outcome used in your paper, the R developers, and Gauss and Markov for showing that under certain conditions OLS is the best linear unbiased estimator? And then a couple of references about the importance of quantitative analysis in general? Because that is the level of detail I’m seeing :-)

semi-extrinsic a day ago | parent [-]

Yeah, there is an interesting question there (always has been). When do you stop citing the paper for a specific model?

Just to take some examples, is BiCGStab famous enough now that we can stop citing van der Vorst? Is the AdS/CFT correspondence well known enough that we can stop citing Maldacena? Are transformers so ubiquitous that we don't have to cite "Attention is all you need" anymore? I would be closer to yes than no on these, but it's not 100% clear-cut.

One obvious criterion has to be "if you leave out the citation, will it be obvious to the reader what you've done/used"? Another metric is approximately "did the original author get enough credit already"?

stdbrouw 17 hours ago | parent [-]

Yeah, I didn't want to be contrary just for the sake of it, the heuristics you mention seem like good ones, and if followed would probably already cut down on quite a few superfluous references in most papers.

freehorse a day ago | parent | prev | next [-]

It is not (just) a consequence of the internet; scientific production itself has grown exponentially. There are many more papers cited simply because there are more papers, period.

varjag a day ago | parent | prev | next [-]

Not even the Internet per se, but the citation index becoming a universally accepted KPI for research work.

HPsquared a day ago | parent | prev [-]

Maybe there could be a system to classify the importance of each reference.

zipy124 a day ago | parent [-]

Systems do exist for this, but they're rather crude.

grayhatter a day ago | parent | prev | next [-]

> The reviewer is not a proofreader, they are checking the rigour and relevance of the work, which does not rest heavily on all of the references in a document.

I've always assumed peer review is similar to diff review. Where I'm willing to sign my name onto the work of others. If I approve a diff/pr and it takes down prod. It's just as much my fault, no?

> They are also assuming good faith.

I can only relate this to code review, but assuming good faith means you assume they didn't try to introduce a bug by adding this dependency. I should still check to make sure this new dep isn't some typosquatted package. That's the rigor I'm responsible for.

dilawar a day ago | parent | next [-]

> I've always assumed peer review is similar to diff review. Where I'm willing to sign my name onto the work of others. If I approve a diff/pr and it takes down prod. It's just as much my fault, no?

Ph.D. in neuroscience here. Programmer by trade. This is not true. The less you know about most peer reviews, the better.

The better peer reviews are also not this 'thorough' and no one expects reviewers to read or even check references. If the authors cite something the reviewers are familiar with and use it wrong, the reviewers will likely complain. Or if reviewers find some unknown citation very relevant to their own work, they will read it.

I don't have a great analogy to draw here. Peer review is usually thankless and unpaid work, so there is unlikely to be any motivation for fraud detection unless it somehow affects your own work.

wpollock a day ago | parent [-]

> The better peer reviews are also not this 'thorough' and no one expects reviewers to read or even check references.

Checking references can be useful when you are not familiar with the topic (but must review the paper anyway). In many conference proceedings that I have reviewed for, many if not most citations were redacted so as to keep the author anonymous (citations to the author's prior work or that of their colleagues).

LLMs could be used to find prior work anyway, today.

tpoacher a day ago | parent | prev | next [-]

This is true, but here the equivalent situation is someone using a greek question mark (";") instead of a semicolon (";"), and you as a code reviewer are only expected to review the code visually and are not provided the resources required to compile the code on your local machine to see the compiler fail.

Yes in theory you can go through every semicolon to check if it's not actually a greek question mark; but one assumes good faith and baseline competence such that you as the reviewer would generally not be expected to perform such pedantic checks.

So if you think you might have reasonably missed greek question marks in a visual code review, then hopefully you can also appreciate how a paper reviewer might miss a false citation.

scythmic_waves a day ago | parent | next [-]

> as a code reviewer [you] are only expected to review the code visually and are not provided the resources required to compile the code on your local machine to see the compiler fail.

As a PR reviewer I frequently pull down the code and run it. Especially if I'm suggesting changes because I want to make sure my suggestion is correct.

Do other PR reviewers not do this?

dataflow a day ago | parent | next [-]

I don't commonly do this and I don't know many people who do this frequently either. But it depends strongly on the code, the risks, the gains of doing so, the contributor, the project, the state of testing and how else an error would get caught (I guess this is another way of saying "it depends on the risks"), etc.

E.g. you can imagine that if I'm reviewing changes in authentication logic, I'm obviously going to put a lot more effort into validation than if I'm reviewing a container and wondering if it would be faster as a hashtable instead of a tree.

> because I want to make sure my suggestion is correct.

In this case I would just ask "have you already also tried X" which is much faster than pulling their code, implementing your suggestion, and waiting for a build and test to run.

tpoacher a day ago | parent | prev | next [-]

I do too, but this is a conference, I doubt code was provided.

And even then, what you're describing isn't review per se, it's replication. In principle there are entire journals that one can submit replication reports to, which count as actual peer reviewable publications in themselves. So one needs to be pragmatic with what is expected from a peer review (especially given the imbalance between resources invested to create one versus the lack of resources offered and lack of any meaningful reward)

Majromax a day ago | parent [-]

> I do too, but this is a conference, I doubt code was provided.

Machine learning conferences generally encourage (anonymized) submission of code. However, that still doesn't mean that replication is easy. Even if the data is also available, replication of results might require impractical levels of compute power; it's not realistic to ask a peer reviewer to pony up for a cloud account to reproduce even medium-scale results.

lesam a day ago | parent | prev | next [-]

If there’s anything I would want to run to verify, I ask the author to add a unit test. Generally, the existing CI test + new tests in the PR having run successfully is enough. I might pull and run it if I am not sure whether a particular edge case is handled.

Reviewers wanting to pull and run many PRs makes me think your automated tests need improvement.

Terr_ a day ago | parent | prev | next [-]

I don't, but that's because ensuring the PR compiles and passes old+new automated tests is an enforced requirement before it goes out.

So running it myself involves judging other risks, much higher-level ones than bad unicode characters, like the GUI button being in the wrong place.

grayhatter a day ago | parent | prev | next [-]

> Do other PR reviewers not do this?

Some do; many, like peer reviewers, are unable to consider the consequences of their negligence.

But it's always a welcome reminder that some people care about doing good work. That's easy to forget browsing HN, so I appreciate the reminder :)

vkou a day ago | parent | prev [-]

> Do other PR reviewers not do this?

No, because this is usually a waste of time, because CI enforces that the code and the tests can run at submission time. If your CI isn't doing it, you should put some work in to configure it.

If you regularly have to do this, your codebase should probably have more tests. If you don't trust the author, you should ask them to include test cases for whatever it is that you are concerned about.

grayhatter a day ago | parent | prev | next [-]

> This is true, but here the equivalent situation is someone using a greek question mark (";") instead of a semicolon (";"),

No it's not. I think you're trying to make a different point, because you're using an example of a specific deliberate malicious way to hide a token error that prevents compilation, but is visually similar.

> and you as a code reviewer are only expected to review the code visually and are not provided the resources required to compile the code on your local machine to see the compiler fail.

What weird world are you living in where you don't have CI? Also, it's pretty common that I'll test code locally when reviewing something more complex or more important, if I don't have CI.

> Yes in theory you can go through every semicolon to check if it's not actually a greek question mark; but one assumes good faith and baseline competence such that you as the reviewer would generally not be expected to perform such pedantic checks.

I don't, because it won't compile. Not because I assume good faith. References and citations are similar to introducing dependencies. We're talking about completely fabricated deps. e.g. This engineer went on npm and grabbed the first package that said left-pad but it's actually a crypto miner. We're not talking about a citation missing a page number, or publication year. We're talking about something that's completely incorrect, being represented as relevant.

> So if you think you might have reasonably missed greek question marks in a visual code review, then hopefully you can also appreciate how a paper reviewer might miss a false citation.

I would never miss this, because the important thing is code needs to compile. If it doesn't compile, it doesn't reach the master branch. Peer review of a paper doesn't have CI, I'm aware, but it's also not vulnerable to syntax errors like that. A paper with a fake semicolon isn't meaningfully different, so this analogy doesn't map to the fraud I'm commenting on.

tpoacher a day ago | parent [-]

you have completely missed the point of the analogy.

breaking the analogy beyond the point where it is useful by introducing non-generalising specifics is not a useful argument. Otherwise I can counter your more specific non-generalising analogy by introducing little green aliens sabotaging your imaginary CI with the same ease and effect.

grayhatter a day ago | parent [-]

I disagree you could do that and claim to be reasonable.

But I agree, because I'd rather discuss the pragmatics and not bicker over the semantics about an analogy.

Introducing a token error is different from plagiarism, no? Someone writing code that can't compile is different from someone "stealing" proprietary code from some company and contributing it to some FOSS repo?

In order to assume good faith, you also need to assume the author is the origin. But that's clearly not the case. The origin is from somewhere else, and the author that put their name on the paper didn't verify it, and didn't credit it.

tpoacher a day ago | parent [-]

Sure but the focus here is on the reviewer not the author.

The point is what is expected as reasonable review before one can "sign their name on it".

"Lazy" (or possibly malicious) authors will always have incentives to cut corners as long as no mechanisms exist to reject (or even penalise) the paper on submission automatically. Which would be the equivalent of a "compiler error" in the code analogy.

Effectively the point is, in the absence of such tools, the reviewer can only reasonably be expected to "look over the paper" for high-level issues; catching such low-level issues via manual checks by reviewers has massively diminishing returns for the extra effort involved.

So I don't think the conference shaming the reviewers here in the absence of providing such tooling is appropriate.

xvilka a day ago | parent | prev [-]

Code correctness should be checked automatically with CI and the test suite. New tests should be added. This is exactly what makes sure these stupid errors don't bother the reviewer. Same for code formatting and documentation.

merely-unlikely a day ago | parent | next [-]

This discussion makes me think peer review needs more automated tooling, somewhat analogous to what software engineers have long relied on. For example, a tool could use an LLM to check that a citation actually substantiates the claim the paper says it does, or else flag the claim for review.
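
A minimal sketch of what I mean - fetch_abstract and ask_llm are placeholders for a real retrieval step and a real model call; the point is the flag-don't-decide workflow:

    # Hedged sketch: flag claims whose cited source doesn't appear to support them.
    def review_citations(claims_with_citations, fetch_abstract, ask_llm):
        flagged = []
        for claim, cited_doi in claims_with_citations:
            abstract = fetch_abstract(cited_doi)  # e.g. from the publisher or an index
            if abstract is None:
                flagged.append((claim, cited_doi, "cited work not found"))
                continue
            answer = ask_llm(
                "Does the abstract below support the claim? Answer exactly one of "
                "'supported', 'not supported', 'cannot tell'.\n\n"
                f"Claim: {claim}\n\nAbstract: {abstract}"
            )
            if answer.strip().lower() != "supported":
                flagged.append((claim, cited_doi, answer))
        return flagged  # hand these to a human reviewer; don't auto-reject

Anything flagged would still need a human look; the tool only narrows down where to look.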

noitpmeder a day ago | parent | next [-]

I'd go one further and say all published papers should come with a clear list of "claimed truths", and one is only able to cite said paper if one links to an explicit claimed truth.

Then you can build a true hierarchy of citation dependencies, checked 'statically', and have better indications of impact if a fundamental truth is disproven, ...
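
As a toy illustration of the 'static checking' I mean (the claim IDs here are hypothetical, nothing more):

    # Toy model: papers export named claims, and citations point at a specific
    # claim rather than at a whole paper. If a claim is disproven, everything
    # that transitively depends on it can be flagged automatically.
    claims = {
        "smith2020:thm1": [],                         # cites nothing
        "jones2022:method_valid": ["smith2020:thm1"],
        "doe2024:main_result": ["jones2022:method_valid"],
    }

    def affected_by(disproven, claims):
        affected, changed = set(), True
        while changed:
            changed = False
            for claim_id, deps in claims.items():
                if claim_id in affected:
                    continue
                if any(d == disproven or d in affected for d in deps):
                    affected.add(claim_id)
                    changed = True
        return affected

    print(sorted(affected_by("smith2020:thm1", claims)))
    # ['doe2024:main_result', 'jones2022:method_valid']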

vkou a day ago | parent [-]

Have you authored a lot of non-CS papers?

Could you provide a proof of concept paper for that sort of thing? Not a toy example, an actual example, derived from messy real-world data, in a non-trivial[1] field?

---

[1] Any field is non-trivial when you get deep enough into it.

alexcdot a day ago | parent | prev [-]

Hey, I'm part of the GPTZero team that built the automated tooling to get the results in that article!

Totally agree with your thinking here. We can't just hand this to an LLM, because of the need for industry-specific standards for what counts as a hallucination vs. a match, and for how to do the search.

thfuran a day ago | parent | prev [-]

What exactly is the analogy you’re suggesting, using LLMs to verify the citations?

tpoacher a day ago | parent [-]

not OP, but that wouldn't really be necessary.

One could submit their bibtex files and expect bibtex citations to be verifiable using a low level checker.

Worst-case scenario, if your bibtex citation was a variant of one in the checker database, you'd be asked to correct it to match the canonical version.
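
Something along these lines, say - a rough sketch assuming the bibtexparser package and the public Crossref REST API; any canonical index such as DBLP would do just as well:

    import bibtexparser
    import requests

    # Rough sketch of a "low level checker": parse the submitted .bib file and
    # look each title up against a canonical index (Crossref here, as an example).
    with open("refs.bib") as f:
        db = bibtexparser.load(f)

    for entry in db.entries:
        title = entry.get("title", "").strip("{}")
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": title, "rows": 1},
            timeout=30,
        )
        items = resp.json()["message"]["items"]
        if not items:
            print(f"NOT FOUND: {entry['ID']}: {title}")
        elif items[0]["title"][0].lower() != title.lower():
            print(f"VARIANT: {entry['ID']}: '{title}' vs canonical '{items[0]['title'][0]}'")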

However, as others here have stated, hallucinated "citations" are actually the lesser problem. Citing irrelevant papers based on a fly-by reference is a much harder problem; this was present even before LLMs, but this has now become far worse with LLMs.

thfuran a day ago | parent [-]

Yes, I think verifying mere existence of the cited paper barely moves the needle. I mean, I guess automated verification of that is a cheap rejection criterion, but I don’t think it’s overall very useful.

alexcdot a day ago | parent [-]

Really good point. One of the cofounders of GPTZero here!

The tool GPTZero used in the article also detects whether the citation supports the claim, if you scroll to "cited information accuracy" here: https://app.gptzero.me/documents/1641652a-c598-453f-9c94-e0b...

This is still in beta because it's a much harder problem for sure, since it's hard to determine whether a 40-page paper supports a claim (if the paper claims X is computationally intractable, does that mean algorithms to compute approximations of X are slow?).

pron a day ago | parent | prev | next [-]

That is not, cannot be, and shouldn't be, the bar for peer review. There are two major differences between it and code review:

1. A patch is self-contained and applies to a codebase you have just as much access to as the author. A paper, on the other hand, is just the tip of the iceberg of research work, especially if there is some experiment or data collection involved. The reviewer does not have access to, say, videos of how the data was collected (and even if they did, they don't have the time to review all of that material).

2. The software is also self-contained. That's "production". But a scientific paper does not necessarily aim to represent scientific consensus, but rather a finding by a particular team of researchers. If a paper's conclusions are wrong, it's expected that it will be refuted by another paper.

grayhatter a day ago | parent [-]

> That is not, cannot be, and shouldn't be, the bar for peer review.

Given the repeatability crisis I keep reading about, maybe something should change?

> 2. The software is also self-contained. That's "production". But a scientific paper does not necessarily aim to represent scientific consensus, but rather a finding by a particular team of researchers. If a paper's conclusions are wrong, it's expected that it will be refuted by another paper.

This is a much, MUCH stronger point. I would have lead with this because the contrast between this assertion, and my comparison to prod is night and day. The rules for prod are different from the rules of scientific consensus. I regret losing sight of that.

garden_hermit a day ago | parent | next [-]

> Given the repeatability crisis I keep reading about, maybe something should change?

The replication crisis — assuming that it is actually a crisis — is not really solvable with peer review. If I'm reviewing a psychology paper presenting the results of an experiment, I am not able to re-conduct the entire experiment as presented by the authors, which would require completely changing my lab, recruiting and paying participants, and training students & staff.

Even if I did this, and came to a different result than the original paper, what does it mean? Maybe I did something wrong in the replication, maybe the result is only valid for certain populations, maybe inherent statistical uncertainty means we just get different results.

Again, the replication crisis — such that it exists — is not the result of peer review.

hnfong a day ago | parent | prev [-]

IMHO what should change is we stop putting "peer reviewed" articles on a pedestal.

Even if peer review were as rigorous as code review (and the former is usually unpaid), we all know that reviewed code still has bugs, and a programmer would be nuts to go around saying "this code is reviewed by experts, we can assume it's bug free, right?"

But there are too many people who just assume that peer-reviewed articles are somehow automatically correct.

vkou a day ago | parent [-]

> IMHO what should change is we stop putting "peer reviewed" articles on a pedestal.

Correct. Peer review is a minimal and necessary but not sufficient step.

freehorse a day ago | parent | prev | next [-]

A reviewer is assessing the relevance and "impact" of a paper rather than its correctness directly. Reviewers may not even have access to the data the authors used. The way it essentially works is that an editor asks the reviewers "is this paper worthy of being published in my journal?" and the reviewers basically have to answer that question. The process is actually the editor/journal's responsibility.

chroma205 a day ago | parent | prev | next [-]

> I've always assumed peer review is similar to diff review. Where I'm willing to sign my name onto the work of others. If I approve a diff/pr and it takes down prod. It's just as much my fault, no?

No.

Modern peer review is “how can I do minimum possible work so I can write ‘ICLR Reviewer 2025’ on my personal website”

freehorse a day ago | parent | next [-]

The vast majority of people I see do not even mention who they review for in CVs etc. It is usually more akin to volunteer-based, thankless work. Unless you are an editor or something similar at a journal, what you review for does not count for much.

grayhatter a day ago | parent | prev [-]

> No. [...] how can I do minimum possible work

I don't know, I still think this describes most of the reviews I've seen

I just hope most devs that do this know better than to admit to it.

bjourne a day ago | parent | prev [-]

For ICLR, reviewers were asked to review 5 papers in two weeks. Unpaid voluntary work, in addition to their normal teaching, supervision, meetings, and other research duties. It's just not possible to understand and thoroughly review each paper, even for topic experts. If you want to compare peer review to coding, it's more like "no syntax errors, code still compiles" than PR review.

alexcdot a day ago | parent [-]

I really like what IJCAI is doing, paying reviewers for this work out of the $100 fee from authors.

Yeah, it's insane, the workload reviewers are faced with, plus being an author who gets a review from a novice.

PeterStuer a day ago | parent | prev | next [-]

I think the root problem is that everyone involved, from authors to reviewers to publishers, knows that 99.999% of papers are of no consequence whatsoever, just empty calories with the sole purpose of padding quotas for all involved, and thus they are not going to put in the effort as if the papers mattered.

This is systemic and unlikely to change anytime soon. Remedies have been proposed (e.g. limits on how many papers an author can publish per year, let's say 4 to be generous), but they are unlikely to gain traction: even though most would agree on the benefits, everyone involved in the system would stand to lose in the short term.

Aurornis a day ago | parent | prev | next [-]

> I don’t consider it the reviewer's responsibility to manually verify all citations are real

I guess this explains all those times over the years where I follow a citation from a paper and discover it doesn’t support what the first paper claimed.

rokob a day ago | parent | prev | next [-]

As a reviewer I at least skim the cited paper for every reference in every paper that I review. If a reference isn't useful to furthering the point of the paper, then my feedback is to remove it. Adding a bunch of junk to a giant background section because it is broadly related is a waste of everyone's time and should be removed. Most of the time you are aware of the papers being cited anyway, because that is the whole point of reviewing in your area of expertise.

not2b a day ago | parent | prev | next [-]

Agreed. I used to review lots of submissions for IEEE and similar conferences, and didn't consider it my job to verify every reference. No one did, unless the use of the reference triggered an "I can't believe it said that" reaction. Of course, back then, there wasn't a giant plagiarism machine known to fabricate references, so if tools can find fake references easily the tools should be used.

andai a day ago | parent | prev | next [-]

>I don’t consider it the reviewer's responsibility to manually verify all citations are real.

Doesn't this sound like something that could be automated?

for paper_name in citations... do a web search for it, see if there's a page in the results with that title.

That would at least give you "a paper with this name exists".
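
As a rough sketch of that loop (using the Semantic Scholar search API as one possible backend - that's an assumption on my part; any scholarly search index would do):

    import requests

    def paper_exists(paper_name: str) -> bool:
        # Ask the index for close title matches and require an exact (case-insensitive) hit.
        resp = requests.get(
            "https://api.semanticscholar.org/graph/v1/paper/search",
            params={"query": paper_name, "fields": "title,year", "limit": 3},
            timeout=30,
        )
        hits = resp.json().get("data", [])
        return any(hit["title"].lower() == paper_name.lower() for hit in hits)

    citations = ["Attention Is All You Need"]  # would come from the manuscript
    for paper_name in citations:
        if not paper_exists(paper_name):
            print(f"no exact title match found: {paper_name}")

It wouldn't catch a real paper being cited for something it doesn't say, but it would catch the "this paper doesn't exist at all" cases.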

armcat a day ago | parent | prev | next [-]

I agree with you (I have reviewed papers in the past); however, made-up citations are a "signal". Why would the authors do that? If they made it up, most likely they haven't really read that prior work. If they haven't, have they really done proper due diligence on their research? Are they just trying to "beef up" their paper with citations to unfairly build up credibility?

pbhjpbhj a day ago | parent | prev | next [-]

Surely there are tools to retrieve all the citations, publishers should spot it easily.

However the paper is submitted, like a folder on a cloud drive, just have them include a folder with PDFs/abstracts of all the citations?

They might then fraudulently produce papers to cite, but they can't cite something that doesn't exist.

michaelt a day ago | parent | next [-]

> Surely there are tools to retrieve all the citations,

Even if you could retrieve all citations (which isn't always as easy as you might hope), to validate them you'd also have to confirm that each paper says what the person citing it claims it says. If I say "A GPU requires 1.4kg of copper", citing [1], is that a valid citation?

That means not just reviewing one paper, but also potentially checking 70+ papers it cites. The vast majority of paper reviewers will not check citations actually say what they're claimed to say, unless a truly outlandish claim is made.

At the same time, academia is strangely resistant to putting hyperlinks in citations, preferring to maintain old traditions - like citing conference papers by page number in a hypothetical book that has never been published; and having both a free and a paywalled version of a paper while considering the paywalled version the 'official' version.

[1] https://arxiv.org/pdf/2512.04142

tpoacher a day ago | parent | prev [-]

How delightfully optimistic of you to think those abstracts would not also be AI-generated...

zzzeek a day ago | parent [-]

Sure, but then the citations are no longer "hallucinated"; they actually point to something fraudulent. That's a different problem.

jayess a day ago | parent | prev | next [-]

Wow. I went to law school and was on the law review. That was our precise job for the papers selected for publication. To verify every single citation.

_blk a day ago | parent [-]

Thanks for sharing that. Interesting how there was a solution to a problem that didn't really exist yet. I mean, I'm sure it was there for a reason, but I assume it was more for things like wrongful attribution, missing commas, etc., rather than outright invented quotes to fit a narrative. Or do you have more background on that?

Mandatory automated checking processes are probably not far off, at least for the more reputable journals, but it still makes you wonder how much you can trust the last two years of LLM-enhanced science that is now being quoted in current publications, and whether those hallucinations can be "reverted" after having been re-quoted. A bit like how Wikipedia can be abused to establish facts.

zdragnar a day ago | parent | prev | next [-]

This is half the basis for the replication crisis, no? Shady papers come out and people cite them endlessly with no critical thought or verification.

After all, their grant covers their thesis, not their thesis plus all of the theses they cite.

figassis a day ago | parent | prev | next [-]

It is absolutely the reviewer's job to check citations. Who else will check, and what is the point of peer review then? So you'd just happily pass on shoddy work because it's not your job? You're reviewing the author's work, and if there were other people who were supposed to ensure the citations were good, you're checking their work also. This is very much the problem today with this "not my problem" mindset. If it passes review, the reviewer is also at fault. No excuses.

zipy124 a day ago | parent | next [-]

The problem is most academics just do not have the time to do this for free, or in fact even if paid. In addition you may not even have access to the references. In acoustics it's not uncommon to cite works that don't even exist online and it's unlikely the reviewer will have the work in their library.

dpkirchner a day ago | parent | prev [-]

Agreed, and I'd go further. If nobody is reviewing citations they may as well not exist. Why bother?

vkou a day ago | parent [-]

1. To make it clear what is your work, and what is building on someone else's.

2. If the paper turns out to be important, people will bother.

3. There's checking for cursory correctness, and there's forensic torture.

figassis 13 hours ago | parent [-]

Building on an imaginary someone else? That's exactly the same as lying. Is a review not about verifying that the paper and even the data are correct? I get that reviewers can make mistakes, but this seems like defending intentional mistakes.

I mean, in college I had to review papers, and so took peer-review lectures, and nowhere in them was it ever stated that citations are not the reviewer's job. In fact, citation verification was one of the most important parts of the lectures, as in: how to find original sources (when authoring), and how to verify them (when reviewing).

When did peer review get redefined?

vkou 7 hours ago | parent [-]

I'm not defending dishonesty, I'm saying that's what citations do when they are used by honest people.

zzzeek a day ago | parent | prev | next [-]

Correct me if I'm wrong, but citations in papers follow a specific format, and the case here is that a tool was used to validate that they are all real. Surely a tool that scans a paper for all citations and verifies that they actually exist in the journals they reference shouldn't be all that technically difficult to build?

alexcdot a day ago | parent | next [-]

There are a ton of edge cases, and a bit of contextual understanding is needed for what counts as a hallucinated citation (e.g. what if it was republished from arXiv to ICLR?).

But to your point, it seems we need a tool that can do this.

mike_hearn 16 hours ago | parent | prev [-]

It's not; there are lots of ways to resolve citations without even using AI.

I experimented a couple of years ago with getting LLMs to check citations but stopped working on it because there's no incentive. You could run a fancy expensive pipeline burning scarce GPU hours and find a bunch of bad citations. Then what? Nobody cares. No journal is going to retract any of these papers, the academics themselves won't care or even respond to your emails, nobody is willing to pay for this stuff, least of all the universities, journals or governments themselves.

For example, there's a guy in France who runs a pre-LLM pipeline to discover bad papers using hand-coded heuristics like regexes or metadata analysis, e.g. checking whether a citation has been retracted. Many of the things it detects are plagiarism, paper mills (i.e. companies that sell fake papers to academics for a profit), or the output of joke paper generators like SciGen.

https://dbrech.irit.fr/pls/apex/f?p=9999:1::::::

Other than populating an obscure database nobody knows about, this work achieved bupkis.

auggierose a day ago | parent | prev [-]

In short, a review has no objective value; it is just an obstacle to be gamed.

amanaplanacanal a day ago | parent [-]

In theory, the review tries to determine whether the conclusion reached actually follows from the data provided. It assumes that everything is honest; it's just looking to see whether mistakes were made.

auggierose a day ago | parent [-]

Honest or not should not make a difference; after all, the submitting author may themselves believe everything is A-OK.

The review should also determine how valuable the contribution is, not only whether it has mistakes.

Today's reviews determine neither value nor correctness in any meaningful way. And how could they, actually? That is why I review papers only to the extent that I understand them, and I clearly delineate my line of understanding. And I don't review papers that I am not interested in reading. I once got a paper to review that actually pointed out a mistake in one of my previous papers, and then proposed a different solution. They correctly identified the mistake, but I could not verify whether their solution worked or not; that would have taken me several weeks to understand. I gave a report along these lines, and the person who assigned me the review said I should say more about their solution, but I could not. So my review was not actually used. The paper was accepted, which is fine, but I am sure none of the other reviewers actually knows whether it is correct.

Now, this was a case where I was an absolute expert. Which is far from the usual situation for a reviewer, even though many reviewers give themselves the highest mark for expertise when they just should not.

barfoure a day ago | parent | prev | next [-]

I’d love to hear some examples of poor electrical work that you’ve come across that’s often missed or not seen.

AstroNutt a day ago | parent | next [-]

A couple had just moved into a house and called me to replace the ceiling fan in the living room. I pulled the flush-mount cover down to start unhooking the wire nuts and noticed RG58 (coax cable). Someone had used the center conductor as the hot wire! I ended up running 12/2 Romex from the switch. There was no way in hell I could have hooked it back up the way it was. This is just one example I've come across.

joshribakoff a day ago | parent | prev [-]

I am not an electrician, but when I did projects, I did a lot of research before deciding to hire someone and then I was extremely confused when everyone was proposing doing it slightly differently.

A lot of them proposed ways that seemed to violate the code, like running flex tubing beyond the allowed length or number of turns.

Another example would be people not accounting for needing fireproof covers when installing recessed lighting between dwellings in certain cities…

Heck, most people don’t actually even get the permit. They just do the unpermitted work.

xnx a day ago | parent | prev | next [-]

No doubt the best electricians are currently better than the best AI, but the best AI is likely now better than the novice homeowner. The trajectory over the past 2 years has been very good. Another five years and AI may be better than all but the very best, or most specialized, electricians.

legostormtroopr a day ago | parent [-]

Current state AI doesn’t have hands. How can it possibly be better at installing electrics than anyone?

Your post reads like AI precisely because while the grammar is fine, it lacks context - like someone prompted “reply that AI is better than average”.

xnx a day ago | parent [-]

An electrician with total knowledge/understanding, but only the average dexterity of a non-professional would still be very useful.

lencastre a day ago | parent | prev | next [-]

An old boss of mine used to say there are no stupid electricians found alive, as they self-select, Darwin Award style.

bdangubic a day ago | parent | prev [-]

same (and much, much, much worse) for science

kklisura a day ago | parent | prev | next [-]

> AI is not the problem, laziness and negligence is

This reminds me of the discourse about the gun problem in the US, "guns don't kill people, people kill people", etc. - it is a discourse used solely for the purpose of not doing anything and not addressing anything about the underlying problem.

So no, you're wrong - AI IS THE PROBLEM.

Yoofie a day ago | parent | next [-]

No, the OP is right in this case. Did you read TFA? It was "peer reviewed".

> Worryingly, each of these submissions has already been reviewed by 3-5 peer experts, most of whom missed the fake citation(s). This failure suggests that some of these papers might have been accepted by ICLR without any intervention. Some had average ratings of 8/10, meaning they would almost certainly have been published.

If the peer reviewers can't be bothered to do the basics, then there is literally no point to peer review - and that is entirely independent of whether the author uses AI tools.

smileybarry a day ago | parent | next [-]

Peer reviewers can also use AI tools, which will hallucinate a "this seems fine" response.

amrocha a day ago | parent | prev [-]

If AI fraud is good at avoiding detection via peer review that doesn’t mean peer review is useless.

If your unit tests don’t catch all errors it doesn’t mean unit tests are useless.

sneak a day ago | parent | prev [-]

> it is a discourse used solely for the purpose of not doing anything and not addressing anything about the underlying problem

Solely? Oh brother.

In reality it’s the complete opposite. It exists to highlight the actual source of the problem, as both industries/practitioners using AI professionally and safely, and communities with very high rates of gun ownership and exceptionally low rates of gun violence exist.

It isn’t the tools. It’s the social circumstances of the people with access to the tools. That’s the point. The tools are inanimate. You can use them well or use them badly. The existence of the tools does not make humans act badly.

TomatoCo a day ago | parent | prev | next [-]

To continue the carpenter analogy, the issue with LLMs is that the shelf looks great but is structurally unsound. That it looks good on surface inspection makes it harder to tell that the person making it had no idea what they're doing.

embedding-shape a day ago | parent | next [-]

Regardless, if a carpenter doesn't validate their work before selling it, it's the same as a researcher not validating their citations before publishing. Neither of them has any excuse, and one isn't harder to detect than the other. It's just straight-up laziness.

judofyr a day ago | parent [-]

I think this is a bit unfair. The carpenters are (1) living in a world where there's an extreme focus on delivering as quickly as possible, (2) being presented with a tool which is promised by prominent figures to be amazing, and (3) being given the tool at a low cost because it is subsidized.

And yet, we’re not supposed to criticize the tool or its makers? Clearly there are more problems in this world than «lazy carpenters»?

SauntSolaire a day ago | parent | next [-]

Yes, that's what it means to be a professional, you take responsibility for the quality of your work.

peppersghost93 a day ago | parent | next [-]

It's a shame the slop generators don't ever have to take responsibility for the trash they've produced.

SauntSolaire a day ago | parent [-]

That's beside the point. While there may be many reasonable critiques of AI, none of them reduce the responsibilities of the scientist.

peppersghost93 a day ago | parent | next [-]

Yeah, this is a prime example of what I'm talking about. AIs produce trash and it's everyone else's problem to deal with.

SauntSolaire a day ago | parent [-]

Yes, it's the scientist's problem to deal with it - that's the choice they made when they decided to use AI for their work. Again, this is what responsibility means.

peppersghost93 a day ago | parent [-]

This inspires me to make horrible products and shift the blame to the end user for the product being horrible in the first place. I can't take any blame for anything because I didn't force them to use it.

thfuran a day ago | parent | prev [-]

>While there may be many reasonable critiques of AI

But you just said we weren’t supposed to criticize the purveyors of AI or the tools themselves.

SauntSolaire a day ago | parent [-]

No, I merely said that the scientist is the one responsible for the quality of their own work. Any critiques you may have for the tools which they use don't lessen this responsibility.

thfuran a day ago | parent [-]

>No, I merely said that the scientist is the one responsible for the quality of their own work.

No, you expressed unqualified agreement with a comment containing

“And yet, we’re not supposed to criticize the tool or its makers?”

>Any critiques you may have for the tools which they use don't lessen this responsibility.

People don’t exist or act in a vacuum. That a scientist is responsible for the quality of their work doesn’t mean that a spectrometer manufacturer - one that advertises specs its machines can’t match, and that induces universities, through discounts and/or dubious advertising claims, to push their labs to replace their existing spectrometers with new ones that have many bizarre and unexpected behaviors, including but not limited to sometimes just fabricating spurious readings - has made no contribution to the problem of bad results.

SauntSolaire a day ago | parent [-]

You can criticize the tool or its makers, but not as a means to lessen the responsibility of the professional using it (the rest of the quoted comment). I agree with the GP: it's not a valid excuse for the poor quality of the scientist's work.

thfuran a day ago | parent [-]

I just substantially edited the comment you replied to.

SauntSolaire a day ago | parent [-]

The scientist has (at the very least) a basic responsibility to perform due diligence. We can argue back and forth over what constitutes appropriate due diligence, but, with regard to the scientist under discussion, I think we'd be better suited discussing what constitutes negligence.

adestefan a day ago | parent | prev | next [-]

The entire thread is people missing this simple point.

bossyTeacher a day ago | parent | prev [-]

Well, then what does this say about the LLM engineers at literally any AI company in existence, if they are delivering AI that is unreliable? Surely they must take responsibility for the quality of their work and not blame it on something else.

embedding-shape a day ago | parent [-]

I feel like what "unreliable" means depends on how well you understand LLMs. I use them in my professional work, and they're reliable in terms of I'm always getting tokens back from them; I don't think my local models have failed even once at doing just that. And this is the product that is being sold.

Some people take that to mean that responses from LLMs are (by human standards) "always correct" and "based on knowledge", but this is a misunderstanding of how LLMs work. They don't know "correct", nor do they have "knowledge"; they have tokens that come after tokens, and that's about it.

bossyTeacher a day ago | parent | next [-]

> they're reliable in terms of I'm always getting tokens back from them

This is not what you are being sold, though. They are not selling you "tokens". Check their marketing articles and you will not see the word "token" or a synonym in any of their headings or subheadings. You are being sold these abilities:

- “Generate reports, draft emails, summarize meetings, and complete projects.”

- “Automate repetitive tasks, like converting screenshots or dashboards into presentations … rearranging meetings … updating spreadsheets with new financial data while retaining the same formatting.”

- "Support-type automation: e.g. customer support agents that can summarize incoming messages, detect sentiment, route tickets to the right team."

- "For enterprise workflows: via Gemini Enterprise — allowing firms to connect internal data sources (e.g. CRM, BI, SharePoint, Salesforce, SAP) and build custom AI agents that can: answer complex questions, carry out tasks, iterate deliverables — effectively automating internal processes."

These are taken straight from their websites. The idea that you are JUST being sold tokens is as hilariously fictional as the idea that a company selling you their app is actually just selling you patterns of pixels on your screen.

amrocha a day ago | parent | prev [-]

it’s not “some people”, it’s practically everyone that doesn’t understand how these tools work, and even some people that do.

Lawyers are ruining their careers by citing hallucinated cases. Researchers are writing papers with hallucinated references. Programmers are taking down production by not verifying AI code.

Humans were made to do things, not to verify things. Verifying something is 10x harder than doing it right. AI in the hands of humans is a foot rocket launcher.

embedding-shape a day ago | parent [-]

> it’s not “some people”, it’s practically everyone that doesn’t understand how these tools work, and even some people that do.

Again, true for most things. A lot of people are terrible drivers, terrible judges of their own character, and terrible recreational drug users. Does that mean we need to remove all those things that can be misused?

I'd much rather push back on shoddy work no matter the source. I don't care if the citations are from a robot or a human; if they suck, then you suck, because you're presenting this as your work. I don't care if your paralegal actually wrote the document: be responsible for the work you supposedly do.

> Humans were made to do things, not to verify things.

I'm glad you seemingly have some grand idea of what humans were meant to do; I certainly wouldn't claim I do, but then I'm also not religious. For me, humans do what humans do, and while we didn't use to mostly sit down and consume so much food and other things, now we do.

amrocha a day ago | parent [-]

>A lot of people are terrible drivers, terrible judge of their own character, and terrible recreational drug users. Does that mean we need to remove all those things that can be misused?

Uhh, yes??? We have completely reshaped our cities so that cars can thrive in them at the expense of people. We have laws and exams and enforcement all to prevent cars from being driven by irresponsible people.

And most drugs are literally illegal! The ones that aren't are highly regulated!

If your argument is that AI is like heroin then I agree, let’s ban it and arrest anyone making it.

pertymcpert 19 hours ago | parent [-]

People need to be responsible for things they put their name on. End of story. No AI company claims their models are perfect and don’t hallucinate. But paper authors should at least verify every single character they submit.

bossyTeacher 18 hours ago | parent | next [-]

>No AI company claims their models are perfect and don’t hallucinate

You can't have it both ways. Either AIs are worth billions BECAUSE they can run mostly unsupervised, or they are not. This is exactly like the AI driving system in Autopilot: sold as autonomous, but the reality doesn't live up to it.

amrocha 19 hours ago | parent | prev [-]

Yes, but they don’t. So clearly AI is a footgun. What are we doing about it?

concinds a day ago | parent | prev | next [-]

I use those LLM "deep research" modes every now and then. They can be useful for some use cases. I'd never think to freaking paste it into a paper and submit it or publish it without checking; that boggles the mind.

The problem is that a researcher who does that is almost guaranteed to be careless about other things too. So the problem isn't just the LLM, or even the citations, but the ambient level of acceptable mediocrity.

embedding-shape a day ago | parent | prev [-]

> And yet, we’re not supposed to criticize the tool or its makers?

Exactly. They're not forcing anyone to use these things, though sometimes others (their managers/bosses) force them to. Yet it's still their responsibility to choose the right tool for the right problem, like any other professional.

If a carpenter shows up to put up a roof yet their hammer or nail-gun can't actually put in nails, who'd you blame: the tool, the toolmaker or the carpenter?

judofyr a day ago | parent [-]

> If a carpenter shows up to put up a roof yet their hammer or nail-gun can't actually put in nails, who'd you blame: the tool, the toolmaker or the carpenter?

I would be unhappy with the carpenter, yes. But if the toolmaker was constantly over-promising (lying?), lobbying with governments, pushing their tools into the hands of carpenters, never taking responsibility, then I would also criticize the toolmaker. It’s also a toolmaker’s responsibility to be honest about what the tool should be used for.

I think it’s a bit too simplistic to say «AI is not the problem» with the current state of the industry.

embedding-shape a day ago | parent | next [-]

If I hired a carpenter, he did a bad job, and he started blaming the toolmaker because they lobby the government and over-promised what that hammer could do, I'd still put the blame on the carpenter. They're his tools; I couldn't give less of a damn why he got them. I trust him to be a professional, and if he falls for some scam or some over-promised hammer, that means he did a bad job.

Just like, as a software developer, you cannot blame Amazon because your platform is down if you chose to host all of it there. You made that choice, you stand for the consequences; pushing the blame onto the ones providing you with the tooling is the action of someone weak who fails to recognize their own responsibilities. Professionals take responsibility for every choice they make, not just the good ones.

> I think it’s a bit too simplistic to say «AI is not the problem» with the current state of the industry.

Agree, and I wouldn't say anything like that either, which makes it a bit strange to include a reply to something no one in this comment thread seems to have said.

jascha_eng a day ago | parent | prev | next [-]

OpenAI and Anthropic at least are both pretty clear about the fact that you need to check the output:

https://openai.com/policies/row-terms-of-use/

https://www.anthropic.com/legal/aup

OpenAI:

> When you use our Services you understand and agree:

Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice. You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services. You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them. Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.

Anthropic:

> When using our products or services to provide advice, recommendations, or in subjective decision-making directly affecting individuals or consumers, a qualified professional in that field must review the content or decision prior to dissemination or finalization. You or your organization are responsible for the accuracy and appropriateness of that information.

So I don't think we can say they are lying.

A poor workman blames his tools. So please take responsibility for what you deliver. And if the result is bad, you can learn from it. That doesn't have to mean not using AI, but it definitely means you need to fact-check more thoroughly.

pertymcpert 19 hours ago | parent | prev [-]

That’s not what is happening with AI companies, and you damn well know it.

k4rli a day ago | parent | prev [-]

Very good analogy I'd say.

Also similar to what Temu, Wish, and other similar sites offer. Picture and specs might look good but it will likely be disappointing in the end.

SubiculumCode a day ago | parent | prev | next [-]

Yeah, seriously. Using an LLM to help find papers is fine. Then you read them. Then you use a tool like Zotero, or add citations manually. I use Gemini Pro to identify useful papers that I might not have encountered before. But even when I ask it to restrict itself to PubMed sources, its citations are wonky, citing three different versions of the same paper as separate sources (citations that don't say what it claimed they'd discuss).

That said, these tools have substantially reduced hallucinations over the last year, and will just get better. It also helps if you can restrict it to reference already screened papers.
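
One cheap check that at least catches completely fabricated DOIs is resolving each one against the Crossref API before keeping the reference. A rough sketch below; the dois.txt file name is my own assumption, Crossref only knows about DOIs registered with it (so treat a miss as a flag to check by hand, not proof of fabrication), and an existing DOI still doesn't prove the paper says what the model claims it says.

    import requests

    def doi_exists(doi: str) -> bool:
        # Crossref answers 404 for DOIs it has never registered.
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    # dois.txt: one DOI per line, pulled out of the draft's bibliography.
    with open("dois.txt") as f:
        for doi in (line.strip() for line in f if line.strip()):
            print(("ok      " if doi_exists(doi) else "missing ") + doi)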

Finally, I'd like to say that if we want scientists to engage in good science, we should stop forcing them to spend a third of their time in a rat race for funding... it is ridiculously time consuming and wasteful of expertise.

bossyTeacher a day ago | parent [-]

The problem isn't whether they have more or fewer hallucinations. The problem is that they have them. And as long as they hallucinate, you have to deal with that. It doesn't really matter how you prompt; you can't prevent hallucinations from happening, and without manual checking, hallucinations will eventually slip under the radar, because the only difference between a real pattern and a hallucinated one is that one exists in the world and the other one doesn't. This is not something you can really counter with more LLMs either, as it is a problem intrinsic to LLMs.

SubiculumCode 8 hours ago | parent [-]

Humans also hallucinate. We have an error rate. Your argument makes little sense in absolutist terms.

bigstrat2003 a day ago | parent | prev | next [-]

> If a carpenter builds a crappy shelf “because” his power tools are not calibrated correctly - that’s a crappy carpenter, not a crappy tool.

It's both. The tool is crappy, and the carpenter is crappy for blindly trusting it.

> AI is not the problem, laziness and negligence is.

Similarly, both are a problem here. LLMs are a bad tool, and we should hold people responsible when they blindly trust this bad tool and get bad results.

jodleif a day ago | parent | prev | next [-]

I find this to be a bit “easy”. There is such a thing as a bad tool. If it is difficult to determine whether the tool is good or bad, I’d say some of the blame has to be put on the tool.

nwallin a day ago | parent | prev | next [-]

"Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break."--Bruce Schneier

There's a corollary here with LLMs, but I'm not pithy enough to phrase it well: anyone can use an LLM to create something whose hallucinations they, themselves, aren't skilled enough to spot. Or something.

LLMs are incredibly good at exploiting peoples' confirmation biases. If it "thinks" it knows what you believe/want, it will tell you what you believe/want. There does not exist a way to interface with LLMs that will not ultimately end in the LLM telling you exactly what you want to hear. Using an LLM in your process necessarily results in being told that you're right, even when you're wrong. Using an LLM necessarily results in it reinforcing all of your prior beliefs, regardless of whether those prior beliefs are correct. To an LLM, all hypotheses are true, it's just a matter of hallucinating enough evidence to satisfy the users' skepticism.

I do not believe there exists a way to safely use LLMs in scientific processes. Period. If my belief is true, and ChatGPT has told me it's true, then yes, AI, the tool, is the problem, not the human using the tool.

czl 10 hours ago | parent [-]

> I do not believe there exists a way to safely use LLMs in scientific processes.

What about giving the LLM a narrowly scoped role as a hostile reviewer, while your job is to strengthen the write-up to address any valid objections it raises, plus any hallucinations or confusions it introduces? That’s similar to fuzz testing software to see what breaks or where the reasoning crashes.

Used this way, the model isn’t a source of truth or a decision-maker. It’s a stress test for your argument and your clarity. Obviously it shouldn’t be the only check you do, but it can still be a useful tool in the broader validation process.
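
As a rough sketch of what that narrowly scoped role could look like in practice (the model name, prompt wording, and use of the OpenAI Python client are my own assumptions, not anything prescribed above):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    REVIEWER_PROMPT = (
        "You are a hostile peer reviewer. List every claim, citation, and "
        "logical step in the text below that you would challenge, and explain why. "
        "Raise objections only; do not suggest rewrites."
    )

    def hostile_review(draft: str, model: str = "gpt-4o-mini") -> str:
        # The objections are prompts for the author to investigate, not verdicts;
        # some of them will themselves be hallucinated.
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": REVIEWER_PROMPT},
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content

Whether each objection is valid remains the author's call, which keeps the human doing the judging rather than outsourcing it.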

rectang a day ago | parent | prev | next [-]

“X isn’t the problem, people are the problem.” — the age-old cry of industry resisting regulation.

kklisura a day ago | parent | next [-]

It's not about resisting. It's about undermining any action whatsoever.

theoldgreybeard a day ago | parent | prev | next [-]

I am not against regulation.

Quite the opposite actually.

codywashere a day ago | parent | prev [-]

what regulation are you advocating for here?

kibwen a day ago | parent | next [-]

At the very least, authors who have been caught publishing proven fabrications should be barred by those journals from ever publishing in them again. Mind you, this is regardless of whether or not an LLM was involved.

JumpCrisscross a day ago | parent [-]

> authors who have been caught publishing proven fabrications should be barred by those journals from ever publishing in them again

This is too harsh.

Instead, their papers should be required to disclose the transgression for a period of time, and their institution should have to disclose it publicly as well as to the government, students and donors whenever they ask them for money.

rectang a day ago | parent | prev [-]

I’m not advocating, I’m making a high-level observation: Industry forever pushes for nil regulation and blames bad actors for damaging use.

But we always have some regulation in the end. Even if certain firearms are legal to own, howitzers are not — although it still takes a “bad actor” to rain down death on City Hall.

The same dynamic is at play with LLMs: “Don’t regulate us, punish bad actors! If you still have a problem, punish them harder!” Well yes, we will punish bad actors, but we will also go through a negotiation of how heavily to constrain the use of your technology.

codywashere a day ago | parent [-]

so, what regulation do we need on LLMs?

the person you originally responded to isn’t against regulation per their comment. I’m not against regulation. what’s the pitch for regulation of LLMs?

only-one1701 a day ago | parent | prev | next [-]

Absolutely brutal case of engineering brain here. Real "guns don't kill people, people kill people" stuff.

somehnguy a day ago | parent | next [-]

Your second statement is correct. What about it makes it “engineering brain”?

rcpt a day ago | parent | next [-]

If the blame were solely on the user then we'd see similar rates of deaths from gun violence in the US vs. other countries. But we don't, because users are influenced by the UX

venturecruelty a day ago | parent | prev [-]

Somehow people don't kill people nearly as easily, or with as high of a frequency or social support, in places that don't have guns that are more accessible than healthcare. So weird.

theoldgreybeard a day ago | parent | prev [-]

If you were to wager a guess, what do you think my views on gun rights are?

only-one1701 a day ago | parent [-]

Probably something equally as nuanced and correct as the statement I replied to!

theoldgreybeard a day ago | parent [-]

You're projecting.

grey-area a day ago | parent | prev | next [-]

Generative AI and the companies selling it with false promises and using it for real work absolutely are the problem.

Hammershaft a day ago | parent | prev | next [-]

AI dramatically changes the perceived cost/benefit of laziness and negligence, which is leading to much more of it.

acituan a day ago | parent | prev | next [-]

> AI is not the problem, laziness and negligence is.

As much as I agree with you that this is wrong, there is a danger in putting the onus just on the human. Whether due to competition or top-down expectations, humans are and will be pressured to use AI tools alongside their work and produce more. Whereas the original idea was for AI to assist the human, as the expected velocity and consumption pressure increase, humans are more and more turning into a mere accountability-laundering scheme for machine output. When we blame just the human, we are doing exactly what this scheme wants us to do.

Therefore we must also criticize all the systemic factors that put pressure toward reversing AI's assistance into AI's domination of human activity.

So AI (not as a technology, but as a product shoved down people's throats) is the problem.

alexcdot a day ago | parent [-]

Absolutely, expectations and tools given by management are a real problem.

If management fires you because they are wrong about how good AI is, and you're right - at the end of the day, you're fired and the manager is in lalaland.

People need to actually push for the correct calibration of what these tools should be trusted to do, while also trying to work with what they have.

b00ty4breakfast a day ago | parent | prev | next [-]

maybe the hammer factory should be held responsible for pumping out so many poorly calibrated hammers

SauntSolaire a day ago | parent | next [-]

The obvious solution in this scenario is.. to just buy a different hammer.

And in the case of AI, either review its output or simply don't use it. No one has a gun to your head forcing you to use this product (let alone to use it poorly).

It's quite telling that, even in this basic hypothetical, your first instinct is to gesture vaguely in the direction of governmental action, rather than expect any agency at the level of the individual.

b00ty4breakfast 4 hours ago | parent [-]

>It's quite telling that, even in this basic hypothetical, your first instinct is to gesture vaguely in the direction of governmental action, rather than expect any agency at the level of the individual.

When "individuals" (which is a funny way to refer to the global generative AI zeitgeist currently in full binge-mode that is encouraging and enabling this kind of behavior) refuse to regulate themselves, they have to be encouraged through external pressures to do so. Industry is so far up it's own ass wrt AI that all it can see is shit, there is no chance in hell that they will self-regulate. They gladly and indiscriminately slurp up the digital effluent that is currently sliding out the colon of the generative AI super-organism.

And, of course, these "individuals" are more than happy to share the consequences with the rest of the world without sharing too much of the corn that they're digging out of the shit. It does not behoove the rest of the world to not protect it's self-interest, to minimize the consequences of foolish and irresponsible generative AI usage and to make sure it gets it's fare share of the semi-digested golden kernels

venturecruelty a day ago | parent | prev [-]

No, because this would cost tens of jobs and affect someone's profits, which are sacrosanct. Obviously the market wants exploding hammers, or else people wouldn't buy them. I am very smart.

stocksinsmocks a day ago | parent | prev | next [-]

Trades also have self regulation. You can’t sell plumbing services or build houses without any experience or you get in legal trouble. If your workmanship is poor, you can be disciplined by the board even if the tool was at fault. I think fraudulent publications should be taken at least as seriously as badly installed toilets.

jval43 a day ago | parent | prev | next [-]

If a scientist just completely "made up" their references 10 years ago, that's a fraudster. Not just dishonesty but outright academic fraud.

If a scientist does it now, they just blame it on AI. But the consequences should remain the same. This is not an honest mistake.

People that do this - even once - should be banned for life. They put their name on the thing. But just like with plagiarism, falsifying data and academic cheating, somehow a large subset of people thinks it's okay to cheat and lie, and another subset gives them chance after chance to misbehave like they're some kind of children. But these are adults and anyone doing this simply lacks morals and will never improve.

And yes, I've published in academia and I've never cheated or plagiarized in my life. That should not be a drawback.

a day ago | parent | prev | next [-]
[deleted]
psychoslave 18 hours ago | parent | prev | next [-]

I don't see many crappy power tool providers throwing billions into marketing and product placement to get their tools used everywhere.

raincole a day ago | parent | prev | next [-]

Given that we tacitly accepted the replication crisis, we'll definitely tacitly accept this.

calmworm a day ago | parent | prev | next [-]

I don’t understand. You’re saying even with crappy tools one should be able to do the job the same as with well made tools?

tedd4u a day ago | parent [-]

Three and a half years ago nobody had ever used tools like this. It can't be a legitimate complaint for an author to say "not my fault my citations are fake, it's the fault of these tools", because until recently no such tools were available and the expectation was that all citations are real.

calmworm a day ago | parent [-]

Then it’s just a poor analogy.

Forgeties79 a day ago | parent | prev | next [-]

If my calculator gives me the wrong number 20% of the time yeah I should’ve identified the problem, but ideally, that wouldn’t have been sold to me as a functioning calculator in the first place.

theoldgreybeard a day ago | parent | next [-]

If it was a well understood property of calculators that they gave incorrect answers randomly then you need to adjust the way you use the tool accordingly.

bigstrat2003 a day ago | parent | next [-]

Uh yeah... I would not use that tool. A tool that randomly fails to do its job is useless.

amrocha a day ago | parent [-]

Sorry, Utkar the manager will fire you if you don’t use his shitty calculator. If you take the time to check the output every time you’ll be fired for being too slow. Better pray the calculator doesn’t lie to you.

a day ago | parent | prev | next [-]
[deleted]
a day ago | parent | prev | next [-]
[deleted]
Forgeties79 a day ago | parent | prev [-]

Generally I’d ditch that tool because it doesn’t work. A calculator is supposed to calculate. If it can’t reliably calculate, then it’s not a functioning tool, and I am tired of people insisting it is functioning properly.

LLMs simply aren’t good enough for all the use cases some people insist they are. They’re powerful tools that have been applied far too broadly, and there’s too much money and too many reputations on the line to acknowledge the obvious limitations. Frankly, I’m sick of it.

I had somebody on HN a few months ago insist to me that because we value art and fiction, LLMs being wrong when we need them to be correct (in ways that are also not always easy to identify) was desirable. I don’t even know what to do with that kind of logic other than chalk it up as trolling. I don’t want my computer to trick me into false solutions.

imiric a day ago | parent | prev [-]

Indeed. The narrative that this type of issue is entirely the responsibility of the user to fix is insulting, and blame deflection 101.

It's not like these are new issues. They're the same ones we've experienced since the introduction of these tools. And yet the focus has always been to throw more data and compute at the problem, and optimize for fancy benchmarks, instead of addressing these fundamental problems. Worse still, whenever they're brought up users are blamed for "holding it wrong", or for misunderstanding how the tools work. I don't care. An "artificial intelligence" shouldn't be plagued by these issues.

SauntSolaire a day ago | parent | next [-]

> It's not like these are new issues.

Exactly, that's why not verifying the output is even less defensible now than it ever has been - especially for professional scientists who are responsible for the quality of their own work.

Forgeties79 a day ago | parent | prev [-]

> Worse still, whenever they're brought up users are blamed for "holding it wrong", or for misunderstanding how the tools work. I don't care. An "artificial intelligence" shouldn't be plagued by these issues.

My feelings exactly, but you’re articulating it better than I typically do ha

a day ago | parent | prev | next [-]
[deleted]
nialv7 a day ago | parent | prev | next [-]

Ah, the "guns don't kill people, people kill people" argument.

I mean sure, but having a tool that made fabrication so much easier has made the problem a lot worse, don't you think?

theoldgreybeard a day ago | parent [-]

Yes I do agree with you that having a tool that gives rocket fuel to a fraud engine should probably be regulated in some fashion.

Tiered licensing, mandatory safety training, and weapon classification by law enforcement works really well for Canada’s gun regime, for example.

RossBencina a day ago | parent | prev | next [-]

No qualified carpenter expects to use a hammer to drill a hole.

left-struck a day ago | parent | prev | next [-]

It’s like the problem was there all along, all LLMs did was expose it more

theoldgreybeard a day ago | parent | next [-]

Yes, LLMs didn't create the problem, they just accelerated it to a speed that beggars belief.

criley2 a day ago | parent | prev [-]

https://en.wikipedia.org/wiki/Replication_crisis

Modern science is designed from the top to the bottom to produce bad results. The incentives are all mucked up. It's absolutely not surprising that AI is quickly becoming yet-another factor lowering quality.

foxfired a day ago | parent | prev | next [-]

I disagree. When the tool promises to do something, you end up trusting it to do the thing.

When Tesla says their car is self-driving, people trust it to self-drive. Yes, you can blame the user for believing it, but that's exactly what they were promised.

> Why didn't the lawyer who used ChatGPT to draft legal briefs verify the case citations before presenting them to a judge? Why are developers raising issues on projects like cURL using LLMs, but not verifying the generated code before pushing a Pull Request? Why are students using AI to write their essays, yet submitting the result without a single read-through? They are all using LLMs as their time-saving strategy. [0]

It's not laziness, it's the feature we were promised. We can't keep saying everyone is holding it wrong.

[0]: https://idiallo.com/blog/none-of-us-read-the-specs

rolandog a day ago | parent [-]

Very well put. You're promised Artificial Super Intelligence and shown a super cherry-picked promo and instead get an agent that can't hold its drool and needs constant hand-holding... it can't be both things at the same time, so... which is it?

gdulli a day ago | parent | prev | next [-]

That's like saying guns aren't the problem, the desire to shoot is the problem. Okay, sure, but wanting something like a metal detector requires us to focus on the more tangible aspect that is the gun.

baxtr a day ago | parent [-]

If I gave you a gun would you start shooting people just because you had one?

raincole a day ago | parent | next [-]

If society rewarded me with money and fame when I killed someone, then I would. Why wouldn't I?

Like it or not, in our society scientists' job is to churn out papers. Of course they'll use the most efficient way to churn out papers.

agentultra a day ago | parent | prev | next [-]

If I gave you a gun without a safety could you be the one to blame when it goes off because you weren’t careful enough?

The problem with this analogy is that it makes no sense.

LLMs aren’t guns.

The problem with using them is that humans have to review the content for accuracy. And that gets tiresome because the whole point is that the LLM saves you time and effort doing it yourself. So naturally people will tend to stop checking and assume the output is correct, “because the LLM is so good.”

Then you get false citations and bogus claims everywhere.

sigbottle a day ago | parent | next [-]

Sorry, I'm not following the gun analogies at all

But regardless, I thought the point was that...

> The problem with using them is that humans have to review the content for accuracy.

There are (at least) two humans in this equation. The publisher, and the reader. The publisher at least should do their due diligence, regardless of how "hard" it is (in this case, we literally just ask that you review your OWN CITATIONS that you insert into your paper). This is why we have accountability as a concept.

zdragnar a day ago | parent | prev | next [-]

> If I gave you a gun without a safety could you be the one to blame when it goes off because you weren’t careful enough?

Absolutely. Many guns don't have safeties. You don't load a round in the chamber unless you intend on using it.

A gun going off when you don't intend is a negligent discharge. No ifs, ands or buts. The person in possession of the gun is always responsible for it.

bluGill a day ago | parent [-]

> A gun going off when you don't intend is a negligent discharge

False. Guns go off when not intended too often to claim that. It has happened to me - I then took the gun to a qualified gunsmith for repairs.

A gun that fires and hits anything you didn't intend it to is a negligent discharge, even if you intended to shoot. Gun safety is about assuming that any gun that could possibly fire will, and ensuring nothing bad can happen. When looking at a gun in a store (one you might want to buy), you aim it at an upper corner, where even if it fires the odds of something bad happening are the lowest (it should be unloaded - and you may have checked - but you still aim there!).

Same with cat toy lasers - they should be safe to shine in an eye, but you still point them in a safe direction.

oceansweep a day ago | parent | prev | next [-]

Yes. That is absolutely the case. One of the most popular handguns does not have a safety switch that must be toggled before firing (the Glock series of handguns).

If someone performs a negligent discharge, they are responsible, not Glock. It does have other safety mechanisms to prevent accidental discharges that don't result from a trigger pull.

agentultra a day ago | parent [-]

You seem to be getting hung up on the details of guns and missing the point that it’s a bad analogy.

Another way LLMs are not guns: you don’t need a giant data centre owned by a mega corp to use your gun.

Can’t do science because GlockGPT is down? Too bad I guess. Let’s go watch the paint dry.

The reason I made it is that this is inherently how LLMs are designed: they will make bad citations, and people need to be careful.

baxtr a day ago | parent | prev | next [-]

>“because the LLM is so good.”

That's the issue here. Of course you should be aware of the fact that these things need to be checked - especially if you're a scientist.

This is no secret only known to people on HN. LLMs are tools. People using these tools need to be diligent.

imiric a day ago | parent | prev [-]

> LLMs aren’t guns.

Right. A gun doesn't misfire 20% of the time.

> The problem with using them is that humans have to review the content for accuracy.

How long are we going to push this same narrative we've been hearing since the introduction of these tools? When can we trust these tools to be accurate? For technology that is marketed as having superhuman intelligence, it sure seems dumb that it has to be fact-checked by less-intelligent humans.

gdulli a day ago | parent | prev | next [-]

That doesn't address my point at all but no, I'm not a violent or murderous person. And most people aren't. Many more people do, however, want to take shortcuts to get their work done with the least amount of effort possible.

SauntSolaire a day ago | parent [-]

> Many more people do, however, want to take shortcuts to get their work done with the least amount of effort possible.

Yes, and they are the ones responsible for the poor quality of work that results from that.

rcpt a day ago | parent | prev | next [-]

Probably not but, empirically, there are a lot of short tempered people who would.

a day ago | parent | prev | next [-]
[deleted]
komali2 a day ago | parent | prev | next [-]

Ok sure I'm down for this hypothetical. I will bring 50 random people in front of you, and you will hand all 50 of them loaded guns. Still feeling it?

bandofthehawk a day ago | parent [-]

Ever been to a shooting range? It's basically a bunch of random people with loaded guns.

komali2 a day ago | parent [-]

That's not as random as letting me choose them! They had to be allowed onto the range, show ID, afford the gun, probably do a background check to get the gun unless they used a loophole (which usually requires some social capital).

I'm proposing the true proposal of many guns rights advocates: anyone might have a gun.

So let me choose the 50 and you give them guns! Why not?

intended a day ago | parent | prev | next [-]

The issue with this argument, for anyone who comes after, is not when you give a gun to a SINGLE person, and then ask them "would you do a bad thing".

The issue is when you give EVERYONE guns, and then are surprised when enough people do bad things with them, to create externalities for everyone else.

There is some sort of trip-up where personal responsibility and society-wide behaviors intersect. Sure, most people will be reasonable, but the issue is often the cost of the number of irresponsible or outright bad actors.

hipshaker a day ago | parent | prev [-]

Speaking as a European, if you look at gun violence in the U.S., that is kind of what I see happening.

hansmayer a day ago | parent | prev | next [-]

Scientists who use LLMs to write a paper are crappy scientists indeed. They need to be held accountable, even ostracised by the scientific community. But something is missing from the picture. Why is it that they came up with this idea in the first place? Who could have been peddling the impression (not an outright lie - they are very careful) that LLMs are these almost sentient systems with emergent intelligence, alleviating all of your problems, blah blah blah? Where is the god damn cure for cancer the LLMs were supposed to invent? Who else is it that we need to hold accountable, scrutinised and ostracised for the ever-increasing mountains of AI crap that is flooding not just Internet content but is now also penetrating into science, everyday work, daily lives, conversations, etc.? If someone released a tool that enabled and encouraged people to commit suicide, in multiple instances that we know of by now, and we know since the infamous "plandemic" Facebook trend that the tech bros are more than happy to tolerate worsening societal conditions in the name of platform growth, then who else do we need to hold accountable, scrutinise and ostracise as a society, I wonder?

the8472 a day ago | parent [-]

> Where is the god damn cure for cancer the LLMs were supposed to invent?

Assuming that cure is meant as hyperbole, how about https://www.biorxiv.org/content/10.1101/2025.04.14.648850v3 ? AI models being used for bad purposes doesn't preclude them being used for good purposes.

hansmayer 18 hours ago | parent [-]

...No, it was not meant as hyperbole, as we were literally being told that these models would be able to do all of our work. I won't settle for the bullshit incremental wins we see here and there occasionally - I attribute those essentially to the old 'infinite number of monkeys typing on an infinite number of typewriters' eventually producing "Crime and Peace". No, that's not it - we were promised a god damn revolution, no less. Again, where is the cure for cancer and the post-scarcity society? Where is the AGI we were promised for 2025? Let's hold the ghouls promising all that accountable for a change.

the8472 12 hours ago | parent [-]

My understanding is that they're promising those as endgoals of the development trajectory, not that any current model actually is AGI. Did anyone really claim that, let's say GPT4, would cure cancer or meet any AGI standard?

hansmayer 11 hours ago | parent [-]

Well, Sam Altman said not long ago that we would have AGI in 2025, and has been constantly implying something about "AI scientists" and this and that. He literally said "We now know how to build AGI", also not long ago. He also stated that ChatGPT passed the Turing test without much fuss. Anthropic has been pushing the narrative about massive job loss, implying again that an absolutely transformative impact is coming soon. The Microsoft MBA-in-charge would have you believe his entire life and work is managed by an army of Clippy 2.0. The Google MBA-in-charge has now started day-dreaming about space-based clusters, because, guess what, his tool generates better fake pictures than Altman's. He too peddles the nonsense about superpowerful AI. So yes, again, they said AI would cure cancer and meet the AGI standard, and I demand they be held accountable for their own words and provide answers to those questions!

rdiddly a day ago | parent | prev | next [-]

Why not both?

mk89 a day ago | parent | prev | next [-]

> we are tacitly endorsing it.

We are, in fact, not tacitly but openly endorsing this, due to this AI-everywhere madness. I am so looking forward to the day some genius at some bank starts using it to simplify code and I suddenly have 100000000 € in my bank account. :)

venturecruelty a day ago | parent | prev | next [-]

"It's not a fentanyl problem, it's a people problem."

"It's not a car infrastructure problem, it's a people problem."

"It's not a food safety problem, it's a people problem."

"It's not a lead paint problem, it's a people problem."

"It's not an asbestos problem, it's a people problem."

"It's not a smoking problem, it's a people problem."

SauntSolaire a day ago | parent [-]

What an absurd set of equivalences to make regarding a scientist's relationship to their own work.

If an engineer provided this line of excuse to me, I wouldn't let them anywhere near a product again - a complete abdication of personal and professional responsibility.

DonHopkins a day ago | parent | prev | next [-]

Shouldn't there be a black list of people who get caught writing fraudulent papers?

theoldgreybeard a day ago | parent | next [-]

Probably. Something like that is what I meant by “social consequences”. Perhaps there should be civil or criminal ones for more egregious cases.

cindyllm a day ago | parent | prev [-]

[dead]

photochemsyn a day ago | parent | prev | next [-]

Yeah, I can't imagine not being familiar with every single reference in the bibliography of a technical publication with one's name on it. It's almost as bad as those PIs who rely on lab techs and postdocs to generate research data using equipment that they don't understand the workings of - but then, I've seen that kind of thing repeatedly in research academia, along with actual fabrication of data in the name of getting another paper out the door, another PhD granted, etc.

Unfortunately, a large fraction of academic fraud has historically been detected by sloppy data duplication, and with LLMs and similar image generation tools, data fabrication has never been easier to do or harder to detect.

jgalt212 a day ago | parent | prev | next [-]

fair enough, but carpenters are not being beaten over the head to use new-fangled probabilistic speed squares.

constantcrying a day ago | parent | prev | next [-]

Absolutely correct. The real issue is that these people can avoid punishment. If you do not care enough about your paper to even verify the existence of citations, then you obviously should not have a job as a scientist.

Taking an academic who does something like that seriously seems impossible. At best he is someone who is neglecting his most basic duties as an academic, at worst he is just a fraudster. In either case he should be shunned and excluded.

belter a day ago | parent | prev | next [-]

"...each of which were missed by 3-5 peer reviewers..."

Its sloppy work all the way down...

cindyllm a day ago | parent [-]

[dead]

a day ago | parent | prev | next [-]
[deleted]
thaumasiotes a day ago | parent | prev [-]

> If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.

Really? Regardless of whether it's a good paper?

Aurornis a day ago | parent | next [-]

Citations are a key part of the paper. If the paper isn’t supported by the citations, it’s not a good paper.

withinboredom a day ago | parent [-]

Have you ever followed citations before? In my experience, they often don't support what is being cited, say the opposite, or aren't even related. Probably only 60%-ish actually cite something relevant.

Aurornis 11 hours ago | parent | next [-]

I follow them a lot. I’ve also had cases where they don’t support the paper.

That doesn’t make it okay. Sloppy practices by human writers and reviewers are bad too.

WWWWH a day ago | parent | prev [-]

Well yes, but just because that’s bad doesn’t mean this isn’t far worse.

zwnow a day ago | parent | prev [-]

How is it a good paper if the info in it cant be trusted lmao

thaumasiotes a day ago | parent [-]

Whether the information in the paper can be trusted is an entirely separate concern.

Old Chinese mathematics texts are difficult to date because they often purport to be older than they are. But the contents are unaffected by this. There is a history-of-math problem, but there's no math problem.

hnfong a day ago | parent | next [-]

You are totally correct that hallucinated citations do not invalidate the paper. The paper sans citations might be great too (I mean the LLM could generate great stuff, it's possible).

But the author(s) of the paper are almost by definition bad scientists (or bad researchers in whatever field they are in). When a researcher writes a paper for publication, even if they're not expected to write the whole thing themselves, they should at least be responsible for checking the accuracy of its contents, and citations are part of the paper...

alexcdot a day ago | parent | prev | next [-]

The problem is that most ML papers today are not independently verifiable proofs - in most, you have to trust that the scientist didn't fraudulently produce their results.

There is so much BS being submitted to conferences, and decreasing the amount of BS reviewers see would result in less skimpy reviews and also less apathy.

zwnow a day ago | parent | prev [-]

Not really true nowadays. Stuff in whitepapers needs to be verifiable, which is kinda difficult with hallucinations.

Whether the students directly used LLMs or just read and then cited online content that was produced with them, it just shows how much harder these things have made gathering verifiable information.

thaumasiotes a day ago | parent [-]

> Stuff in whitepapers needs to be verifiable which is kinda difficult with hallucinations.

That's... gibberish.

Anything you can do to verify a paper, you can do to verify the same paper with all citations scrubbed.

Whether the citations support the paper, or whether they exist at all, just doesn't have anything to do with what the paper says.

zwnow a day ago | parent [-]

I don't think you know how whitepapers work, then