dclowd9901 | a day ago
To me, this is exactly what LLMs are good for. It would be exhausting double checking for valid citations in a research paper. Fuzzy comparison and rote lookup seem primed for usage with LLMs. Writing academic papers is exactly the _wrong_ usage for LLMs. So here we have a clear cut case for their usage and a clear cut case for their avoidance.
skobes | a day ago
If LLMs produce fake citations, why would we trust LLMs to check them?
| ||||||||||||||
dawnerd | a day ago
Shouldn’t need an LLM to check this. It’s just a list of authors. I wouldn’t trust an LLM on this, and even if they were perfect, that’s a lot of resource use just to do something traditional code could do.
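A minimal sketch of that "traditional code" route, assuming the canonical author list has already been fetched from some metadata source (e.g. a DOI lookup); the function names and the 0.8 similarity threshold here are illustrative, not from any particular tool:

```python
import difflib

def normalize(name: str) -> str:
    """Lowercase and strip punctuation so 'Smith, J.' and 'J Smith' compare more fairly."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()

def authors_match(cited: list[str], canonical: list[str], threshold: float = 0.8) -> bool:
    """Fuzzy-compare a citation's author list against the canonical record, pairwise in order."""
    if len(cited) != len(canonical):
        return False  # wrong author count is an immediate red flag
    return all(
        difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold
        for a, b in zip(cited, canonical)
    )
```

This is the "fuzzy comparison and rote lookup" part done deterministically: cheap, auditable, and it fails loudly on a fabricated author list instead of hallucinating agreement.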
| ||||||||||||||
idiotsecant | a day ago
Exactly, and there's nothing wrong with using LLMs in this same way as part of the writing process to locate sources (that you verify), do editing (that you check), etc. It's just peak stupidity and laziness to ask it to do the whole thing.