ASalazarMX 4 days ago
This is a use case where I think a current LLM shines. Ask it to summarize the important points of n papers, then slow-read only the ones that pique your interest. It won't be perfect, but it will save you a ton of time and let you focus on the parts that need more attention.
freeopinion 4 days ago
I'm not anti-LLM, even if the following sounds like it: I don't trust LLMs, even to summarize for me. I have to fact-check every single statement. For instance, when I ask ChatGPT, "Is PLA more dense than ABS?" it answers, "No, PLA is not more dense than ABS." That is a direct quote. Then, in the third paragraph of the same response, ChatGPT says, "So technically, PLA is denser than ABS, not less — I misspoke earlier."

I find LLMs good for surfacing words I didn't think of. I can then reword a search to get better results.

To be fair, the cherry-picked example above sounds a lot like a human. Humans make such mistakes and corrections. If a human had given me that response, I would shrug and ask more questions. But it would stop that human from being my go-to source. It makes me shudder to think of code written in the same manner.
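For the record, typical datasheet densities settle the question: PLA is around 1.24 g/cm³ and ABS around 1.04 g/cm³, so PLA is indeed denser. A toy sanity check along those lines (the names and exact figures here are just illustrative; real values vary by filament brand):

    # Approximate typical densities in g/cm^3; real values vary by brand.
    DENSITY_G_PER_CM3 = {"PLA": 1.24, "ABS": 1.04}

    def denser(a, b):
        """Return whichever of two materials has the higher typical density."""
        return a if DENSITY_G_PER_CM3[a] > DENSITY_G_PER_CM3[b] else b

    print(denser("PLA", "ABS"))  # prints "PLA": PLA is denser than ABS

This is exactly the kind of two-line lookup a fact-check reduces to, which is why a confidently wrong first answer is so grating.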