▲ dminik a day ago
If you're not sure what something is saying, how can you be sure that the AI has picked the correct interpretation?
▲ alok-g a day ago | parent
That's the right question to ask. However, good readers and professionals do have some sense for this, and the ability to dig further as needed. On the other hand, books and articles are often over-detailed, with the key points buried or even left tacit. For me, LLMs have often pointed me to answers, or given me food for thought, that even subject matter experts could not. I don't take those answers at face value, but the net result is still better than leaving the search open-ended.
▲ Flavius a day ago | parent
By asking it to cite its sources. Whenever I use AI, I have it pull direct quotes from the text to justify its interpretation. Sometimes it's spot on, sometimes it's wrong. But skimming a paper to fact-check a few specific quotes is still vastly faster than reading a dense paper completely blind.
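The fact-checking step above can even be partly automated: check that each quote the model claims actually appears verbatim in the source text. A minimal sketch (the function name `verify_quotes` and the sample strings are illustrative, not from any particular tool):

```python
import re

def verify_quotes(source_text: str, quotes: list[str]) -> dict[str, bool]:
    """Check whether each claimed quote appears verbatim in the source,
    ignoring differences in whitespace."""
    normalized = re.sub(r"\s+", " ", source_text)
    results = {}
    for q in quotes:
        q_norm = re.sub(r"\s+", " ", q.strip())
        results[q] = q_norm in normalized
    return results

# Hypothetical example: one real quote, one fabricated one.
paper = "We find that attention heads specialize early in training."
checks = verify_quotes(paper, [
    "attention heads specialize early",   # should be found
    "heads specialize late in training",  # should be flagged
])
print(checks)
```

This only catches outright fabricated quotes, not misleading interpretations of real ones, so the skim of the surrounding context is still the important part.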
▲ joquarky a day ago | parent
Critical thinking.