elgertam 2 days ago

I have nearly the opposite take. I can't tell you how many times I've read a book, a paper, or something else and been confused by some ambiguity in the author's prose. Being able to drop the paper (or even the book!) into an LLM to dig into the precise meaning has been an unbelievable boost for me.

Now I can get past conceptual misunderstanding, or even outright ignorance, and move much more quickly to practice, which is how skills actually develop.

The key is to use the tool with discipline, going into it with a few inviolable rules. I have a couple on my list now: embrace Popperian falsifiability, and embrace Bertrand Russell's observation that “Everything is vague to a degree you do not realize till you have tried to make it precise.”

LLMs have become excellent teachers for me as a result.

dminik a day ago

If you're not sure what something is saying, how can you be sure that the AI has picked the correct interpretation?

alok-g a day ago

That's the right question to ask. However, good readers and professionals do have some sense for this, and the ability to dig further as needed. On the other hand, books and articles are often over-detailed, with the key points buried or even left tacit.

For me, LLMs have often pointed me to answers, or given me food for thought, that even subject-matter experts could not. I do not take those answers at face value, but the net result is still better than the search remaining open-ended.

Flavius a day ago

By asking it to cite its sources. Whenever I use AI, I have it pull direct quotes from the text to justify its interpretation. Sometimes it's spot on, sometimes it's wrong. But skimming a paper to fact-check a few specific quotes is still vastly faster than reading a dense paper completely blind.
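
Concretely, here's a minimal sketch of that workflow in Python. Note the assumptions: call_llm() is a hypothetical stand-in for whatever model API you use, and the <<...>> quoting convention is just one I made up so the quotes are easy to extract. The verification step itself is a plain verbatim substring match:

    import re

    def build_prompt(source_text: str, question: str) -> str:
        # Ask the model to back every claim with a direct quote,
        # wrapped in <<...>> so the quotes are easy to pull out later.
        return (
            "Answer the question using only the text below. "
            "Support every claim with a direct quote wrapped in double "
            "angle brackets, e.g. <<exact words from the text>>.\n\n"
            f"TEXT:\n{source_text}\n\nQUESTION: {question}"
        )

    def unverified_quotes(answer: str, source_text: str) -> list[str]:
        # Return any "quote" that does not appear verbatim in the source;
        # those are the spots worth fact-checking by hand.
        quotes = re.findall(r"<<(.+?)>>", answer, flags=re.DOTALL)
        return [q for q in quotes if q.strip() not in source_text]

    # answer = call_llm(build_prompt(paper, "What does the author mean by X?"))
    # for quote in unverified_quotes(answer, paper):
    #     print("Check by hand:", quote)

Anything the check flags gets a manual skim of the paper, which is still far less work than reading the whole thing cold.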

joquarky a day ago

Critical thinking.

BOOSTERHIDROGEN a day ago

In good faith, how do you tell whether you yourself have good critical thinking?

neya a day ago

I believe you're talking to an LLM; just look at the comment history.

atrocious 19 hours ago

An LLM can't extract meaning from a text where there is none. It's equivalent to CSI's classic "zoom, enhance" trick on resolution-limited photographs. You need to consider that you may be learning a load of rubbish from LLMs.

acmerfight a day ago

We actually don't disagree at all—you are perfectly illustrating my point.

Applying strict epistemic discipline (Popper, Russell) to resolve ambiguity and accelerate actual practice is the very definition of deep work. You aren't using AI as a shortcut to skip thinking; you're using it as a Socratic sparring partner to deepen it. This is exactly the paradigm shift I'm advocating for.

milesvp a day ago

I’m strongly reminded of early Google every time I use AI for research. I’d start out knowing little about a topic, try searching on it, and get shit results. But Google would give me pages of results, so I could skim a lot and eventually, on page 10, stumble across some term of art, and that term would greatly improve my search. Rinse and repeat, and I’d have a good sense of the topic I was interested in.

You can’t really do that with Google anymore, and I can’t remember the last time I bothered to learn something non-trivial from it. ChatGPT, however, has been a game changer. I can ask a really dumb question and get some basic info about the thing I’m asking about, and while the answer is often not quite right, it gives me clues to follow, and I can quickly zero in on what I’m looking for, often in contexts that are new to me.

As an autodidact whose main motivation for going to college was access to the stacks and a direct internet connection, I can’t even begin to tell you how game-changing LLMs seem to be for learning.

To your point, though, my concern is that we don’t know how to teach people how to learn, and LLMs will likely seduce many into bad habits and poor research hygiene. I treat my research the same way I attack the stacks. But take someone who’s never been to a research library and ask them to produce a report on some topic, and the basic resistance is: why? Why do by hand what an LLM is almost literally built to do? Yet that task, taking a bunch of disparate sources and synthesizing output from them, is also at the heart of individual learning.

I suspect we’ll learn how to use LLMs the same way we learned how to use calculators. But I have no doubt that on average (or maybe at the median, or the mode?) calculators have made us less capable of doing basic arithmetic, and I suspect LLMs will likewise make a large percentage of the population worse at synthesizing information. I’d hope it’s only the same people who would otherwise have gotten their information solely from TV, but I do have a slight fear it will creep past that subsection of the population.