rmoriz 5 days ago

You can still ask questions, have it generate a list of things to learn, etc., basically a streamlined course based on all the tutorials, readmes and source code available when the model was trained. You can call your tutor 24/7 as long as you have tokens.

seba_dos1 5 days ago | parent | next

You have to stay on guard at each step to notice the inconsistencies and call out your tutor's mistakes though, or you'll inevitably learn some garbage. This is a use case that certainly "feels" like it's boosting your learning (it sure does to me), but I'd like to read an actual study on whether it really does before reaching any conclusions.

It seems to me that LLMs help the most at the initial step of getting into some rabbit hole - when you're getting familiar with the jargon, so you can start reading some proper resources without being confused too much. The sooner you manage to move there, the better.

rmoriz 4 days ago | parent

You overestimate hallucinations in known settings. If you ask it to show the source code, it's easy to check the sources (of a framework, language, or local code).
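
For example, a rough sketch of what that checking can look like in Python (functools.lru_cache is just a stand-in target for illustration, not something from this thread): instead of trusting the model's description of a function, pull up the real implementation and compare.

    # Ask the model how some library function behaves, then read the
    # actual implementation instead of taking its word for it.
    import inspect
    import functools

    # Stand-in target for "the thing the model made a claim about";
    # any pure-Python function or class works here.
    print(inspect.getsource(functools.lru_cache))

    # For a locally installed framework, locate the file and read it directly.
    print(inspect.getsourcefile(functools))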

seba_dos1 3 days ago | parent

No, I don't. I have used Claude, ChatGPT and Gemini in many "known settings" while working over the last few weeks to test whether their output would be helpful. Topics included many things - Bayer image processing, color science, QML and Plasma plugins, GPS, GTK3->4 porting, USB PD, PDF data structures, ALSA configs... All of them hallucinated (which is hardly surprising, that's just what they do). Sometimes it was enough to ask the model to verify its claims on the Web, but Gemini Pro once refused to be corrected, stubbornly claiming that the correct answer was "a common misconception" even when confronted with sources saying otherwise :)

I was already knowledgeable enough in these topics to catch these, but some were dangerously subtle. Really, the only way to use LLMs to actually learn anything beyond the trivial is to actively question everything they print out and never move forward until you actually grasp the thing and can verify it. It still feels helpful to me to use them this way, but it's hard to tell how it compares to learning from a good and trustworthy resource in terms of efficiency. It's hard to unlearn something and try to learn it again another way to compare ;P

theshrike79 5 days ago | parent | prev

ChatGPT even has a specific "Study mode" where it refrains from telling you the answer directly and kinda guides you to figure it out yourself.