seba_dos1 | 3 days ago
No, I don't. Over the last few weeks I have used Claude, ChatGPT and Gemini in many "known settings" while working, to test whether their output would be helpful. The topics covered a lot of ground: Bayer image processing, color science, QML and Plasma plugins, GPS, GTK3->4 porting, USB PD, PDF data structures, ALSA configs... All of them hallucinated (which is hardly surprising; that's just what they do). Sometimes it was enough to ask the model to verify its claims on the Web, but Gemini Pro once refused to be corrected, stubbornly insisting that the correct answer was "a common misconception" even when confronted with sources saying otherwise :)

I was already knowledgeable enough in these topics to catch the errors, but some were dangerously subtle. Really, the only way to use LLMs to actually learn anything beyond the trivial is to actively question everything they print out and never move forward until you actually grasp the thing and can verify it. It still feels helpful to me to use them this way, but it's hard to tell how that compares, in terms of efficiency, to learning from a good and trustworthy resource. It's hard to unlearn something and try to learn it again another way to compare ;P