clusterhacks 4 days ago

"Out-of-order" and non-conventional explanations are interesting to consider. For both, I would expect LLMs to do poorly when there isn't much (or any) material for those approaches in the training data. My intuition would be the learner is going to have to be more exploratory via prompt engineering and still struggle against the tendency of the model to lean into classic or conventional explanations.

I don't particularly expect dependable responses from models, but I can see how that's a much larger problem in a learning context. I'm OK with bad responses I can push back against, but I also wouldn't reach for an LLM as the default way into a new field.

I do like using an LLM as a supplemental learning aid alongside more traditional resources, though I haven't tackled a deeper, new-to-me field with one yet. Maybe it's time for that . . .