Warwolt 2 hours ago

Actually, I think this is a case where LLMs _can_ be useful. If we're prompting for small enough outputs, for example around things we can already sort of reason about, we're able to judge whether or not what's presented to us makes sense.

Presumably you're also reading some kind of learning text about the Chinese language, so the sole source isn't just the LLM?

In my experience, asking an LLM to produce small examples of well-known things (or rather, things that are talked about frequently in the training data, so generally basic or fundamental topics) tends to work fine, and the output is at a level where you can judge it for yourself.

I think the real danger is when a person prompts for things they don't know how to verify for themselves, since then we're basically just rolling dice and hoping.