russfink 2 days ago

Practical question: when getting the AI to teach you something, e.g. how attention can be focused in LLMs, how do you know it's teaching you correct theory? Could I use internal consistency as a metric, repeatedly querying it and other models with a summary of my understanding and checking for agreement? What do you all do?
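
Something like that consistency check can be scripted. A minimal sketch, assuming the official openai Python client; the model names and the summary text are illustrative placeholders, and agreement between models is a heuristic, not proof of correctness:

    # Sketch: send a summary of your understanding to several models,
    # collect their critiques, and look at where they agree.
    # Assumes the `openai` package (pip install openai) and an
    # OPENAI_API_KEY in the environment; model names are placeholders.
    from openai import OpenAI

    client = OpenAI()

    SUMMARY = (
        "My understanding: each token's query is dotted with every key, "
        "and the softmax over those scores concentrates attention weight "
        "on the most relevant positions."
    )

    PROMPT = (
        "Here is my summary of how attention works in transformers:\n\n"
        + SUMMARY + "\n\n"
        "List any statements that are wrong or misleading, one per line. "
        "If the summary is accurate, reply with exactly: NO ERRORS."
    )

    MODELS = ["gpt-4o", "gpt-4o-mini"]  # swap in whatever you have access to

    for model in MODELS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
            temperature=0,  # keep each model's answer as stable as possible
        )
        print(f"--- {model} ---")
        print(resp.choices[0].message.content)

One caveat: models trained on overlapping data can agree on the same mistake, so treat consensus as a pointer to primary sources rather than a verdict.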

layer8 2 days ago | parent | next

> What do you all do?

Google for non-AI sources. Ask several models to get a wider range of opinions. Apply one’s own reasoning capabilities where applicable. Remain skeptical in the absence of substantive evidence.

Basically, do what you did before LLMs existed, and treat LLM output like you would a random anonymous blog post you found.

akomtu a day ago | parent

In that case, LLMs must be written off as very knowledgeable crackpots because of their tendency to make things up. That's how we would treat a scientist caught fabricating results.
