musicale 5 hours ago
Thanks for the response, but (per the omitted portion of my sentence before the semicolon) I was not talking about the M in LLM. I was talking about a conceptual or analytic model that a human might develop to try to predict the behavior of an LLM, per Norvig's claim of insight derived from behavioral observation. Now that I think about it, though, the observation that an LLM frequently produces obviously and/or subtly incorrect output, and that its output is not robust to prompt rewording, etc., is perhaps itself a useful Norvig-style insight.