layer8 | 3 days ago
"Lossy" is an incomplete characterization. LLMs are also far more fluctuating and fuzzy than that word suggests: you can get wildly varying output depending on prompting, for what should be the same (even if lossy) knowledge. There is loss not just during training, but also loss and variation during inference. Overall, an LLM is a much less coherent and consistent thing than most humans are, in terms of knowledge, mindset, and elucidation.