▲ | glenstein 7 days ago | |
This is the best and clearest explanation I have yet seen that describe a tricky thing, namely that LLMs, which are synonymous with "AI" for so many people, are just one variation of many possible types of machine intelligence. Which I find important because, well, hallucinating facts is what you would expect from an LLM, but isn't necessarily inherent issue with machine intelligence writ large if it's trained from the ground up on different principles, or modelling something else. We use LLMs as a stand in for tutors because being really good at language incidentally makes them able to explain math or history as a side effect. Importantly it doesn't show that hallucinating is a baked in problem for AI writ large. Presumably different models will have different kinds of systemic errors based on their respective designs. |