trio8453 19 hours ago

> This results in the somewhat unintuitive combination of a technology that can be very useful and impressive, while simultaneously being fundamentally unsatisfying and disappointing

Useful = great. We've made incredible progress in the past 3-5 years. The people who are disappointed have their standards and expectations set at "science fiction".
lxgr 19 hours ago
I think many people are now learning that their definition of intelligence was actually not very precise. From what I've seen, in response to that, goalposts are then often moved in whatever way requires the least updating of somebody's political, societal, metaphysical, etc. worldview. (This also includes updates in favor of "this will definitely achieve AGI soon", fwiw.)
danaris 16 hours ago
Or the people who are disappointed were listening to the AI hype men like Sam Altman, who have, in fact, been promising AGI or something very like it for years now. I don't think it's fair to deride people who are disappointed in LLMs for not being AGI when many very prominent proponents have been claiming they are, or soon will be, exactly that.