roenxi, 2 days ago:
So? Do I not count as a benchmark of basic intelligence now? I've got a bunch of tests and whatnot that suggest I'm reasonably above average at thinking. There is this fascinating trend where people would rather bump humans out of the naturally intelligent category than admit AIs are already at an AGI standard. If we're looking for intelligent conversation, AI is definitely above average. But above-average intelligence isn't a high bar, and intelligence is nowhere near sufficient to produce high quality on most things, as seen with the current generation of AGI models. People seem to be looking for signs of wild superintelligence, like being a polymath at the peak of human performance.
Peritract, 2 days ago:
A lot of people who are also above average according to a bunch of tests disagree with you. Even if we take 'above average' on some tests to mean above average in every area--above average at literacy, above average at music, above average at empathy--it's still clear that many people have higher standards for these things than you do. I'm not saying definitively that this means your standards are unreasonably easy to meet, but I do think it's worth considering, rather than just assuming that, because it impresses you, it must be impressive in general. When AI surprises any one of us, it's worth asking whether 'better than me at X' is the same as 'better than the average human at X', or even 'good at X'.
ACCount37, 2 days ago:
A major weak point for AIs is long-term tasks and agentic behavior. That turns out to be its own realm of behavior: hard to learn from text data, and also somewhat separate from g, the raw intelligence component. An average human still has LLMs beat there, which might be distorting people's perceptions. But task length horizons keep going up, so it's not a given that this moat will hold.