9x39 | 3 hours ago
There was a meme going around that the fall of Rome was an unannounced, anticlimactic event: one day someone went out and the bridge just wasn't ever repaired. Maybe AGI's arrival is when one day someone is handed an AI to supervise instead of a new employee. Just a user who's followed the whole mess, not a researcher. I wonder whether scaffolding and bolt-ons like reasoning will asymptote short of 'true AGI'. I kept reading about the limits of transformers around the GPT-4 and Opus 3 era, and those limits seem basic compared to today. I've given up trying to guess when diminishing returns will truly hit, if ever, but I do think some threshold has been passed where the frontier models are doing "white collar work as an API" and basic reasoning better than humans in many cases, and once capital familiarizes itself with this idea, it's going to get interesting.
esafak | 3 hours ago | parent
But it's already like that: models are better than many workers, and I'm supervising agents. I'd rather have the model than numerous juniors, especially the kind who can't identify the model's mistakes.
beej71 | 3 hours ago | parent
I'd always imagined that AGI meant an AI was given other AIs to manage.