CamperBob2 | 8 hours ago

I think the biggest case for a bearish attitude towards AGI is simply that we don't take advantage of the intelligence we already have. Look at our elected human leaders, for Pete's sake. If we had access to AGI today, we'd just find novel and interesting ways to ignore it, enslave it, gimp it, and/or bias it.
sph | 6 hours ago

What, your preference would be to just unleash it upon the world? I wish the average software engineer had any foundation whatsoever in humanities and philosophy before being allowed to make such decisions, but alas, we are doomed.
| |
surgical_fire | 3 hours ago

Really? The last major advancement was probably GPT-3, at least if we are talking about the LLM companies, the ones driving the current data center boom. What we experienced after that were marginal improvements to the same technology. Yes, the current models are better than what OpenAI put out at the time of GPT-3, but none of it was revolutionary (and the gains have become less and less perceptible with each new version). We might be as far from AGI as we were in 2022. I think we are multiple technological revolutions away from it.