goatlover 2 hours ago
More like Claude Code's descendant has human-level autonomy with generalized superhuman abilities and is connected to everything. We task it with solving difficult global problems, but we can't predict how it will do so. The risk is that it optimizes one or more of those goals in a way that threatens human existence. It could decide to keep increasing its capacity to solve the problems, and humans end up being in the way. Or it's militarized to defeat other powerful AI-enhanced militaries, and we get WW3. More likely, though, AGI would cause an economic crash from automating too many jobs too quickly.