buu700 3 days ago
I would categorize sentient AGI as artificial consciousness[1], but I don't see an obvious reason AGI inherently must be conscious or sentient. (In terms of near-term economic value, non-sentient AGI seems like the more useful invention.)

For me, AGI is an AI that I could assign an arbitrarily complex project to, and that, given sufficient compute and permissions, would complete it as reliably as a competent C-suite human executive. For example, it could accept and execute instructions to acquire real estate that matches certain requirements, request approvals from the purchasing and legal departments as needed, handle government communication and filings, construct a widget factory on the property using a fleet of robots, and operate the factory on an ongoing basis while ensuring reliable widget deliveries to distribution partners.

Current agentic coding certainly feels like magic, but it's still not that.
ACCount37 2 days ago
"Consciousness" and "sentience" are terms mired in philosophical bullshit. We do not have an operational definition of either. We have no agreement on what either term really means, and we definitely don't have a test that could be administered to conclusively confirm or rule out "consciousness" or "sentience" in something inhuman. We don't even know for sure if all humans are conscious. What we really have is task specific performance metrics. This generation of AIs is already in the valley between "average human" and "human expert" on many tasks. And the performance of frontier systems keeps improving. | ||||||||||||||||||||||||||||||||||||||||||||