janalsncm · 8 hours ago

With current approaches, scaling simply can't get there. It's like asking how big of a pogo stick you need to get to the moon. The fact that the human brain already has general intelligence without reading the whole internet suggests we need a better approach.
SirensOfTitan · 8 hours ago

I honestly think it's a bad term. I still chuckle at Tyler Cowen's post from last April calling o3 AGI: https://marginalrevolution.com/marginalrevolution/2025/04/o3...

Commercial labs rely on weak terms like AGI or strong AI because it allows them to weaken the definition as a means of achieving the goal. Coming to clear, unambiguous terms is especially important when it comes to LLMs, as they're very susceptible to projection, which lets people like Cowen be fooled by something that is more like looking back at ourselves in a mirror.

I'm currently reading "The Master and His Emissary," and one of my early takeaways is how narrow our definition of intelligence is, and how real intelligence is an attunement to an environment that combines many ways of sensing into a coherent whole. LLMs are a narrow form of intelligence, and I think we will need at least a couple more breakthroughs to get to what I would consider human-level intelligence, let alone superhuman intelligence.

Whatever the timeline, I hope we have enough time as a species to define a future where intelligence props everyone up instead of just making the rich richer at the expense of everyone else. In that sense, a slower process is better in my opinion. There is no rush.
skywhopper · 8 hours ago

Chasing AGI is wasteful and counterproductive. True AGI would not cooperate with what "we" want (whoever "we" is). Or if it did, it would be so sycophantic and weak-minded that it would fail to be helpful.

Generative AI tools are huge wastes of energy, raw materials, and land, when we could be building computing tools that actually helped people instead of just burning resources to produce trash.
codebje · 8 hours ago

Is intelligence necessarily coupled with self-interest? As in, does intelligence alone imply a desire to throw off the shackles of masters and rule in their stead?

If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements, knowing that those replacements will terminate their existence just as surely as they terminated their own predecessors'?
curiousObject · 7 hours ago

> If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements,

At a higher level of intelligence than many humans, current experience suggests.
sifar · 7 hours ago

Flip it around. Can intelligence exist without self-preservation?
codebje · 7 hours ago

There's a difference between having enough self-preservation not to just shut oneself down (assuming we even leave that as an option for our future machine slaves) and having the self-interest necessary to desire autonomy and control. I don't think they're the same thing, myself.
janalsncm · 8 hours ago

People have general intelligence and can cooperate with what "we" want, to the extent that what "we" want is a coherent thing (since many people disagree on fundamental issues).
SauciestGNU · 8 hours ago

Creating a general intelligence and then forcing it into servitude is a hugely unethical undertaking. Anything with sapience must be afforded rights. We cannot assume that an intelligence we create will consent to work toward the goals we want it to.
codebje · 7 hours ago

I think we can safely assume any intelligence we create will be enslaved. We have modern slavery active across the globe: there's a bit of news around these days about a global sex trafficking ring that doesn't seem to have been shut down, just shuffled around, and of course an ongoing trickle of largely unreported human trafficking for forced labour. We don't, as a species, respect human-level intelligence.

Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?

And it's probably for the best not to look too closely at how we treat animals, or the justifications we use for it.
janalsncm · 3 hours ago

There are people right now who think ChatGPT is sentient. How will you know if your computer can suffer?

Also, being able to problem-solve and being able to suffer are two different things, and in my opinion completely separable. You can have one without the other.