| |
| ▲ | DiscourseFan a day ago | parent | next [-] | | It requires constant feedback, critical evaluation, and checks. This is not AGI; it's cognitive augmentation. One that is collective, one that will accelerate human abilities far beyond what the academic establishment is currently capable of, but that is still fundamentally organic. I don't see a problem with this--AGI advocates treat machine intelligence like some sort of God that will smite non-believers and reward the faithful. This is what we tell children so that they won't shit their beds at night; otherwise they get a spanking. The real world is not composed of rewards and punishments. | | |
| ▲ | komali2 20 hours ago | parent | next [-] | | It does seem that the Venn diagram of "Roko's basilisk" believers and "AGI is coming within our lifetimes" believers is nearly a circle. Would be nice if there were some less... religious... arguments for AGI's imminence. | | |
| ▲ | DiscourseFan 20 hours ago | parent [-] | | I think the “Roko’s Basilisk” thing is mostly a way for readers of Nick Land to explain part of his philosophical perspective without the need for, say, an actual background in philosophy. But the simplicity reduces his nuanced thought into a call for a sheeplike herd—they don’t even need a shepherd! Or perhaps there is one, but he is always yet to come… best to stay in line anyway; he might be just around the corner. |
| |
| ▲ | falcor84 17 hours ago | parent | prev | next [-] | | > It requires constant feedback, critical evaluation, and checks. This is not AGI, its cognitive augmentation. To me that doesn't sound qualitatively different from a PhD student. Are they just cognitive augmentation for their mentor? In any case, I wasn't trying to argue that this system as-is is AGI, but just that it's no longer "ridiculous", and that this to me looks like a herald of AGI, as the portion being done by humans gets smaller and smaller | | |
| ▲ | DiscourseFan 12 hours ago | parent [-] | | People would say the same thing about a calculator, or computation in general. Just like any machine it must be constructed purposefully to be useful, and once we require something which exceeds that purpose it must be constructed once again. Only time will tell the limits of human intelligence, now that AI is integrating into society and industry. |
| |
| ▲ | frozenseven 15 hours ago | parent | prev [-] | | >AGI advocates treat machine intelligence like some sort of God that will smite non-believers and reward the faithful. >The real world is not composed of rewards and punishments. Most "AGI advocates" say that AGI is coming, sooner rather than later, and that it will fundamentally reshape our world. On its own that's purely descriptive. In my experience, most of the alleged "smiting" comes from the skeptics simply being wrong about this. Rarely is there talk of explicit rewards and punishments. | | |
| ▲ | DiscourseFan 12 hours ago | parent [-] | | You should look into “Roko’s Basilisk”; it’s a genuine belief that often goes alongside belief in AGI. | | |
| ▲ | frozenseven 3 hours ago | parent [-] | | I should be the target audience for this stuff, but I honestly can't name a single person who believes in this "Roko's basilisk" thing. To my knowledge, even the original author abandoned it. There probably are a small handful out there, but I've never seen 'em myself. |
|
|
| |
| ▲ | markusde a day ago | parent | prev | next [-] | | > it's really not clear to me that humans would be a valuable component in knowledge work for much longer. To me, this sounds like when we first went to the moon, and people were sure we'd be on Mars by the end of the '80s. > Even ARC-AGI-2 is now at over 50%. Any measure of "are we close to AGI" is as scientifically meaningful as "are we close to a warp drive" because all anyone has to go on at this point is pure speculation. In my opinion, we should all strive to be better scientists and think more carefully about what an observation is supposed to mean before we tout it as evidence. Despite the name, there is no evidence that ARC-AGI tests for AGI. | | |
| ▲ | ogogmad 15 hours ago | parent [-] | | > To me, this sounds like when we first went to the moon, and people were sure we'd be on Mars by the end of the '80s. Unlike space colonisation, there are immediate economic rewards from producing even modest improvements in AI models. As such, we should expect much faster progress in AI than in space colonisation. But it could still turn out the same way, for all we know. I just think that's unlikely. | | |
| ▲ | zeroonetwothree 13 hours ago | parent [-] | | The minerals in the asteroid belt are estimated to be worth hundreds of quintillions of dollars. I would say that’s a decent economic incentive to develop space exploration (not necessarily colonization, but it may make it easier). |
|
| |
| ▲ | jacquesm a day ago | parent | prev | next [-] | | You either have a case of human-augmented AI here or AI-augmented human. Neither by itself would have made the step. | | | |
| ▲ | feastingonslop a day ago | parent | prev | next [-] | | Excellent! Humans can then spend their time on other activities, rather than get bogged down in the mundane. | | |
| ▲ | navels a day ago | parent | next [-] | | Other activities such as the sublime pursuit of truth and beauty . . . aka mathematics ;-) | |
| ▲ | latexr 17 hours ago | parent | prev [-] | | Not going to happen as long as the society we live in has this big of a hard-on for capitalism, and working yourself to the bone is seen as a virtue. Every time there’s a productivity boost, the newly gained free time is immediately consumed by more work. It’s a sick version of Parkinson’s law where work is infinite. https://en.wikipedia.org/wiki/Parkinson%27s_law |
| |
| ▲ | a day ago | parent | prev | next [-] | | [deleted] | |
| ▲ | catlifeonmars a day ago | parent | prev [-] | | “Much longer” is doing a lot of heavy lifting there. | | |
| ▲ | falcor84 17 hours ago | parent | next [-] | | Let me put it like this: I expect AI to replace much of human wage labor over the next 20 years and push many of us, myself almost certainly included, into premature retirement. I'm personally concerned that in a few years, I'll find my software proficiency to be about as useful as my chess proficiency is to Stockfish today. I am afraid of a massive social upheaval, both for myself and my family and for society at large. | | |
| ▲ | dehsge 10 hours ago | parent | next [-] | | There are other bounds at play here that are often not talked about. AI runs on computers. Consider the undecidability established by Rice's theorem: whether compiled code satisfies any non-trivial semantic property, such as being error-free, is undecidable. Even an AI can't guarantee its compiled code is error-free, not because it couldn't write code that solves a problem, but because the code it writes is bounded by these externalities. Undecidability in general makes the dream of generative AI considerably more challenging than how it's being sold. | |
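A minimal sketch of the standard reduction behind Rice's theorem, assuming a hypothetical total decider is_error_free() that the theorem actually rules out (all names here are illustrative assumptions, not a real API):

    # Sketch: if a total decider for the semantic property "never raises an
    # error" existed, it could be used to decide the halting problem, which
    # is impossible. Hence no such decider exists (Rice's theorem).

    def is_error_free(source: str) -> bool:
        """Hypothetical decider: True iff the program in source never
        raises an error. Cannot actually be implemented."""
        raise NotImplementedError("ruled out by Rice's theorem")

    def would_halt(program: str, program_input: str) -> bool:
        """Hypothetical halting decider built on top of is_error_free()."""
        # Build a gadget program that runs `program` on `program_input` and
        # then deliberately raises an error, so the gadget raises an error
        # if and only if `program` halts on that input.
        gadget = (
            f"PROGRAM = {program!r}\n"
            f"PROGRAM_INPUT = {program_input!r}\n"
            "exec(PROGRAM)  # may run forever, halt cleanly, or raise\n"
            "raise RuntimeError('reached only if PROGRAM halted')\n"
        )
        # If is_error_free() existed, this would decide halting -- contradiction.
        return not is_error_free(gadget)

Since a halting decider can't exist, neither can is_error_free(): any tool, AI-written or not, can only verify "error-free" for restricted classes of programs or with some rate of wrong answers.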
| ▲ | chongli 16 hours ago | parent | prev [-] | | Here “much of” is doing the heavy lifting. Are you willing to commit to a percentage or a range? I work at an insurance company and I can’t see AI replacing even 10% of the employees here. Too much of what we do is locked up in decades-old proprietary databases that cannot be replaced for legal reasons. We still rely on paper mail for a huge amount of communication with policyholders. The decisions we make on a daily basis can’t be trusted to AI for legal reasons. If AI caused even a 1% increase in false rejections of claims it would be an enormous liability issue. | | |
| ▲ | falcor84 15 hours ago | parent [-] | | Yes, absolutely willing to commit. I can't find a single reliable source, but from what I gather, over 70% of people in the West do "pure knowledge work", which doesn't include any embodied activities. I am happy to put my money on these jobs starting to be fully taken over by AI soon (if they aren't already), and on the claim that by 2035, less than 50% of us will have a job that doesn't require "being there". And regarding your example of an insurance company, I'm not sure about that industry, but seeing the transformation of banking over the last decade to fully digital providers like Revolut, I would expect similar disruption there. | | |
| ▲ | zeroonetwothree 12 hours ago | parent [-] | | I would easily take the other side of this bet. It just reminds me of when everyone was sure back in 2010 that we'd have self-driving cars within 10 years and human drivers would be obsolete. Today, fully replacing human drivers is still about 10 years away. |
|
|
| |
| ▲ | markusde a day ago | parent | prev [-] | | As is "even if it was in my area of specialty". I would not be able to do this proof, I can tell you that much. |
|
|