| ▲ | LightBug1 3 days ago |
| Yes, when AI's whole schtick was that it was supposed to be the greatest and smartest revolution in the last few centuries. Conclusion: we are not in the age of AI. |
|
| ▲ | rich_sasha 3 days ago | parent | next [-] |
| Dunno. Mass production was clearly a many-orders-of-magnitude improvement on the artisan model, yet still humans are needed. We still call it the "industrial revolution". |
| ▲ | LightBug1 3 days ago | parent [-] |
Fair. My jury is still out as to whether the current models are proto-AI. Obviously an incredible innovation. I'm just not certain they have the potential to go the whole way. /layman disclaimer
| ▲ | falcor84 3 days ago | parent [-] |
As you say, whether we call it "AI" or "doohickey", it is an incredible innovation. And I don't think that anyone is claiming at the moment that the systems as-is will themselves "go the whole way" - it is a technological advancement that, like all others, should inspire practitioners to develop better future systems that adapt some aspects of it. Perhaps at some point we will see a self-propelling technological singularity, with the AI developing its own successor autonomously, but that's clearly not the current situation.
| ▲ | LightBug1 2 days ago | parent | next [-] |
Doohickey is so much more relatable ... I may call LLMs that from now on. Thank you.
| ▲ | bluefirebrand 2 days ago | parent | prev | next [-] |
> And I don't think that anyone is claiming at the moment that the systems as-is will themselves "go the whole way"

Dunno, but I see plenty of people making exactly this claim every day, even on this site.
| ▲ | kmoser 2 days ago | parent | prev [-] |
That will never happen. We may approach that state asymptotically, but since AI output is stochastic and humans' goals change over time, humans will always be part of the loop.
| ▲ | falcor84 2 days ago | parent [-] |
Whatever the formula for the probability of recursive self-improvement of AI may be, I am unfortunately certain that the fickleness of human goals does not factor into it.
|
| ▲ | CuriouslyC 3 days ago | parent | prev [-] |
I'm a booster, but LLMs are 100% not going to give us true autonomous intelligence. They're incredibly powerful, but all the intelligence they display is "hacked"; generalization is limited. That being said, people are making a huge mistake with the idea that just because we're not gonna hit AGI in the next few years, these tools aren't powerful enough to irreversibly transform the world. They absolutely are, and there's no going back.
| ▲ | hvb2 3 days ago | parent [-] |
> That being said, people are making a huge mistake with the idea that just because we're not gonna hit AGI in the next few years

Because that's what we've been promised, not once but many times, by many different companies. So sure, there's a marginal improvement, like refactoring tools that do a lot of otherwise manual labor.