| ▲ | vidarh a day ago |
| This is such a lazy argument. Every tool that displaces old tools causes skills to be lost when those skills are no longer needed. To the extent that people still need to be able to critically assess what AI delivers to achieve their goals, they will still pick up those skills or fail. They will then need to either invest the time to learn, or they'll fail to find employment, or fail in other aspects of life. When we see people lamenting lost skills like this, it is usually a result of them overestimating the continued necessity of certain skills in the face of new technology. You won't suddenly have a generation of software developers (for example) who don't know the necessary skills to do their work, but you may get a generation of software developers who don't have the skills you think are necessary to do their work. |
|
| ▲ | 9dev a day ago | parent | next [-] |
| Skills no longer needed… as long as you have access to an AI model provided by a handful of companies at an arbitrary rate; with training cost so high that only huge corporations have the funds to pull it off, building an ever-growing moat over time. This sounds like a great future! Nothing worrying here at all. |
| |
| ▲ | wilkystyle 18 hours ago | parent | next [-] | | This assumes that the way things are now is the way things always will be. Right now AI is in its mainframe era (thin clients connecting to expensive compute somewhere else that you don't control), but I firmly believe that the AI version of the personal computing revolution is on the horizon. Democratized computing probably seemed pretty out of reach when all we had were mainframes, but in retrospect the progression from mainframe to personal computer to supercomputer in your pocket seems ordinary and almost expected. I have no doubt that the technology needed to democratize personal AI will also advance in similar ways, and we will have no shortage of next generation's "640K ought to be enough for anybody." | | |
| ▲ | 9dev 16 hours ago | parent | next [-] | | Maybe. Alternatively, things will veer further towards centralisation because that is where all the VCs bet on getting their investments back, and where they historically have seen the most revenue. I’m not convinced AI follows the same trajectory general computing did 50 years ago; the world has changed massively since then. | |
| ▲ | ThrowawayR2 16 hours ago | parent | prev | next [-] | | "Past performance is not indicative of future results." and "Don't count your chickens before they've hatched." except in the world of the AI advocates, where they confidently assure us that it's perfectly fine to count our AI chickens before they've hatched because reasons. | |
| ▲ | oblio 15 hours ago | parent | prev | next [-] | | We only got PCs because IBM screwed up. Every other ecosystem is walled off to various degrees. And absolutely every current corp knows about IBM's failure and definitely does not want to repeat it. Nintendo? Walled garden. Playstation? Walled garden. Mac/iOS? Walled garden. Clouds? Obviously walled gardens, the higher the walls the more advanced the services. SaaS? Walled gardens. Social media? Walled gardens. | |
| ▲ | cyanydeez 16 hours ago | parent | prev [-] | | I think the problem is a stochastic one: more options seem to exist for this technology to abuse humanity via its "owners" than to democratize anything. It's not like it's been helping to wage war, mollify the public, and entrench pre-existing racism for the last decade. These are all things happening today via AI, so really, this is an argument that's like, entropy: there are always far more ways for things to fall apart than to build toward stability. Being optimistic seems more like religiosity than any real accounting of the current system you're operating in (unless you're a billionaire). |
| |
| ▲ | saidnooneever a day ago | parent | prev | next [-] | | "i like money and sex, do you like money and sex too? maybe we can be friends!" - Idiocracy | |
| ▲ | TiredOfLife 5 hours ago | parent | prev [-] | | Cars vs Horses | | |
| ▲ | 9dev 4 hours ago | parent [-] | | No, that falls flat. A car can be produced by a sufficiently motivated group of people with reasonable funds. A competitive frontier model cannot. And in contrast to the car, you don’t even get to own the model, you can only purchase access to it; as long as you have the money to pay, and a corporation decides to accept it, with the government always having a veto. | | |
| ▲ | vidarh 4 hours ago | parent [-] | | Open models are available that, while they seem primitive next to current frontier models, lag only 1-2 years behind. |
|
|
|
|
| ▲ | cheikhcheikh 13 hours ago | parent | prev | next [-] |
I actually find your argument to be the lazy and reductive one, and I'm surprised how many folks just parrot it confidently. This is clearly not like any other tool: what it can do, and more importantly the range of what it can mimic and pretend to do to a highly plausible degree, is something humanity has never witnessed anything close to. The closest thing I can think of is high-speed internet porn and its impact on generations that grew up addicted to it. I can only imagine what AI porn will do to the upcoming generations, and more generally what AI will do to their cognition |
| |
| ▲ | vidarh 4 hours ago | parent [-] | | > This is clearly not like any other tool How is it not? How is this not yet another lazy argument? I'm old enough to have lived through multiple "this is the end of civilization" waves that people insisted were different. | | |
| ▲ | cheikhcheikh an hour ago | parent [-] | | A tool that can automate all of thinking and all of cognitive problem solving. I'd say very different, yes. |
|
|
|
| ▲ | alecbz a day ago | parent | prev | next [-] |
| A car that can self-drive 100% of the time is a new tool that could make driving an obsolete skill. A car that can self-drive successfully 99% of the time is dangerous because it trains people to not be ready to take over for the 1% they need to. |
| |
| ▲ | vidarh 20 hours ago | parent | next [-] | | This is only a problem if regulators and/or courts and/or consumers all fail to recognise that said 99% car isn't safe enough. | | |
| ▲ | alecbz 15 hours ago | parent [-] | | Sure -- I think articles like this are a warning that the skills we're losing are likely _not_ so completely supplanted by AI that they'll soon be irrelevant. |
| |
| ▲ | casey2 19 hours ago | parent | prev [-] | | What actually happens is that the 1% is ignored or outlawed. The shovel doesn't do 100% of human excavating tasks better than hands, but we rightly realized that the space of possibilities involving a shovel was much greater than the 1% of hand-powered excavation. | | |
| ▲ | alecbz 15 hours ago | parent [-] | | If the 1% is just a bit less efficient with the new tech, sure, but it's different if the 1% means your car crashes. |
|
|
|
| ▲ | array_key_first 16 hours ago | parent | prev | next [-] |
| AI is ultimately a thinking replacement tool. Losing the skill of critical thought is existential. The actual lazy argument here is pretending that AI is like the fucking cotton gin or something. We all know and understand it's not. There's people getting whole ass degrees using AI for everything. |
| |
| ▲ | vidarh 4 hours ago | parent [-] | | You'll only lose the skill of critical thought if you don't spend the time it frees up solving other problems. This is the lazy part of the argument: The assumption that people stop applying themselves when they get a tool to replace one part of a task. | | |
| ▲ | mpalmer 33 minutes ago | parent [-] | | If you look around and fail to see this already happening, you are part of the problem. |
|
|
|
| ▲ | legacynl a day ago | parent | prev | next [-] |
| What is your argument actually based on? It seems you're just assuming this to be the case. |
| |
| ▲ | vidarh 20 hours ago | parent [-] | | All of human history. | | |
| ▲ | floydnoel 14 hours ago | parent | next [-] | | ever heard of The Black Swan? might be worth a read | |
| ▲ | legacynl 19 hours ago | parent | prev [-] | | Notwithstanding that, knowledge of history still doesn't allow you to predict the future. In those cases we automated methods and tools; now we're automating humans. Don't you think that possibly might be a significant departure from what happened in history? | | |
| ▲ | vidarh 4 hours ago | parent [-] | | We are not automating humans. We are automating some things that have previously required humans to carry out, just like vast numbers of things before. |
|
|
|
|
| ▲ | mpalmer a day ago | parent | prev | next [-] |
| > To the extent that people still need to be able to critically assess what AI delivers to achieve their goals, they will still pick up those skills or fail. Or, the people who evaluate them will be suffering from the exact same self-inflicted cognitive limitations, and promote them, or at least not fire them. The quality of this firm's product suffers perhaps, but it doesn't matter. The consumer will again, in all likelihood, be limited in the same way. Everyone's happy. |
| |
| ▲ | vidarh 20 hours ago | parent [-] | | More realistically, a company that fails to properly evaluate this in ways that reflect actual market needs will fail in the marketplace. | | |
| ▲ | mpalmer 28 minutes ago | parent [-] | | The market used to be humans making choices. Why are you assuming it's not going to be flawed people using flawed AI? The rationality of the market was never a guarantee, and you think it's what's going to save us? |
|
|
|
| ▲ | dyauspitr a day ago | parent | prev [-] |
It’s essentially about whether your skills are “Turing complete”. If you know only Java, it may be hard to build an app that requires assembly-tier efficiency, but you can do it. With vibe coding you just have to hope and pray. It’s not really a skill. Your skills are not Turing complete. |
| |
| ▲ | vidarh 20 hours ago | parent [-] | | So vibe coding won't be sufficient to replace a skilled Java developer, and won't obsolete that skill, and if there aren't alternatives that more completely replace a skilled Java developer, then this isn't a relevant comparison. |
|