▲ | IshKebab a day ago
Sure.

5 years: AI coding assistants are a lot better than they are now, but still can't actually replace junior engineers (at least ones that aren't shit). AI fraud is rampant, with faked audio commonplace. Some companies try replacing call centres with AI, but it doesn't really work and everyone hates it. Tesla's robotaxi won't be available, but Waymo will be in most major US cities.

10 years: AI assistants are now useful enough that you can use them in the ways that Apple and Google really wanted you to use Siri/Google Assistant 5 years ago. "What have I got scheduled for today?" will give useful results, and you'll be able to have a natural conversation and take actions that you trust ("cancel my 10am meeting; tell them I'm sick"). AI coding assistants are now very good and everyone will use them. Junior devs will still exist. Vibe coding will actually work. Most AI startups will have gone bust, leaving only a few players. Art-based AI will be very popular and artists will use it all the time; it will be part of their normal workflow. Waymo will become available in Europe. Some receptionists and PAs will have been replaced by AI.

15 years: AI researchers finally discover how to do online learning. Humanoid robots are robust and smart enough to survive in the real world and start to be deployed in controlled environments (e.g. factories) doing simple tasks. Driverless cars are "normal" but not owned by individuals, and driverful cars are still way more common. Small, light computers become fast enough that autonomous slaughterbots become reality (i.e. drones that can do their own navigation, face recognition, etc.).

20 years: Valve confirms no Half-Life 3.
▲ | FeepingCreature 16 hours ago
It kind of sounds like you're saying "exactly everything we have today, we will have mildly more of."
▲ | Quarrelsome a day ago
You should add a bit where AI is pushed really hard in places where the subjects have low political power — management of entry-level workers, care homes, education — and super bad stuff happens.

We also need a big legal event, where (for example) autonomous driving is implicated in a really big accident in which lots of people die, or someone brings a successful court case showing that an AI mortgage underwriter discriminates based on race or caste. It won't matter whether AI is actually responsible; what will matter is the push-back and the news cycle.

Maybe also more events where people successfully game deployed AI at scale to get mortgages they shouldn't, or A-grades they shouldn't.
▲ | WXLCKNO 5 hours ago
So in the past 5 years we went from not having ChatGPT at all (it was only released in 2022, with non-"chat" models before that) to where we are today, yet in the next 5, with the entire tech world consumed with making better AI models, we'll just get slightly better AI coding assistants?

Reminds me of that comment about the first iPod being lame and having less space than a Nomad. Worst take I've seen on here in a while.
▲ | 9dev a day ago
It’s soothing to read a realistic scenario amongst all of the ludicrous hype on here.
▲ | FairlyInvolved a day ago
We are going to scale up GPT-4 by a factor of ~10,000, and that will result in getting an accurate summary of your daily schedule?
| |||||||||||||||||||||||||||||
▲ | archagon a day ago
> Small, light computers become fast enough that autonomous slaughterbots become reality

This is the real scary bit. I'm not convinced that AI will ever be good enough to think independently and create novel things without serious human supervision, but none of that matters when it's applied to machines that are destructive by design and already have expectations of collateral damage. Slaughterbots are going to be the new WMDs, and corporations are salivating at the prospect of being first movers.

https://www.youtube.com/watch?v=UiiqiaUBAL8
| |||||||||||||||||||||||||||||
▲ | petesergeant 19 hours ago
> Some companies try replacing call centres with AI, but it doesn't really work and everyone hates it.

I think this is much closer than you think, because a good percentage of call centers are basically just humans with no power cosplaying as people who can help.

My fiber connection went to shit recently. I messaged the company and got a human who told me they were going to reset the connection from their side if I rebooted my router. 30 minutes later, with no progress, I got a human who told me that they'd reset my ports, which I was skeptical about but put down to a language issue, and I again reset my router. 30 minutes later, the human gave me an even more outlandish technical explanation of what they'd do, at which point I stumbled across the magical term "complaint"... an engineer phoned me 15 minutes later, said there was something genuinely wrong with the physical connection, and they had a human show up a few hours later and fix it.

No part of that first-layer support experience would have been degraded if it had been replaced by AI, but the company would have saved some cash.