davydm 3 days ago:

FTA: "But with billions of dollars pouring into generative AI every year, we will see instant voice-to-code capabilities + bug-free quality in 2-5 years."

HA HA HA HA HA HA HA HA HA HA HA HA omg, thanks for the laugh - "bug-free quality in 2-5 years" pfffffft. I'm not holding my breath - rather, I think that by then the hype will have finally lost some steam as companies crash and burn with their shitty, "almost working" codebases.
|
Rzor 3 days ago:

I'm in the same camp as you in that I don't think the hype is justified, but when it comes to medium- and long-term adoption I can easily see LLMs getting good enough that programmers will be more like plumbers than system designers and maintainers. At the very least it's going to be hard to justify big teams.

I do wonder if tech is ready for more competition then, because starving is a hell of a motivator.
lbreakjai a day ago:

> programmers will be more like plumbers

Are some of us not, already? I sure have been in roles where I felt like it, writing glue to move things from one AWS service to another.
beej71 3 days ago:

Why would a big AI-assisted team be hard to justify?
joerter10 3 days ago:

I tend to agree. Unless there is some kind of major breakthrough, I see generative AI code quality flattening out along a heavily asymptotic curve.
beej71 3 days ago:

Contrasted with the exponential curve of power consumption...
burnt-resistor 3 days ago:

They're building tons of deca/gigawatt datacenters without a specific monetization plan or need. It still seems like a very temporary bubble, most likely to lead to a collapse that leaves pollution, waste, land, and community destruction in its wake. And if it were to be successful, then ~95% of previously high-income professionals (~100M people) would be unemployed and/or lose earnings. There is no net positive outcome, except for {tr,b}illionaires possibly getting richer.
MaxLeiter 3 days ago:

Think about how much progress has been made in the last 2-5 years. I can understand skepticism, but not the HA HA HAs.
marginalia_nu 3 days ago:

Ha-has are perhaps tonally inappropriate, but when you look at the facts, the prediction seems unlikely to pan out. What we've seen in the last few years is fairly unlikely to continue forever. That's rarely how trends go. If anything, if we actually look at the trend lines, the improvements between model generations are becoming smaller, and the models are getting larger and more expensive to train.

A perhaps bigger concern is how flimsy the industry itself is. When investors start asking where their returns are, it's not going to be pretty. The likes of OpenAI and Anthropic are deep in the red, absolutely hemorrhaging money, and they're especially exposed since a big part of their income is from API deals with VC-funded startups that in turn also have scarlet balance sheets.

Unless we have another miraculous breakthrough that makes these models drastically cheaper to train and operate, or we see massive increases in adoption from people willing to accept significantly higher subscription fees, I just don't see how this is going to end the way the AI optimists think it will.

We're likely looking at something similar to the dot-com bubble. It's not that the technology isn't viable or that it's not going to make big waves eventually; it's just that the world needs to catch up to it. Everything people were dreaming of during the dot-com bubble did eventually come true, just 15 years later, when the logistics had caught up, smartphones had been invented, and the web wasn't just for nerds anymore.
oytis 3 days ago:

> Unless we have another miraculous breakthrough

I guess the argument of AI optimists is that these breakthroughs are likely to happen given the recent history. Deep learning was rediscovered, what, 15 years ago? "Attention Is All You Need" is 8 years old. So it's easy to assume that something is boiling deep down that will show impressive results 5-10 years down the line.
marginalia_nu 3 days ago:

Scientific breakthroughs happen, but they're notoriously difficult to make happen on command or on a schedule. Taking them for granted, or as inevitable, seems quite detached from reality.
oytis 3 days ago:

True, but given how many breakthroughs we've had in AI recently - for text, sound, images, and video - the odds of new breakthroughs happening are probably higher than otherwise. We have no idea how many of them we need before AGI, or at least before replacing software engineers, though.
marginalia_nu 3 days ago:

That's mostly just a few discoveries finding multiple applications. That's fairly common after a large breakthrough: what you typically see is a flurry of activity, and then things die down as the breakthrough gets figured out.
NoGravitas 2 days ago:

It's "a few discoveries finding multiple applications" plus throwing as much data and compute as possible at those applications, a process that seems to be increasingly struggling uphill in the last year or so.
zwnow 3 days ago:

When there's something new and shiny, progress is made fast until we reach the inevitable ceiling. AI has unsolved issues. The bubble will eventually pop, and the damages will be astounding. I will co-sign the HA HA HAs. People are delusional.
sindriava 3 days ago:

Could you describe in what way you find the current paradigms "new", or what unsolved issues you're talking about?
zwnow 3 days ago:

Current scaling limitations, where the costs outweigh the gains. True long-term reasoning. Hallucinations, and pattern-matched reasoning instead of structural reasoning. Reasoning for novel tasks. Hitting a data wall, so a lack of training data. Stale knowledge. Biased knowledge. Oh, and let's not forget about all the security-related issues nobody likes to talk about.
sindriava 3 days ago:

So just to be clear, when you mention "AI" you're mostly talking about LLMs, right? Since most of these don't apply to expert systems.
jennyholzer 3 days ago:

HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA
|