| ▲ | AlexandrB 2 days ago |
| The whole LLM era is horrible. All the innovation is coming "top-down" from very well funded companies - many of them tech incumbents, so you know the monetization is going to be awful. Since the models are expensive to run, it's all subscription-priced and has to run in the cloud, where the user has no control. The hype is insane, and so usage is being pushed by C-suite folks who have no idea whether it's actually benefiting someone "on the ground", and decisions around which AI to use are often being made on the basis of existing vendor relationships. Basically it's the culmination of all the worst tech trends of the last 10 years. |
|
| ▲ | dpe82 2 days ago | parent | next [-] |
| In a previous generation, the enabler of all our computer tech innovation was the incredible pace of compute growth due to Moore's Law, which was also "top-down" from very well-funded companies, since designing and building cutting-edge chips was (and still is) very, very expensive. The hype was insane, and decisions about what chip features to build were made largely on the basis of existing vendor relationships. Those companies benefited, but so did the rest of us. History rhymes. |
| |
| ▲ | JohnMakin 2 days ago | parent | next [-] | | Should probably change this to "was the appearance of an incredible pace of compute growth due to Moore's Law," because even my basic CS classes from 15 years ago were teaching that it was drastically slowing down, and that it isn't really a "law" so much as an observational trend that lasted a few decades. There are limits to how small you can make transistors, and we're not too far from them, at least not from the point where shrinking would continue to yield the results of that curve. | | |
| ▲ | noosphr 2 days ago | parent [-] | | The corollary to Moore's law, that computers get twice as fast every 18 months, died by 2010. People who didn't live through the 80s, 90s and early 00s, when you'd get a computer ten times as fast every 5 years, can't imagine what it was like back then. Today the only way to scale compute is to throw more power at it or settle for the ~5% per year real single-core performance improvement. |
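| (Those two figures are the same curve, compounded - a quick sketch of the arithmetic, using the rates cited above rather than any measured data:) |

    # Compounding the growth rates cited above (assumed rates, not benchmarks)
    doubling_months = 18
    months = 5 * 12
    print(f"{2 ** (months / doubling_months):.1f}x")  # doubling every 18 months: ~10.1x over five years
    print(f"{1.05 ** 5:.2f}x")                        # post-2010 regime of ~5%/year: ~1.28x over five years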
| |
| ▲ | BrenBarn 2 days ago | parent | prev | next [-] | | The difference is once you bought one of those chips you could do your own innovation on top of it (i.e., with software) without further interference from those well-funded companies. You can't do that with GPT et al. because of the subscription model. | | |
| ▲ | almogo 2 days ago | parent [-] | | Yes you can? Sure, you can't run GPT-5 locally, but get your hands on a proper GPU and you can still run some very sophisticated local inference. | | |
| ▲ | BrenBarn a day ago | parent [-] | | You can run some models locally, but many of them have license restrictions that prevent you from using them in certain ways. I can buy an Intel chip and deliberately use it to do things that hurt Intel's business (e.g., start a competing company). The big AI companies are trying very hard to make that kind of thing impossible by imposing constraints on the allowed uses of their models. |
|
| |
| ▲ | dmschulman 2 days ago | parent | prev | next [-] | | Eh, if this were true then IBM and Intel would still be the kings of the hill. Plenty of companies came from the bottom up out of nothing during the 90s and 2000s to build multi-billion-dollar companies that still dominate the market today. Many of those companies struggled for investment and grew over a long timeframe. The argument is that something like that is not really possible anymore, given the absurd upfront investments we're seeing existing AI companies need in order to further their offerings. | | |
| ▲ | dpe82 2 days ago | parent | next [-] | | Anthropic has existed for a grand total of 4 years. But yes, there was a window of opportunity when it was possible to do cutting-edge work without billions of investment. That window of opportunity is now past, at least for LLMs. Many new technologies follow a similar pattern. | | |
| ▲ | falcor84 2 days ago | parent [-] | | What about DeepSeek R1? That was earlier this year - how do you know that there won't be more "DeepSeek moments" in the coming years? |
| |
| ▲ | 3uler 2 days ago | parent | prev [-] | | Intel was king of the hill until 2018. | | |
| ▲ | BobbyTables2 2 days ago | parent [-] | | “Bobby, some things are like a tire fire: trying to put it out only makes it worse. You just gotta grab a beer and let it burn.” – Hank Rutherford Hill |
|
| |
| ▲ | HellDunkel 2 days ago | parent | prev [-] | | You completely forgot about the invention of the home computer. If we had all been logging into some mainframe computer using a home terminal, your assessment would be correct. |
|
|
| ▲ | simianwords 2 days ago | parent | prev | next [-] |
| This is a very pessimistic take. Where else do you think the innovation would come from? Take cloud, for example - where did the innovation come from? It was from the top. I have no idea how you came to the conclusion that this implies monetization is going to be awful. How do you know models are expensive to run? They have gone down in price repeatedly in the last 2 years. Why do you assume it has to run in the cloud when open-source models can perform well? > The hype is insane, and so usage is being pushed by C-suite folks who have no idea whether it's actually benefiting someone "on the ground", and decisions around which AI to use are often being made on the basis of existing vendor relationships There are hundreds of millions of weekly ChatGPT users. They didn't need a C-suite to push the usage. |
| |
| ▲ | AlexandrB 2 days ago | parent | next [-] | | > I have no idea how you came to the conclusion that this implies monetization is going to be awful. Because cloud monetization was awful. It's either endless subscription pricing or ads (or both). Cloud is a terrible counter-example because it started many awful trends that strip consumer rights. For example "forever" plans that get yoinked when the vendor decides they don't like their old business model and want to charge more. | | |
| ▲ | simianwords 2 days ago | parent | next [-] | | The vast majority of cloud users use AWS, GCP, and Azure, which have metered billing. I'm not sure what you are talking about. | |
| ▲ | throwaway98797 2 days ago | parent | prev | next [-] | | lots of start ups were built on aws i’d rather have a subscription than no service at all oh, and one can always just not buy something if it’s not valuable enough | |
| ▲ | Daz1 2 days ago | parent | prev [-] | | >Because cloud monetization was awful Citation needed |
| |
| ▲ | acdha 2 days ago | parent | prev | next [-] | | > Take cloud for example - where did the innovation come from? It was from the top. Definitely not. That came years later but in the late 2000s to mid-2010s it was often engineers pushing for cloud services over the executives’ preferred in-house services because it turned a bunch of helpdesk tickets and weeks to months of delays into an AWS API call. Pretty soon CTOs were backing it because those teams shipped faster. The consultants picked it up, yes, but they push a lot of things and usually it’s only the ones which actual users want which succeed. | | |
| ▲ | HotHotLava 2 days ago | parent | next [-] | | I'm pretty sure OP wasn't talking about the management hierarchy, but "from the top" in the sense that it was big established companies inventing the cloud and innovating and pushing in the space, not small startups. | | |
| ▲ | acdha 2 days ago | parent | next [-] | | That could be, I was definitely thinking of the management hierarchy since that difference has been so striking with AI. A lot of my awareness started in the academic HPC world, which was a bit ahead in needing high capacity of generic resources, but it felt like this came from the edges rather than the major IT giants. Companies like IBM, Microsoft, or HP weren't doing it, and some companies like Oracle or Cisco appeared to think that infrastructure complexity was part of their lock on enterprise IT departments, since places with complex hand-maintained runbooks weren't quick to switch vendors. Amazon at the time wasn't seen as a big tech company (they were where you bought CDs), and companies like Joyent or Rackspace had a lot of mindshare as well before AWS started offering virtual compute in 2006. One big factor in all of this was that x86 virtualization wasn't cheap until the mid-to-late 2000s, so a lot of people weren't willing to pay high virtualization costs, but without that you're talking services like Bingodisk or S3 rather than companies migrating compute loads. | |
| ▲ | pandemicsyn 2 days ago | parent | prev [-] | | Sure, Amazon was a big established co at the dawn of the cloud, and a little bit of an unexpected dark horse. None of the managed hosting providers saw Amazon coming. Also-rans like Rackspace were also pretty established by that point. But there was also cool stuff happening at smaller places like Joyent, Heroku, Slicehost, Linode, Backblaze, iron.io, etc. |
| |
| ▲ | simianwords 2 days ago | parent | prev [-] | | Sure, that's the same way GPT was invented at Google. |
| |
| ▲ | HarHarVeryFunny 2 days ago | parent | prev | next [-] | | The C-suite is pushing business adoption, and those are the GenAI projects of which 95% are failing. | | |
| ▲ | simianwords 2 days ago | parent | next [-] | | The other side of it is that lots of users are willingly purchasing the subscription without any push. | | |
| ▲ | HarHarVeryFunny 2 days ago | parent | next [-] | | Sure - there are use cases for LLMs that work, and use cases that don't. I think those actually using "AI" have a lot better idea of which are which than the C-suite folk. | |
| ▲ | ath3nd 2 days ago | parent | prev [-] | | And yet we fail to see an uptick in better, higher-quality software; if anything, AI slop is making OSS owners reject AI PRs because of their low quality. I'd wager the personal failure rate when using LLMs is probably even higher than the 95% in enterprise, but I'll wait to see the numbers. |
| |
| ▲ | og_kalu 2 days ago | parent | prev [-] | | That same report said a lot of people are just using personal accounts for work, though. |
| |
| ▲ | BobbyTables2 2 days ago | parent | prev [-] | | Cloud is just “rent to own” without the “own” part. |
|
|
| ▲ | awongh 2 days ago | parent | prev | next [-] |
| > All the innovation is coming "top-down" from very well funded companies - many of them tech incumbents What I always thought was exceptional is that it turns out it wasn't the incumbents who had the obvious advantage. Set aside the fact that everyone involved is already in the top 0.00001% echelon of the space (Sam Altman and everyone involved with the creation of OpenAI): if you had asked me 10 years ago who would have the leg up creating advanced AI, I would have said all the big companies hoarding data. It turns out just having that data wasn't a starting requirement for the generation of models we have now. A lot of the top players in the space are not the giant companies with unlimited resources. Of course this isn't the web or web 2.0 era, where the starting capital to start something huge was comparatively tiny, but it's interesting to see that the space allows brand-new companies to come out and be competitive against Google and Meta. |
|
| ▲ | crawshaw 2 days ago | parent | prev | next [-] |
| > All the innovation is coming "top-down" from very well funded companies - many of them tech incumbents The model leaders here are OpenAI and Anthropic, two new companies. In the programming space, the next leaders are Qwen and DeepSeek. The one incumbent is Google, which trails all four for my workloads. In the DevTools space, a new startup, Cursor, has muscled in on Microsoft's space. This is all capital-heavy, yes, because models are capital-heavy to build. But the Innovator's Dilemma persists. Startups lead the way. |
| |
| ▲ | lexandstuff 2 days ago | parent | next [-] | | And all of those companies except for Google are entirely dependent on NVIDIA, who are the real winners here. | |
| ▲ | nightski 2 days ago | parent | prev [-] | | At what point is OpenAI not considered new? It's a few months from being a decade old with 3,000 employees and $60B in funding. | | |
| ▲ | fshr 2 days ago | parent [-] | | Well, compare them to Microsoft: 50 years old with 228,000 employees and $282 billion in revenue. |
|
|
|
| ▲ | tedivm 2 days ago | parent | prev | next [-] |
| This is true only if you ignore the growing crop of open-source models. I'm running Qwen3-30B at home and it works great for most of the use cases I have. I think we're going to find that the optimizations coming from companies out of China are going to continue making local LLMs easier for folks to run. |
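| (For anyone curious what "running it at home" looks like in practice, here's a minimal sketch. It assumes an Ollama install that has already pulled a Qwen3-30B build and is serving its OpenAI-compatible endpoint on the default port; the exact model tag below is an assumption and varies by build.) |

    # Minimal local-inference sketch: query a locally served Qwen3-30B through
    # Ollama's OpenAI-compatible endpoint. No cloud, no per-token billing.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's default local endpoint
        api_key="unused",                      # required by the client, ignored locally
    )

    resp = client.chat.completions.create(
        model="qwen3:30b",  # assumed tag; check `ollama list` for what you pulled
        messages=[{"role": "user", "content": "Summarize Moore's law in one sentence."}],
    )
    print(resp.choices[0].message.content)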
| |
|
| ▲ | hintymad 2 days ago | parent | prev | next [-] |
| > The whole LLM era is horrible. All the innovation is coming "top-down" from very well funded companies Wouldn't it be the same for the hardware companies? Not everyone could build CPUs as Intel/Motorola/IBM did, not everyone could build mainframes like IBM did, and not everyone could build smartphones like Apple or Samsung did. I'd assume it boils down to the value of the LLMs instead of who has the moat. Of course, personally I really wish everyone could participate in the innovation as in the internet era, like training and serving large models on a laptop. I guess that day will come, like PCs over mainframes, but just not now. |
|
| ▲ | mlyle 2 days ago | parent | prev | next [-] |
| They've gotta hope they get to cheap AGI, though. Any stall in progress, either on chips or on smartness-per-FLOP, means there's a lot of surplus previous-generation gear that can keep up and commoditize it all out to open models. Just like how the "dot com bust" brought about an ISP renaissance on all the surplus, cheap-but-slightly-off-leading-edge gear. IMO that's the opportunity for a vibrant AI ecosystem. Of course, if they do get to cheap AGI, we're cooked: both from vendors having so much control and from the destabilization that will come to labor markets, etc. |
|
| ▲ | atleastoptimal 2 days ago | parent | prev | next [-] |
| Nevertheless, prices for LLMs at any given level of performance have dropped precipitously over the past few years. Regardless of how bad the decisions being made may seem, the decision-making process is both making an extreme amount of money for those in the AI companies and providing extremely cheap, high-quality intelligence for those using their offerings. |
| |
| ▲ | pimlottc 2 days ago | parent [-] | | Remember when you could get an Uber ride all the way across town for $5? It is way too early to know what these services will actually cost. | | |
| ▲ | atleastoptimal 2 days ago | parent [-] | | Is there an open-source Uber? There are multiple open-source AI models far beyond what SOTA was just 1 year ago. Even if they don't manage to drive prices down on the most recent closed models, they themselves will never cost more than a trivial amount above the compute they run on, and compute will only get more expensive if demand for AI continues to grow exponentially, which would likewise drive prices down due to competitive pressure. | | |
| ▲ | xigoi 19 hours ago | parent [-] | | > There are multiple open source AI models far beyond what SOTA was just 1 year ago. There are many models that call themselves open source, but the source is nowhere to be found, only the weights. |
|
|
|
|
| ▲ | chermi 2 days ago | parent | prev | next [-] |
| What's the counterfactual? Where would the world be today? Certainly the present is not an optimal allocation of resources; uncertainty and hysteresis make that impossible. But where do you think we'd be instead? Are you assuming all of those dollars would be going to research otherwise? They wouldn't; if not for the "AI" LLM hype, research funding would be at 2017 levels, plus or minus 25%. Also think of how many researchers are funded and PhDs are trained because of this awful LLM era. Certainly their skills transfer. (Not that brute-forcing with shit tons of compute is standard "research funding".) And for the record, I really wish more money was being thrown outside of LLMs. |
|
| ▲ | edg5000 2 days ago | parent | prev | next [-] |
| How can you dismiss the value of the tech so blatantly? Have you used Opus for general questions and coding? > no idea whether it's actually benefiting someone "on the ground" I really don't get it. Before, we were farmers plowing by hand, and now we are using tractors. I do totally agree with your sentiment that it's still a horrible development, though! Before Claude Code, I ran everything offline, all FOSS, and owned all my machines, servers, etc. Now I'm a subscription user. Zero control, zero privacy. That is the downside of it all. Actually, it's just like the mechanisation of farming! Collectivization in some countries was a nightmare for the small landowners who cultivated the land (probably with animals). They went from that to a more efficient, government-controlled collective farm, where they were just farm workers, with the land reclaimed through land reform. That was an upgrade for the efficiency of farming, needing fewer humans for it. But it was a huge downgrade for the individual small-scale landowners. |
|
| ▲ | conartist6 2 days ago | parent | prev | next [-] |
| Come to the counterrevolution; we have cookies : ) |
|
| ▲ | 3uler 2 days ago | parent | prev [-] |
| [flagged] |
| |
| ▲ | monax 2 days ago | parent [-] | | If you get a 10x speedup with an LLM, it means you are not doing anything new or interesting. | | |
| ▲ | 3uler 2 days ago | parent [-] | | That is 99% of software engineering: boring line-of-business CRUD applications or data pipelines. Most creativity is just doing a slightly different riff on something done before… Sorry to break it to you, but most of your job is just context engineering for yourself. |
|
|