hansmayer 5 days ago

No, please, folks. Personally I have always been excited about AI as a scientific discipline and practical field, and still am. But let's please stop trying to make a dead-end application of an otherwise interesting technology work. It's like those people who kept trying to build electronics with vacuum tubes after transistors were invented. We need a transistor moment in AI, not more vacuum tubes.

anuramat 4 days ago | parent | next [-]

If language is a dead-end application of language models, I don't know what isn't; the tooling is architecture-agnostic anyway.

> after transistors were invented

But we don't have "transistors" yet, so what's your point, exactly?

hansmayer 4 days ago | parent [-]

Given the vast space of AI research results since the 1950s, I would not say that we don't have transistors yet, just that we are not applying them.

anuramat 3 days ago | parent [-]

So, researchers are insanely lazy/secretly against AI/controlled by Big Data?

hansmayer 3 days ago | parent [-]

I never said anything remotely similar to that; you must be projecting.

itsthecourier 5 days ago | parent | prev [-]

What are you talking about? How is this a dead end?

It improves over existing tools.

hansmayer 5 days ago | parent [-]

I am not disputing that it improves the tools. But looking at the entire picture, the whole concept of using LLMs as a general-purpose utility is a dead end. The basic arithmetic of it does not add up. If you told your manager you had spent 20,000 dollars on a project that generated 100 dollars of pre-tax revenue, i.e. a net loss of 19,900 USD, you'd be fired right away. But somehow the GenAI industry has a similar investment-to-revenue ratio on a much larger scale, and the wishful thinking is still in its fifth year. I get that people want to get in on the ride, but we keep having to add so much on top of it (constantly new plugins, tools, concepts, whatever), all so that we can avoid seeing this for what it is: building TVs with vacuum tubes when what we desperately need are transistors, not improved vacuum tubes. Just as we did not need faster horses in the era of the Ford Model T.

NullifyNAN 4 days ago | parent | next [-]

DeepSeek has shown that it makes a 500% profit, and it sells tokens at far lower prices than any big AI company.

https://www.reuters.com/technology/chinas-deepseek-claims-th...

These companies are unprofitable because of balance sheet shenanigans. See “Hollywood Accounting”.

There is absolutely no way they are not turning a massive profit. They are serving models relatively similar to open-source ones at 5-50x the price.

GLM 2.5 is $0.60 in and $2.20 out (USD per million tokens), and it's basically equivalent to Claude Opus.

Opus is $15 in and $75 out.

No way they’re operating at a massive loss.
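
For reference, a quick back-of-envelope sketch of the markup implied by those list prices. This is a minimal sketch only: it assumes both sets of figures are USD per million tokens and uses nothing beyond the numbers quoted above.

    # Rough markup implied by the prices quoted in this thread (assumed USD per million tokens).
    glm_in, glm_out = 0.60, 2.20    # "GLM 2.5" prices as cited above
    opus_in, opus_out = 15.0, 75.0  # Claude Opus prices as cited above

    print(f"input markup:  {opus_in / glm_in:.0f}x")    # ~25x
    print(f"output markup: {opus_out / glm_out:.0f}x")  # ~34x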

hansmayer 4 days ago | parent [-]

I have no idea about DeepSeek. But the US-based GenAI leaders are, in fact, operating at a massive loss.

93po 5 days ago | parent | prev [-]

OpenAI would be profitable if they stopped all investment and research and just sold their existing products. So this argument doesn't really match reality.

hansmayer 4 days ago | parent | next [-]

Sure, feel free to break down the numbers.

93po 4 days ago | parent [-]

In 2024 they had a $5 billion loss. About $3B of that was training, $1.5B was employees, and I'm sure there's at least another $0.5B of costs associated with building out rather than just serving inference; in reality it's probably several times that. So if you cut staff down to just maintaining what they have, fired all the researchers etc., stopped expansion, and stopped training, you'd be profitable. Which is dumb and they wouldn't do that, but my point isn't that it's realistic; it's that they could sell what they have at a profit if they wanted to.
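
A minimal back-of-envelope sketch of that arithmetic, using only the figures cited in this comment; the three-way cost split and the assumption that everything else is serving cost come from the comment, not from audited numbers.

    # Rough 2024 figures from the comment above, in billions of USD.
    reported_loss = 5.0  # total loss
    training = 3.0       # model training
    employees = 1.5      # research/engineering headcount
    buildout = 0.5       # expansion beyond serving inference ("at least")

    cuttable = training + employees + buildout
    hypothetical_result = -reported_loss + cuttable  # result after removing those costs
    print(f"hypothetical result after cuts: {hypothetical_result:+.1f}B USD")  # ~+0.0B, i.e. roughly break-even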

hansmayer 4 days ago | parent [-]

So they could be profitable, but the conditions for achieving that profitability are dumb and unrealistic. Your own words. Somehow you claim to have still made your point, because a company that fires all its employees and stops all product development could be profitable, right? Because that's what companies routinely do: they maximise profits by firing everyone once the product is mature enough to practically take care of itself. I wonder why all the e-commerce companies don't just apply this one simple trick. Is that the argument you are making?

Now for the calculations: are you sure the losses are only $5B? If we also account for the Microsoft-donated Azure credits they run a lot of their workloads on, it's probably a lot, lot more than that. Perhaps unaccounted for in OpenAI's books, but still a huge material investment that does not return anything to anyone, hence, by definition, a loss.

93po 3 days ago | parent [-]

I'm not sure what your original point was.

Either it's that serving AI models as a business is impossible to run at a profit, which I easily demonstrated is not the case: if it's just serving the model, then yes, it works, and there are tons of businesses doing exactly that and operating at a profit.

Or it's that the expense of even running a GPU to serve a model is not worth the value the model running on that GPU is capable of producing, which is demonstrably not true, given that people are paying anywhere from dozens to hundreds of dollars a month, so there is an eventual payback period for both the cost of the hardware and the electricity.
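
To make the payback arithmetic concrete, here is a hypothetical sketch; every number below is a made-up placeholder for illustration, not a figure from this thread.

    # Hypothetical payback-period sketch; all values are illustrative assumptions.
    gpu_cost = 30_000.0          # assumed up-front cost of one serving GPU, USD
    power_per_month = 300.0      # assumed electricity + hosting per month, USD
    revenue_per_month = 2_000.0  # assumed subscription revenue attributable to that GPU, USD

    margin_per_month = revenue_per_month - power_per_month
    payback_months = gpu_cost / margin_per_month
    print(f"payback period: {payback_months:.1f} months")  # ~17.6 months under these assumptions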

hansmayer 3 days ago | parent [-]

I think it was on you to make a point here, not me. What is it that you demonstrated? I only saw a lot of creative imagination and "could-be, would-be" scenarios.

Eggpants 4 days ago | parent | prev [-]

Citation needed.