karussell 11 hours ago

> What is the business model of open weight AI?

This is what I don't understand either; advertising for their more advanced (paid) models is the only motive that comes to my mind.

For the past month I have been using gemma4 locally on an MBP M2, successfully, for many search queries (Wikipedia-style questions). It is really good, fast enough (30-40 t/s), and feels nice because it keeps these queries private. But I don't understand why Google does this, so I think "we" need to find a better solution where the entire pipeline is open and the compute is somehow crowdfunded, because there will come a time when these local models get more closed, just as Android is closing down. One restriction they might enforce in the future is crippling the models on "sensitive" topics like cybersecurity or health. Or the government could even feel the need to force them to do so.
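As an aside, the "30-40 t/s" figure quoted above is easy to measure for any local setup. A minimal sketch of how such a number is obtained (the generator here is a hypothetical stand-in, not a real model call):

```python
import time

def measure_throughput(generate, prompt):
    """Time a local generation call and report tokens per second.
    `generate` is any callable that returns a list of tokens."""
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

# Stand-in generator so the sketch is self-contained: it pretends to
# emit 35 tokens over one second, i.e. roughly the rate quoted above.
def fake_generate(prompt):
    time.sleep(1.0)
    return ["tok"] * 35

rate = measure_throughput(fake_generate, "Who wrote On the Origin of Species?")
print(f"{rate:.1f} tokens/sec")  # roughly 35 on this stand-in
```

With a real local runtime you would swap `fake_generate` for the actual streaming call and count the tokens it emits.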

2ndorderthought 11 hours ago | parent | next [-]

Why would you want to support all users' simple queries on your AI data center if they could run them on their own computers?

It also builds goodwill, and it shows research prowess.

For China it's different. They need to show Americans, who don't trust them at all because of propaganda, that they have no tricks up their sleeve. It also doesn't hurt when Chinese companies drop free models people can run at home that are about as good as Sonnet. Serious mic drop.

TheJCDenton 10 hours ago | parent | next [-]

Very good point on using local AI to avoid data center costs.

Running AI models on local hardware was exploratory at first, and if it's so easy today, it's thanks to open source. It's a little bit coincidental that we have this today and that mainstream hardware has this capability. The fact that a phone can run very small models is exploratory, or some kind of marketing opportunity at best.

Why would hardware companies ship cards with more AI capabilities (like more VRAM) in the foreseeable future? On what grounds will the marketing for on-device AI keep generating interest? For something this important, it's very uncertain. But above all, it should not depend on these brittle justifications.

Showing goodwill in distribution and research prowess today is positive communication, but it can become exactly the opposite if/when an attack using those small models reaches a high-value target.

For China the cultural difference is so huge that it's difficult to say. I would think they first and foremost need to show everyone inside and outside of China that they match American models. Second, I would say that where Americans prefer a few very powerful companies from the get-go, because those can rapidly leverage a lot of capital to industrialize, China will prefer leveraging a lot of smaller companies exploring many things simultaneously (so doing a lot of research), THEN creating legislation to let only the best (or a few) survive. In the end it's the same result (monopoly or oligopoly), but China may end up with a stronger core (research) and America with stronger productive capital, which may prove obsolete... In the long run, it's a gamble on either side.

2ndorderthought 9 hours ago | parent | next [-]

They have already shown that their models match or exceed American ones in various cases. For cheaper, too.

I disagree on the second point. I think most Americans don't prefer less competition; that's a bit antithetical to the free market.

I doubt the Chinese government cares as much about controlling a few companies as you think they do.

China has a few things going for it beyond research. They are mission-driven, they actually have needs for this technology, and meeting those needs will move their entire economy forward, as they are the world's largest manufacturer. They are also huge exporters and have buckets of customer support in various languages.

China also has considerably stronger infrastructure for electricity, etc. Even with an Nvidia embargo, they are doing more than just showing up.

I don't think it's a matter of who "wins". There is no winning. I think China stands to gain far more from LLMs than the US does, and they have proven they don't need the US to do it, even with the US trying to sabotage its every move into the space. The game is already more or less over in my mind.

If anything, I see LLMs as having a huge market in China, and now the US can't even sell into it.

All I care about is: if I have to use this technology, let me run it locally to avoid the surveillance-capitalism aspect. That seems to be the real reason the US has propped up its economy in anticipation of this technology. Yet in the long term it benefits neither the US nor me.

codebje 7 hours ago | parent | prev [-]

I'd expect unified memory architectures (Apple M-series, AMD Ryzen AI series, etc) to be the future of local inference, not GPU cards.

2ndorderthought 7 hours ago | parent [-]

Time will tell. It depends on small-model architecture trends and hardware availability. I wouldn't be surprised if something came slightly out of left field. Considering Taiwan is locked into producing the same chips for the next two years, I wouldn't be surprised if a new player emerged.

karussell 11 hours ago | parent | prev [-]

Indeed, cost can be another factor. Maybe it's also the main reason why Chrome added an offline model.

2ndorderthought 10 hours ago | parent [-]

That, and it's lucrative for Android/Chrome to have a text-summarizer model embedded on your phone, probably for government contracts and data exfiltration, but we won't go there.

11 hours ago | parent | prev [-]
[deleted]