LuxBennu 3 hours ago

The title is misleading — there's no trained 100B model, just an inference framework that claims to handle one. But the engineering is worth paying attention to. I run quantized 70B models locally (M2 Max 96GB, llama.cpp + LiteLLM), and memory bandwidth is always the bottleneck. The 1.58-bit approach is interesting because ternary weights turn matmuls into additions — a fundamentally different compute profile on commodity CPUs. If 5-7 tok/s on a single CPU for 100B-class models is reproducible, that's a real milestone for on-device inference. Framework is ready. Now we need someone to actually train the model.
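To illustrate (just a toy Python sketch, not the actual bitnet.cpp kernel): with weights restricted to {-1, 0, +1}, a dot product reduces to adds and subtracts of the activations, with no multiplies at all.

    # Toy sketch: ternary weights turn a dot product into adds/subtracts.
    def ternary_dot(weights, activations):
        acc = 0.0
        for w, x in zip(weights, activations):
            if w == 1:
                acc += x
            elif w == -1:
                acc -= x
            # w == 0 contributes nothing
        return acc

    # 1*0.5 + 0*2.0 + (-1)*1.5 + 1*(-0.25) = -1.25
    print(ternary_dot([1, 0, -1, 1], [0.5, 2.0, 1.5, -0.25]))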

embedding-shape 3 hours ago | parent | next [-]

> Framework is ready. Now we need someone to actually train the model.

If Microslop aren't gonna train the model themselves to prove their own thesis, why would others? They've had 2 years (I think?) to prove BitNet in at least some way, are you really saying they haven't tried so far?

Personally, that makes me slightly wary of just taking what they say at face value. Why wouldn't they train and publish a model themselves if this actually led to worthwhile results?

throwaw12 2 hours ago | parent | next [-]

Because this is Microsoft: experimenting and failing is not encouraged, taking less risky bets and getting promoted is. Also, no customer asked them for a 1-bit model, hence the PMs didn't prioritize it.

But that doesn't mean the idea is worthless.

You could have said the same about Transformers: Google released the paper but didn't move forward with it, and it turned out to be a great idea.

embedding-shape 2 hours ago | parent [-]

> You could have said the same about Transformers: Google released the paper but didn't move forward with it,

I don't think you can. Google looked at the research results and continued researching Transformers and related technologies, because they saw the value, particularly for translation. The intended direction is part of the original paper; give it a read, it's relatively approachable for a machine learning paper :)

Sure, it took OpenAI to make it into an "assistant" that answered questions, but it's not like Google was completely sleeping on the Transformer; they just had other research directions to pursue first.

> But that doesn't mean the idea is worthless.

I agree, it isn't; hope that wasn't how my message read :) But ideas that don't actually pan out in reality are slightly less useful than ideas that do pan out once put into practice. The root commenter seems to be saying "This is a great idea, it's all ready, the only missing piece is for someone to do the training and it'll pan out!", which I'm a bit skeptical about, since it's been two years since they introduced the idea.

Schlagbohrer an hour ago | parent | next [-]

Google had been working on a big LLM but they wanted to resolve all the safety concerns before releasing it. It was only when OpenAI went "YOLO! Check this out!" that Google then internally said, "Damn the safety concerns, full speed ahead!" and now we find ourselves in this breakneck race in which all safety concerns have been sidelined.

gardnr an hour ago | parent [-]

Scaling seemed like the important idea that everyone was chasing. OpenAI used to be a lot more safety-minded because it was in their non-profit charter; now they've gone for-profit and weaponized their tech for the US military. Pretty wild turnaround. Saying OpenAI was cavalier with safety in the early days is inaccurate. It was a skill issue. Remember Bard? Google was slow.

zozbot234 2 hours ago | parent | prev | next [-]

What OpenAI did was train increasingly large transformer model instances, which was sensible because transformers allowed training to scale up compared to earlier models. The resulting instances (GPT) showed good command of natural-language syntax and generated mostly sensible text (which was unprecedented at the time), so they made ChatGPT by adding new stages of supervised fine-tuning and RLHF to their pretrained text-prediction models.

wongarsu 2 hours ago | parent | prev [-]

On the one hand, not publishing any new models for an architecture in almost a year seems like forever, given how fast things are moving right now. On the other hand, I don't think that's very conclusive about whether they've given up on it or simply have other, higher-priority research directions to pursue first.

GorbachevyChase 2 hours ago | parent | prev | next [-]

The most benign answer would be that they don’t want to further support an emerging competitor to OpenAI, which they have significant business ties to. I think the more likely answer which you hinted at is that the utility of the model falls apart as scale increases. They see the approach as a dead end so they are throwing the scraps out to the stray dogs.

riskable 2 hours ago | parent [-]

Not to mention Microsoft's investments in Nvidia and other GPU-adjacent/dependent companies!

A successful ternary model would basically erase all that value overnight. In fact, the entire stock market could crash!

Think about it: This is Microsoft we're talking about! They're a convicted monopolist with a history of manipulating the market for IT goods and services. I wouldn't put it past them to refuse to invest in training a ternary model, or to go so far as buying up ternary startups just to shut them down.

Want to make some easy money? Start a business training a ternary model and make an offer to Microsoft. I bet they'll buy you out for at least a few million even if you don't have a product yet!

observationist an hour ago | parent | prev | next [-]

So is it finally time for a Beowulf cluster to do something amazing?

gregman1 3 hours ago | parent | prev [-]

Couldn't agree more!

wongarsu 3 hours ago | parent | prev | next [-]

I've also always thought this is an interesting opportunity for custom hardware. Two-bit addition is incredibly cheap in hardware, especially compared to anything involving floating point. You could make huge vector instructions on the cheap, then connect it to the fastest memory you can buy, and you have a capable inference chip.

You'd still need full GPUs for training, but for inference the hardware would be orders of magnitude simpler than what Nvidia is making.

monocasa 6 minutes ago | parent | next [-]

These are trits, which provide their own efficiencies.

Interestingly, a trit x float multiplier is cheaper than a trit x integer multiplier in hardware if you're willing to ignore things like NaNs.

Zero and one are trivial: just a mux for identity and zero. But because floats are sign-magnitude, multiplying by negative one is just an inverter on the sign bit, whereas for two's-complement integers you need a bitwise inverter and a full incrementer.
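Same point in software terms, as a rough Python sketch (assuming float32; the hardware argument is the same idea):

    # Negating an IEEE 754 float is just flipping the sign bit, while negating a
    # two's-complement integer is invert-then-increment: -x == (~x) + 1.
    import struct

    def negate_via_sign_bit(x: float) -> float:
        bits = struct.unpack('<I', struct.pack('<f', x))[0]  # float32 bit pattern
        return struct.unpack('<f', struct.pack('<I', bits ^ 0x80000000))[0]

    print(negate_via_sign_bit(3.5))  # -3.5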

regularfry 3 hours ago | parent | prev [-]

You only need GPUs if you assume the training is gradient descent. Genetic algorithms, or anything else that can handle the nonlinearities, would be fine, and possibly fast enough to be interesting.

WithinReason 3 hours ago | parent | prev | next [-]

> a fundamentally different compute profile on commodity CPUs

In what way? On modern processors, a Fused Multiply-Add (FMA) instruction generally has the exact same execution throughput as a basic addition instruction.

ismailmaj 2 hours ago | parent | next [-]

You drop the memory-throughput requirements because of the packed bit representation, so the FMA itself can become the bottleneck, and you bypass the problem of needing to upconvert the bits to whatever floating-point format the FMA instruction expects.

Typically, for 1-bit matmul you can get away with XORs and popcounts, which should have a better throughput profile than FMA once you take into account the SIMD nature of the inputs/outputs.
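Roughly like this (a toy Python sketch for sign-only (+1/-1) values, one bit per element; real kernels do the same thing across SIMD registers):

    # Matching sign bits multiply to +1, differing ones to -1, so the dot product
    # is n - 2 * popcount(w XOR x). Needs Python 3.10+ for int.bit_count().
    def binary_dot(packed_w: int, packed_x: int, n: int) -> int:
        disagreements = (packed_w ^ packed_x).bit_count()
        return n - 2 * disagreements

    # w = [+1, -1, +1, +1] -> 0b0010 (set bit = negative), x = [+1, -1, -1, +1] -> 0b0110
    print(binary_dot(0b0010, 0b0110, 4))  # 2, matching 1 + 1 - 1 + 1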

ActivePattern 38 minutes ago | parent | prev | next [-]

The win is in how many weights you process per instruction and how much data you load.

So it's not that individual ops are faster — it's that the packed representation lets each instruction do more useful work, and you're moving far less data from memory to do it.
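Back-of-envelope on the data-movement side (assumed sizes, just to show the order of magnitude, not measured numbers):

    # Weight bytes that must be streamed per generated token for a dense
    # 100B-parameter model, fp16 vs naive 2-bits-per-ternary-weight packing.
    params = 100e9
    fp16_gb = params * 2 / 1e9        # ~200 GB at 16 bits per weight
    packed_gb = params * 2 / 8 / 1e9  # ~25 GB at 2 bits per weight
    print(fp16_gb, packed_gb)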

actionfromafar 2 hours ago | parent | prev [-]

BitNet encoding being more information-dense per byte, perhaps? CPUs have slow memory buses, so it would eke more use out of the available bandwidth?

rustyhancock 3 hours ago | parent | prev | next [-]

Yes. I had to read it over twice; it does strike me as odd that there wasn't a base model to work with.

But it seems the biggest model available is 10B? Somewhat unusual, and it does make me wonder just how challenging it will be to train any model on the order of 100B.

wongarsu 3 hours ago | parent | next [-]

Approximately as challenging as training a regular 100B model from scratch. Maybe a bit more challenging because there's less experience with it.

The key insight of the BitNet paper was that using their custom BitLinear layer instead of normal Linear layers (along with some other training and architecture changes) leads to much, much better results than quantizing an existing model down to 1.58 bits. So you end up doing a full training run in bf16 precision using the specially adapted model architecture.
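A very simplified sketch of that recipe (bf16 shadow weights ternarized on the forward pass with a straight-through estimator; the real BitLinear also quantizes activations and changes the normalization, so treat this as an outline, not the paper's exact layer):

    import torch

    def ternarize(w: torch.Tensor) -> torch.Tensor:
        # Scale by the mean absolute weight, then round and clip to {-1, 0, +1}.
        scale = w.abs().mean().clamp(min=1e-5)
        return (w / scale).round().clamp(-1, 1) * scale

    class BitLinear(torch.nn.Linear):
        def forward(self, x):
            w = self.weight
            w_q = w + (ternarize(w) - w).detach()  # straight-through estimator
            return torch.nn.functional.linear(x, w_q, self.bias)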

naasking an hour ago | parent | prev [-]

What's unusual about it? It seems pretty standard to train small models to validate an approach, and then show that training scales with model size up to the 8B-14B parameter range, which is what they did.

webXL 2 hours ago | parent | prev | next [-]

It comes from (intentionally?) misleading docs: https://github.com/microsoft/BitNet/issues/391

(only suggesting that it's intentional because it's been there so long)

verdverm 2 hours ago | parent [-]

That issue appears to be the one that's wrong. From the technical report

> We evaluated bitnet.cpp in terms of both inference speed and energy cost. Comprehensive tests were conducted on models with various parameter sizes, ranging from 125M to 100B. specific configurations for each model are detailed in the Appendix A.

webXL 34 minutes ago | parent [-]

Thanks for pointing that out. I'll ask the issue creator if they've considered that. Would be nice if the maintainer would handle that (sigh) and link to the actual models used for testing (double sigh).

august11 3 hours ago | parent | prev | next [-]

In their demo they're running a 3B model.

RandomTeaParty an hour ago | parent | prev | next [-]

> The 1.58-bit approach

Can we stop already with these decimals and just call it "1 trit", which is exactly what it is?

cubefox 3 hours ago | parent | prev | next [-]

LLM account

hrmtst93837 2 hours ago | parent | next [-]

I browsed through the user's history and can confirm this statement. I know there are users who say they used em-dashes even before the rise of ChatGPT, and HN statistics support that; one prominent example is dang.

However, this user uses "—" in almost all his posts, and he was posting at a rate of about one comment per minute across multiple different topics.

Springtime 2 hours ago | parent | prev | next [-]

Hmm, the user joined in 2019 but had no submissions or comments until just 40 minutes ago (at least judging by the lack of a second page?), and all the comments are on AI-related submissions. Giving the benefit of the doubt, it would have to be a very dedicated lurker or a dormant account they remembered they had.

Edit: oh, I just recalled that dang restricted Show HNs the other day to non-new users only (possibly with some other thresholds). I wonder if word got out and some are filling accounts with activity.

verdverm 2 hours ago | parent [-]

There has been a shift in the AI accounts; they use Show HN less now. This started before dang's comment, I assume because they saw the earlier posts about the increase in quantity / decrease in quality.

I suspect that they are trying to fake engagement prior to making their first "show" post as well.

Jowsey an hour ago | parent | prev | next [-]

Agreed. This is becoming an issue; see also: https://news.ycombinator.com/item?id=47259308

orbital-decay 3 hours ago | parent | prev | next [-]

Funnily enough, I now involuntarily take RTFA as a slight slop signal, because all these accounts dutifully read the article before commenting, unlike most HNers, who often respond to headlines.

vova_hn2 2 hours ago | parent | next [-]

First they claimed that if you use em dashes you are not human

And I did not speak out

Because I was not using em dashes

Then they claimed that if you're crammar is to gud you r not hmuan

And I did not spek aut

Because mi gramar sukcs

Then they claimed that if you actually read the article that you are trying to discuss you are not human...

K0balt 2 hours ago | parent [-]

I’ve been rounded up for things I wrote two decades ago because of my em dashes lol. The pitchfork mentality gives me little hope for how things are going to go once we have hive mind AGI robots pervasive in society.

vova_hn2 2 hours ago | parent [-]

If I was operating a bot farm, at this point I would probably add some bots that go around and accuse legit human users (or just random users) of being bots.

The confusion and frustration this creates would make it much harder for most people to separate signal from noise.

yorwba 3 hours ago | parent | prev | next [-]

Not all of them do: https://news.ycombinator.com/item?id=47335156 There are evidently lots of people experimenting with different botting setups. Some do better at blending in than others.

PeterHolzwarth 2 hours ago | parent [-]

Interesting - the account you mention and the GP are both posting replies that are all about the same length, and the same length across the two accounts as well. I get what you mean.

cubefox 3 hours ago | parent | prev [-]

Yeah. It correctly pointed out that the editorialized HN title is wrong: there is no 100B model.

nkohari 2 hours ago | parent | prev [-]

I would love to understand the thought process behind this. I'm sure it's a fun experiment, to see if it's possible and so on... but what tangible benefit could there be to burning tokens to spam comments on every post?

cyanydeez 2 hours ago | parent | prev | next [-]

Check out the new Qwen coder model.

Also, aren't there different affinities for 8-bit vs 4-bit inference?

butILoveLife 3 hours ago | parent | prev [-]

> I run quantized 70B models locally (M2 Max 96GB, llama.cpp + LiteLLM), and memory bandwidth is always the bottleneck.

I imagine you got 96GB because you thought you'd be running models locally? Did you not know that "Unified Memory" is marketing speak?