Show HN: Z80-μLM, a 'Conversational AI' That Fits in 40KB(github.com)
444 points by quesomaster9000 15 hours ago | 100 comments

How small can a language model be while still doing something useful? I wanted to find out, and had some spare time over the holidays.

Z80-μLM is a character-level language model with 2-bit quantized weights ({-2,-1,0,+1}) that runs on a Z80 with 64KB RAM. The entire thing (inference, weights, chat UI) fits in a 40KB .COM file that you can run in a CP/M emulator, and hopefully even on real hardware!

It won't write your emails, but it can be trained to play a stripped down version of 20 Questions, and is sometimes able to maintain the illusion of having simple but terse conversations with a distinct personality.

--

The extreme constraints nerd-sniped me and forced interesting trade-offs: trigram hashing (typo-tolerant, but it loses word order), 16-bit integer math, and some careful massaging of the training data to keep the examples 'interesting'.
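
Roughly, the trigram idea looks like this (a toy Python sketch of my own, not the actual Z80 code): each character trigram is hashed into a small bag-of-features vector. A typo only flips the few trigrams it touches, which gives typo tolerance, but word order is almost entirely lost.

```python
import zlib

def trigram_features(text, n_buckets=256):
    """Hash every character trigram into a fixed-size count vector."""
    text = f"  {text.lower()} "            # pad so short inputs still yield trigrams
    vec = [0] * n_buckets
    for i in range(len(text) - 2):
        tri = text[i:i + 3]
        vec[zlib.crc32(tri.encode()) % n_buckets] += 1
    return vec

def overlap(u, v):
    """Multiset overlap between two feature vectors."""
    return sum(min(x, y) for x, y in zip(u, v))

a = trigram_features("is it an animal")
b = trigram_features("is it an aminal")    # typo: shares most trigrams with a
c = trigram_features("an animal is it")    # reordered: looks almost identical to a
```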

The key was quantization-aware training that accurately models the inference code limitations. The training loop runs both float and integer-quantized forward passes in parallel, scoring the model on how well its knowledge survives quantization. The weights are progressively pushed toward the 2-bit grid using straight-through estimators, with overflow penalties matching the Z80's 16-bit accumulator limits. By the end of training, the model has already adapted to its constraints, so no post-hoc quantization collapse.
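
In spirit, a single QAT step looks something like this (a heavily simplified Python sketch, not the real training loop; the overflow penalty and the parallel float pass are omitted):

```python
import numpy as np

# The asymmetric 2-bit grid from the post.
GRID = np.array([-2.0, -1.0, 0.0, 1.0])

def quantize(w):
    """Snap every weight to its nearest point on the 2-bit grid."""
    return GRID[np.abs(w[..., None] - GRID).argmin(axis=-1)]

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(8, 4))     # float "shadow" weights
x = rng.normal(size=(16, 8))
y = rng.normal(size=(16, 4))

lr = 0.05
for _ in range(200):
    wq = quantize(w)                        # forward pass uses quantized weights
    err = x @ wq - y                        # loss is scored on the quantized model
    grad = x.T @ err / len(x)               # straight-through estimator: gradient
    w -= lr * grad                          # is applied to the float weights

# The deployed weights all lie on the 2-bit grid, so there is no
# post-hoc quantization step to collapse the model.
```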

Eventually I ended up spending a few dollars on Claude API to generate 20 questions data (see examples/guess/GUESS.COM), I hope Anthropic won't send me a C&D for distilling their model against the ToS ;P

But anyway, happy code-golf season everybody :)

nineteen999 13 hours ago | parent | next [-]

This couldn't be more perfectly timed .. I have an Unreal Engine game with both VT100 terminals (for running coding agents) and Z80 emulators, and a serial bridge that allows coding agents to program the CP/M machines:

https://i.imgur.com/6TRe1NE.png

Thank you for posting! It's unbelievable how someone sometimes just drops something that fits right into what you're doing. However bizarre it seems.

quesomaster9000 12 hours ago | parent | next [-]

Oh dear, it seems we've... somehow been psychically linked...

I developed a browser-based CP/M emulator & IDE: https://lockboot.github.io/desktop/

I was going to post that, but wanted a 'cool demo' instead, and fell down the rabbit hole.

stevekemp 5 hours ago | parent | next [-]

That is beautiful.

I wrote a console-based emulator, and a simple CP/M text-adventure game somewhat recently

https://github.com/skx/cpmulator/

At some point I should rework my examples/samples to become a decent test-suite for CP/M emulators. There are so many subtle differences out there.

It seems I could even upload a zipfile of my game, but the escape-codes for clearing the screen don't work, sadly:

https://github.com/skx/lighthouse-of-doom

jaak 8 hours ago | parent | prev [-]

I've been playing the Z80-μLM demos in your CP/M emulator. Works great! However, I have yet to guess a correct answer in GUESS.COM! I'm not sure if I'm just not asking the right questions or I'm just really bad at it!

quesomaster9000 7 hours ago | parent [-]

Don't tell anybody, but you sit on it

sailfast 2 hours ago | parent [-]

Boris!!!

sixtyj 12 hours ago | parent | prev | next [-]

Connections: Alternative History of Technology by James Burke documents these "coincidences".

TeMPOraL 11 hours ago | parent [-]

Those "coincidences" in Connections are really no coincidence at all, but path dependence. Breakthrough advance A is impossible or useless without prerequisites B and C and economic conditions D, but once B, C, and D are in place, A becomes the obvious next step.

embedding-shape 9 hours ago | parent [-]

Some of those really are coincidences, like "Person A couldn't find their left shoe and ended up in London at a coffee house, where Person B accidentally ended up when their carriage hit a wall, which led to them eventually coming up with Invention C" for example.

Although from what I remember from the TV show, most of what he investigates/talks about is indeed path dependence in one way or another, although not everything was like that.

simonjgreen 10 hours ago | parent | prev [-]

Super intrigued but annoyingly I can’t view imgur here

abanana 8 hours ago | parent [-]

Indeed, part of me wants to not use imgur because we can't access it, but a bigger part of me fully supports imgur's decision to give the middle finger to the UK after our government's censorship overreach.

homebrewer 4 hours ago | parent | next [-]

It blocks many more countries than just the UK because it's the lowest effort way of fighting "AI" scrapers.

imgur was created as a sort of protest against how terrible most image-hosting platforms were back then, went down the drain several years later, and is now just like they were.

supern0va a minute ago | parent [-]

It turns out that running free common internet infrastructure at scale is both hard and expensive, unfortunately. What we really need is a non-profit to run something like imgur.

wizzwizz4 6 hours ago | parent | prev [-]

It was a really clever move on Imgur's part. Their blocking the UK has nothing to do with the Online Safety Act: it's a response to potential prosecution under the Data Protection Act, for Imgur's (alleged) unlawful use of children's personal data. By blocking the UK and not clearly stating why, people assume they're taking a principled stand about a different issue entirely, so what should be a scandal is transmuted into positive press.

giancarlostoro 5 hours ago | parent | prev | next [-]

This is something I've been wondering about myself. What's the "Minimally Viable LLM" that can hold simple conversations? Then my next question is: how much can we push it so it can learn by looking up data externally? Can we build a tiny model with an insanely large context window? I have to assume I'm not the only one who has asked or thought of these things.

Ultimately, if you can build an ultra tiny model that can talk and learn on the fly, you've just fully localized a personal assistant like Siri.

andy12_ 3 hours ago | parent | next [-]

This is extremely similar to Karpathy's idea of a "cognitive core" [1]: an extremely small model with near-zero encyclopedic knowledge but basic reasoning and tool-use capabilities.

[1] https://x.com/karpathy/status/1938626382248149433

fho 3 hours ago | parent | prev | next [-]

You might be interested in RWKV: https://www.rwkv.com/

Not exactly "minimally viable", but a "what if RNNs were good for LLMs" case study.

-> insanely fast on CPUs

qingcharles 2 hours ago | parent | prev | next [-]

I think what's amazing to speculate about is how we could have had some very basic LLMs in at least the 90s if we'd invented the tech earlier. I wonder what the world would be like now if we had.

Dylan16807 4 hours ago | parent | prev [-]

For your first question, the LLM someone built in Minecraft can handle simple conversations with 5 million weights, mostly 8 bits.

I doubt it would be able to make good use of a large context window, though.

rahen 11 hours ago | parent | prev | next [-]

I love it, instant Github star. I wrote an MLP in Fortran IV for a punched card machine from the sixties (https://github.com/dbrll/Xortran), so this really speaks to me.

The interaction is surprisingly good despite the lack of attention mechanism and the limitation of the "context" to trigrams from the last sentence.

This could have worked on 60s-era hardware and would have completely changed the world (and science fiction) back then. Great job.

noosphr 9 hours ago | parent [-]

Stuff like this is fascinating. Truly the road not taken.

Tin foil hat on: I think a huge part of the major buyout of RAM by AI companies is to keep people from realising that we are essentially at the home-computer-revolution stage of LLMs. I have a 1TB RAM machine which, with custom agents, outperforms all the proprietary models. It's private, secure, and won't let me be monetized.

Zacharias030 9 hours ago | parent [-]

How so? Sounds like you're running Kimi K2 / GLM? What agents do you give it, and how do you handle web search and computer use well?

Dwedit 13 hours ago | parent | prev | next [-]

In before AI companies buy up all the Z80s and raise the prices to new heights.

nubinetwork 10 hours ago | parent [-]

Too late, they stopped being available last year.

whobre 8 hours ago | parent [-]

Kind of. There’s still eZ80

gcanyon 7 hours ago | parent | prev | next [-]

So it seems like with the right code (and maybe a ton of future infrastructure for training?) Eliza could have been much more capable back in the day.

antonvs 2 hours ago | parent [-]

The original ELIZA ran on an IBM 7094 mainframe, in the 1960s. That machine had 32K x 36-bit words, and no support for byte operations. It did support 6-bit BCD characters, packed 6 per word, but those were for string operations, and didn't support arithmetic or logical operations.

This means that a directly translated 40 KB Z80 executable might be a tight squeeze on that mainframe, because 40K > 32K, counting words, not bytes. Of course if most of that size is just 2-bit weight data then it might not be so bad.
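
The arithmetic, for the curious (my own back-of-the-envelope using the figures above):

```python
words_7094 = 32 * 1024                  # 32K words of 36 bits each
raw_bytes_7094 = words_7094 * 36 // 8   # raw storage, measured in 8-bit bytes
z80_binary_bytes = 40 * 1024            # the 40KB .COM file

# The 7094 holds ~144KB of raw bits, far more than the 40KB binary...
print(raw_bytes_7094)                   # 147456
# ...but a naive one-byte-per-word translation needs 40K words,
# which overflows the 32K-word address space.
print(z80_binary_bytes > words_7094)    # True
```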

ELIZA running on later hardware would have been a different story, with the Z80 - released in 1976 - being an example.

vedmakk 13 hours ago | parent | prev | next [-]

Suppose one trained an actual secret (e.g. a passphrase) into such a model, which a user would need to guess by asking the right questions. Could this secret be easily reverse engineered / inferred by having access to the model's weights, or would it be safe to assume that one could only get to the secret by asking the right questions?

Kiboneu 13 hours ago | parent | next [-]

I don’t know, but your question reminds me of this paper which seems to address it on a lower level: https://arxiv.org/abs/2204.06974

“Planting Undetectable Backdoors in Machine Learning Models”

“ … On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate "backdoor key", the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees. …”

ronsor 13 hours ago | parent | prev [-]

> this secret be easily reverse engineered / inferred by having access to the model's weights

It could with a network this small. More generally this falls under "interpretability."

roygbiv2 13 hours ago | parent | prev | next [-]

Awesome. I've just designed and built my own Z80 computer, though right now it has 32KB of ROM and 32KB of RAM. This will definitely change in the next revision, so I'll be sure to try it out.

wewewedxfgdf 13 hours ago | parent [-]

RAM is very expensive right now.

wickedsight 10 hours ago | parent | next [-]

I just removed 128 megs of RAM from an old computer and am considering listing it on eBay to pay off my mortgage.

nrhrjrjrjtntbt 9 hours ago | parent [-]

I wonder in what year 128MB of RAM would have paid off a mortgage. Maybe 1985.

tgv 12 hours ago | parent | prev [-]

We're talking kilobytes, not gigabytes. And it isn't DDR5 either.

boomlinde 10 hours ago | parent | next [-]

Yeah, even an average household can afford 40k of slow DRAM if they cut down on luxuries like food and housing.

wewewedxfgdf 10 hours ago | parent | next [-]

Maybe the rich can but not all retro computer enthusiasts are rich.

charcircuit 9 hours ago | parent | prev | next [-]

If you can afford to spend a few dollars without sacrificing housing or food, you are being financially irresponsible.

ant6n 9 hours ago | parent | prev [-]

Better cut down on the avocado toast!

nrhrjrjrjtntbt 9 hours ago | parent [-]

Then I can afford eggs, RAM and a studio apartment!

lacoolj 4 hours ago | parent [-]

Maybe in Ohio

fuzzfactor 14 minutes ago | parent [-]

No apartment then, maybe just green eggs and RAM.

StilesCrisis 7 hours ago | parent | prev [-]

thats-the-joke.gif

boznz an hour ago | parent | prev | next [-]

Great work. What is your timeline to AGI ?

fuzzfactor 12 minutes ago | parent [-]

Can't possibly be further than just around the corner.

MagicMoonlight 3 hours ago | parent | prev | next [-]

What I really want is a game where each of the NPCs has a tiny model like this, so you can actually talk to them.

orbital-decay 12 hours ago | parent | prev | next [-]

Pretty cool! I wish free-input RPGs of old had fuzzy matchers. They worked by exact keyword matching and it was awkward. I think the last game of that kind (where you could input arbitrary text when talking to NPCs) was probably Wizardry 8 (2001).

gwern 5 hours ago | parent | prev | next [-]

So if it's not using attention, and it processes the entire input into an embedding in one go, I guess this is neither a Transformer nor an RNN but just an MLP?

Peteragain 11 hours ago | parent | prev | next [-]

There are two things happening here: a really small LLM mechanism, which is useful for thinking about how the big ones work, and a reference to the well-known phenomenon, commonly and dismissively referred to as a "trick", in which humans want to believe. We work hard to account for what our conversational partner says. Language in use is a collective cultural construct. By this view, the real question is how and why we humans understand an utterance in a particular way. ELIZA, PARRY, and the Chomsky bot at http://chomskybot.com work on this principle. Just sayin'.

nrhrjrjrjtntbt 9 hours ago | parent [-]

MAYBE

cwmoore 3 hours ago | parent [-]

Universally correct reply, although honestly a bit vague.

Zee2 14 hours ago | parent | prev | next [-]

This is super cool. Would love to see a Z80 simulator set up with these examples to play with!

Imustaskforhelp 11 hours ago | parent [-]

100% Please do this! I wish the same

bartread 8 hours ago | parent | prev | next [-]

This is excellent. Thing I’d like to do if I had time: get it running on a 48K Spectrum. 10 year old me would have found that absolutely magical back in the 1980s.

tomduncalf 8 hours ago | parent [-]

This was my first thought too haha. That would be mind blowing

bartread 7 hours ago | parent [-]

Yeah, very WarGames.

EDIT: Actually thinking about it some more…

- Imagine what you could do with 16-bit games of the era with one or more of these models embedded. Swap the model depending on the use case within the game. Great for adventures, RPGs, strategy, puzzle, and trading games (think Elite). With 512K or 1MB of RAM, plus 2-4 floppies (which became increasingly common as the era wore on), you could probably do a lot, especially if conversations could steer the game toward different outcomes

- Back in the day nobody was really trying to do anything serious with AI on 8-bit or even most 16-bit machines, because nobody thought they were powerful enough to do anything useful with. Now the thinking has changed to: how much somewhat-useful intelligence can I cram into the least powerful device, even if that's only for fun?

- Imagine showing this running on a CP/M machine, like the C128, to a serious AI researcher working back in the 1980s. Minds blown, right?

- Now spool forward 10 years into the 1990s and think what PC hardware of that era would have been capable of with these limited language models. I wonder what that era might have looked like with something that seems like somewhat useful conversational AI? A sort of electro-steampunk-ish vibe maybe? People having really odd conversations with semi-capable home automation running via their PCs.

anonzzzies 12 hours ago | parent | prev | next [-]

Luckily I have a very large number of MSX computers, ZX, Amstrad CPC etc., and even one multiprocessor Z80 CP/M machine for the real power. I wonder how gnarly this is going to perform with bank switching though. Probably not well.

jacquesm 10 hours ago | parent | prev | next [-]

Between this and RAM prices, Zilog stock must be up! Awesome hack. Now apply the same principles to a laptop, take a megabyte or so, and see what that does.

coolius 6 hours ago | parent | prev | next [-]

This is impressive; those are some very restrictive requirements. I wonder what we'd be able to run on more powerful hardware such as the ESP32 or RP2040. Has anyone tried this?

andrepd 10 hours ago | parent | prev | next [-]

We should show this every time a Slack/Teams/Jira engineer tries to explain to us why a text chat needs 1.5GB of RAM to start up.

dangus 9 hours ago | parent [-]

> It won't write your emails, but it can be trained to play a stripped down version of 20 Questions, and is sometimes able to maintain the illusion of having simple but terse conversations with a distinct personality.

You can buy a kid’s tiger electronics style toy that plays 20 questions.

It’s not like this LLM is a bastion of glorious efficiency; it’s just stripped down to fit on the hardware.

Slack/Teams handles company-wide video calls and can render anything a web browser can, and they run an entire App Store of apps, all from a cross-platform application.

Including Jira in the conversation doesn’t even make logical sense. It’s not a desktop application that consumes memory. Jira has such a wide scope that the word “Jira” doesn’t even describe a single product.

ben_w 8 hours ago | parent | next [-]

> Slack/Teams handles company-wide video calls and can render anything a web browser can, and they run an entire App Store of apps, all from a cross-platform application.

The 4th Gen iPod touch had 256 meg of RAM and also did those things, with video calling via FaceTime (and probably others, but I don't care). Well, except "cross platform", what with it being the platform.

dangus 4 hours ago | parent [-]

Group FaceTime calls didn’t exist at the time. That wasn’t added until 2018 and required iOS 12.

Remember that Slack does simultaneous screen sharing from multiple participants, plus annotations, plus HD video feeds from all participants, while the entirety of the rest of the app continues to function as if you weren’t on a call at all.

It’s an extremely powerful application when you really step back and think about it. It just looks like “text” and boring business software.

ben_w an hour ago | parent [-]

> Group FaceTime calls didn’t exist at the time. That wasn’t added until 2018 and required iOS 12.

And CU-SeeMe did that in the early 90s with even worse hardware: https://en.wikipedia.org/wiki/File:CU-Schools.GIF

Even more broadly, group calls were sufficiently widely implemented to get themselves standardised 29 years ago: https://en.wikipedia.org/wiki/H.323

> It’s an extremely powerful application when you really step back and think about it. It just looks like “text” and boring business software.

The *entire operating system of the phone* is more powerful, and ran on less.

messe 9 hours ago | parent | prev | next [-]

> can render anything a web browser can

That's a bug not a feature, and strongly coupled to the root cause for slack's bloat.

dangus 4 hours ago | parent [-]

One person’s “bloat” is another person’s “critical business feature.”

The app ecosystem of Slack is largely responsible for its success. You can extend it to do almost anything you want.

andrepd 8 hours ago | parent | prev [-]

My Pentium 3 in 2005 could do chat and video calls and play chess and send silly emotes. There is no conceivable user-facing reason why, 20 years later, the same functionality takes 30× as many resources; only developer-facing reasons. But those are not valid reasons for a professional. If a bridge engineer claimed he now needed 30× as much concrete to build the same bridge as he did 20 years ago, and the reason was his own convenience, that would not fly.

ben_w 7 hours ago | parent | next [-]

> If a bridge engineer claims he now needs 30× as much concrete to build the same bridge as he did 20 years ago, and the reason is his/her own conveinence, that would not fly.

By itself, I would agree.

However, in this metaphor, concrete got 15x cheaper in the same timeframe. Not enough to fully compensate for the difference, but enough that a whole generation are now used to much larger edifices.

andrepd 6 hours ago | parent [-]

So that means you could save your client 93% of their money on concrete, but you choose to make it 2× more expensive! That only makes my metaphor stronger ahaha.

ben_w 2 hours ago | parent | next [-]

You could save 93% of the money in concrete, at the cost of ???* in the more-expensive-than-ever time of the engineer themselves who now dominates the sticker price.

(At this point the analogy breaks down because who pays for the software being slower is the users' time, not the taxes paid by a government buying a bridge from a civil engineer…)

* I don't actually buy the argument that the last decade or so of layers of "abstraction" save us developers any time at all, rather I think they're now several layers deep of nested inner platforms that each make things more complicated, but that's a separate entire thread, and blog post: https://benwheatley.github.io/blog/2024/04/07-21.31.19.html

beagle3 5 hours ago | parent | prev [-]

But also, there is more traffic on the bridge.

The word processors of 30 years ago often had limits like “50k chapters” and required “master documents” for anything larger. Lotus 1-2-3 had far fewer columns and rows than modern Excel.

Not an excuse, of course, but the older tools are not usable anymore if you have modern expectations.

dangus 4 hours ago | parent | prev [-]

I have great doubts that you were doing simultaneous screen sharing from multiple participants with group annotation, plus HD video in your group calls, all while supporting chat that allowed you to upload and view multiple animated GIFs, videos, rich formatted text, reactions, slash commands, and application automation integrations, all simultaneously, on your Pentium 3.

I would be interested to know the name of the program that did all that within the same app during that time period.

For some reason Slack gets criticism for being “bloated” when it basically does anything you could possibly imagine and is essentially a business communication application platform. Nobody can actually name a specific application that does everything Slack does with better efficiency.

andrepd 2 hours ago | parent [-]

You're grasping at anything to justify the unjustifiable. Not only did I do most (not all, obviously) of those things in my Pentium 3, including video and voice chat, screenshare, and silly animated gifs and rich text formatting, but also: that's beside the point. Let's compare like with like then; how much memory does it take to have a group chat with a few people and do a voice/video in MSN messenger or the original Skype, and how much does Slack or Teams take? What about UI stutter? Load time? There's absolutely no justification for a worse user experience in a 2025 computer that would be a borderline supercomputer in 2005.

vatary 12 hours ago | parent | prev | next [-]

It's pretty obvious this is just a stress test for compressing and running LLMs. It doesn't have much practical use right now, but it shows us that IoT devices are going to have built-in LLMs really soon. It's a huge leap in intelligence, kind of like the jump from apes to humans. That is seriously cool.

acosmism 11 hours ago | parent [-]

I'll echo that practicality only surfaces once it's apparent what can be done. Yeah, this feels like a "DOOM on a pregnancy test" type of moment.

DrNosferatu 4 hours ago | parent | prev | next [-]

Awesome! Anyone for a port to the MSX?

A web version would also be cool.

a_t48 13 hours ago | parent | prev | next [-]

Nice - that will fit on a Game Boy cartridge, though bank switching might make it super terrible to run. Each bank is only 16K. You can have a bunch of them, but you can only access one bank at a time (well, technically two: bank 0 is, IIRC, always accessible).

ant6n 9 hours ago | parent [-]

You have 32KB of ROM plus 8KB of RAM on the original Game Boy; the Game Boy Color has more. Bank switching is super fast, as well. Given that the model weights are likely streamed, I doubt bank switching is a problem.

Biggest pain point is likely the text input.

jasonjmcghee 14 hours ago | parent | prev | next [-]

For future projects and/or for this project, there are many LLMs available more than good enough to generate that kind of synthetic data (20 Qs) with permissive terms of use. (So you don’t need to stress about breaking TOS / C&D etc)

integricho 4 hours ago | parent | prev | next [-]

Someone add it to collapseos please :)

magicalhippo 12 hours ago | parent | prev | next [-]

As far as I know, the last layer is very quantization-sensitive, and is typically not quantized, or quantized lightly.

Have you experimented with having it less quantized, and evaluated the quality drop?

Regardless, very cool project.

kouteiheika 11 hours ago | parent [-]

(Not OP)

It depends on the model, but from my experiments (quantizing one layer of a model to 2-bit and then training the model with that layer in 2-bit to repair the damage), the first layer is the most sensitive, and yes, the last layer is sensitive too. The middle layers take quantization the best.

Different components of a layer also have a different sensitivity; e.g. the MLP downscale block damages the model the most when quantized, while quantizing the Q projection in self attention damages the model the least.
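
A toy version of that experiment (illustrative only, not my actual setup): quantize one layer at a time to a 2-bit grid and measure how far the network's output drifts from the full-precision reference.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = np.array([-2.0, -1.0, 0.0, 1.0])

def quantize(w):
    """Snap weights to the 2-bit grid, with a crude per-tensor scale."""
    scale = np.abs(w).max() / 2.0
    wn = w / scale
    return GRID[np.abs(wn[..., None] - GRID).argmin(axis=-1)] * scale

layers = [rng.normal(size=(16, 16)) for _ in range(4)]  # tiny 4-layer MLP
x = rng.normal(size=(32, 16))

def run(weights):
    h = x
    for w in weights:
        h = np.tanh(h @ w)
    return h

ref = run(layers)          # full-precision reference output
errors = []
for i in range(len(layers)):
    trial = list(layers)
    trial[i] = quantize(trial[i])           # quantize exactly one layer
    errors.append(np.mean((run(trial) - ref) ** 2))
    print(f"layer {i}: MSE vs float = {errors[-1]:.4f}")
```

In a real model the per-layer differences come from what each layer has learned, which this random-weight toy obviously can't show; it only demonstrates the measurement procedure.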

bitwize 7 hours ago | parent | prev | next [-]

Don't be surprised if you're paid a visit by the SCP Foundation: https://scp-wiki.wikidot.com/scp-079

(edit: change url)

alfiedotwtf 13 hours ago | parent | prev | next [-]

An LLM in a .com file? Haha made my day

teaearlgraycold 12 hours ago | parent [-]

SLM

quesomaster9000 12 hours ago | parent [-]

All the 'Small' language models and the 'TinyML' scene in general tend to bottom out at around a million parameters, hence I thought 'micro' was more apt at ~150k params.
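
Back-of-the-envelope (my arithmetic from the numbers in this thread): ~150k parameters at 2 bits each come to roughly 37KB, consistent with the weights dominating the 40KB .COM file.

```python
params = 150_000
bits_per_weight = 2
weight_bytes = params * bits_per_weight // 8

print(weight_bytes)   # 37500 bytes, leaving a few KB of the 40KB for code + UI
```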

pdyc 13 hours ago | parent | prev | next [-]

Interesting. I'm wondering how far it could go if we removed some of these limitations but tried to solve an extremely specific problem, like generating a regex from user input. I know small models (270M range) can do that, but can it be done in, say, the <10MB range?

Waterluvian 12 hours ago | parent [-]

Generate an LLM that is designed to solve one extremely specific problem: answering the ultimate question of life, the universe, and everything.

Even with modern supercomputing the computation would be outpaced by the heat death of the universe, so token output must be limited to a single integer.

nrhrjrjrjtntbt 9 hours ago | parent [-]

00101010

Y_Y 9 hours ago | parent | prev | next [-]

Very cool. Did you consider using sparse weights?

dirkt 13 hours ago | parent | prev | next [-]

Eliza's granddaughter.

bytesandbits 9 hours ago | parent | prev | next [-]

it's giving Eliza! Ha, fun

NooneAtAll3 11 hours ago | parent | prev | next [-]

did you measure tokens/s?

Zardoz84 12 hours ago | parent | prev | next [-]

Meanwhile, Eliza was ported to BASIC and ran on many home computers in the 80s.

codetiger 13 hours ago | parent | prev [-]

Imagine this working on a Game Boy back in the day. It would've sounded like magic.

qingcharles 2 hours ago | parent | next [-]

"Look, my Game Boy passes the Turing Test!"

*burns you at the stake*

Sharlin 13 hours ago | parent | prev | next [-]

I don’t think this could beat an ELIZA-style bot in how magical it feels, given the extreme terseness of its replies.

numpad0 9 hours ago | parent | prev | next [-]

Flip phones had predictive texts since forever. LLMs are just* supercharged predi[ctive text algorithms are computer algorithms that are]

lodovic 13 hours ago | parent | prev | next [-]

I love these thought experiments. Looking at the code size, it would have been possible for someone to come up with this back in the day, similar to the idea of a million monkeys on typewriters eventually producing Shakespeare.

alfiedotwtf 13 hours ago | parent | prev [-]

And would have lasted 3 minutes.

Speaking of - I remember my first digital camera (Fujitsu, 1MP resolution, using SmartMedia)… it used so much power that you could take 20-30 photos and then needed to replace all 4 batteries lol