grumpopotamus 8 hours ago

There is a TCEC category for 4k engines. The top ones are ~3000 Elo.

sigmoid10 8 hours ago | parent [-]

It's wild to think that 4096 bytes are sufficient to play chess on a level beyond anything humans ever achieved. It makes you wonder what other difficult tasks are out there that take even highly gifted humans years or decades to master, yet where a superior algorithm would more or less fit into one of those big QR code formats.

These things always make me think back to Westworld season 2, where the finale revealed that human minds are much simpler than they themselves believe and fit completely into an algorithm that could be printed in an average book.

vunderba 8 hours ago | parent | next [-]

Well, one of the most fundamental algorithms for building a chess AI is minimax [1] (or variants like negamax), and that’s been around for close to a century. The key difference is that as compute power and available RAM have grown, it’s become possible to search much deeper and evaluate far more plies.

So while 4k is still very impressive for the code base, it comes with a significantly larger runtime footprint.
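
As a rough illustration (not taken from any of the 4k engines), a depth-limited negamax fits in a handful of lines. This sketch assumes the third-party python-chess package for move generation and uses a deliberately crude material-count evaluation:

    # Minimal depth-limited negamax sketch. Assumes the python-chess
    # package (pip install chess); evaluation is a crude material count.
    import chess

    PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                    chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

    def evaluate(board):
        """Material balance from the side-to-move's point of view."""
        score = 0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == board.turn else -value
        return score

    def negamax(board, depth):
        """Best achievable score = max over our moves of the negated
        value of the opponent's best reply."""
        if board.is_checkmate():
            return -10**6                  # side to move is mated
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        best = -10**9
        for move in board.legal_moves:
            board.push(move)
            best = max(best, -negamax(board, depth - 1))
            board.pop()
        return best

    # Example: negamax(chess.Board(), 3) scores the start position 3 plies deep.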

[1] - https://en.wikipedia.org/wiki/Minimax

senfiaj 4 hours ago | parent [-]

Minimax + alpha-beta pruning is the backbone of chess engines. I think 2 KiB or even 1 KiB might be enough (the latter would be a very challenging squeeze). But what separates the best engines from average ones is the heuristics, which are the most complicated part, and I doubt they could be fit into single-digit kilobytes of memory (or even double-digit). For the heuristics, engines like Stockfish also use neural networks in addition to hand-crafted algorithms. Huge tables are also used for endgames, etc.
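
For illustration, alpha-beta is only a few lines on top of plain negamax. This toy version reuses the evaluate() from the sketch above and looks nothing like Stockfish's actual search:

    # Alpha-beta pruning bolted onto the negamax sketch above: once a move is
    # good enough to refute the opponent's previous choice (score >= beta),
    # the remaining sibling moves are skipped entirely.
    import chess

    def alphabeta(board, depth, alpha, beta):
        if board.is_checkmate():
            return -10**6
        if depth == 0 or board.is_game_over():
            return evaluate(board)           # material-count evaluate() from the sketch above
        for move in board.legal_moves:
            board.push(move)
            score = -alphabeta(board, depth - 1, -beta, -alpha)
            board.pop()
            if score >= beta:
                return beta                  # fail-hard cutoff
            alpha = max(alpha, score)
        return alpha

    # Example: alphabeta(chess.Board(), 4, -10**9, 10**9)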

kevmo314 8 hours ago | parent | prev | next [-]

The core search algorithm is very simple though. 4KB engines may not run that fast if they do exhaustive search, but they’ll be quite accurate.

According to TCEC, the time control is 30 min + 3 s, and that's a lot of compute!

sigmoid10 8 hours ago | parent [-]

If you look at the current winner [1], it does a lot more than just brute-force tree search. The state space of chess is simply too big to cover without good heuristics. Deep Blue may have been a pure brute-force approach to beating Kasparov after Deep Thought failed with the same core algorithm, but modern chess engines search far deeper into the tree while visiting far fewer nodes than Deep Blue ever could, thanks to better heuristics.
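
Even a trivial heuristic shows the effect: ordering captures before quiet moves (a crude stand-in for the much richer move ordering real engines use) lets alpha-beta cut off earlier, so the same depth costs far fewer nodes. A hypothetical sketch on top of the python-chess toys earlier in the thread:

    # Toy illustration of heuristics shrinking the tree: trying promising
    # moves first lets alpha-beta prune far more of the remaining siblings.
    # board.is_capture() is from python-chess, as in the sketches above.
    def ordered_moves(board):
        return sorted(board.legal_moves,
                      key=board.is_capture,   # captures first
                      reverse=True)

    # Swap this in for board.legal_moves inside the alphabeta() loop; node
    # counts typically drop sharply, which is the depth-for-nodes win.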

[1] https://github.com/MinusKelvin/ice4

kevmo314 7 hours ago | parent [-]

I'm not suggesting that it's only brute-force tree search, just that it's not very complicated to develop a theoretically perfect chess engine, in direct response to the parent:

> It's wild to think that 4096 bytes are sufficient to play chess on a level beyond anything humans ever achieved.

gnramires 3 hours ago | parent | prev [-]

It's not just about the base algorithm; it's also about the memory needed to run it, and the clock speed. For example, take the hardest problem you can imagine: if it has a verifier algorithm that fits in 4k (which means the solution itself can be much larger than 4k), then you can simply do a basic brute-force search over the solution space. That doesn't mean the algorithm is very intelligent; it's only very capable if you have a sufficiently fast computer. In practice, brute force is only feasible for the simplest tasks, so the idea that algorithms (of increasing size) enable (greater) intelligence is definitely part of the story, but not the whole story. You can also think of DNA, which is a recipe for our bodies and brains, which we then use (essentially as an "algorithm") to learn things, with degrees of freedom (memory) far surpassing what DNA stores.
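
To make that concrete, here's a toy sketch (subset-sum standing in for the "hard problem"; everything here is purely illustrative): the verifier is a couple of lines, and exhaustive search does the rest given enough time.

    # Tiny verifier + exhaustive search: the checker fits in a few lines,
    # and the "solver" is just enumeration over the solution space. It
    # needs no intelligence at all, only (exponential) time.
    from itertools import combinations

    def verify(subset, target):
        """Cheap check of a proposed solution."""
        return sum(subset) == target

    def brute_force(numbers, target):
        """Try every candidate subset until the verifier accepts one."""
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if verify(subset, target):
                    return subset
        return None

    # brute_force([3, 34, 4, 12, 5, 2], 9) -> (4, 5)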

Now if you had a very good chess program running in very constrained (dynamic/RAM) memory, that would be somewhat more revealing. From a cursory search, there's an 1800 Elo engine for the C64, which seems very impressive but is still very far from the best human players.

I'd be interested to see a curve of Elo vs. available RAM for the best chess engines (the best achievable within a given RAM budget), and how that compares to other games and activities.

On RAM vs. ROM (program size): at a high level, I think dynamic memory helps you keep track of search paths in a large tree search, saving you some computation, while program size tends to go toward improving the effectiveness of your search heuristic, as well as pre-computing things like opening and endgame moves (potentially saving arbitrarily much compute). I like thinking about these things because the search paradigm is pretty informative about computation (and even intelligence) in general. Almost every problem is basically some kind of heuristic search in some kind of space, and you tend to get better at things by refining your heuristics (usually through some experimental training process or theoretical insight), considering more options, exploring deeper consequences, etc.
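
A sketch of that RAM-for-compute trade, building on the alpha-beta toy from earlier in the thread (real engines use Zobrist hashing and store bound information; this is just the idea):

    # Transposition table: spend RAM to remember positions already searched,
    # so lines that transpose into one another are only evaluated once.
    # Builds on the alphabeta() toy earlier in the thread; purely a sketch.
    transposition_table = {}   # position key -> (depth, score); grows with available RAM

    def alphabeta_tt(board, depth, alpha, beta):
        key = board.fen()                       # crude key; fine for a sketch
        cached = transposition_table.get(key)
        if cached is not None and cached[0] >= depth:
            return cached[1]                    # reuse earlier work instead of re-searching
        score = alphabeta(board, depth, alpha, beta)
        transposition_table[key] = (depth, score)
        return score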

I think what really defines humans isn't really our ability to solve problems or play chess well etc. (although that's extremely useful and also enjoyable most of the time), it's really our emotions and inner world. We are not really Thinking Machines in essence, we're Feeling Machines most significantly. The thinking part is a neat instrumental part :) We can delegate thinking to machines but what we cannot extinguish is feeling or the human "soul", because that is the source of all meaning.