bwestergard 10 hours ago

I'm shocked to see how poorly these models, which I find useful day to day, do in solving virtually any of the problems in Unlambda.

Before looking at the results, my guess was that scores would be higher for Unlambda than for any of the others, because humans who learn Scheme don't find it all that hard to learn about the lambda calculus and combinatory logic.

But the model that did the best, Qwen-235B, got virtually every problem wrong.

__alexs 10 hours ago | parent [-]

They are also weirdly bad at Brainfuck which is basically just a subset of C.
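The "subset of C" framing comes from the fact that each of Brainfuck's eight commands maps to a one-line C statement. A toy interpreter (a sketch, using a hypothetical helper name `bf_run`; Python for brevity) makes the correspondence explicit:

```python
def bf_run(code: str, tape_len: int = 30000) -> str:
    """Toy Brainfuck interpreter; comments show the equivalent C statement."""
    tape = [0] * tape_len
    out = []
    ptr = pc = 0
    # Precompute matching bracket positions for [ and ]
    stack, jump = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == '>': ptr += 1                              # C: ++p;
        elif c == '<': ptr -= 1                            # C: --p;
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256   # C: ++*p;
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256   # C: --*p;
        elif c == '.': out.append(chr(tape[ptr]))          # C: putchar(*p);
        elif c == '[' and tape[ptr] == 0: pc = jump[pc]    # C: while (*p) {
        elif c == ']' and tape[ptr] != 0: pc = jump[pc]    # C: }
        pc += 1
    return ''.join(out)

# 8 * 8 + 1 = 65, the ASCII code for 'A'
print(bf_run("++++++++[>++++++++<-]>+."))  # → A
```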

culi 2 hours ago | parent | next [-]

Yeah, well, they also still struggle with "4 + 6 / 9", so I'm not sure why anyone is surprised by these findings.
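For what it's worth, the likely trap in that expression is operator precedence, not the arithmetic itself — division binds tighter than addition:

```python
# Division binds tighter than addition, so this is 4 + (6/9), not (4+6)/9.
print(4 + 6 / 9)    # ≈ 4.667
print((4 + 6) / 9)  # ≈ 1.111 — the common misreading
```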

astrange 7 hours ago | parent | prev [-]

BF involves a lot of repeated symbols, which is hard for tokenized models. Same problem as counting the r's in "strawberry".
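A toy illustration of the point (not a real tokenizer — the vocabulary is invented): a greedy longest-match tokenizer with BPE-like merges collapses a run of `+` into multi-character tokens, so the model never sees the individual symbols it would need to count:

```python
# Hypothetical BPE-like vocabulary; '+' runs have been merged into longer tokens.
# Ordered longest-first so greedy matching prefers the merged tokens.
VOCAB = ["++++", "++", "+", ">", "<", "[", "]", "-", "."]

def tokenize(text: str) -> list[str]:
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for tok in VOCAB:
            if text.startswith(tok, i):
                tokens.append(tok)
                i += len(tok)
                break
        else:
            tokens.append(text[i])
            i += 1
    return tokens

# Eight plus signs arrive as just two tokens; counting them requires
# knowing each token's length rather than seeing eight symbols.
print(tokenize("++++++++"))  # → ['++++', '++++']
```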

bwestergard 7 hours ago | parent [-]

Interesting. So why do the models seem to handle deeply nested Lisp expressions just fine?

kgeist 6 hours ago | parent [-]

Probably because there's a ton of code that deals with nested parentheses across languages in the training data, and models have learned to work around tokenization limitations when it comes to parentheses.
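The skill in question is essentially depth tracking — the kind of bookkeeping a Lisp reader does with a counter or stack. A minimal sketch (hypothetical helper name `max_nesting_depth`):

```python
def max_nesting_depth(src: str) -> int:
    """Track parenthesis nesting with a counter, as a Lisp reader would."""
    depth = max_depth = 0
    for ch in src:
        if ch == '(':
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == ')':
            depth -= 1
            if depth < 0:
                raise ValueError("unbalanced: extra ')'")
    if depth != 0:
        raise ValueError("unbalanced: missing ')'")
    return max_depth

print(max_nesting_depth("(define (f x) (* x (+ x 1)))"))  # → 3
```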