kazinator 12 hours ago

How do we know that Eliza won't scale? Have we tried building an Eliza with a few gigabytes of question/response patterns?

Prior to the rise of LLMs, such a thing would have been considered a waste of time by any respectable AI researcher, because it obviously isn't related to intelligence.

antonvs 11 hours ago

Probably the biggest Eliza-like program is ALICE[1], which used a more formalized rule representation called AIML (Artificial Intelligence Markup Language). ALICE distributions are only single-digit megabytes in size.
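
For a flavor of what that rule representation looks like: an AIML rule is an XML "category" pairing an input pattern with a response template. A minimal, hand-written example (illustrative only, not taken from an actual ALICE distribution):

    <category>
      <pattern>HELLO *</pattern>
      <template>Hi there! How are you?</template>
    </category>

An ALICE personality is tens of thousands of such categories, which is why whole distributions fit in a few megabytes.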

Systems like that don't scale in terms of human effort - i.e. the amount of hand-authoring required isn't worth the value produced.

Aside from that, models like these had no real model of grammar. They responded to keywords, so their responses were often irrelevant to the input.
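
To make that concrete, here's a toy sketch in Python (my own illustration, not how ELIZA or ALICE were actually implemented; the rules and phrasings are invented) of keyword-triggered responses:

    import re

    # Toy keyword rules in the ELIZA spirit: scan for a keyword anywhere
    # in the input, emit a canned template, and ignore everything else.
    RULES = [
        (re.compile(r"\bcomputers?\b", re.I), "Do computers worry you?"),
        (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
        (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    ]

    def respond(text: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # fallback when no keyword fires

    # The keyword "computer" fires even though the sentence is about a
    # shop closing, so the canned reply is irrelevant to the input.
    print(respond("The computer store on Main Street finally closed."))
    # -> Do computers worry you?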

> "Prior to the rise of LLMs, such a thing would be a waste of time by any respectable AI researcher because it obviously isn't related to intelligence."

You might imagine so, but that wasn't really the case. ALICE won the Loebner Prize multiple times, for example. Before neural networks started "taking over", it wasn't obvious to everyone which direction AI progress would come from.

People even tried to extend ELIZA/ALICE-style models, with one of the most prominent examples being MegaHAL[2], which was entered in the Loebner Prize contest. MegaHAL used a Markov model, so it wasn't purely based on hardcoded rules, but like ELIZA and ALICE it still didn't understand grammar.
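
As an illustration of the difference, here's a first-order word-level Markov chain in Python (my own sketch; MegaHAL reportedly used higher-order models running both forwards and backwards, but the core idea is the same):

    import random
    from collections import defaultdict

    # Toy first-order word Markov chain: pick each continuation by
    # observed frequency in the training text, with no notion of
    # grammar at all.
    def train(corpus: str) -> dict:
        chain = defaultdict(list)
        words = corpus.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain: dict, start: str, max_words: int = 12) -> str:
        out = [start]
        for _ in range(max_words - 1):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    chain = train("the cat sat on the mat and the dog sat on the log")
    print(generate(chain, "the"))
    # e.g. -> "the dog sat on the mat and the cat sat on the"

Locally the output can look plausible, but there's no parse or plan behind it - which is exactly the "didn't understand grammar" limitation.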

[1] https://en.wikipedia.org/wiki/Artificial_Linguistic_Internet...

[2] https://en.wikipedia.org/wiki/MegaHAL