antonvs 13 hours ago

> When Eliza was first built it was seen as a toy.

It was a toy, and that approach - hardcoded attempts at holding a natural language conversation - never went anywhere, for reasons that have been obvious since Eliza was first created. Essentially, the approach doesn't scale to anything actually useful.
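To make "hardcoded" concrete, here's a toy Python sketch in the spirit of Eliza's pattern/template rules (the rules are invented for illustration; Eliza's actual script also ranked keywords and kept a memory stack):

    import re

    # Hypothetical Eliza-style rules: a regex decomposition pattern
    # paired with a reassembly template.
    RULES = [
        (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]

    def respond(text):
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # fallback when nothing matches

    print(respond("I am tired of arguing"))   # Why do you say you are tired of arguing?
    print(respond("Quantum chromodynamics"))  # Please go on.

Every new topic or phrasing needs another handwritten rule, which is why the effort grows without bound relative to the coverage you get.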

Winograd's SHRDLU was a great example of the limitations - it provided a promising-seeming natural language interface to a simple abstract blocks world - but it notoriously ended up at or beyond the limit of manageable complexity for the hardcoded approach to natural language.

LLMs didn't grow out of work on programs like Eliza or SHRDLU. If people had been prescient enough to never bother with hardcoded NLP, it wouldn't have affected the development of LLMs at all.

kazinator 13 hours ago

How do we know that Eliza won't scale? Have we tried building an Eliza with a few gigabytes of question/response patterns?

Prior to the rise of LLMs, such a thing would have been considered a waste of time by any respectable AI researcher, because it obviously isn't related to intelligence.

antonvs 12 hours ago

Probably the biggest Eliza-like program is ALICE[1], which used a more formalized rule representation called AIML. The size of ALICE distributions is in the single-digit megabytes.

Systems like that don't scale in a human-effort sense - i.e. the amount of hand-authoring effort required is out of all proportion to the value produced.

Aside from that, such systems had no real model of grammar. They responded to keywords, which meant that their responses were often not relevant to the input.
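For illustration, keyword dispatch in its simplest form looks something like this (hypothetical rules, not actual AIML):

    # The first keyword hit wins; sentence structure, negation and
    # context are all ignored.
    KEYWORD_RESPONSES = {
        "mother": "Tell me more about your family.",
        "computer": "Do computers worry you?",
    }

    def keyword_reply(text):
        for word in text.lower().split():
            if word in KEYWORD_RESPONSES:
                return KEYWORD_RESPONSES[word]
        return "I see."

    print(keyword_reply("My mother is visiting"))         # Tell me more about your family.
    print(keyword_reply("I would never call my mother"))  # same canned reply
    print(keyword_reply("Delete my computer files"))      # Do computers worry you?

The reply is triggered by a single token, so it can easily be a non sequitur with respect to what was actually said.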

> "Prior to the rise of LLMs, such a thing would be a waste of time by any respectable AI researcher because it obviously isn't related to intelligence."

You might imagine so, but that wasn't really the case. ALICE won the Loebner Prize multiple times, for example. Before neural networks started "taking over", it wasn't obvious to everyone what direction AI progress might come from.

People even tried to extend ELIZA/ALICE-style models, one of the most prominent examples being MegaHAL[2], which was also entered in the Loebner Prize contest. MegaHAL used a Markov model, so it wasn't purely based on hardcoded rules, but like ELIZA and ALICE it still didn't understand grammar.
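For contrast, here's a toy sketch of the Markov-chain idea (not MegaHAL's actual algorithm, which used higher-order models running both forward and backward from a keyword):

    import random
    from collections import defaultdict

    # Second-order Markov generator: learn which word follows each word
    # pair, then sample a chain. Pure local statistics; no grammar.
    def train(text):
        model = defaultdict(list)
        words = text.split()
        for a, b, c in zip(words, words[1:], words[2:]):
            model[(a, b)].append(c)
        return model

    def generate(model, a, b, length=12):
        out = [a, b]
        for _ in range(length):
            followers = model.get((a, b))
            if not followers:
                break
            a, b = b, random.choice(followers)
            out.append(b)
        return " ".join(out)

    model = train("the cat sat on the mat and the dog sat on the cat")
    print(generate(model, "the", "cat"))

The output is locally plausible word sequences with no syntactic or semantic model behind them, which is why such systems could be entertaining without being useful.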

[1] https://en.wikipedia.org/wiki/Artificial_Linguistic_Internet...

[2] https://en.wikipedia.org/wiki/MegaHAL