antonvs 13 hours ago
> When Eliza was first built it was seen as a toy.

It was a toy, and that approach - hardcoded attempts at holding a natural language conversation - never went anywhere, for reasons that have been obvious since Eliza was first created. Essentially, the approach doesn't scale to anything actually useful. Winograd's SHRDLU was a great example of the limitations - it provided a promising-seeming natural language interface to a simple abstract world, but it notoriously ended up at roughly the peak of manageable complexity for the hardcoded approach to natural language. LLMs didn't grow out of work on programs like Eliza or SHRDLU. If people had been prescient enough to never bother with hardcoded NLP, it wouldn't have affected the development of LLMs at all.
kazinator 13 hours ago | parent
On what basis do we know that Eliza won't scale? Have we tried building an Eliza with a few gigabytes of question/response patterns? Prior to the rise of LLMs, any respectable AI researcher would have considered such a thing a waste of time, because it obviously isn't related to intelligence.
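[Editor's note: for readers unfamiliar with how Eliza-style programs work, here is a minimal sketch of the hardcoded pattern/response approach the thread is debating. The rules and phrasings below are invented for illustration and are not taken from the original program; "scaling" this approach means hand-writing ever more rules of this kind.]

    # Minimal Eliza-style pattern/response matcher (illustrative sketch only).
    import re
    import random

    # Each rule pairs a regex over the user's input with canned response
    # templates. A "few gigabytes of patterns" would just be more of these.
    RULES = [
        (re.compile(r"\bI need (.+)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"\bI am (.+)", re.I),
         ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (re.compile(r"\bmy (\w+)", re.I),
         ["Tell me more about your {0}."]),
    ]

    DEFAULTS = ["Please go on.", "I see.", "Can you elaborate on that?"]

    def respond(utterance: str) -> str:
        """Return the first matching canned response, or a default filler."""
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(DEFAULTS)

    print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"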