ryanjshaw 2 days ago
Every single HN post on AI or crypto I see this argument, and it's exhausting. When Eliza was first built it was seen as a toy. It took many more decades for LLMs to appear. My favourite example is prime numbers: a bunch of ancient nerds messing around with numbers that today, thousands of years later, allow us to securely buy anything and everything without leaving our homes or opening our mouths.

You can't dismiss a technology or discovery just because it's not useful on an arbitrary timescale. You can dismiss it for other reasons, just not this reason. Blockchain and related technologies have advanced the state of the art in various areas of computer science and mathematics research (zero-knowledge proofs, consensus, smart contracts, etc.). To allege this work will bear no fruit is quite a claim.
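For anyone curious how "ancient nerds messing with primes" turned into secure online purchases: here's a minimal textbook-RSA sketch in Python. The primes are toy-sized and there's no padding scheme — purely an illustration of the number theory, not anything production-grade:

```python
# Toy RSA: prime numbers -> public-key encryption.
# Real keys use primes hundreds of digits long plus padding (e.g. OAEP).

p, q = 61, 53                # two secret primes
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # Euler's totient: 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent (needs Python 3.8+): 2753

def encrypt(m):
    return pow(m, e, n)      # c = m^e mod n

def decrypt(c):
    return pow(c, d, n)      # m = c^d mod n

m = 65
c = encrypt(m)               # 2790
assert decrypt(c) == m       # round-trips back to 65
```

The whole security argument rests on factoring `n` back into `p` and `q` being hard at scale — exactly the kind of property nobody studying primes millennia ago could have predicted would matter.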
ka94 a day ago
The problem with this kind of argument is what I'd call the "Bozo the Clown" rejoinder: it's true that people spent a lot of time investigating things that decades (centuries, millennia) later came to be seen as useful. But it's also true that people spent a lot of time investigating things that didn't.

From the perspective of the present, when people are doing the investigating, a strange discovery that has no use can't easily be told apart from a strange discovery that has a use. All we can do in that present is judge the technology on its current merits — or try to advance the frontier. And the burden of proof is on those who try to advance it to show that it will be useful, because the default position (which holds for most discoveries) is that they're not going to have the kind of outsize impact centuries hence that number theory did.

Or in other words: it's a bad idea to assume that everybody who gets laughed at is a Galileo or Columbus, when they're more likely to be a Bozo the Clown.
antonvs a day ago
> When Eliza was first built it was seen as a toy.

It was a toy, and that approach — hardcoded attempts at holding a natural language conversation — never went anywhere, for reasons that have been obvious since Eliza was first created. Essentially, the approach doesn't scale to anything actually useful.

Winograd's SHRDLU was a great example of the limitations. It provided a promising-seeming natural language interface to a simple abstract world, but it notoriously ended up at about the limit of manageable complexity for the hardcoded approach to natural language.

LLMs didn't grow out of work on programs like Eliza or SHRDLU. If people had been prescient enough to never bother with hardcoded NLP, it wouldn't have affected the development of LLMs at all.
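To make the scaling problem concrete, here's a rough sketch of the Eliza-style approach — hand-written pattern/response rules (these particular rules are my own illustration, not Weizenbaum's originals). Every conversational move needs its own explicit rule, so coverage only grows by adding more of them:

```python
import re

# Eliza-style hardcoded NLP: a fixed list of pattern -> response rules.
# Broader coverage means hand-writing ever more rules -- this is the
# scaling wall the hardcoded approach hits.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Do you often feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # generic fallback when no rule matches

print(respond("I am tired"))  # -> Why do you say you are tired?
```

Anything outside the rule list falls through to the canned fallback, which is why these systems feel clever for two exchanges and then collapse — and why SHRDLU's blocks world was about as far as the approach could be pushed.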
pavlov a day ago
Research is fine. But when corporations and venture capitalists are asking for your money today in exchange for vague promises of eventual breakthroughs, it's not wrong to question their motives.