jdiff (a day ago):
Like most quirks that spread widely, a band-aid is swiftly applied. This is also why they now know how many r's are in "strawberry." But we don't get any closer to useful general intelligence by cobbling together thousands of hasty patches.
llbbdd (19 hours ago):
Seems to have worked fine for humans so far.
bigstrat2003 (14 hours ago):
No, humans are not a series of band-aid patches where we learn facts in isolation. A human can reason, and when exposed to novel situations can figure out a path forward. You don't need to tell a human how many r's are in "strawberry"; as long as they know what the letter r is, they can count it in any word you choose to give them. As proven time and time again, LLMs can't do this. The embarrassing failure of Claude to figure out how to play Pokemon a year or so ago is a good example. You could hand a five-year-old a Game Boy with Pokemon in it, and he could figure out how to move around and do the basics. He wouldn't be very good, but he would figure it out as he goes. Claude couldn't figure out how to stop going in and out of a building. LLMs, usefulness aside, have repeatedly shown themselves to have zero intelligence.
llbbdd (13 hours ago):
I was referring not to individual learning ability but to natural selection and evolutionary pressure, which IMO is easy to describe as a band-aid patch that takes a generation or more to apply.
vlovich123 (12 hours ago):
You would be correct if these issues were fixed by structurally fixing the LLM. Instead, they're patched through RL and dataset management. That's a very different and more brittle process: the evolutionary approach fixes classes of issues, while the RL approach fixes specific instances of issues.
llbbdd (7 hours ago):
Sure, and I'd be the first to admit I'm not aware of the intricate details of how LLMs are trained and refined; it's not my area. My original comment was a disagreement with the rather casual dismissal of the idea: the construction of humanity has been an incremental zig-zag process, and I don't see any reason a "real" intelligence couldn't follow the same path under our direction. I see a lot of philosophical conversation around this on HN disguised as endless deep discussions about the technicals, which amuses me because it feels like we're in the very early days there, and I think we can circle the drain defining intelligence until we all die.
|
|
|
|
|
|