bigyabai 2 hours ago
> In-context learning is a "good enough" continuous learning approximation, it seems.

"It seems" is doing herculean work holding up your argument in that statement. Say, how many "R"s are in "strawberry"?
ACCount37 2 hours ago | parent
If you think that "strawberry" is some kind of own, I don't know what to tell you. It takes deep and profound ignorance of both the technical basics of modern AIs and the current SOTA to treat it that way. LLMs get better from release to release.

Unfortunately, the quality of humans in LLM capability discussions is consistently abysmal. Otherwise I wouldn't be seeing the same "LLMs are FUNDAMENTALLY FLAWED because I SAY SO" repeated ad nauseam.
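For context on why "strawberry" became a meme: the usual explanation is tokenization. A model operates on subword tokens, not individual characters, so letter counts inside a token are not directly observable to it. A minimal sketch of the distinction (the token split below is illustrative only, not any real model's vocabulary):

```python
# Counting characters is trivial when you can actually see characters.
word = "strawberry"
print(word.count("r"))  # 3

# An LLM, by contrast, sees subword tokens. Hypothetical BPE-style
# split for illustration -- real vocabularies differ per model:
tokens = ["str", "awberry"]

# The tokens reassemble into the word, but the per-letter makeup of
# each token is something the model must have memorized, not something
# it can inspect -- which is why character counting can fail even when
# far harder tasks succeed.
assert "".join(tokens) == word
```

This is why the failure says more about the input representation than about reasoning ability in general.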