aleph_minus_one 2 hours ago

> The difference is that, fortunately, fine-tuning them is extremely easy.

If this were true, educating people quickly for most jobs would be an easy, solved problem. Yet in March 2018, Y Combinator put exactly this on its list of Requests for Startups, which is strong evidence that it is a rather hard, unsolved problem:

> https://web.archive.org/web/20200220224549/https://www.ycomb...

armchairhacker 2 hours ago | parent [-]

Easier than fine-tuning an LLM, at least relative to the cost of inference.

“‘r’s in strawberry” and other LLM tricks remind me of brain teasers like “finished files” (https://sharpbrains.com/blog/2006/09/10/brain-exercise-brain...). Show an average human this brain teaser and they’ll probably fall for it the first time.
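(For reference, the question itself is trivial for code, which is part of why the failure looks so silly; a minimal Python check, nothing model-specific assumed:)

    # Count the letter 'r' in "strawberry" -- the answer the trick probes for is 3.
    word = "strawberry"
    print(word.count("r"))  # prints 3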

But never a second time; the human learned from one instance, effectively forever, without even trying. ChatGPT had to be retrained not to fall for the “r”s trick, which cost much more than one prompt, and (unless OpenAI are hiding a breakthrough, or I really don’t understand modern LLMs) required much more than one iteration.
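To make the asymmetry concrete, here is a rough sketch using the OpenAI Python SDK (the model names and the training file are placeholders, and a real fix would presumably need far more data than this): correcting a person takes one sentence in the conversation, while making the correction stick in the model means assembling a dataset and paying for a fine-tuning run.

    from openai import OpenAI

    client = OpenAI()

    # One-off correction: a single extra message in context, costs one prompt.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "user", "content": "How many r's are in 'strawberry'?"},
            {"role": "assistant", "content": "There are 2."},
            {"role": "user", "content": "No, count again: s-t-r-a-w-b-e-r-r-y."},
        ],
    )
    print(resp.choices[0].message.content)

    # Making the fix permanent: upload a dataset of labelled examples
    # (hypothetical file) and run a whole fine-tuning job over it.
    training = client.files.create(
        file=open("letter_counting.jsonl", "rb"), purpose="fine-tune"
    )
    job = client.fine_tuning.jobs.create(
        training_file=training.id,
        model="gpt-4o-mini-2024-07-18",  # placeholder fine-tunable model
    )
    print(job.id)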

That seems to be the one thing that keeps LLMs from mimicking humans, more noticeable and harder to work around than anything else. An LLM can pass a Turing test where it only has to generate a few sentences. No LLM can imitate human conversation over a few years (probably not even a few days), because it would forget far more than a human would.