einrealist 6 days ago
> Twitter sucked [...] Electric cars sucked [...] Phones sucked

All of those things are not black boxes, and they are mostly deterministic: given the inputs, you know EXACTLY what output to expect. That's not the case with LLMs, neither in how they are trained nor in how they work internally. We are certainly getting better at adjusting the inputs to get a desired output, but the result is far from guaranteed at the level of the examples you mentioned (see the sketch below).

That's a fundamental problem with LLMs, and you can see it in how industry actors build solutions around it. Reasoning (chain-of-thought) is basically a band-aid to narrow the decision tree, because the LLM does not really "reason" about anything. And the results only get better with more training data: we literally have to brute-force useful results by throwing more compute and memory at the problem (and destroying the environment and climate by doing so). The stagnation of recent model releases does not look good for this technology.
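To make the determinism point concrete, here's a minimal sketch with made-up logits and a toy vocabulary (not any real model's API): greedy decoding always returns the same token for the same input, but the temperature sampling that chat products typically use turns that same input into a draw from a probability distribution, so repeated runs disagree.

    import numpy as np

    # Toy next-token distribution: same "prompt", same logits every time.
    vocab = ["yes", "no", "maybe"]
    logits = np.array([2.0, 1.5, 0.5])

    def sample(logits, temperature, rng):
        # Softmax with temperature; T > 0 makes the output a draw, not a lookup.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return vocab[rng.choice(len(vocab), p=probs)]

    # Greedy decoding (T -> 0) is deterministic: always the argmax token.
    print(vocab[np.argmax(logits)])  # "yes", on every run

    # Typical sampling (T = 1.0) is not: repeated runs disagree.
    rng = np.random.default_rng()
    print([sample(logits, 1.0, rng) for _ in range(10)])
    # e.g. ['yes', 'no', 'yes', 'maybe', ...] -- varies from run to run

And that's only the decoding step; differences in training runs and floating-point nondeterminism across batches make even nominally "deterministic" setups hard to reproduce exactly.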