phkahler 2 hours ago:
> We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need?

Because they are currently "generative AI", meaning... autocomplete. They generate stuff but fall down at thinking and problem solving. There is talk of "reasoning models", but I think that's just clever meta-programming with LLMs. I can't say AI won't take that next step, but I think it will require another breakthrough on the order of transformers or attention. Companies are currently too busy exploiting the local maxima of LLMs.
rootusrootus an hour ago:
> Companies are currently too busy exploiting the local maxima of LLMs

I get the feeling we can already spot the next AI winter. Which is okay: we need a breather, and the current technology is useful enough on its own.