pton_xd 2 days ago

> For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots: probabilistic machines that would: 1. NOT have any representation about the meaning of the prompt. 2. NOT have any representation about what they were going to say. In 2025 finally almost everybody stopped saying so.

It's interesting that Terence Tao just published a blog post stating that they're best viewed as stochastic generators. True, he's not an AI researcher, but he does seem to use AI frequently, with some success: "viewing the current generation of such tools primarily as a stochastic generator of sometimes clever - and often useful - thoughts and outputs may be a more productive perspective when trying to use them to solve difficult problems" [0].
jdub a day ago | parent | next

I get the impression that folks who have a strong negative reaction to the phrase "stochastic parrot" tend to do so because they interpret it literally or analogously (as revealed in their arguments against it), when it is most useful as a metaphor. (And, in some cases, out of a desire to deny the people and perspectives from which the phrase originated.)
antirez 2 days ago | parent | prev

What happened recently is that all the serious AI researchers who were on the stochastic parrot side changed their point of view. Incredibly, though, people without a deep understanding of such matters, previously exposed to those arguments, are lagging behind and still repeat claims that the people who popularized them would no longer stand behind. Today there is no top AI scientist who will tell you LLMs are just stochastic parrots.
| ||||||||||||||||||||