LLMs generate 'fluent nonsense' when reasoning outside their training zone (venturebeat.com)
7 points by cintusshied 15 hours ago | 2 comments
VivaTechnics 14 hours ago
LLMs operate on numbers; they are trained on massive numerical vectors. Every request is therefore just a numerical transformation that approximates learned patterns; without proper training on a domain, the output can be completely irrational. A rough sketch of what I mean is below.
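To make the point concrete, here is a minimal toy sketch (not any real model's architecture; the sizes, weights, and the single-layer "forward pass" are all invented for illustration) showing that a prompt is reduced to numbers, pushed through learned numerical transformations, and normalized into a probability distribution regardless of whether the input resembles anything the weights were fit to:

  import numpy as np

  # Toy illustration (assumed/hypothetical, not a real LLM): a prompt becomes
  # integer token ids that index a learned embedding table; the "answer" is
  # just numerical transformations of those vectors plus a softmax.
  rng = np.random.default_rng(0)

  vocab_size, d_model = 50, 8                          # assumed toy sizes
  embeddings = rng.normal(size=(vocab_size, d_model))  # stand-in for learned embeddings
  W = rng.normal(size=(d_model, vocab_size))           # stand-in for learned projection

  def next_token_distribution(token_ids):
      """One simplified 'forward pass': average the token vectors,
      apply a learned linear map, and normalize into probabilities."""
      x = embeddings[token_ids].mean(axis=0)  # numbers in, numbers out
      logits = x @ W
      exp = np.exp(logits - logits.max())
      return exp / exp.sum()

  # A prompt the weights were never fit to still yields a fluent-looking
  # distribution -- the arithmetic runs whether or not the pattern was learned.
  probs = next_token_distribution(np.array([3, 17, 42]))
  print(probs.argmax(), probs.max())

The machinery never refuses or notices that it is out of its depth; it always emits a well-formed distribution, which is the mechanical version of "fluent nonsense."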
rsynnott 7 hours ago
I mean, define 'reasoning'.