cayley_graph | 20 hours ago
An overview of what? I find something entertaining when I come away understanding more than I did before. I expected a high-level explanation of the papers, or at least the faintest intuition behind the phenomenon your comment mentioned. If you watch the video, it doesn't actually say anything beyond restating variants of "thinking tokens aren't important" in a few different ways, summarizing a distantly related blog post, and entertaining some wild hypotheses about the future of LLMs. It's unclear whether the producer has any deeper understanding of the subject; honestly, it sounded like low-grade LLM-generated fluff. I'm simply not used to that degree of lack of substance. As I indicated, it wasn't a personal attack on you.