hungmung | 6 days ago
Chain of thought is just a way of trying to squeeze more juice out of the lemon of LLMs. I suspect we're running up against diminishing returns, and we'll have to move to different foundational models to see any serious improvement.
archaeans | 6 days ago | parent
The so-called "scaling laws" are themselves an expression of diminishing returns. How was "if we grow resources exponentially, errors decrease linearly" ever seen as a good sign?
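To make the diminishing-returns point concrete: a quick sketch, assuming a hypothetical power-law scaling curve error(C) = a * C^(-alpha). The constants here are made up for illustration, not fitted to any real model, but the shape shows why each extra order of magnitude of compute buys less absolute error reduction than the last.

```python
# Hypothetical power-law scaling curve: error(C) = a * C**(-alpha).
# The constants are illustrative only, not fitted to any real system.
a, alpha = 1.0, 0.05

def error(compute: float) -> float:
    """Error under the assumed power law."""
    return a * compute ** (-alpha)

# Walk up compute by factors of 10 and watch the absolute
# improvement per decade shrink each time.
prev = error(1)
for exp in range(1, 5):
    cur = error(10 ** exp)
    print(f"compute 10^{exp}: error {cur:.3f}, gained {prev - cur:.3f}")
    prev = cur
```

On a log-compute axis the curve looks like a straight line, which is exactly the "exponential in, linear out" trade the comment is objecting to.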