PaulRobinson 18 hours ago:
They made this claim in a peer-reviewed paper submitted to Nature, but it's not clear how peers could evaluate whether it is true. If it is true, and the consensus is that we are hitting the limits of how far these models can be improved, then the hypothesis that the entire market is in a bubble over-indexed on GPU costs [0] starts to look more credible. At the very least, OpenAI and Anthropic look ridiculously inefficient. Mind you, given that the numbers on the Oracle deal don't add up, this is all starting to sound insane already.
fspeech 7 hours ago (in reply):
These numbers were readily corroborated by those who attempted to replicate the RL portion of the work. The foundation-model training is harder to verify, but it is also not central to the paper.