pyman | 2 days ago
OpenAI raised $40 billion and Anthropic raised $10 billion, claiming they needed the money to buy more expensive Nvidia servers to train bigger models. Then Chinese experts basically said, no you don't. And they proved it.
ben_w | 2 days ago | parent
More like the Egg of Columbus, or the Red Queen: you need to run as hard as you can just to stay where you are, and once you've got the answer it's much easier to reproduce the result.

This is of course also what annoys a certain fraction of commenters in every discussion about LLMs (and, in art, diffusion models): the models are overwhelmingly learning from examples made by others, not investigating things for themselves. Many scientists will have had an encounter like Katie Mack's viral exchange* with someone who doesn't know what "research" even means, and who mistakes "the first thing I read" for it; but the fact that many humans also do this doesn't make the point wrong when it's about AI.

* https://paw.princeton.edu/article/katie-mack-09-taming-troll