ben_w 2 days ago
More like the Egg of Columbus, or the Red Queen: you need to run as hard as you can just to stay where you are, and once you've got the answer it's much easier to reproduce the result.

This is also what annoys a certain fraction of commenters in every discussion about LLMs (and, in art, diffusion models): the models overwhelmingly learn from examples made by others rather than investigating things for themselves. Many scientists will have encountered something like Katie Mack's viral exchange* with someone who doesn't know what "research" even means, and who mistakes "the first thing I read" for research. But the fact that many humans also do this doesn't make the point wrong when it's made about AI.

* https://paw.princeton.edu/article/katie-mack-09-taming-troll
pyman 2 days ago | parent
So what are you trying to say? Do you agree that OpenAI and Anthropic are still claiming they need more data centres and more Nvidia servers to win the AI race, while still trying to understand what China actually did and how they did it?