tpurves a day ago
These would be inherently temporary problems, though, right? If it eventually became clear that alternate methods were the way forward, NVIDIA would be highly motivated to do the optimization work, wouldn't they? Any new step functions that can forestall the asymptotic plateauing of AI progress are things they desperately need.
jszymborski a day ago | parent
That stands to reason, but in practice I find it's often not the case. My suspicion is that it's hard to establish that your method is superior to another if, for example, it takes 10-100x the compute to train a model. This is largely because machine learning is currently a deeply empirical field. Nvidia isn't likely to start releasing updated firmware for an obscure architecture for which there is limited evidence of improvement, and even less adoption.
ssivark 20 hours ago | parent
Check out "The Hardware Lottery" [1], which drove a lot of discussion a few years ago.