ethn 15 hours ago
We would see neither squirrels nor crows, since these criticisms miss the forest for the trees. But we can address them.

> This is irrelevant for AI, because people throw more hardware at bigger problems

GAI is a fixed problem, which is Solomonoff Induction. Further, Amdahl's law is not a limitation confined to software or to a supercomputer. Both inference and training rely on parallelization, and LLM inference has multiple serialization points per layer (see the sketch at the end of this comment). Végh (2019) quantifies how Amdahl's law limits the performance of neural networks [1], and further states: "A general misconception (introduced by successors of Amdahl) is to assume that Amdahl's law is valid for software only". It applies to a neural network just as it applies to the problem of self-driving cars.

> These two sentences contradict each other

There is no contradiction, only a misunderstanding of what "eviscerates" means; and even with that incorrect definition and the threshold test it produces, the point remains applicable.

1. https://pmc.ncbi.nlm.nih.gov/articles/PMC6458202/

Further reading on Amdahl's law w.r.t. LLMs:

2. https://medium.com/@TitanML/harmonizing-multi-gpus-efficient...

3. https://pages.cs.wisc.edu/~sinclair/papers/spati-iiswc23-tot...
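For concreteness, a minimal sketch of the Amdahl's-law point; the 2% serial fraction and the function name are illustrative assumptions of mine, not measurements from [1]-[3]:

    # Amdahl's law: speedup on n processors is 1 / ((1 - p) + p / n),
    # where p is the parallelizable fraction of the work.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # If per-layer serialization points (e.g. the synchronization needed
    # after the attention and MLP blocks under tensor parallelism) leave
    # even 2% of the work serial, speedup saturates around 50x no matter
    # how much hardware is thrown at the problem.
    for n in (8, 64, 512, 4096):
        print(n, round(amdahl_speedup(0.98, n), 1))
    # prints: 8 7.0 / 64 28.3 / 512 45.6 / 4096 49.4

The asymptote is 1 / (1 - p), so more GPUs only push you toward that ceiling, never past it.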