bildung 2 days ago

Well, compared to the human brain, LLMs do approximately zero work. An LLM neuron is at least three orders of magnitude less complex than a neuron in the human brain - and that factor only accounts for the neuronal intrinsics we currently know of.
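
To make the complexity gap concrete, here is a minimal sketch of what one artificial "neuron" actually computes: a weighted sum plus a fixed nonlinearity, nothing more. The values are made up, and real transformer units run on whole vectors in parallel, but the per-unit computation is just this (contrast with a biological neuron's dendritic compartments, spike timing, and neuromodulation):

  # Minimal sketch of one artificial "neuron": a dot product plus a
  # fixed nonlinearity. Weights and inputs are made-up illustrative values.
  import math

  def artificial_neuron(inputs, weights, bias):
      # Weighted sum of inputs...
      z = sum(w * x for w, x in zip(weights, inputs)) + bias
      # ...passed through a static activation (tanh approximation of GELU,
      # a common choice in transformer MLP layers).
      return 0.5 * z * (1 + math.tanh(math.sqrt(2 / math.pi)
                                      * (z + 0.044715 * z**3)))

  print(artificial_neuron([0.2, -1.3, 0.7], [0.5, 0.1, -0.4], 0.05))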

ben_w 2 days ago

Agreed. I think this means the fair comparison is either:

  "transistors vs. *synapses*"
or

  "an entire integrated computer with all necessary cooling, including a modifier to account for the amortised training effort required to achieve human-quality output vs. the amortised energy requirements and output of a human over their lifetime".
It has to be human-quality output to be a fair comparison; a million lines of gibberish is worthless.

The human has to be educated until age 21 or so to be economically viable, retires in their late 60s, works roughly 25% of the hours in a working week (about 40 of 168, and not at all during non-working weeks, e.g. holiday, sickness, or unemployment; parental leave is work, but not the specific work people pay you for), and the brain itself accounts for only ~20% of a human's calorific consumption.

In the (currently quite small) set of tasks where today's AI is good enough to replace human labour, some models are already in the range where the marginal energy cost of inference is lower than the energy cost (in food calories) of getting a human to do the same thing.
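
As a rough illustration of that comparison, here is a back-of-the-envelope sketch. Every number in it (body power, billable-work fraction, joules per token, task size) is an assumption chosen for readability, not a measurement:

  # Back-of-the-envelope: energy for a human vs. an LLM to draft one
  # short document. All figures below are rough assumptions.

  # Human side: whole-body resting power ~100 W (the brain is ~20% of
  # that). Amortise over the non-working life (education, retirement,
  # off hours) by assuming only ~10% of lifetime hours are billable.
  BODY_POWER_W = 100.0
  WORK_FRACTION = 0.10          # assumed billable share of lifetime hours
  task_hours = 1.0
  human_joules = (BODY_POWER_W / WORK_FRACTION) * task_hours * 3600
  # ~3.6 MJ of amortised food energy for one working hour.

  # Machine side: assume ~1 J per generated token at the wall (a made-up
  # round number) and ~2,000 tokens for the same document.
  JOULES_PER_TOKEN = 1.0
  tokens = 2000
  llm_joules = JOULES_PER_TOKEN * tokens   # ~2 kJ

  print(f"human: {human_joules / 1e6:.1f} MJ, llm: {llm_joules / 1e3:.1f} kJ")
  print(f"ratio: {human_joules / llm_joules:,.0f}x")

Under these particular assumptions the human costs over a thousand times more energy per task; change the assumptions and the ratio moves a lot, but the marginal-inference case doesn't need much headroom.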

But also, last I checked, the peak performance of LLMs is not as high as a domain expert's at anything, so even infinite cost poured into the AI isn't going to equal them. On the other hand, human intelligence is not equal for all of us, so I find it very easy to believe that there's a significant fraction of the population who will always, over their lifetime, be behind today's SOTA AI, and for whom infinite time and energy is therefore never going to equal the AI we already have.

pama 2 days ago

Agreed. And that near-zero work has a near-zero energy cost. In addition, silicon inference (combining hardware and software advances) continues to be optimized and to become more energy efficient at a rapid rate.

There is an unfounded myth about the extreme energy cost of silicon-based inference; it is far from reality.