card_zero 2 days ago
But that's not ten times the workdays. That's just taking a bunch of speed and sitting by yourself worrying about something. Results may be eccentric. Though I don't know what you mean by "width of a human brain".
Tuna-Fish 2 days ago
It's ten times the time to work on a problem. Taking a bunch of speed does not make your brain work faster; it just messes with your attention system.

> Though I don't know what you mean by "width of a human brain".

A human brain contains ~86 billion neurons connected to each other through ~100 trillion synapses. All of these parts are genuinely parallel, all working at the same time to produce results.

When an AI model is run on a GPU, a single ALU can do work analogous to a neuron activation much faster than a real neuron can. But a GPU does not have 86 billion ALUs; it has fewer than ~20k. It "simulates" a much wider parallel processing system by streaming in weights and activations and processing them 20k at a time. Large AI datacenters have built systems with many GPUs working in parallel on a single model, but they are still a tiny fraction of the true width of the brain, and cannot reach anywhere near the number of neuron activations per second that a brain can.

If/when we have a model that can actually do complex reasoning tasks such as programming and designing new computers as well as a human can, with no human helping to prompt it, we can just scale it out to give it more hours per day to work, all the way until every neuron has a real computing element to run it. The difference in experience for such a system between running "narrow" and running "wide" is just that the wall clock runs slower when you are running wide. That is, you have more hours per day to work on things.
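To make the "width" gap concrete, here is a rough back-of-envelope sketch using the figures from the comment above. All the numbers are order-of-magnitude approximations, and the assumption that one ALU stands in for one neuron activation is the comment's simplification, not a real hardware equivalence:

```python
# Back-of-envelope comparison of the brain's parallel "width" vs. a GPU,
# using the approximate figures from the comment above.

NEURONS = 86e9       # ~86 billion neurons, all genuinely parallel
SYNAPSES = 100e12    # ~100 trillion synapses
GPU_ALUS = 20_000    # rough ALU count for a single large GPU

# A single GPU must stream through the network in sequential chunks
# of GPU_ALUS neuron-activations at a time:
chunks_per_pass = NEURONS / GPU_ALUS
print(f"sequential chunks per full pass: {chunks_per_pass:.1e}")  # ~4.3e6

# Even a large hypothetical cluster covers only a sliver of that width.
# (The 10,000-GPU figure is an illustrative assumption, not a real system.)
CLUSTER_GPUS = 10_000
width_fraction = (CLUSTER_GPUS * GPU_ALUS) / NEURONS
print(f"fraction of brain width: {width_fraction:.2%}")  # ~0.23%
```

The point of the arithmetic: a single GPU needs millions of sequential steps to "pretend" to be as wide as the brain, which is why the GPU's per-ALU speed advantage does not translate into matching the brain's aggregate activations per second.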