▲ | GregarianChild | 2 days ago
Can you explain the benefit of renaming dataflow as 'neuromorphic'? You do understand that dataflow architectures have been tried many, many times? See [1] for a brief history. MIT had a big dataflow lab for many years (led by the recently deceased Arvind). What is the benefit of having dataflow architectures re-invented by complete amateurs who are not at all aware of the half-century research tradition on dataflow architecture, and of the very clear and concrete reasons why this architecture has so far failed whenever it was tried for general-purpose processors? We cannot even apply Santayana's "those who forget their history are condemned to repeat it", because the 'neuromorphic' milieu doesn't even bother understanding this history.
▲ | HarHarVeryFunny | 2 days ago | parent
> Can you explain the benefit of renaming dataflow as 'neuromorphic'?

Neuromorphic just means brain-like or brain-inspired, and brains operate in an asynchronous, dataflow-like fashion. I'm not sure how you read into what I wrote that I was "renaming dataflow as neuromorphic", which was certainly not what I meant.

I wonder if you regard Steve Furber (whom I knew from Acorn), designer of the ARM CPU, as a "complete amateur"? He also designed the AMULET async processors, as well as, for that matter, the SpiNNaker system for spiking neural network research.

In any case, async (dataflow) processor design, while complex, clearly isn't an impossible task, and at some point in the future, when the need arises (mobile robotics?) and there is sufficient financial incentive, I expect we'll see it used in commercial systems.

I'm not sure why you focus on "general purpose processors" given that we're talking about ANNs and neuromorphic systems. A custom chip would seem a better bet if the goal is to minimize power usage.
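To make the "asynchronous dataflow" point concrete, here's a toy event-driven leaky integrate-and-fire neuron in Python. Everything in it is illustrative (the threshold, time constant, and weight are made up, and it's not how SpiNNaker or any real chip works); the point is just that computation is triggered by arriving spike events rather than by a global clock:

    import heapq
    import math

    THRESHOLD = 1.0   # membrane potential at which the neuron "fires" (arbitrary units)
    TAU = 10.0        # decay time constant (ms); potential leaks away between events

    def run(spike_times, weight=0.4):
        """Process input spikes as a time-ordered event queue.

        No global clock: the membrane potential is only updated when a
        spike event arrives, with the decay since the last event computed
        lazily, on demand.
        """
        events = list(spike_times)
        heapq.heapify(events)
        v, last_t = 0.0, 0.0
        output = []
        while events:
            t = heapq.heappop(events)
            v *= math.exp(-(t - last_t) / TAU)  # lazy decay since last event
            v += weight                         # integrate the incoming spike
            last_t = t
            if v >= THRESHOLD:                  # fire and reset
                output.append(t)
                v = 0.0
        return output

    # A burst of closely spaced spikes fires; isolated ones decay away.
    print(run([1.0, 2.0, 3.0, 30.0, 31.0, 32.0]))  # -> [3.0, 32.0]

Compute here is demand-driven by data arrival (the dataflow property), which is also why event-driven hardware can in principle idle at near-zero power when nothing is spiking.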