cpldcpu 3 days ago

>The current implementation adopts pseudo-spiking, where activations are approximated as spike-like signals at the tensor level, rather than true asynchronous event-driven spiking on neuromorphic hardware.

Isn't that in essence very similar to Quantization-Aware Training (QAT)?

spwa4 3 days ago | parent [-]

Can you explain more? Why would that be the case? What is passed from one layer to the next is not a continuous value but the delay until the next spike, which is very different.
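
Very roughly, something like this (an illustrative sketch of time-to-first-spike coding; the paper's actual scheme may well differ):

    import torch

    def latency_encode(x, t_max=16):
        # Time-to-first-spike coding: stronger inputs fire earlier.
        # Purely illustrative -- not taken from the paper or repo.
        x = x.clamp(0.0, 1.0)
        return ((1.0 - x) * (t_max - 1)).round()

    x = torch.tensor([0.9, 0.5, 0.1])
    print(latency_encode(x))  # strong input -> early spike time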

cpldcpu 3 days ago | parent [-]

It was a question from my side as well. :)

But my understanding is that they simulate the spikes as integer events in the forward pass (as described here: https://github.com/BICLab/Int2Spike) and compute a continuous gradient based on high-resolution weights in the backward pass.

This seems very similar to the straight-through estimator (STE) approach that is usually used for quantization-aware training. I may be wrong though.
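
Roughly, the pattern would look like this (a minimal PyTorch sketch of the generic STE idea, not the actual Int2Spike code):

    import torch

    class SpikeSTE(torch.autograd.Function):
        # Forward: round activations to non-negative integer "spike
        # counts". Backward: pretend the rounding was the identity
        # (straight-through estimator), so the full-precision weights
        # still receive a continuous gradient.

        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return x.relu().round()

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # Pass the gradient through where the unit was active;
            # round() itself has zero gradient almost everywhere.
            return grad_output * (x > 0).to(grad_output.dtype)

    x = torch.randn(8, requires_grad=True)
    y = SpikeSTE.apply(x)
    y.sum().backward()  # x.grad is 1 where x > 0, else 0

That is, the forward pass sees discrete spike-count activations while the optimizer still updates full-precision weights, which is exactly the structure QAT has.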