7 comments:
The biggest problem with recurrent spiking neural networks is searching for them.
Neuromorphic chips won't help because we don't even know what topology makes sense. Searching for topologies is unbelievably slow. The only thing you can do is run a simulation on an actual problem and measure the performance each time. These simulations turn into tar pits as the power law of spiking activity kicks in. Biology really seems to have the only viable solution to this one. I don't think we can emulate it in any practical way. Chasing STDP and membrane thresholds as some kind of schematic for AI is absolutely the wrong path.
We should be leaning into what our machines do better than biology. Not what they do worse. My CPU doesn't have to leak charge or simulate any delay if I don't want it to. I can losslessly copy and process information at rates that far exceed biological plausibility.
From article:
> Cause and Effect: If Neuron A fires just a few milliseconds before Neuron B, the brain assumes A caused B. The synapse between them gets stronger.
A recent study from Stanford found that it's more complex than this rule: some synapses followed it, some did the opposite, etc.
> A recent study from Stanford
Source?
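For anyone unfamiliar with the rule quoted above: it's usually modeled as pair-based STDP, where the weight change decays exponentially with the spike-timing gap. A minimal sketch (the amplitudes and time constants below are illustrative textbook values, not anything from the article or the Stanford study):

```python
import math

# Pair-based STDP sketch. If the presynaptic spike precedes the
# postsynaptic spike (dt > 0), the synapse is potentiated; if it
# follows (dt < 0), it is depressed. Parameters are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(dt_ms: float) -> float:
    """Weight change for one pre/post spike pair.

    dt_ms = t_post - t_pre: positive when the presynaptic neuron
    fired first ("A caused B"), negative otherwise.
    """
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)    # strengthen
    elif dt_ms < 0:
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)  # weaken
    return 0.0

# Pre fires 5 ms before post -> potentiation; the reverse -> depression.
print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)  # True True
```

The Stanford result mentioned above would mean this curve isn't universal: some synapses apparently invert it (anti-Hebbian), which is exactly why treating the textbook window as a fixed design rule is shaky.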
I guess the obvious question is whether something that mimics biology closer is actually useful. Computers are useful exactly because they aren't the same as us. LLMs are useful because they aren't the same as us. The goal is not to be as close to biology as possible, it's to be useful.
Neural networks have turned out to be pretty useful. The goal of distributed parallel processing wasn't to recreate the brain but to recreate its capabilities.
Interesting topic, but why am I reading an LLM-generated summary?
Neuromorphic chips have been 5 years away for 15 years now... Nevertheless, the Schultz dopamine-TD-error convergence is one of the coolest results in neuroscience.
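For context, the convergence mentioned above is that dopamine neuron firing in Schultz's recordings tracks the temporal-difference reward-prediction error, delta = r + gamma * V(s') - V(s). A minimal TD(0) sketch (states, rewards, and parameters here are made up for illustration):

```python
# Minimal TD(0) value update. The prediction error `delta` is the
# quantity Schultz et al. found mirrored in dopamine neuron firing:
# a burst for unexpected reward, shrinking as the reward becomes predicted.
def td_update(V, s, s_next, reward, alpha=0.1, gamma=0.9):
    delta = reward + gamma * V[s_next] - V[s]  # reward-prediction error
    V[s] += alpha * delta
    return delta

V = {"cue": 0.0, "juice": 0.0}
# Unexpected reward -> large positive error (dopamine burst)
d1 = td_update(V, "cue", "juice", reward=1.0)
# Same reward again -> partly predicted, smaller error
d2 = td_update(V, "cue", "juice", reward=1.0)
print(d1 > d2 > 0)  # True: error shrinks as the prediction improves
```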