
In an attempt at designing a neural network more closely modeled on the human brain, I wrote code before doing the reading. The neuron I modeled operates as follows.

  • Parameters: potential, threshold, activation.
  • [activation] = 0.0
  • Receive inputs, added to [potential].
  • If ([potential] >= [threshold])
    • [activation] = [potential]
    • [potential] = 0.0
  • Else
    • [potential] *= 0.5

In short, the neuron receives inputs and "fires" if the threshold is met; if not, the input sum, or potential, decays. Inputs are applied by adding their values to the potentials of the input neurons, and connections multiply neuron activation values by weights before adding them to their destination potentials. The only difference between this and a spiking network is the activation model.
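The pseudocode above can be sketched in Python roughly as follows. The class and method names are mine; the parameter names (`potential`, `threshold`, `activation`) and the 0.5 leak multiplier come straight from the question, and I have assumed the `[activation] = 0.0` step is a per-cycle reset:

```python
class Neuron:
    """Sketch of the neuron model described in the question."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.potential = 0.0
        self.activation = 0.0

    def receive(self, value):
        # Inputs are accumulated into the potential.
        self.potential += value

    def step(self):
        # Assumed: activation is reset at the start of each cycle.
        self.activation = 0.0
        if self.potential >= self.threshold:
            # Fire: activation takes the potential, potential resets.
            self.activation = self.potential
            self.potential = 0.0
        else:
            # No fire: the potential leaks by half.
            self.potential *= 0.5
        return self.activation
```

A connection would then call `receive(source.activation * weight)` on its destination neuron before that neuron's next `step()`.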

I am, however, beginning to learn that Spiking Neural Networks (SNNs), the actual biologically-inspired model, operate quite differently. Forgive me if my understanding is terribly flawed. My understanding is that signals in these networks are sharp sinusoidal waveforms with between 100 and 300 "spikes" per unit of "time," given as 1 "second." These signals are sampled over that "second" by the neuron and processed by a differential equation that determines the neuron's activation state. Synapses seem to function in a similar manner: multiplying the signal by a weight, but also increasing or decreasing the period of the waveform.

However, I wish to know what form of neuron activation model I created. I have been unable to find papers that describe a method like this.

EDIT. The "learnable" parameters of this model are [threshold] of the neuron and [weight] of the connections/synapses.

  • I just want to know what are the `learnable` parameters in your model and how they are learnt. Please modify your question with these details. One thing that I thought of is that you could make the multiplier to `[potential]` as a learnable parameter which can be learnt from the labelled data (assuming you are attempting a supervised learning problem with this model). – varsh Jan 23 '19 at 05:28

1 Answer


The model you describe is a kind of leaky integrate-and-fire (LIF) neuron (see p. 7). It is leaky because the membrane potential decreases steadily in the absence of input. In contrast, in the simple integrate-and-fire (IF) model the membrane potential is retained indefinitely until the neuron spikes, at which point it is reset to 0. However, LIF neurons are usually modelled with exponential decay of the membrane potential, where you have a time constant $\tau$ and you compute the potential $P_{t}$ at time $t$ based on the potential $P_{t_{last}}$ at the time $t_{last}$ when the last input arrived as

$P_{t} = P_{t_{last}} \exp\left(-\frac{t - t_{last}}{\tau}\right)$

This is the same formula as radioactive decay (see here for more details). The idea is that this model is inherently 'aware' of time, whereas the IF model (and your design above) does not factor in the timing of the spikes, so it acts like a classical neural network activation. In any case, whether or not a neuron fires does depend on the firing threshold, so I think that treating the threshold as a learnable parameter is justified - you just have to decide what rules to use for updating it.
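As a sketch of what this looks like in practice (the class and variable names are mine, and the values of `tau` and `threshold` are illustrative), the decay formula above can be applied lazily, only when an input actually arrives:

```python
import math

class LIFNeuron:
    """Minimal leaky integrate-and-fire sketch with exponential decay.

    Assumptions not taken from the question: tau and threshold values,
    and the reset of the potential to 0 on spiking.
    """

    def __init__(self, tau=20.0, threshold=1.0):
        self.tau = tau
        self.threshold = threshold
        self.potential = 0.0
        self.t_last = 0.0  # time of the last input

    def receive(self, value, t):
        # Decay the potential from the last input time to now
        # (the P_t = P_{t_last} * exp(-(t - t_last)/tau) formula),
        # then integrate the new input.
        self.potential *= math.exp(-(t - self.t_last) / self.tau)
        self.t_last = t
        self.potential += value
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True   # spike
        return False
```

Because the decay is computed only on input arrival, the simulator never needs to step through the quiet intervals, which is exactly why this model is cheap compared to integrating the full dynamics.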

Based on what you describe as your understanding of spiking neural networks, it seems that you have been reading about the Hodgkin-Huxley (HH) model (also in the paper I linked to). (Please correct me if I'm wrong.) You are correct in thinking that spikes in the brain are not infinitely narrow like a delta function but more like a very sharp sinusoidal pulse, and the HH model faithfully reproduces that. However, the reason the HH model is not commonly used for simulations is that it is computationally very expensive. In practice, in most cases we do not actually care about the state of the neuron between inputs, as long as the model accurately describes the neuron's state and what happens to it when an input arrives.

There are other models that approximate the HH model very closely but are much faster to simulate (like the Izhikevich model). However, the LIF model is very fast and sufficient in most cases.

Hope this helps!

cantordust