Update leakyparallel.py #327

Open · wants to merge 1 commit into base: master
26 changes: 8 additions & 18 deletions snntorch/_neurons/leakyparallel.py
@@ -29,19 +29,13 @@ class LeakyParallel(nn.Module):
  * Linear weights are included in addition to
  recurrent weights.
  * `beta` is clipped between [0,1] and cloned to
- `weight_hh_l` only upon layer initialization.
- It is unused otherwise.
+ `weight_hh_l` only upon layer initialization. It is unused otherwise.
  * There is no explicit reset mechanism.
  * Several functions such as `init_hidden`, `output`,
- `inhibition`, and `state_quant` are unavailable
- in `LeakyParallel`.
- * Only the output spike is returned. Membrane potential
- is not accessible by default.
- * RNN uses a hidden matrix of size (num_hidden, num_hidden)
- to transform the hidden state vector. This would 'leak'
- the membrane potential between LIF neurons, and so the
- hidden matrix is forced to a diagonal matrix by default.
- This can be disabled by setting `weight_hh_enable=True`.
+ `inhibition`, and `state_quant` are unavailable in `LeakyParallel`.
+ * Only the output spike is returned. Membrane potential is not accessible by default.
+ * RNN uses a hidden matrix of size (num_hidden, num_hidden)
+ to transform the hidden state vector. This would 'leak' the membrane potential between LIF neurons, and so the hidden matrix is forced to a diagonal matrix by default. This can be disabled by setting `weight_hh_enable=True`.

Example::

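(The docstring's own `Example::` block is collapsed in this view.) A minimal usage sketch consistent with the notes above; the `(timesteps, batch, features)` input layout and the resulting output shape follow the `torch.nn.RNN` convention and are assumptions rather than lines shown in this diff:

    import torch
    import snntorch as snn

    timesteps, batch_size, input_size, hidden_size = 100, 32, 784, 128

    # LeakyParallel consumes the whole time dimension in a single call,
    # so the input is (timesteps, batch, input_size) rather than one step at a time.
    lif = snn.LeakyParallel(input_size=input_size, hidden_size=hidden_size)

    x = torch.rand(timesteps, batch_size, input_size)
    spk = lif(x)  # only the output spikes are returned; membrane potential is not exposed
    # spk is expected to have shape (timesteps, batch_size, hidden_size)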
@@ -77,10 +71,8 @@ def forward(self, x):
  :param hidden_size: The number of features in the hidden state `h`
  :type hidden_size: int

- :param beta: membrane potential decay rate. Clipped between 0 and 1
- during the forward-pass. May be a single-valued tensor (i.e., equal
- decay rate for all neurons in a layer), or multi-valued (one weight per
- neuron). If left unspecified, then the decay rates will be randomly initialized based on PyTorch's initialization for RNN. Defaults to None
+ :param beta: membrane potential decay rate. Clipped between 0 and 1
+ during the forward-pass. May be a single-valued tensor (i.e., equal decay rate for all neurons in a layer), or multi-valued (one weight per neuron). If left unspecified, then the decay rates will be randomly initialized based on PyTorch's initialization for RNN. Defaults to None
  :type beta: float or torch.tensor, optional

  :param bias: If `False`, then the layer does not use bias weights `b_ih` and `b_hh`. Defaults to True
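A short sketch of the two documented ways to supply `beta` (a scalar shared by the layer, or one value per hidden neuron); the parameter names are taken from this docstring, and everything else here is illustrative:

    import torch
    import snntorch as snn

    # Scalar beta: the same membrane decay rate for every neuron in the layer.
    lif_shared = snn.LeakyParallel(input_size=784, hidden_size=128, beta=0.9)

    # Multi-valued beta: one decay rate per hidden neuron.
    # Values outside [0, 1] would be clipped, per the note above.
    beta = torch.rand(128)
    lif_per_neuron = snn.LeakyParallel(input_size=784, hidden_size=128, beta=beta)

    # Leaving beta unspecified falls back to PyTorch's default RNN initialization.
    lif_default = snn.LeakyParallel(input_size=784, hidden_size=128)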
@@ -112,9 +104,7 @@ def forward(self, x):
  :type learn_threshold: bool, optional

  :param weight_hh_enable: Option to set the hidden matrix to be dense or
- diagonal. Diagonal (i.e., False) adheres to how a LIF neuron works.
- Dense (True) would allow the membrane potential of one LIF neuron to
- influence all others, and follow the RNN default implementation. Defaults to False
+ diagonal. Diagonal (i.e., False) adheres to how a LIF neuron works. Dense (True) would allow the membrane potential of one LIF neuron to influence all others, and follow the RNN default implementation. Defaults to False
  :type weight_hh_enable: bool, optional


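A sketch contrasting the two settings of `weight_hh_enable` documented above; the flag name comes from this docstring, and the layer sizes are only illustrative:

    import snntorch as snn

    # Default (diagonal): each LIF neuron's membrane potential decays on its own,
    # so potential does not leak between neurons.
    lif_diag = snn.LeakyParallel(input_size=8, hidden_size=8)

    # Dense: the full (hidden_size, hidden_size) recurrent matrix is kept,
    # matching the plain nn.RNN behaviour, so one neuron's potential can
    # influence all the others.
    lif_dense = snn.LeakyParallel(input_size=8, hidden_size=8, weight_hh_enable=True)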