
Encoding methods #182

Open
slucas03 opened this issue Mar 4, 2022 · 9 comments
Labels
question Further information is requested

Comments

@slucas03

slucas03 commented Mar 4, 2022

I have recently started working with SpikingJelly and I have a question about the kinds of encoding methods SpikingJelly supports. The encoding methods explained in the documentation (https://github.com/fangwei123456/spikingjelly/blob/master/docs/source/clock_driven_en/2_encoding.rst) are mostly based on rate coding, such as Poisson or weighted phase coding. Some papers mention that the choice of encoding method influences the training of a spiking neural network. So I would like to know whether SpikingJelly can also work with time coding methods, or whether it must be used only with rate coding.

Thanks in advance.

@fangwei123456 fangwei123456 added the question Further information is requested label Mar 5, 2022
@fangwei123456
Owner

fangwei123456 commented Mar 5, 2022

Hi, the encoding method is independent of the SNN, so you can use any encoding method with an SNN built by SpikingJelly. We have defined a temporal encoder, the latency encoder (Time-To-First-Spike, TTFS): https://spikingjelly.readthedocs.io/zh_CN/latest/spikingjelly.clock_driven.encoding.html#latencyencoder-init-en

You can use it to train an SNN on CIFAR-10 and reach about 80+% accuracy.
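As a side note for readers, latency (TTFS) encoding itself is simple to sketch in plain PyTorch: each input intensity in [0, 1] is mapped to a single spike that fires earlier for stronger inputs. The function name `latency_encode` and the linear intensity-to-step mapping below are illustrative assumptions, not SpikingJelly's implementation:

```python
import torch

def latency_encode(x: torch.Tensor, T: int) -> torch.Tensor:
    """Encode intensities x in [0, 1] as one spike each:
    stronger inputs fire earlier (time-to-first-spike).
    Illustrative sketch, not SpikingJelly's LatencyEncoder."""
    # Map intensity to a firing step: x = 1 -> t = 0, x = 0 -> t = T - 1.
    t_fire = torch.round((T - 1) * (1.0 - x)).long()
    spikes = torch.zeros((T,) + x.shape)
    # Scatter a single spike per input at its firing step.
    spikes.scatter_(0, t_fire.unsqueeze(0), 1.0)
    return spikes

x = torch.tensor([0.0, 0.5, 1.0])
s = latency_encode(x, T=4)
# s[:, 2] spikes at step 0 (x = 1.0), s[:, 0] at the last step (x = 0.0).
```

The resulting tensor has shape [T, *x.shape], so a network can consume it one time step at a time.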

@slucas03
Author

slucas03 commented Mar 7, 2022

Thank you very much for your quick answer! I am going to try right now :)

@slucas03
Author

slucas03 commented Mar 8, 2022

Hello again,

I have another query about encoding/decoding methods. I am working with this example: https://github.com/fangwei123456/spikingjelly/blob/master/spikingjelly/clock_driven/examples/lif_fc_mnist.py. I have noticed that this example is a classification task, and I would like to know whether SpikingJelly can also handle other types of problems, such as regression. In this example the output neurons' spikes are read out as in a classification problem, but is there any possibility to work with the exact position of a spike within a neuron's time window? Does the concept of a neuron's time window exist in SpikingJelly?

@fangwei123456
Owner

> is there any possibility to define the exact position of the spike in the neuron's time window

Yes, you can use spike * t to get the firing times:

import torch

T = 8
spike = (torch.rand([T]) > 0.5).float()  # a random binary spike train
t = torch.arange(T)                      # time steps 0 .. T-1

print(f'spike = {spike}')
t_f = spike * t          # zero where silent, the step index where firing
print(f't_f = {t_f}')
mask = spike == 1        # keep only the steps that actually fired
print(f't_f[mask] = {t_f[mask]}')

The outputs are

spike = tensor([1., 1., 0., 0., 0., 0., 1., 1.])
t_f = tensor([0., 1., 0., 0., 0., 0., 6., 7.])
t_f[mask] = tensor([0., 1., 6., 7.])
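A hedged follow-up sketch: if you only need the first firing time per neuron (the TTFS readout), argmax along the time axis works, because argmax returns the index of the first maximum, i.e. the first spike. Silent neurons need a mask, since argmax also returns 0 for an all-zero train; marking them with T is my own convention, not SpikingJelly's:

```python
import torch

# A fixed [T, N] spike tensor: 4 time steps, 3 neurons.
spike = torch.tensor([[0., 1., 0.],
                      [1., 1., 0.],
                      [0., 0., 0.],
                      [1., 0., 0.]])
T = spike.shape[0]

t_first = spike.argmax(dim=0)      # first step where each neuron fired
fired = spike.any(dim=0)           # neurons that fired at least once
# Silent neurons also get argmax 0, so mark them with T instead.
t_first = torch.where(fired, t_first, torch.full_like(t_first, T))
print(t_first)  # tensor([1, 0, 4])
```

Here neuron 2 never fires, so it is reported as T = 4 rather than 0.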

You can find other usage examples at https://github.com/fangwei123456/spikingjelly/blob/master/publications.md .

For example:

Deep Q-learning: https://github.com/AptX395/Deep-Spiking-Q-Networks

Regression: https://github.com/urancon/StereoSpike

@slucas03
Author

Thank you for your comment. Very nice and helpful info!!

@bhzhang95

> Hi, the encoding method is independent of the SNN. So, you can use any encoding method for an SNN built by SJ. We have defined a temporal encoder, the latency encoder (Time-To-First-Spike, TTFS): https://spikingjelly.readthedocs.io/zh_CN/latest/spikingjelly.clock_driven.encoding.html#latencyencoder-init-en
>
> You can use it to train an SNN on CIFAR-10 and get about 80+% accuracy.

Where can I find the example of time-to-first-spike coding SNN on CIFAR-10 dataset?

@fangwei123456
Owner

> Where can I find the example of time-to-first-spike coding SNN on CIFAR-10 dataset?

You can try using this encoder to generate the input spikes and then train an SNN built with SpikingJelly. For example, you can use the PLIF net for CIFAR-10.

You can also try it on FashionMNIST. The network can be obtained from the tutorial; the only difference is that you should modify the input.
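For readers wondering what "modify the input" means in practice, here is a minimal sketch with a plain linear layer standing in for the actual spiking network (e.g. the PLIF net): instead of feeding the image directly at every step, each pixel emits one spike at a step determined by its intensity, and the network runs on those per-step spike maps. The linear intensity-to-step mapping and the averaged readout are illustrative assumptions:

```python
import torch
import torch.nn as nn

T = 8
# A placeholder standing in for the real spiking network.
net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

x = torch.rand(4, 1, 28, 28)  # a fake batch of FashionMNIST-sized images

# Latency encoding: brighter pixels fire earlier; every pixel fires once.
t_fire = torch.round((T - 1) * (1.0 - x)).long()

out = 0.0
for t in range(T):
    x_t = (t_fire == t).float()  # the spikes emitted at step t
    out = out + net(x_t)         # accumulate outputs over the T steps
out = out / T                    # averaged readout, shape [4, 10]
```

With a real SNN you would also reset the network state between samples; only the input handling is shown here.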

@bhzhang95

> You can try to use this encoder to generate input spikes, and train a SNN based on SpikingJelly. For example, you can use the PLIF net for CIFAR10.
>
> You can also try it on FashionMNIST. The network can be obtained from the tutorial. The only difference is that you should modify the input.

When using time-to-first-spike coding, how can I define the loss function? I've tried to train an MLP to classify MNIST, but the loss does not converge.

@fangwei123456
Owner

You can refer to papers that use TTFS-based SNNs.
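For context, one common recipe in TTFS work is to decode the first output spike time per class and treat earlier spikes as larger class scores, then apply cross-entropy. The sketch below only illustrates that decoding idea; `ttfs_loss` is a hypothetical helper, and note that argmax is not differentiable, so real TTFS training needs surrogate gradients or a differentiable spike-time formulation:

```python
import torch
import torch.nn.functional as F

def ttfs_loss(out_spikes: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """One common TTFS-style loss: earlier output spikes = higher scores.

    out_spikes: [T, batch, classes] binary spike trains of the output layer.
    target:     [batch] class indices.
    """
    T = out_spikes.shape[0]
    t_first = out_spikes.argmax(dim=0)   # first firing step (0 if silent)
    fired = out_spikes.any(dim=0)
    # Mark silent classes with T so they get the lowest score.
    t_first = torch.where(fired, t_first, torch.full_like(t_first, T))
    logits = -t_first.float()            # earlier spike -> larger logit
    return F.cross_entropy(logits, target)

# Toy check: class 1 fires first for the single sample below.
spikes = torch.zeros(4, 1, 3)
spikes[0, 0, 1] = 1.0   # class 1 fires at step 0
spikes[2, 0, 0] = 1.0   # class 0 fires at step 2
loss = ttfs_loss(spikes, torch.tensor([1]))  # small loss, roughly 0.14
```

If your MLP's loss does not converge, a frequent culprit is exactly this non-differentiability of spike times; check how your training method propagates gradients through the firing times.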
