
change from internal integrator to DiffEq.jl (only IZ) #16

Draft · @Modatu wants to merge 1 commit into master from feature_diffeq_integrator

Conversation


@Modatu commented Jun 9, 2020

This is only a first try and so far introduces breaking changes!

What works:
Integration of several IZ neurons over time (see examples/iz_neuron.jl) in a broadcasting-friendly manner (thanks a lot to @jpsamaroo).
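For readers skimming this thread, here is a minimal sketch of that broadcasting-friendly, in-place pattern; the names are hypothetical and the real code lives in examples/iz_neuron.jl:

```julia
using RecursiveArrayTools  # provides ArrayPartition

# Izhikevich dynamics written so that v and u can be any broadcastable
# arrays (plain vectors, GPU arrays, ...). The parameter container p is
# assumed to be, e.g., a NamedTuple (a = ..., b = ..., I = ...); this is
# illustrative, not this PR's exact code.
function iz_integrate!(du, state, p, t)
    v, u = state.x      # unpack the state ArrayPartition
    dv, duu = du.x      # unpack the derivative ArrayPartition
    @. dv  = 0.04f0 * v^2 + 5f0 * v + 140f0 - u + p.I
    @. duu = p.a * (p.b * v - u)
    return nothing
end
```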

Questions that arise for me now:

  • Can parameters be in ArrayPartitions, too, so that we can have different types of neurons in one layer?
  • Do we need the IZ struct in addition to the IZParameter? Is that now better handled through the fire and affect! functions?

Next steps (as suggested by @jpsamaroo):

  • “thread” the du parameter through the functions leading from sim! to integrate! for in-place (IIP) integration

  • out-of-place (OOP) has to be figured out later (for autodiff, if we want that; low priority)

@jpsamaroo (Collaborator)

Great work!

> Can parameters be in ArrayPartitions, too, so that we can have different types of neurons in one layer?

Yes, absolutely, that's a great idea!
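Roughly, something like this could work (a sketch; the counts and Izhikevich constants below are made up for illustration):

```julia
using RecursiveArrayTools

# Per-neuron parameter vectors for two neuron types, packed into
# ArrayPartitions that mirror the state layout.
n_rs, n_fs = 80, 20                                         # regular- and fast-spiking counts
a = ArrayPartition(fill(0.02f0, n_rs), fill(0.10f0, n_fs))  # recovery time scale per neuron
d = ArrayPartition(fill(8.0f0, n_rs),  fill(2.0f0, n_fs))   # after-spike reset increment

# a and d now broadcast elementwise against a state ArrayPartition with
# the same partition sizes, so one layer can mix neuron types.
```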

> Do we need the IZ struct in addition to the IZParameter? Is that now better handled through the fire and affect! functions?

I think we generally want to keep the IZ struct, as we need to keep (at a minimum) the v and u data somewhere.
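The fire/reset logic itself maps naturally onto a DiffEq callback; a hedged sketch, assuming the v/u ArrayPartition layout from above rather than this PR's exact fire/affect! code:

```julia
using OrdinaryDiffEq

# Condition: has any neuron crossed the spike threshold?
spike_condition(state, t, integrator) = any(v -> v > 30f0, state.x[1])

# Affect: reset v and bump u for every neuron that fired.
function spike_affect!(integrator)
    v, u = integrator.u.x
    p = integrator.p                    # assumed NamedTuple (a, b, c, d, I)
    fired = v .> 30f0
    @. v = ifelse(fired, p.c, v)        # membrane reset
    @. u = ifelse(fired, u + p.d, u)    # recovery bump
end

spike_cb = DiscreteCallback(spike_condition, spike_affect!)
# ...then pass callback = spike_cb to solve.
```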

One thing that needs doing: you haven't yet added the new dependencies (OrdinaryDiffEq, Parameters, etc.) to the Project.toml, so this package won't necessarily work on a regular user's system until that's done.

I'm going to make my own PR or two to get the package ready for this PR; specifically, I can handle:

> “thread” the du parameter through the functions leading from sim! to integrate! for in-place (IIP) integration

@jpsamaroo (Collaborator)

Sorry, never mind, this PR hasn't yet been rebased on master. @Modatu, can you rebase on the SpikingNeuralNetworks master branch?

@Modatu (Author) commented Jun 10, 2020

Can I rebase? Isn't that only possible with write permissions on this repo?

@jpsamaroo (Collaborator)

You do the rebase locally and then force-push to your branch. Assuming you push your commits to a remote called Modatu, the main SpikingNeuralNetworks repository is origin, and you've already got your feature_diffeq_integrator branch checked out, you can do:

git fetch --all
git rebase origin/master
git push -f Modatu feature_diffeq_integrator

If you'd like practice with git rebase, I recommend trying the tutorial at https://git-rebase.io/

@Modatu force-pushed the feature_diffeq_integrator branch from 2f874df to 95c1bde on June 10, 2020
@Modatu (Author) commented Jun 10, 2020

@jpsamaroo Thanks! I rebased onto master and added the dependencies to the Project.toml.

@Modatu (Author) commented Jun 10, 2020

Going forward, I will try to incorporate the IZParameter into the ArrayPartition.
I have an idea of how this could work.

@jpsamaroo mentioned this pull request on Jun 16, 2020
@jpsamaroo marked this pull request as draft on June 16, 2020
@jpsamaroo (Collaborator)

Ok @Modatu, can you rebase again? My latest PR has made it so that we can specify what types to use for neuron/synapse internal fields, which is key for fancy things like autodiff or running on GPUs (although it may not have been strictly required, it's very good to have).
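To illustrate the idea (a hedged sketch, not the actual struct from that PR): parameterizing the field types lets the same model hold Float32 vectors, dual numbers for autodiff, or GPU arrays.

```julia
# Hypothetical type-parameterized neuron struct:
struct IZ{VT<:AbstractVector}
    v::VT   # membrane potentials, one per neuron
    u::VT   # recovery variables, one per neuron
end

# Convenience constructor; the default initial values are illustrative.
IZ(n::Integer; T = Float32) = IZ(fill(T(-65), n), fill(T(-13), n))
```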

What we need to do now is modify src/main.jl to use a DiffEq interface (like you're using here), and then modify the rest of the neuron/synapse models to use the same interface. I didn't change the actual simulation interface like I said I would, since that would have made it hard to keep my PR non-breaking while also not introducing dead/ugly code.

Let me know if you'd like help with any of this, as I can push commits to your branch.

@jpsamaroo (Collaborator)

Also, regarding the ArrayPartition: we should automatically convert our neuron/synapse structs into two ArrayPartitions within the simulator interface functions in src/main.jl: one for the parameters, the other, in general, for the rest of the fields. Then we can construct the ODEProblem and call solve for the user with a default solver and the specified timestep. Advanced users should be expected and encouraged to do some of this themselves to exert more control, and I think we can cater to them with conveniences in future PRs.
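Concretely, that conversion could look like this (a sketch under assumptions: the names simulate and izp and the per-neuron parameter vectors are illustrative, not the final interface):

```julia
using OrdinaryDiffEq, RecursiveArrayTools

# rhs! is assumed to unpack the state via u.x and the parameters via p.x.
function simulate(rhs!, iz, izp; duration = 100f0, dt = 0.1f0)
    u0 = ArrayPartition(iz.v, iz.u)                    # non-parameter fields
    p  = ArrayPartition(izp.a, izp.b, izp.c, izp.d)    # parameter fields
    prob = ODEProblem(rhs!, u0, (0f0, duration), p)
    solve(prob, Euler(); dt = dt)                      # default solver, user-specified timestep
end
```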

Note that some of the models that use SparseArrays probably won't work well like this (anything that mentions rowptr/colptr is usually sparse array-based); we might need to change those models to use a different implementation. I would worry about those last.

@Modatu force-pushed the feature_diffeq_integrator branch from 55953e0 to 35c742a on June 25, 2020
@Modatu (Author) commented Jun 25, 2020

@jpsamaroo I did the rebase, but it was a little bit confusing and the auto-merge did not really work.
I hope it is OK this way.

I looked at integrating the parameters into ArrayPartitions; that was not too straightforward at first glance.
What should be fairly easy for me to do is changing the simulator interface in src/main.jl.

@jpsamaroo (Collaborator)

Ok, no worries, I'm fixing it up locally and also getting the interface prepared so that this actually works. I'll ping you once it's pushed.

@jpsamaroo force-pushed the feature_diffeq_integrator branch from 35c742a to c6ba28d on November 8, 2020
@jpsamaroo (Collaborator)

I've completed the internal changes to make SNN's sim! API use OrdinaryDiffEq for solving, and done so in a way that is almost 100% type-stable and should be quite performant (although it now requires an extra step, SNN.prepare, to construct an efficient object to be used for solving). I need to convert the rest of the models to this format, and then make sure that most of the pre-existing API still works.

One thing to note is that the old sim!(N, S) call is now quite inefficient, since it constructs a new object from ArrayPartitions internally; the tradeoff is that solving larger models for longer durations will be much more efficient, and is now trivial to parallelize and run on accelerators.
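Usage would then look roughly like this (SNN.prepare and sim! are the names from above; the exact arguments and keywords here are my guesses, not the confirmed API):

```julia
model = SNN.prepare(N, S)            # one-time: build the solver-ready object
SNN.sim!(model; duration = 1000f0)   # solves via OrdinaryDiffEq internally
```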

@Modatu (Author) commented Apr 5, 2021

@jpsamaroo:
Sorry for the long delay; last year and this year have been somewhat crazy around here. ;-)
I pulled your changes, but now the code in /examples/iz_neuron.jl does not work anymore.
There are also some odd things I cannot wrap my head around:

function integrate!(_du, _u, p::IZ, t)

does not use t anymore, and I also do not really get your changes in the integrate! function.

function condition(iz::IZ, u)
    v, u = u.x
    any(_v -> _v > 30f0, v)
end

does not use iz.

BTW, if I try to run your code in examples/iz_neuron.jl I get:
ERROR: LoadError: MethodError: no method matching integrate!

Otherwise, I really like the progress!

@jpsamaroo (Collaborator)

Due to the complexity of this approach, and my lack of free time, I've mostly shelved this PR. I know @wsphillips is interested in doing this sort of thing, but he's starting from a purely DiffEq/MTK approach, which I think is probably a better way to get started.

@Modatu (Author) commented Apr 6, 2021

Ok, that's unfortunate.
Is @wsphillips working on that from scratch, or does he want to incorporate it into SpikingNeuralNetworks.jl?
It would be awesome if we could have something like nengo in Julia.
I would be very interested to simulate/work with Spaun- or Leabra-like models.
And I am happy to help, if I can.

@wsphillips

> Is @wsphillips working on that from scratch, or does he want to incorporate it into SpikingNeuralNetworks.jl?

Writing from scratch, using ModelingToolkit.jl + component modeling, so that we can flexibly performance-optimize user models and take advantage of the SciML ecosystem.
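For a flavor of that approach, a minimal Izhikevich neuron in ModelingToolkit might look like this (illustrative only; MTK's event syntax has changed across versions, and this is not @wsphillips's actual code):

```julia
using ModelingToolkit, OrdinaryDiffEq

@parameters a=0.02 b=0.2 c=-65.0 d=8.0 I=10.0
@variables t v(t)=-65.0 u(t)=-13.0
D = Differential(t)

eqs = [D(v) ~ 0.04v^2 + 5v + 140 - u + I,
       D(u) ~ a * (b * v - u)]

# Spike-and-reset expressed as a continuous event: when v crosses 30,
# reset v to c and bump u by d.
@named iz = ODESystem(eqs, t; continuous_events = [v ~ 30.0] => [v ~ c, u ~ u + d])

prob = ODEProblem(structural_simplify(iz), [], (0.0, 100.0))
sol = solve(prob, Tsit5())
```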

@Modatu (Author) commented Apr 7, 2021

Sounds great! I opened a discussion in your FutureNeuralSimulation repo. Maybe we can move there for further discussion?
