
[WIP] subtract exponential to remove offset (a.k.a baseline_renormalize) #270

Draft: wants to merge 2 commits into master

Conversation

cjayb (Collaborator) commented Feb 11, 2021

The PR is based on the gist shown here

The purpose is to demonstrate that examples that use postproc=True (the default) in simulate_dipole do not change in any meaningful way under the new correction (i.e., qualitatively this method does the job).

Tests will fail, however, as they are written against the old piecewise-linear normalisation method.
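For reference, this is roughly how one of those examples is run with post-processing enabled (a minimal sketch assuming the 2021-era API, where postproc defaults to True and the simulation length comes from the params file):

```python
import os.path as op

import hnn_core
from hnn_core import read_params, Network, simulate_dipole

# Load the default parameter file shipped with hnn-core
params_fname = op.join(op.dirname(hnn_core.__file__), 'param', 'default.json')
params = read_params(params_fname)

net = Network(params)
# postproc=True applies the baseline correction that this PR replaces
dpls = simulate_dipole(net, n_trials=1, postproc=True)
```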

Closes #220

@cjayb cjayb changed the title [WIP] subtract exponential to remove offset [WIP] subtract exponential to remove offset (a.k.a baseline_renormalize) Feb 11, 2021
cjayb (Collaborator, Author) commented Feb 11, 2021

Ping #265

cjayb (Collaborator, Author) commented Feb 12, 2021

The gist is updated to show that the exponential fits for both L2 and L5 dipole moments scale with network size, i.e., it makes sense to use the fit values multiplied by the number of cells in the network.

I think the origin of the (perfectly) exponential decay is membrane capacitance charging from an initial zero value. It makes sense that this would take a lot longer for the large L5 vs. L2 pyramidals. Note, however, that this will be input-dependent: as soon as synapses start activating, the time constants will change. So it's not feasible to remove the effect of capacitance by a simple exponential fit; see below.
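For concreteness, a sketch of the kind of per-cell exponential fit described above (only the general approach is from the gist; the grid dimensions and the `dpl` object are placeholders here):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_offset(t, amp, rate, offset):
    """Exponential relaxation on top of a constant offset."""
    return amp * np.exp(-rate * t) + offset

N_pyr_x = N_pyr_y = 10       # pyramidal grid dimensions (placeholder values)
n_cells = N_pyr_x * N_pyr_y  # express the fit per cell so it scales with network size

# `dpl` is the Dipole returned by a drive-free ("blank") simulation
popt_L5, _ = curve_fit(exp_offset, dpl.times, dpl.data['L5'] / n_cells,
                       p0=(1.0, 2.5e-3, -50.0))
popt_L2, _ = curve_fit(exp_offset, dpl.times, dpl.data['L2'] / n_cells,
                       p0=(1.0, 2.5e-3, 0.0))
```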

In d70209c I've used values for the initial Vm of all pyramidal cell segments calculated as shown in this gist. The dipole moment in L2 cells (right plot) jumps immediately to the steady-state value, but L5 cells (left plot) still require considerable time to stabilise (due to their size-capacitance product).

[figure better_Vm: dipole moments with the improved initial Vm; left: L5, right: L2]

The exponential fit for the L5 cells now has slightly different values, but it tells basically the same story as before:

[figure: updated exponential fit to the L5 dipole baseline]

To illustrate my point about synaptic activity making the steady-state values invalid, look at the alpha-example output of the latest commit

[screenshot: alpha example output with the exponential-based correction (this PR)]

compared with master:

[screenshot: alpha example output on master]

Sure enough, the baseline is now lifted to 0, but the exponential-based correction over-compensates, presumably because open synapses due to the drives shunt current along the dendrites and soma.

In summary, I think this PR explains both the initial "ringing" of the dipole moment, and the slow low-amplitude drift. However, it also speaks against doing any form of implicit 'baseline renormalisation'. Note that even a constant-removal will depend strongly on the length of the simulation.

@@ -602,22 +602,138 @@ def aggregate_data(self):
def state_init(self):
"""Initializes the state closer to baseline."""

# These were empirically derived from a 5 s blank simulation
Review comment (Collaborator):

so this would be a "different" model than the one currently in the GUI. I propose that we keep initializing the values as before, i.e., initialization per section rather than per segment, with a comment on how the numbers are derived and what approximation that makes, and then have this dictionary thing so that it can be generalized after the 0.1 release.

I'm not against this improved approximation per se, but I think it would be more valuable once we can load different cell types, where the more detailed initialization would help for a more detailed cell type. Otherwise, I'm not sure if it's worth deviating from the HNN-GUI model. For 0.1, I really want to be able to tell a GUI user: hey, give hnn-core a shot, it actually gives you the same output, without them trying to hunt down tiny numerical differences.
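A rough sketch of the per-section variant being proposed (the dict values and the `sections` mapping are placeholders, not the actual HNN-GUI numbers or hnn-core attributes):

```python
# Placeholder per-section initial potentials (mV); the real numbers would be
# the ones HNN-GUI uses, with a comment on how they were derived
vm_init = {'L5Pyr_soma': -72.0, 'L5Pyr_apical_trunk': -72.0}

def state_init(sections):
    """Apply one initial Vm per section; every segment gets the same value."""
    for sec_name, sec in sections.items():  # name -> nrn.Section mapping (assumed)
        if sec_name in vm_init:
            for seg in sec:
                seg.v = vm_init[sec_name]
```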

L5_exp = -4.60641e+00 * np.exp(2.51721e-03 * self.times) - 4.80496e+01
L2_exp = 4.43005e-02
self.data['L5'] -= L5_exp * N_pyr_x * N_pyr_y
self.data['L2'] -= L2_exp * N_pyr_x * N_pyr_y
Review comment (Collaborator):

Same comment as below. For 0.1, I would just explain the rationale better without changing the model.

jasmainak (Collaborator):
@cjayb could we perhaps zoom to discuss all that you have explored here so far? I think this is all very interesting and valuable. Has it been discussed at any of the group meetings so far?

@cjayb cjayb added bug Something isn't working hnn-gui HNN GUI labels Feb 17, 2021
cjayb (Collaborator, Author) commented Feb 17, 2021

My conclusion based on this "experiment" is that no form of 'baseline renormalisation' can be physiologically justified. The leak current down (and through) the L5 pyramidal cell dendrites is always going to depend on the input drives, so no constant, linear or exponential function is going to be appropriate.

This calls for a principled use of "baselining": on the short time scales of a simulation, simply removing a (50 ms or so) baseline would likely do wonders. A linear detrending could optionally be applied (an exponential with a 1/2.62e-3 ~ 390 ms time constant is very close to linear for short simulations).
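A minimal illustration of that kind of explicit baselining (an ad hoc helper, not hnn-core API; the 50 ms window is just the value suggested above):

```python
import numpy as np

def subtract_baseline(dpl, baseline_dur=50.0):
    """Subtract the mean of the first `baseline_dur` ms from each layer trace."""
    mask = dpl.times < baseline_dur
    for layer in ('L2', 'L5', 'agg'):
        dpl.data[layer] = dpl.data[layer] - np.mean(dpl.data[layer][mask])
    return dpl
```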

I'm labelling this as a bug until a principled decision can be made on how to proceed.

@cjayb cjayb marked this pull request as draft February 17, 2021 22:48
jasmainak (Collaborator):
Thinking more about it, perhaps "burn-in" is indeed a better name than "baseline" if we want to discard the first x ms.

jasmainak (Collaborator):
@cjayb I'm looking at this PR again and also network_builder.py. It seems the values in state_init are currently hard-coded, and that's a point of contention. Do you think it would make sense to add a state_init parameter to the Cell object that initializes the state based on a dictionary/list? That way, if you want to change the initialization based on more principled methods, it should be easy to do so.
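Something like the following is presumably what's meant (a sketch only; the constructor signature and internal section mapping are assumed, not current hnn-core code):

```python
class Cell:
    def __init__(self, name, p_secs, state_init=None):
        # state_init: {section name: {range variable: initial value}},
        # e.g. {'soma': {'v': -72.0}}; an empty dict keeps NEURON defaults
        self.name = name
        self.p_secs = p_secs
        self.state_init = state_init if state_init is not None else {}

    def _apply_state_init(self, nrn_sections):
        """Push the requested initial values onto the built NEURON sections."""
        for sec_name, states in self.state_init.items():
            for seg in nrn_sections[sec_name]:
                for var, value in states.items():
                    setattr(seg, var, value)
```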

cjayb (Collaborator, Author) commented May 11, 2021

I'm beginning to think that the state_init is pretty useless: the problem isn't that the seg.v's are too far from resting potential. Rather, the several overlapping mechanisms (hh2, ca, cat, ...) all have in-built state variables (m, n, ..., tau_this and tau_that). We need to figure out how to pass sane values for those states. I think it's possible; I just haven't quite figured out how yet.
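One route worth noting here is NEURON's SaveState, which captures all membrane potentials and mechanism state variables after a long blank run and restores them later; whether this plays nicely with hnn-core's network building is exactly what would need figuring out. A minimal sketch:

```python
from neuron import h

h.load_file('stdrun.hoc')

# After building the model, run a long drive-free simulation to steady state
h.finitialize(-65)
h.continuerun(5000)  # ms

# Capture every Vm and mechanism state variable and write it to disk
ss = h.SaveState()
ss.save()
f = h.File('steady_state.bin')
ss.fwrite(f)

# Later, with an identical model topology:
ss = h.SaveState()
f = h.File('steady_state.bin')
ss.fread(f)
h.finitialize()
ss.restore()  # note: t is restored too and may need resetting before the real run
```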

jasmainak (Collaborator) commented May 11, 2021

That you can already ... well almost

p_secs[sec_name]['mechs']['ca']['gbar_ca'] = 1e-3

anything that's in the PARAMETER block should be possible to change in this way
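For example, applied across the whole cell description (assuming p_secs is the per-section dict shown above):

```python
# Scale the calcium conductance in every section that has the 'ca' mechanism
for sec_name, sec_props in p_secs.items():
    if 'ca' in sec_props['mechs']:
        sec_props['mechs']['ca']['gbar_ca'] = 1e-3
```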

cjayb (Collaborator, Author) commented May 12, 2021

I think not, I'm pretty sure we need to use this with type=1. We want to set the internal state variables, not just the external parameter values.
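Assuming the reference is to NEURON's FInitializeHandler: a type=1 handler runs after the mechanisms' INITIAL blocks during finitialize(), so values set there overwrite the built-in state initialisation. A minimal sketch (the mechanism and the value are placeholders):

```python
from neuron import h

def set_internal_states():
    # Overwrite mechanism state variables after their INITIAL blocks have run
    for sec in h.allsec():
        for seg in sec:
            if hasattr(seg, 'hh2'):
                seg.hh2.m = 0.05  # placeholder value, not a fitted one

fih = h.FInitializeHandler(1, set_internal_states)
h.finitialize(-65)
```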

@cjayb cjayb mentioned this pull request Jun 17, 2021
jasmainak (Collaborator) commented Dec 9, 2021

@cjayb I'm looking at this with fresh eyes. I agree with you that "baseline normalization" is not really justified. Instead we need to explain to users the concept of "steady state" in a dynamical system in a tutorial. With an optional "burn in" parameter somewhere.

And then in our tutorials, use an initialization through a method but mention the caveat that we have empirically observed this is similar to the "steady state" and are using this only because it "works" and to keep the simulation short. What do you think?

rythorpe (Contributor):
> @cjayb I'm looking at this with fresh eyes. I agree with you that "baseline normalization" is not really justified. Instead we need to explain to users the concept of "steady state" in a dynamical system in a tutorial. With an optional "burn in" parameter somewhere.

I'm not Chris, but I vote +1 for this!

> And then in our tutorials, use an initialization through a method but mention the caveat that we have empirically observed this is similar to the "steady state" and are using this only because it "works" and to keep the simulation short. What do you think?

When you say "initialization through a method", you mean just rebrand the baseline normalization code and clarify that it's just a hack that we use in examples?

cjayb (Collaborator, Author) commented Dec 10, 2021

I like both suggestions! I understood Mainak's comment as Ryan did. I feel a simpler ad hoc 'normalisation', such as an exponential, might be more appropriate than the piecewise-linear solution that's in there now. As long as it's documented, it shouldn't really matter.

I think it still makes sense to investigate whether the internal state variables could be saved and set. If it turns out to be problematic, though, a multi-trial simulation should be done using a more M/EEG-like epoching approach (one long sim, with event markers). The latter might be preferable semantically anyway...
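To make the epoching idea concrete, a small helper that slices one long dipole trace into fixed-length epochs around event markers (illustrative only; names and window lengths are made up):

```python
import numpy as np

def epoch_dipole(times, data, event_times, tmin=-50.0, tmax=150.0):
    """Cut one long trace into epochs of [tmin, tmax) ms around each event."""
    dt = times[1] - times[0]
    n_samp = int(round((tmax - tmin) / dt))
    epochs = []
    for ev in event_times:
        start = int(round((ev + tmin - times[0]) / dt))
        if start < 0 or start + n_samp > len(data):
            continue  # skip events too close to the edges of the simulation
        epochs.append(data[start:start + n_samp])
    return np.asarray(epochs)
```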

rythorpe (Contributor):
I like the idea of subtracting a fitted exponential function as well. Are we cool with updating our ground truth default dpl.txt file?

jasmainak (Collaborator):
I'm fine with it. I'd say hnn-core has kind of become the standard now.
