[WIP] subtract exponential to remove offset (a.k.a baseline_renormalize) #270
base: master
Conversation
Ping #265

The gist is updated to show that the exponential fits for both L2 and L5 dipole moments scale with network size, i.e., it makes sense to use the fit values multiplied by the number of cells in the network. I think the origin of the (perfectly) exponential decay is membrane capacitance charging from an initial zero value. It makes sense that this would take a lot longer for the large L5 vs. L2 pyramidals. Note however that this will be input-dependent: as soon as synapses start activating, the time constants will change. So it's not feasible to remove the effect of capacitance by a simple exponential fit; see below.

In d70209c I've used values for the initial Vm of all pyramidal cell segments calculated as shown in this gist. The dipole moment in L2 cells (right plot) jumps immediately to the steady-state value, but L5 cells (left plot) still require considerable time to stabilise (due to their size-capacitance product). The exponential fit for the L5 cells now has slightly different values, but it's basically the same story as before.

To illustrate my point about synaptic activity making the steady-state values invalid, look at the alpha-example output of the latest commit. Sure enough, the baseline is now lifted to 0, but the exponential-based correction over-compensates, presumably because open synapses due to the drives shunt current along the dendrites and soma.

In summary, I think this PR explains both the initial "ringing" of the dipole moment and the slow low-amplitude drift. However, it also speaks against doing any form of implicit 'baseline renormalisation'. Note that even a constant-removal will depend strongly on the length of the simulation.
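As an illustration of the fitting procedure described above -- a sketch only, not the gist's actual script; the synthetic trace, the 5 s duration and the grid size are assumptions -- one could fit a three-parameter exponential per layer and scale it by the number of pyramidal cells:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_offset(t, a, b, c):
    """Exponential term plus constant offset, as in the PR's fit."""
    return a * np.exp(b * t) + c

# Stand-in for the per-cell L5 dipole moment of a 5 s drive-free
# ("blank") simulation; in practice this would come from hnn-core output.
times = np.arange(0, 5000.0, 0.5)  # ms
dpl_L5_per_cell = exp_offset(times, -4.6, 2.5e-3, -48.0)

popt, _ = curve_fit(exp_offset, times, dpl_L5_per_cell,
                    p0=(-5.0, 2e-3, -50.0))

# Scale the single-cell fit by the number of pyramidal cells in the
# grid before subtracting it from the aggregate dipole moment.
N_pyr_x = N_pyr_y = 10  # assumed grid dimensions
baseline_L5 = exp_offset(times, *popt) * (N_pyr_x * N_pyr_y)
```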
@@ -602,22 +602,138 @@ def aggregate_data(self):

    def state_init(self):
        """Initializes the state closer to baseline."""

        # These were empirically derived from a 5 s blank simulation
so this would be a "different" model than the one currently in the GUI. I propose that we keep initializing the values as before -- i.e., initialization per section, not per segment -- with a comment on how the numbers are derived and what approximation this makes, and then have this dictionary so that it can be generalized after the 0.1 release.

I'm not against this improved approximation per se, but I think it would be more valuable in a context where we can load different cell types, where the more detailed initialization would help. Otherwise, I'm not sure it's worth deviating from the HNN-GUI model. For 0.1, I really want to be able to tell a GUI user: hey, give hnn-core a shot, it actually gives you the same output -- without them having to hunt down tiny numerical differences.
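To make the per-section vs. per-segment distinction concrete, here is a toy NEURON snippet (the section, nseg and voltage values are made up, not HNN's actual numbers):

```python
from neuron import h

dend = h.Section(name='dend')
dend.nseg = 5

# Per-section initialization (HNN-GUI style): one empirical value
# applied to every segment of the section.
for seg in dend:
    seg.v = -72.0

# Per-segment initialization (this PR's approach): each segment gets
# its own value, e.g. computed from the steady-state voltage profile.
v_profile = {0.1: -71.6, 0.3: -71.8, 0.5: -72.0, 0.7: -72.2, 0.9: -72.4}
for seg in dend:
    seg.v = v_profile[round(seg.x, 1)]
```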
        L5_exp = -4.60641e+00 * np.exp(2.51721e-03 * self.times) - 4.80496e+01
        L2_exp = 4.43005e-02
        # scale the per-cell fit values by the number of pyramidal cells
        self.data['L5'] -= L5_exp * N_pyr_x * N_pyr_y
        self.data['L2'] -= L2_exp * N_pyr_x * N_pyr_y
same comment as below. For 0.1, I would just explain the rationale better without changing the model.

@cjayb could we perhaps Zoom to discuss all that you have explored here so far? I think this is all very interesting and valuable. Has it been discussed at any of the group meetings so far?
My conclusion based on this "experiment" is that no form of 'baseline renormalisation' can be physiologically justified. The leak current down (and through) the L5 pyramidal cell dendrites is always going to depend on the input drives, so no constant, linear or exponential function is going to be appropriate. This calls for a principled use of "baselining": on the short time scales of a simulation, simply removing a (50 ms-or-so) baseline would likely do wonders. A linear detrending could optionally be applied on top (an exponential with a sufficiently long time constant is effectively linear).

I'm labelling this as a bug until a principled decision can be made on how to proceed.
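A minimal sketch of the short-baseline removal suggested above (the names, the 50 ms window and the synthetic trace are illustrative assumptions, not part of the PR):

```python
import numpy as np

def remove_baseline(dpl, times, t_baseline=50.0):
    """Subtract the mean over the first `t_baseline` ms from a dipole trace."""
    return dpl - dpl[times < t_baseline].mean()

# Synthetic trace: white noise riding on a constant offset
times = np.arange(0, 700.0, 0.025)  # ms
rng = np.random.default_rng(0)
dpl = -48.0 + 0.1 * rng.standard_normal(times.shape)

dpl_corrected = remove_baseline(dpl, times)  # now centred near 0
```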
Thinking more about it, perhaps "burn-in" is indeed a better name than "baseline" if we want to discard the first x ms.
@cjayb I'm looking at this PR again and also
I'm beginning to think that the
That you can already ... well, almost:

p_secs[sec_name]['mechs']['ca']['gbar_ca'] = 1e-3

anything that's in the PARAMETER block should be possible to change in this way
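For context, a sketch of the dictionary structure that line assumes (the section name and default value are hypothetical; only the gbar_ca assignment itself is from the comment above):

```python
# Hypothetical slice of a p_secs dictionary: one entry per section,
# with a 'mechs' sub-dict mapping mechanism names to PARAMETER values.
p_secs = {
    'apical_trunk': {
        'mechs': {
            'ca': {'gbar_ca': 60.0},  # placeholder default
        },
    },
}

# Any value declared in a mechanism's PARAMETER block can be
# overridden this way before the cell is instantiated:
for sec_name in p_secs:
    p_secs[sec_name]['mechs']['ca']['gbar_ca'] = 1e-3
```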
I think not, I'm pretty sure we need to use this with
@cjayb I'm looking at this with fresh eyes. I agree with you that "baseline normalization" is not really justified. Instead we need to explain to users, in a tutorial, the concept of a "steady state" in a dynamical system, with an optional "burn-in" parameter somewhere. And then in our tutorials, use an initialization through a method, but mention the caveat that we have empirically observed it to be similar to the "steady state", and that we are using it only because it "works" and keeps the simulation short. What do you think?
I'm not Chris, but I vote +1 for this!
When you say "initialization through a method", you mean just rebrand the baseline normalization code and clarify that it's just a hack that we use in examples?
I like both suggestions! I understood Mainak's comment as Ryan did. I feel a simpler ad hoc 'normalisation', such as an exponential, might be more appropriate than the piecewise-linear solution that's in there now. As long as it's documented, it shouldn't really matter. I think it still makes sense to investigate whether the internal state variables could be saved and set; see the sketch below. If that turns out to be problematic, though, a multi-trial simulation should be done using a more M/EEG-like epoching approach (one long sim, with event markers). The latter might be preferable semantically anyway...
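Saving and setting the internal state variables is possible in principle through NEURON's SaveState; a minimal sketch under stated assumptions (the burn-in length, file name and toy single-section model are made up, and this is plain NEURON, not hnn-core API):

```python
from neuron import h
h.load_file('stdrun.hoc')

soma = h.Section(name='soma')  # stand-in for the full network
soma.insert('hh')

# Run past the capacitive transient ("burn-in"), then snapshot all
# state variables (Vm, gating states, ionic concentrations).
h.finitialize(-65)
h.continuerun(500)  # ms
ss = h.SaveState()
ss.save()
f = h.File('steady_state.bin')
ss.fwrite(f)

# Later, with the same model topology: restore and simulate from there.
h.finitialize(-65)
f2 = h.File('steady_state.bin')
ss.fread(f2)
ss.restore()
h.continuerun(h.t + 200)  # continue 200 ms from the restored state
```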
I like the idea of subtracting a fitted exponential function as well. Are we cool with updating our ground truth default dpl.txt file?
I'm fine with it. I'd say
The PR is based on the gist shown here.

The purpose is to demonstrate that examples that set postproc=True (default) in simulate_dipole do not change in any meaningful way (i.e., qualitatively this method does the job). Tests will fail, however, as these are against the old piecewise-linear normalisation method.
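For reference, a sketch of how the flag is exercised (the parameter-file name is illustrative; the calls assume the hnn-core API as described above):

```python
from hnn_core import read_params, Network, simulate_dipole

params = read_params('default.json')  # illustrative parameter file
net = Network(params)

# postproc=True (the default) applies the dipole post-processing that
# this PR changes from piecewise-linear to an exponential subtraction.
dpls = simulate_dipole(net, postproc=True)
```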
Closes #220