[MRG] Fix average_dipoles to allow for one trial #368
Conversation
hnn_core/network.py
Outdated
@@ -887,7 +887,7 @@ def _instantiate_drives(self, n_trials=1):
                 drive['name']]['events'].append(event_times)

     def add_tonic_bias(self, *, cell_type=None, amplitude=None,
-                       t0=None, T=None):
+                       t0=None, tstop=None):
@kenloi it looks like there are changes from your old pull request in this pull request. Every time you make a new PR, you should fetch a fresh copy of master and branch from there:
$ git fetch upstream master:master
$ git checkout master
$ git branch fix_average_dipoles
$ git checkout fix_average_dipoles
Now, since you already started the work, I would recommend using git cherry-pick in the new branch. What you should do to fix the current situation is the following:
$ git branch -m fix_average_dipoles_old
$ git fetch upstream master:master
$ git checkout master
$ git branch fix_average_dipoles
$ git checkout fix_average_dipoles
$ git cherry-pick <commit1-hash>
$ git cherry-pick <commit2-hash>
$ git push -f origin fix_average_dipoles
does that make sense?
Just read this^ @jasmainak. That makes sense. I'll try and fix my current branch. Thanks!
Codecov Report

    @@           Coverage Diff           @@
    ##           master     #368   +/-   ##
    =======================================
      Coverage   89.94%   89.94%
    =======================================
      Files          16       16
      Lines        2934     2934
    =======================================
      Hits         2639     2639
      Misses        295      295

Continue to review full report at Codecov.
Force-pushed from 681f89a to 9310f28.
hnn_core/dipole.py
Outdated
        return dpls[0]
    elif not dpls:
        raise ValueError("Need at least one dipole object to return a"
                         " dipole")
I would instead raise an error in the loop below if any of the objects in the list is not an instance of Dipole. Look around in the codebase to see how it's done.
Could you also update a test after doing this? So you get familiar with the test suite and how it works?
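A minimal sketch of that in-loop check, reusing the error text discussed later in this thread (a sketch, not the exact merged code):

    def average_dipoles(dpls):
        # Validate every element before averaging; catches non-Dipole
        # entries such as booleans or plain arrays.
        for dpl in dpls:
            if not isinstance(dpl, Dipole):
                raise ValueError("All elements in the list should be "
                                 f"instances of Dipole. Got {type(dpl)}")
        # ... averaging proceeds below ...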
Yea, that makes sense. I'll put the error detection into the loop and update a test.
Updated both the loop and the test. I haven't created test cases before, so I hope that what I did makes sense. I used the "# average two identical dipole objects" test as a guide for creating the single-dipole-average test. Are there docs that explain the standards for developing test cases, with examples?
hnn_core/dipole.py
Outdated
    if len(dpls) < 2:
        raise ValueError("Need at least two dipole object to compute an"
                         " average")
    if len(dpls) == 1:
I think this function shouldn't throw any warnings if:
- You get a list (with len > 0)
- Each element of the list is a Dipole object

Also, if you really want to throw a warning, you should use the warnings module:

    from warnings import warn
    warn('important message')
@jasmainak This is me just nitpicking, but I was thinking that it might help to let users know that, when calling the average_dipoles() function after only one trial was run, they are effectively not averaging anything. I'm cool with removing the message if you think it doesn't help much.
I also prefer to limit the number of warnings and only throw them when there's something to be careful about. I think average should work silently for a single instance; it makes it more flexible.
Alrighty, I'll remove the message :)
hnn_core/tests/test_dipole.py
Outdated
@@ -48,6 +48,12 @@ def test_dipole(tmpdir, run_hnn_core_fixture):
                              "average of 2 trials"):
         dipole_avg = average_dipoles([dipole_avg, dipole_read])

+    # average single dipole object
+    dpl_avg1 = average_dipoles([dipole])
What about this for a test: you can create a Dipole object with two simulations, but set the data stored under the second trial to dpl[1].data = np.zeros(len(dpl[0].times)). Then your test can be:

    assert np.all(dpl_avg == dpl[0].data / 2)

I think the current test is good and should remain, but it'd be good to make sure the average function is actually doing some averaging too.
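Concretely, the suggested test might look like this (a sketch: it assumes dpl is a two-trial list of Dipole objects and that data is a dict keyed by 'agg', 'L2', and 'L5', as in the tests later in this thread):

    import numpy as np

    # zero out every channel of the second trial
    for key in dpl[1].data:
        dpl[1].data[key] = np.zeros(len(dpl[0].times))

    dpl_avg = average_dipoles(dpl)
    for key in dpl_avg.data:
        # averaging trial 0 with an all-zeros trial halves every value
        assert np.allclose(dpl_avg.data[key], dpl[0].data[key] / 2)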
@ntolley Sounds good! I'll give it a try.
@ntolley I tried your implementation of setting the data for the second trial to zero. However, with the assertion

    assert np.all(dpl_avg == dpl[0].data / 2)

the entire array ends up set to 0. floats. This raises an IndexError later when trying to compute the average, because numpy isn't able to compute a mean for:

    [0. 0. 0. ... 0. 0. 0.]

I'll try and find a way around this in the meantime. I might have to rely on a different way to set the data to 0.
You should use np.allclose to compare arrays.
hnn_core/dipole.py
Outdated
    for dpl_idx, dpl in enumerate(dpls):
        if not dpls:
This is not a type check. If I give dpls = True, then this check will not raise any error. You need to use isinstance... and check for dpl, not dpls.
Oops, I'll take the heat for this; I think this was a suggestion I gave in a brief meeting with @kenloi. I misunderstood the purpose of the original check: isinstance() is definitely the way to go.
Alternatively, check out how _validate_type() is used in the code. It provides a boilerplate error message, as these sorts of checks come up frequently.
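For reference, a hypothetical sketch of how such a helper typically works; the actual hnn-core _validate_type() and its signature may differ:

    def _validate_type(item, types, item_name):
        # Hypothetical helper: raise a boilerplate error when the type
        # check fails, so callers don't hand-write the message each time.
        if not isinstance(item, types):
            raise TypeError(f"{item_name} should be an instance of "
                            f"{types}, got {type(item)} instead")

    for dpl in dpls:  # dpls as in average_dipoles above
        _validate_type(dpl, Dipole, 'dpl')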
@jasmainak That makes sense. I'll take care of it asap.
@ntolley Thanks for the suggestion, will check it out.
hnn_core/tests/test_dipole.py
Outdated
    single_dpl_avg = average_dipoles([dipole])
    for dpl_key in single_dpl_avg.data.keys():
        np.allclose(dipole_read.data[dpl_key],
                    single_dpl_avg.data[dpl_key], rtol=0, atol=0.000051)
the test won't fail with this, sorry my bad. You have to use:
https://numpy.org/doc/stable/reference/generated/numpy.testing.assert_allclose.html
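Applied to the loop in the diff above, that would read (a sketch using the same variables):

    from numpy.testing import assert_allclose

    for dpl_key in single_dpl_avg.data.keys():
        # raises AssertionError on mismatch, so pytest reports a failure;
        # a bare np.allclose call only returns a bool that nobody checks
        assert_allclose(dipole_read.data[dpl_key],
                        single_dpl_avg.data[dpl_key])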
hnn_core/tests/test_dipole.py
Outdated
    dpl_avg = average_dipoles(dpl_1)
    for dpl_key in dpl_avg.data.keys():
        np.allclose(dpl_1[0].data[dpl_key],
                    dpl_avg.data[dpl_key], rtol=0, atol=0.000051)
Suggested change:
-            dpl_avg.data[dpl_key], rtol=0, atol=0.000051)
+            dpl_avg.data[dpl_key])

These numbers are very strange. I think we should just go with the defaults if that works.
@jasmainak You're right, it works with the defaults. I set the numbers because I saw them used in the previous tests and figured that those were the accepted tolerances.
hnn_core/tests/test_dipole.py
Outdated
    dpl_null = dipole_read
    dpl_null.data['agg'] = np.zeros(len(dpl_null.data['agg']))
    dpl_null.data['L2'] = np.zeros(len(dpl_null.data['L2']))
    dpl_null.data['L5'] = np.zeros(len(dpl_null.data['L5']))
Why not just use the constructor: https://jonescompneurolab.github.io/hnn-core/dev/generated/hnn_core.dipole.Dipole.html#hnn_core.dipole.Dipole

    n_times = len(dipole_read.times)
    dpl_zeros = Dipole(np.zeros((n_times,)), np.zeros((n_times, 3)))
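Plugged into the test, that might look like the following sketch (assuming the constructor takes (times, data) with data columns mapped to 'agg', 'L2', and 'L5', and that average_dipoles lives in hnn_core.dipole):

    import numpy as np
    from hnn_core.dipole import Dipole, average_dipoles

    n_times = len(dipole_read.times)
    # an all-zeros dipole sharing the original time axis
    dpl_null = Dipole(dipole_read.times, np.zeros((n_times, 3)))
    dpl_avg = average_dipoles([dipole_read, dpl_null])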
@jasmainak I like that. Much cleaner. Will go make the changes.
hnn_core/dipole.py
Outdated
raise ValueError("Need at least one dipole object to return a" | ||
" dipole") |
There needs to be a test to check that this works. Or is there already a test? Also, I think the text should be changed to reflect the fact that all objects in the list must be instances of Dipole. Also, when you raise an error, you should tell the user what you got. Something like:

Suggested change:
-    raise ValueError("Need at least one dipole object to return a"
-                     " dipole")
+    raise ValueError(f"All elements in the list should be instances of Dipole. Got {type(dpl)}")
@jasmainak Yea, this test already exists. It's under line 51, at the comment # average an n_of_1 dipole list, which calls average_dipoles() on a list with only one dipole.
And thanks for the more verbose suggestion. I'll go make the change.
hnn_core/tests/test_dipole.py
Outdated
    dpl_1 = [dipole, dpl_null]
    dpl_avg = average_dipoles(dpl_1)
    for dpl_key in dpl_avg.data.keys():
        np.allclose(dpl_1[0].data[dpl_key],
Maybe I am misunderstanding the code, but I think this should fail. Since you're averaging a dipole of random numbers with zeros, shouldn't every value be cut in half?
And I see the tests passing so now I'm extra confused 😄
@ntolley Yea, I see what you're talking about. It shouldn't be working. I'll try debugging and see what's really happening.
It's because np.allclose returns a bool. You need an assert statement to make it fail.
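In other words (a toy illustration, not code from the PR):

    import numpy as np

    a = np.array([1.0, 2.0])
    b = np.array([0.5, 1.0])

    np.allclose(a, b)                 # returns False; the test keeps passing
    assert np.allclose(a, b)          # raises AssertionError, so it fails
    np.testing.assert_allclose(a, b)  # fails with a descriptive message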
Thanks @jasmainak. I think I was able to fix it with your suggestion. Thanks @ntolley for bringing it to my attention.
@ntolley @jasmainak @cjayb I think this issue is complete and ready for a final review. I took everyone's input into account. If anyone has the time, please take a look and let me know if there's anything else to fix. I've also updated the whats_new.rst documentation.
Looks great @kenloi, I think this is good to merge after a rebase (we'll make sure to merge this PR before any others since you've had to rebase a few times already...)
Thanks @ntolley. I rebased again about an hour ago. Hopefully that resolves some of the conflicts?
So it looks like you have a merge conflict. Essentially, if there have been any pull requests merged since your last rebase, you need to rebase again. Your IDE should have an interface to help resolve conflicts, similar to this: https://code.visualstudio.com/docs/editor/versioncontrol#_merge-conflicts. The last PR merged added some code to the same file, which is why you're seeing the conflict. Let me know if you want some help going through the steps for resolving a merge conflict.
@kenloi you'll get the hang of github soon :) Remember, for contributing to public repositories, only rebase is allowed. Don't merge the master branch into your feature branch; instead do:
$ git fetch upstream master:master
$ git rebase master
hnn_core/tests/test_dipole.py
Outdated
    # average an n_of_1 dipole list
    single_dpl_avg = average_dipoles([dipole])
    for dpl_key in single_dpl_avg.data.keys():
        np.testing.assert_allclose(
Suggested change:
-        np.testing.assert_allclose(
+        assert_allclose(
hnn_core/tests/test_dipole.py
Outdated
    dpl_1 = [dipole, dpl_null]
    dpl_avg = average_dipoles(dpl_1)
    for dpl_key in dpl_avg.data.keys():
        assert np.allclose([data / 2 for data in dpl_1[0].data[dpl_key]],
Suggested change:
-        assert np.allclose([data / 2 for data in dpl_1[0].data[dpl_key]],
+        assert_allclose(dpl_1[0].data[dpl_key] / 2.,
Force-pushed from 4556a75 to 46f52ea.
I've rebased and implemented the changes suggested by @jasmainak. However, it looks like one of the tests failed. I ran pytest locally. I'll try digging into the details of the report, but I'm not sure what this means.
No need to worry about the tests; if only one of the builds is failing, that usually means it's a problem on the server side. I just restarted the jobs and expect them to pass (for some reason our tests can be pretty finicky, I'm mildly suspicious it's a neuron problem...). Also, for the code review suggestions, if they look reasonable you can actually commit them directly with GitHub's suggestion feature.
@ntolley Yea, I'm quickly realizing that my workflow is pretty inefficient. Thanks for the tip!
@jasmainak @cjayb I think this PR is ready for another round of reviews. @ntolley has reviewed and approved. Just to double check: if I think the PR is ready to be merged, at what point do I switch the tag to [MRG]?
You can go ahead and switch the tag! I'm not sure if we have an official system for [MRG], but in general we just use it to signal that we're ready for reviews (not that the PR is actually perfect). [WIP] is meant to indicate that we're not quite ready for input yet and are aware of glaring errors.
Yeah, [MRG] is to indicate you are yourself satisfied with what you have done and would be okay if the PR was merged as it stands. [WIP] means you are just beginning to work on it; early feedback is welcome but not nitpicks...
@kenloi two final nitpicks. Once you fix them, it's +1 for merge from my end. Hope you're learning stuff and not losing motivation. Future PRs will definitely be easier and fun!
Co-authored-by: Mainak Jas <[email protected]>
Thanks @kenloi! Looking forward to the next contribution :)
Closes #306
Fixed a bug in the average_dipoles() function in dipole.py, now allowing for computation of an average even for an n_of_1 simulation. Essentially, the function now just returns the first and only Dipole instance in the input list of dipoles.
For n_of_1 trials, I decided to also include a message to let users know that, when calling an average on a sample size of 1, they should consider simulating at least two trials to properly produce an average.
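Pieced together from the review thread, the fixed function plausibly looks something like this; a sketch under the assumptions above (a Dipole(times, data) constructor with data columns 'agg', 'L2', 'L5'), not the exact merged code:

    import numpy as np

    def average_dipoles(dpls):
        """Average a list of Dipole objects; a one-element list is valid."""
        for dpl in dpls:
            if not isinstance(dpl, Dipole):
                raise ValueError("All elements in the list should be "
                                 f"instances of Dipole. Got {type(dpl)}")
        # average each channel across trials; with a single trial this
        # simply reproduces the original data
        avg = {key: np.mean([dpl.data[key] for dpl in dpls], axis=0)
               for key in ('agg', 'L2', 'L5')}
        return Dipole(dpls[0].times,
                      np.c_[avg['agg'], avg['L2'], avg['L5']])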