events.load_trials doesn't work with stim length > 1 min #30
Lauren and @nvahidi have both run into this issue.

A couple of possible solutions:

Any recommendations?

@neuromusic @zekearneodo @sasen @theilmbh

Either 2 or 3.

Do 2. I think there will be a performance penalty in loading the trials for long recordings if it has to figure out for itself when stimuli might end. Unless you know a cleverer way to do 3.
Here is roughly how to do option 3: the trial parsing algorithm finds digmarks that indicate stim ends (I did this because it's more reliable than the stim-start digmark), so the trials dataframe is built from the digmark dataframe on the next line. So the stim-end info is grabbed, then discarded... who is the idiot who wrote this code!? A better approach might be something like:

stim_end_mask = digmarks['codes'].isin(('>', '#'))
# select the stim-end rows as a DataFrame and rename the column
# (note: rename needs columns= here; calling .rename({'codes': ...}) on a
# single-column Series would relabel index entries, not the column)
trials = digmarks.loc[stim_end_mask, ['codes']].rename(columns={'codes': 'stimulus_end'})
# get_stim_start will need to be rewritten to return... something else. just the stim start time I guess?
trials['stimulus_start'] = trials.apply(lambda row: get_stim_start(row, digmarks), axis=1)
trials.reset_index(inplace=True)
# then do the rest of the trial info parsing...
trials['stimulus'] = trials.apply(lambda row: get_stim_info(row, stimulus, fs)['text'], axis=1)
trials['response'] = trials.apply(lambda row: get_response(row, digmarks, fs)['codes'], axis=1)
trials['response_time'] = trials.apply(lambda row: get_response(row, digmarks, fs)['time_samples'], axis=1)
trials['consequence'] = trials.apply(lambda row: get_consequence(row, digmarks, fs)['codes'], axis=1)
trials['correct'] = trials['consequence'].apply(is_correct)
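For illustration, here is a minimal runnable sketch of what a rewritten get_stim_start could look like if it returned just the stim start time, as suggested in the comment above. The '<' start code, the 'time_samples' column, and the toy digmark stream are assumptions made for the example, not confirmed digmark conventions (only '>' and '#' are confirmed above, as end codes):

import pandas as pd

def get_stim_start(row, digmarks):
    # hypothetical rewrite: return only the stim start time (in samples)
    # assumption: '<' marks stimulus starts and digmarks is sorted by time
    starts = digmarks[(digmarks['codes'] == '<')
                      & (digmarks['time_samples'] < row['time_samples'])]
    # the start belonging to this trial is the last one before its end
    return starts['time_samples'].iloc[-1]

# toy digmark stream: two trials, with ends marked '>' and '#'
digmarks = pd.DataFrame({
    'codes':        ['<', '>', '<', '#'],
    'time_samples': [100, 2500, 9000, 75000],
})

stim_end_mask = digmarks['codes'].isin(('>', '#'))
trials = (digmarks.loc[stim_end_mask, ['codes', 'time_samples']]
          .rename(columns={'codes': 'stimulus_end'}))
trials['stimulus_start'] = trials.apply(
    lambda row: get_stim_start(row, digmarks), axis=1)
print(trials)  # one row per trial, with stimulus_start filled in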
I solved this using rigid_pandas. Not sure if load_trials ever got fixed, or if people still use it...