Q: Events also look substantially different depending on whether they are recorded close to or far from the station. I haven't thought much about this, but perhaps the event-to-station distance, or the event location, should be a 'feature' in the model, or used in weighting somehow. The problem is we don't have locations for 99% of events, but the Amplitude Source Location method I independently invented in 2000 could give useful constraints.
A: Our discussion centred on the fact that an event near MBGH could look like noise on MBWH, and vice versa. So we need a way to filter out noise traces: perhaps with a separate ML step, or with regular QC metrics, e.g. checking whether the amplitude and frequency content change significantly within a waveform. Alternatively, we could eliminate noise traces manually (or label events per channel rather than per event) and build up a labelled noise-versus-signal set for an earlier machine learning step. If we do not do this, event classifications could be biased by noise, e.g. the MBWH LP class might be trained on 70 LPs and 30 noise traces. The same applies to all channels and event classes. (I recall that Jean-Philippe had also suggested training a separate noise model.)
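As a rough illustration of the QC-metric idea, here is a minimal sketch, assuming the waveform is available as a NumPy array together with its sampling rate. The function name, window length, and threshold values are placeholders for discussion, not an existing routine in our codebase; it flags a trace as suspect when its windowed RMS amplitude or dominant frequency drifts too much.

```python
import numpy as np

def qc_trace(data, fs, win_s=5.0, amp_ratio_max=10.0, freq_shift_max=5.0):
    """Split a trace into windows and flag it if the RMS amplitude or
    dominant frequency varies more than the given thresholds.

    Returns True if the trace looks like a consistent signal, False if
    it looks noise-contaminated (or is too short to judge).
    """
    nwin = int(win_s * fs)
    nseg = len(data) // nwin
    if nseg < 2:
        return False

    rms, fdom = [], []
    for i in range(nseg):
        seg = data[i * nwin:(i + 1) * nwin]
        seg = seg - seg.mean()                      # remove DC offset
        rms.append(np.sqrt(np.mean(seg ** 2)))      # window RMS amplitude
        spec = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
        fdom.append(freqs[np.argmax(spec)])         # window dominant frequency

    rms, fdom = np.array(rms), np.array(fdom)
    amp_ok = rms.max() / max(rms.min(), 1e-12) < amp_ratio_max
    freq_ok = (fdom.max() - fdom.min()) < freq_shift_max
    return bool(amp_ok and freq_ok)

# Hypothetical usage: keep only traces that pass the QC check
# (fs=100.0 and the thresholds above would need tuning on real data)
# keep = qc_trace(trace_data, fs=100.0)
```

Traces failing a check like this could either be dropped before training or fed into the labelled noise class, which would also support the separate noise model idea mentioned above.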