Cast correction term to float in LCCUpdater #627

Merged: 1 commit merged into dstl:main from lcc_update_term on May 4, 2022

Conversation

ekhunter123 (Contributor) commented Apr 28, 2022

TL;DR: Gaussian components updated using the LCCUpdater have weights that are Probability objects. This in turn causes some component state vectors to contain Probability elements instead of floats. To fix this, we can cast the l1 correction term from a Probability to a float.

Problem:
When using the GM-LCC filter, sometimes an updated component's state vector is filled with Probability types, instead of floats. For example, an updated component may have state vector
[Probability(15.0), Probability(0.5), 8.0, Probability(0.3)]
On its own, this is not a major problem, but it is not the intended use of a Probability object, and none of the other filters behave this way.
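
For context, a Probability wraps essentially the same numeric value as the float it was built from, so the symptom above is a type inconsistency rather than a numerical error. A minimal sketch (values arbitrary; assuming Probability is importable from stonesoup.types.numeric):

from stonesoup.types.numeric import Probability

x = Probability(15.0)
print(float(x))          # ~15.0: the numeric value is preserved
print(type(x).__name__)  # 'Probability', not 'float'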

Cause:
I traced the problem back and found that it was caused by two things:

The LCCUpdater calculates an l1 correction factor. As seen in the code below, that correction factor will be a Probability object (I have checked this).

def _calculate_update_terms(self, updated_sum_list, hypotheses):
    """
    Calculate the higher order terms used in the LCC Filter
    """
    # Get the predicted weight sum
    predicted_weight_sum = \
        Probability.sum(hypothesis.prediction.weight for hypothesis in
                        hypotheses[-1]) * self.prob_survival
    # Second order predicted cumulant c(2)
    predicted_c2 = self.second_order_cumulant * self.prob_survival**2
    # Detected predicted weight mu_d
    detected_weight_sum = self.prob_detection*predicted_weight_sum
    # Misdetected predicted weight mu_phi
    misdetected_weight_sum = (1-self.prob_detection)*predicted_weight_sum
    # Calculate the alpha of the predicted Panjer process
    alpha_pred = ((predicted_weight_sum +
                   self.mean_number_of_false_alarms)**2) \
        / (predicted_c2 + self.second_order_false_alarm_cumulant + 1e-26)
    # Calculate l1 and l2 correction factors
    denominator = alpha_pred + detected_weight_sum \
        + self.mean_number_of_false_alarms
    number_of_measurements = len(hypotheses) - 1
    numerator = alpha_pred + number_of_measurements
    l1 = numerator/denominator
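
To illustrate why l1 comes out as a Probability: Probability.sum() returns a Probability, and arithmetic involving a Probability operand generally propagates the type to the result. A minimal sketch with made-up stand-in values (assuming the import path stonesoup.types.numeric):

from stonesoup.types.numeric import Probability

# Stand-ins for the quantities computed in _calculate_update_terms()
predicted_weight_sum = Probability.sum([Probability(0.3), Probability(0.6)])
alpha_pred = 1.2
detected_weight_sum = 0.9 * predicted_weight_sum      # float * Probability -> Probability
denominator = alpha_pred + detected_weight_sum + 0.2  # -> Probability
numerator = alpha_pred + 2                            # plain float
l1 = numerator / denominator                          # float / Probability -> Probability
print(type(l1).__name__)                              # 'Probability'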

Later, the correction term is used as a multiplier in the weight calculation for updated components, so the calculated weight is also a Probability. This means that updated components in a GM-LCC filter (other than the birth component) have Probability weights.

l1 = self._calculate_update_terms(weight_sum_list, hypotheses)
for missed_detected_hypotheses in hypotheses[-1]:
    # Add all active components except birth component back into
    # mixture as miss detected components
    if missed_detected_hypotheses.prediction.tag != "birth":
        component = TaggedWeightedGaussianState(
            tag=missed_detected_hypotheses.prediction.tag,
            weight=missed_detected_hypotheses.prediction.weight
            * (1-self.prob_detection) * l1,
            state_vector=missed_detected_hypotheses.prediction.mean,
            covar=missed_detected_hypotheses.prediction.covar,
            timestamp=missed_detected_hypotheses.prediction.timestamp)
        updated_components.append(component)

This Probability weight causes problems further down the line in the GaussianMixtureReducer. When two components are merged, the state vector of each original component is multiplied by its weight; if that weight is a Probability, the merged component's mean ends up with Probability elements. A Probability weight also perpetuates the problem into subsequent timesteps, even if the component is never merged with another.
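
The sketch below (not the reducer's actual code; values invented) shows the mechanism: multiplying a plain-float state vector by a Probability weight produces an object-dtype array whose elements are Probability instances.

import numpy as np
from stonesoup.types.numeric import Probability

mean = np.array([[15.0], [0.5], [8.0], [0.3]])
weight = Probability(0.3)   # a component weight left as a Probability by the LCCUpdater

weighted_mean = mean * weight
print(weighted_mean.dtype)                  # object
print(type(weighted_mean[0, 0]).__name__)   # 'Probability'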

Proposed Solution:
In the LCCUpdater method _calculate_update_terms(), simply cast l1 to a numpy.float64 before returning it. It is cast to numpy.float64 (rather than a plain Python float) so that the resulting component weights are also numpy.float64 values, matching the result from a PHDUpdater.
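
A standalone sketch of the cast (values invented; this assumes numpy.float64() accepts anything convertible to float, which a Probability is):

import numpy as np
from stonesoup.types.numeric import Probability

l1 = Probability(0.85)   # correction term as currently computed (a Probability)
l1 = np.float64(l1)      # proposed cast before returning from _calculate_update_terms()
print(type(l1))          # <class 'numpy.float64'>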

In the LCCUpdater, the l1 correction term was calculated as a Probability object. This caused issues later, so it is now converted to a float.
@ekhunter123 ekhunter123 requested a review from a team as a code owner April 28, 2022 18:41
@ekhunter123 ekhunter123 requested review from hpritchett-dstl and orosoman-dstl and removed request for a team April 28, 2022 18:41
codecov bot commented Apr 28, 2022

Codecov Report

Merging #627 (fa46ed4) into main (d38a196) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main     #627   +/-   ##
=======================================
  Coverage   94.22%   94.22%           
=======================================
  Files         158      158           
  Lines        7832     7833    +1     
  Branches     1506     1506           
=======================================
+ Hits         7380     7381    +1     
  Misses        343      343           
  Partials      109      109           
Flag          Coverage Δ
integration   67.80% <50.00%> (+<0.01%) ⬆️
unittests     91.98% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                      Coverage Δ
stonesoup/updater/pointprocess.py   97.18% <100.00%> (+0.04%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update d38a196...fa46ed4.

@sdhiscocks sdhiscocks merged commit c0bd5dd into dstl:main May 4, 2022
@ekhunter123 ekhunter123 deleted the lcc_update_term branch May 4, 2022 13:17