Diagnosing an issue. I noticed my UI got super slow, so I started looking at the dataset. I was considering splitting raw_logs into new tables by year just to improve lookup speed, when I noticed something... Here's the number of raw_logs rows per year (see the query sketch after the list):
2014: 112,093
2015: 781,507
2016: 1,215,115
2017: 13,541,123
2018 (as of January 15): 1,028,641
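For reference, counts like these can come straight out of MySQL. This is just a sketch; it assumes the table stores the log timestamp in a `time` column as epoch milliseconds, which may not match the actual schema:

```sql
-- Sketch: count raw_logs rows per year.
-- Assumes a `time` column holding epoch milliseconds; adjust if the
-- schema uses a different column name or a native DATETIME.
SELECT YEAR(FROM_UNIXTIME(time / 1000)) AS yr,
       COUNT(*) AS n
FROM raw_logs
GROUP BY yr
ORDER BY yr;
```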
Looking at a sample track, there's a clear issue: rows keep being logged long after the trip has ended.
So it seems there's an issue with ending tracks. If I export the same track from the Torque trip in the app, it's a CSV with just the correct information.
I'm going to peruse the Apache logs to see what the data upload looks like and figure out why the track keeps logging despite CLEARLY being done. I'm also going to come up with some sort of delete command to purge logs where the car is clearly off, stopped, and in the same location for more than, say, 30 seconds (rough sketch below). I'd hate to have to write a daemon for that, but if I can't find a root cause, that may be the right solution.
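Here's a first sketch of what that purge could look like. The column names (`session`, `time`, `speed`) are assumptions, and `time` is taken to be epoch milliseconds; it only handles the trailing case, deleting stationary rows logged more than 30 seconds after the last point where the car was actually moving:

```sql
-- Sketch: purge trailing stationary rows from each track.
-- Column names (session, time, speed) are assumptions; the real
-- schema may differ. `time` is taken to be epoch milliseconds.
DELETE r
FROM raw_logs r
JOIN (
    -- Last timestamp per session at which the car was moving.
    SELECT session, MAX(time) AS last_moving
    FROM raw_logs
    WHERE speed > 0
    GROUP BY session
) m ON m.session = r.session
WHERE r.speed = 0
  AND r.time > m.last_moving + 30000;  -- keep 30s of idle, drop the rest
```

This doesn't catch stationary stretches in the middle of a track, but it targets the symptom above: rows that keep arriving after the trip is clearly over.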