At the moment, most retrievals (>99.99%) fail. We are not measuring the duration of failed retrievals, and therefore we don't know how many tasks an honest checker node can complete every round.

Let's start collecting that data.

- Modify the spark checker to report the duration of failed retrievals too. Note: we should already be collecting this data, but apparently some measurements come with an invalid `end_at` value; see Report retrieval network errors #43 (comment). Let's ensure `end_at` is always set correctly (see the checker sketch after this list).
- Modify spark-evaluate to produce two retrieval duration stats: duration of successful requests and duration of all requests (see the evaluator sketch below).
- Figure out how to handle measurements with `end_at` set to `Date(0)`. They are clearly invalid, but why are we receiving them? Are they produced by fraudulent nodes? The evaluator sketch below simply excludes them for now.
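On the checker side, here is a minimal sketch of how a retrieval could record `end_at` unconditionally in a `finally` block, so that failed retrievals report a duration too and `end_at` is never left unset. The `measureRetrieval` function and the `Measurement` shape are illustrative assumptions, not the actual spark checker code:

```ts
// Illustrative sketch only - not the actual spark checker implementation.
// Goal: every measurement gets an end_at timestamp, whether the retrieval
// succeeds, returns a non-OK status, or throws a network error.

interface Measurement {
  startAt: Date
  endAt: Date | null      // must never remain null (or default to Date(0))
  statusCode: number | null
  success: boolean
}

async function measureRetrieval (url: string): Promise<Measurement> {
  const m: Measurement = {
    startAt: new Date(),
    endAt: null,
    statusCode: null,
    success: false
  }

  try {
    const res = await fetch(url)
    m.statusCode = res.status
    // Drain the body so the duration covers the full transfer, not just headers.
    await res.arrayBuffer()
    m.success = res.ok
  } catch {
    // Network errors still produce a measurement; success stays false.
  } finally {
    // Record end_at on every code path so failed retrievals report a duration too.
    m.endAt = new Date()
  }

  return m
}
```

Recording `end_at` in `finally` covers both the success and the error path; if the `Date(0)` values come from a code path that never sets the field, this pattern should also eliminate them.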
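On the spark-evaluate side, a sketch of the two duration stats, again with hypothetical field names rather than the real spark-evaluate types. Measurements whose `end_at` is the Unix epoch (`Date(0)`) are counted and excluded from both stats until we understand where they come from:

```ts
// Illustrative sketch only - not the actual spark-evaluate implementation.

interface Measurement {
  startAt: Date
  endAt: Date
  success: boolean
}

interface DurationStats {
  successfulCount: number
  successfulMeanMs: number
  totalCount: number
  totalMeanMs: number
  invalidEndAtCount: number   // measurements with end_at == Date(0), excluded above
}

function buildDurationStats (measurements: Measurement[]): DurationStats {
  // Treat end_at == Date(0) as invalid and keep a count of how often it happens.
  const valid = measurements.filter(m => m.endAt.getTime() > 0)
  const invalidEndAtCount = measurements.length - valid.length

  const durationMs = (m: Measurement) => m.endAt.getTime() - m.startAt.getTime()
  const allDurations = valid.map(durationMs)
  const successfulDurations = valid.filter(m => m.success).map(durationMs)

  const mean = (xs: number[]) =>
    xs.length === 0 ? 0 : xs.reduce((a, b) => a + b, 0) / xs.length

  return {
    successfulCount: successfulDurations.length,
    successfulMeanMs: mean(successfulDurations),
    totalCount: allDurations.length,
    totalMeanMs: mean(allDurations),
    invalidEndAtCount
  }
}
```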