Improve on-demand dispatch and add support for batch requests #5419
Conversation
Strongly-typed API in place:

let (header, code, account) = on_demand.request(&ctx, (hdr_req, code_req, account_req))
    .expect("all back-references ok")
    .wait();
A couple of nitpicks; will need to dive deeper into the logic.
}

/// Request a canonical block's chain score.
/// Returns the chain score.
-pub fn chain_score_by_number(&self, ctx: &BasicContext, req: request::HeaderProof) -> Receiver<U256> {
-    let (sender, receiver) = oneshot::channel();
+pub fn chain_score_by_number(&self, ctx: &BasicContext, req: request::HeaderProof) -> BoxFuture<U256, Canceled> {
+    let cached = {
+        let mut cache = self.cache.lock();
+        cache.block_hash(&req.num()).and_then(|hash| cache.chain_score(&hash))
+    };
+
+    match cached {
Seems like a lot of repetition with getting stuff from the cache, maybe use something like this?

fn from_cache<T, F: FnOnce(&mut Cache) -> Option<T>>(&self, f: F) -> Option<BoxFuture<T, Canceled>> {
    //..
}

self.from_cache(|cache| cache.block_hash(&req.num).and_then(|hash| cache.chain_score(&hash)))
    .unwrap_or_else(|| self.request(ctx, req)...)
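A fuller, self-contained sketch of what such a helper could look like — Cache, OnDemand, and the stored field are stand-ins here, not the real types: on a cache hit the value is wrapped in an already-resolved future, so callers can fall back to a network request with unwrap_or_else on a miss.

extern crate futures;

use std::sync::Mutex;
use futures::future::{self, Future};
use futures::sync::oneshot::Canceled;

type BoxFuture<T, E> = Box<Future<Item = T, Error = E> + Send>;

struct Cache { score: Option<u64> }
struct OnDemand { cache: Mutex<Cache> }

impl OnDemand {
    /// Run `f` against the locked cache; on a hit, return the value
    /// as an already-resolved future.
    fn from_cache<T, F>(&self, f: F) -> Option<BoxFuture<T, Canceled>>
    where
        T: Send + 'static,
        F: FnOnce(&mut Cache) -> Option<T>,
    {
        let mut cache = self.cache.lock().unwrap();
        f(&mut *cache).map(|t| Box::new(future::ok(t)) as BoxFuture<T, Canceled>)
    }
}

fn main() {
    let od = OnDemand { cache: Mutex::new(Cache { score: Some(99) }) };
    let fut = od.from_cache(|cache| cache.score).expect("cache hit");
    assert_eq!(fut.wait().unwrap(), 99);
}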
All these methods are removed and replaced with a more general answer_from_cache in #5573; I'd prefer to leave this refactor specifically to that PR.
@@ -57,16 +57,16 @@ pub type ExecutionResult = Result<Executed, ExecutionError>;

impl LightFetch {
    /// Get a block header from the on demand service or client, or error.
-   pub fn header(&self, id: BlockId) -> BoxFuture<Option<encoded::Header>, Error> {
+   pub fn header(&self, id: BlockId) -> BoxFuture<encoded::Header, Error> {
Could be futures::Either with a typedef instead of Box everywhere.
Yep, although I think optimizing this stuff is beyond the scope of the PR. After #5573, most of these RPC helpers won't need to dispatch futures, and will instead just push onto a request builder.
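For illustration, a minimal sketch of the Either idea, assuming the futures 0.1 API the codebase used at the time (the type and function names here are hypothetical): a cache hit resolves immediately and a miss falls through to the fetch path, with no boxing on either branch.

extern crate futures;

use futures::future::{self, Either, Future, FutureResult};

// Hypothetical stand-in for the future returned by the network path.
type NetworkFuture = FutureResult<u64, ()>;

// The typedef the reviewer suggests, so call sites stay short.
type CachedOrFetched = Either<FutureResult<u64, ()>, NetworkFuture>;

fn chain_score(cached: Option<u64>) -> CachedOrFetched {
    match cached {
        // cache hit: an already-resolved future, no allocation.
        Some(score) => Either::A(future::ok(score)),
        // cache miss: fall through to the (stubbed) network fetch.
        None => Either::B(future::ok(42)),
    }
}

fn main() {
    assert_eq!(chain_score(Some(7)).wait().unwrap(), 7);
    assert_eq!(chain_score(None).wait().unwrap(), 42);
}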
rpc/src/v1/impls/light/eth.rs
// three possible outcomes:
// - network is down.
// - we get a score, but our hash is non-canonical.
// - we get ascore, and our hash is canonical.
s/ascore/a score
@@ -350,6 +364,33 @@ impl IncompleteRequest for Request {
        Request::Execution(req) => req.complete().map(CompleteRequest::Execution),
    }
}

+fn adjust_refs<F>(&mut self, mapping: F) where F: FnMut(usize) -> usize {
+    match *self {
Wouldn't it make sense to implement a Request::request() method on the enum itself? It seems that pattern matching on every kind is quite common in this file. (Probably the same with Response.)
Yes, although there are currently tests for the behavior of the individual types as well as their enum-wrapped counterparts, and implementing just for the enum would make those tests impossible. It's kind of a drag to do all the matching, but I think this code could probably be refactored better with macros.
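A hedged sketch of the macro-based refactor alluded to here, using a toy two-variant enum (the real Request has more kinds and richer method signatures): one macro_rules! macro generates the per-variant delegation that would otherwise be written out by hand at every call site.

struct HeadersRequest;
struct CodeRequest;

impl HeadersRequest { fn note_outputs(&self) -> usize { 1 } }
impl CodeRequest { fn note_outputs(&self) -> usize { 2 } }

enum Request {
    Headers(HeadersRequest),
    Code(CodeRequest),
}

// Generate the per-variant delegation for a no-argument method.
macro_rules! delegate {
    ($slf:expr, $method:ident) => {
        match $slf {
            Request::Headers(ref req) => req.$method(),
            Request::Code(ref req) => req.$method(),
        }
    };
}

impl Request {
    fn note_outputs(&self) -> usize {
        delegate!(*self, note_outputs)
    }
}

fn main() {
    assert_eq!(Request::Headers(HeadersRequest).note_outputs(), 1);
    assert_eq!(Request::Code(CodeRequest).note_outputs(), 2);
}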
-// fill as much of the next request as we can.
-if let Some(ref mut req) = self.requests.get_mut(self.answered) {
+// fill as much of each remaining request as we can.
+for req in self.requests.iter_mut().skip(self.answered) {
Aren't we filling up the same requests a couple of times? It seems that supply_response is called on every iteration of the loop in respond_to_all. Is req::fill only requesting the outputs that it doesn't already have?
We do iterate over all remaining requests upon each supplied response, but as you guessed, it will only request outputs that haven't already been filled.
An optimization would be to fill just the next request upon each supplied response, and then do a sweep of all unanswered requests after supplying all responses.
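To make the idempotence concrete, a toy illustration (the type and field names are invented, not from the PR): each sweep over the remaining requests only fills fields that are still empty, so repeated passes do no redundant work.

#[derive(Default)]
struct PendingRequest {
    header_hash: Option<u64>,
    code_hash: Option<u64>,
}

impl PendingRequest {
    /// Fill any still-missing fields from the supplied output lookup.
    fn fill<F: Fn(usize) -> Option<u64>>(&mut self, output: F) {
        if self.header_hash.is_none() { self.header_hash = output(0); }
        if self.code_hash.is_none() { self.code_hash = output(1); }
    }
}

fn main() {
    let mut requests = vec![PendingRequest::default(), PendingRequest::default()];
    let answered = 1;

    // first sweep: only output 0 is available; it is filled once.
    for req in requests.iter_mut().skip(answered) {
        req.fill(|idx| if idx == 0 { Some(0xdead) } else { None });
    }
    // second sweep: output 0 is already filled and is not requested again.
    for req in requests.iter_mut().skip(answered) {
        req.fill(|idx| if idx == 1 { Some(0xbeef) } else { None });
    }
    assert_eq!(requests[1].header_hash, Some(0xdead));
    assert_eq!(requests[1].code_hash, Some(0xbeef));
}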
@@ -65,44 +73,28 @@ impl RequestBuilder {

/// Requests pending responses.
#[derive(Debug, Clone, PartialEq, Eq)]
-pub struct Requests {
+pub struct Requests<T: IncompleteRequest> {
Why put constraints here?
I'll fix that, it's definitely unidiomatic
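The idiomatic alternative, in miniature (trait and fields simplified from the real code): leave the struct definition unconstrained and put the bound only on the impl that actually needs it.

trait IncompleteRequest { fn complete(&self) -> bool; }

// no `T: IncompleteRequest` on the struct itself...
pub struct Requests<T> {
    requests: Vec<T>,
    answered: usize,
}

// ...the bound lives only on the impl that uses it.
impl<T: IncompleteRequest> Requests<T> {
    /// Count how many of the remaining requests are fully specified.
    pub fn num_complete(&self) -> usize {
        self.requests.iter().skip(self.answered).filter(|r| r.complete()).count()
    }
}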
}

#[test]
fn detects_hangup() {
Loving this file, the tests look so clean 👍
// update pending fields and re-queue.
let capabilities = guess_capabilities(&pending.requests.requests()[num_answered..]);
pending.net_requests = builder.build();
pending.required_capabilities = capabilities;
Shouldn't you update pending.requests here as well (i.e. remove the answered ones)?
No, pending.requests is intentionally not shortened. Although that approach would work, I thought it made more sense to keep the original requests batch around and only adjust back-references when building the net_requests, which should lead to slightly less work overall.
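A toy illustration of that back-reference adjustment (the names here are invented; the real adjust_refs lives on IncompleteRequest): when already-answered requests are skipped while building net_requests, each back-reference into the original batch is remapped by the supplied closure.

struct BackRef { request_idx: usize }

/// Remap every back-reference with the supplied mapping.
fn adjust_refs<F: FnMut(usize) -> usize>(refs: &mut [BackRef], mut mapping: F) {
    for r in refs.iter_mut() {
        r.request_idx = mapping(r.request_idx);
    }
}

fn main() {
    // requests 0 and 1 were already answered, so references into the
    // original batch shift down by two in the rebuilt net_requests.
    let answered = 2;
    let mut refs = vec![BackRef { request_idx: 2 }, BackRef { request_idx: 4 }];
    adjust_refs(&mut refs, |idx| idx - answered);
    assert_eq!(refs[0].request_idx, 0);
    assert_eq!(refs[1].request_idx, 2);
}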
use util::memorydb::MemoryDB;
use util::sha3::Hashable;
use util::trie::{Trie, TrieDB, TrieError};

const SUPPLIED_MATCHES: &'static str = "supplied responses always match produced requests; qed";
why?
this match ensures that a response is invalid if it doesn't have the correct type for the request.
Might be worth including the function name in the proof then. Something like:

"check_request called earlier so supplied response matches produced request; qed"
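Roughly how the reworded proof would read at a use site (the surrounding code here is hypothetical; only the constant's wording comes from the suggestion above):

// the wording suggested above, naming the function that establishes the invariant.
const SUPPLIED_MATCHES: &'static str =
    "check_request called earlier so supplied response matches produced request; qed";

fn main() {
    // stand-in for a response that check_request has already validated.
    let checked: Result<(), ()> = Ok(());
    checked.expect(SUPPLIED_MATCHES);
}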
    prover.check_response(cache, &res.code).map(Response::Code),
(&CheckedRequest::Execution(ref prover, _), &NetResponse::Execution(ref res)) =>
    prover.check_response(cache, &res.items).map(Response::Execution),
_ => Err(Error::WrongKind),
I think it would be more future-proof to enumerate all CheckedRequest kinds (so that when the protocol is extended you immediately see all the places requiring implementation), but I know it would make the code here a bit more ugly (and a macro would be required):

match *self {
    CheckedRequest::Account(..) => match!(NetResponse::Account(..), response),
    // ...
}
honestly, that looks cleaner and I never even considered it :)
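A compilable miniature of that exhaustiveness idea (toy variants, not the real protocol types): with no catch-all arm over the request kind, adding a new CheckedRequest variant becomes a compile error at every check site until it is handled.

enum CheckedRequest { Account, Code }
enum NetResponse { Account(u64), Code(Vec<u8>) }
enum Error { WrongKind }

fn check(req: &CheckedRequest, res: &NetResponse) -> Result<(), Error> {
    // no catch-all over `req`: a new CheckedRequest variant will not
    // compile until this match handles it.
    match *req {
        CheckedRequest::Account => match *res {
            NetResponse::Account(_) => Ok(()),
            _ => Err(Error::WrongKind),
        },
        CheckedRequest::Code => match *res {
            NetResponse::Code(_) => Ok(()),
            _ => Err(Error::WrongKind),
        },
    }
}

fn main() {
    assert!(check(&CheckedRequest::Account, &NetResponse::Account(1)).is_ok());
    assert!(check(&CheckedRequest::Code, &NetResponse::Code(vec![0])).is_ok());
    assert!(check(&CheckedRequest::Code, &NetResponse::Account(1)).is_err());
}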
After #5383, #5320
Closes #4593
Partially addresses #5311
This PR adds batch-request support to the OnDemand service. Batch responses are handled up to the point where they are incomplete or incorrect, and the remainder will be reassigned to another peer. There's still a lot of room for improvement in terms of multiplexing different logical batches into different packets, along with submitting portions of batches at a time instead of submitting the whole thing (which will cause heavy requests to potentially never be served), but this will come later.
Requests are now prioritized by submission order.
A strongly-typed API for OnDemand has been added; for tuples up to a certain size, you can submit a tuple of requests and get back a future resolving to a tuple of the corresponding responses, as in the example quoted at the top.
The RPCs haven't been ported to use the new system yet, so they should be as inefficient as ever. The next PR will do this, along with implementing back-referencing for on-demand requests, where verification data can be extracted from prior responses.
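One plausible shape for such a tuple-typed API, sketched with invented types (the PR's actual trait and arities may differ): a trait implemented per tuple arity ties each request tuple to its response tuple, so the dispatcher's return type follows from its argument type. Macro-generated impls would extend this to larger tuples, which is presumably what "up to a certain size" refers to.

// One request/response pairing per arity.
trait RequestTuple {
    type Response;
}

struct HeaderReq;
struct CodeReq;
struct Header;
struct Code;

impl RequestTuple for (HeaderReq, CodeReq) {
    type Response = (Header, Code);
}

/// Stub dispatcher: submit a tuple of requests, get back (eventually)
/// the matching tuple of responses.
fn request<T: RequestTuple>(_reqs: T) -> Option<T::Response> {
    None // a real service would resolve this over the network
}

fn main() {
    // the response type is fixed by the request tuple's type.
    let resp: Option<(Header, Code)> = request((HeaderReq, CodeReq));
    assert!(resp.is_none());
}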