This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

Improve on-demand dispatch and add support for batch requests #5419

Merged
merged 27 commits into master from on-demand-priority
May 17, 2017

Conversation


@rphmeier rphmeier commented Apr 6, 2017

After #5383, #5320
Closes #4593
Partially addresses #5311

This PR adds batch-request support to the OnDemand service. Batch responses are handled up to the point where they are incomplete or incorrect, and the remainder will be reassigned to another peer.
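The prefix-then-reassign behaviour can be sketched in isolation (the names below are illustrative, not the actual OnDemand types): responses are accepted while they verify, and everything from the first failure onward is re-dispatched.

```rust
// Hypothetical sketch of batch-response handling: accept the valid prefix of
// a batch, and return the index from which requests must be reassigned to
// another peer.

#[derive(Debug, PartialEq)]
enum Check {
    Valid,
    Invalid,
}

/// Returns how many leading responses were accepted; the caller re-dispatches
/// the requests from that index onward.
fn accept_prefix(responses: &[Check]) -> usize {
    responses.iter().take_while(|r| **r == Check::Valid).count()
}

fn main() {
    let batch = [Check::Valid, Check::Valid, Check::Invalid, Check::Valid];
    let answered = accept_prefix(&batch);
    // requests at index `answered..` are reassigned to another peer
    assert_eq!(answered, 2);
    println!("{}", answered);
}
```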

There's still a lot of room for improvement in terms of multiplexing different logical batches into different packets, along with submitting portions of batches at a time instead of submitting the whole thing (which could cause heavy requests to never be served), but this can come later.

Requests are now prioritized by submission order.

A strongly-typed API for OnDemand has been added; for tuples up to a certain size, you can submit

(request_type_a, request_type_b, request_type_c)

and get back a

(response_type_a, response_type_b, response_type_c)

The RPCs haven't been ported to use the new system yet, so they should be as inefficient as ever. The next PR will do this, along with implementing back-referencing for on-demand requests, where verification data can be extracted from prior responses.


rphmeier commented Apr 6, 2017

Strongly-typed API in place:

let (header, code, account) = on_demand.request(&ctx, (hdr_req, code_req, account_req))
    .expect("all back-references ok").wait();
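The tuple dispatch behind an API like this can be illustrated with a standalone sketch. This is a simplified trait, not the real OnDemand one, and the names are hypothetical; per-arity impls like the one below would typically be generated by a macro for tuples up to the supported size.

```rust
// Illustrative sketch: a request trait with an associated response type,
// implemented for tuples so that a tuple of requests yields a tuple of
// responses. Real dispatch is asynchronous; this toy version answers inline.

trait Request {
    type Response;
    fn answer(&self) -> Self::Response;
}

struct HeaderReq(u64);
struct CodeReq(u64);

impl Request for HeaderReq {
    type Response = String;
    fn answer(&self) -> String {
        format!("header {}", self.0)
    }
}

impl Request for CodeReq {
    type Response = Vec<u8>;
    fn answer(&self) -> Vec<u8> {
        vec![self.0 as u8]
    }
}

// One impl per tuple arity; in practice these are macro-generated.
impl<A: Request, B: Request> Request for (A, B) {
    type Response = (A::Response, B::Response);
    fn answer(&self) -> Self::Response {
        (self.0.answer(), self.1.answer())
    }
}

fn main() {
    let (header, code) = (HeaderReq(1), CodeReq(2)).answer();
    assert_eq!(header, "header 1");
    assert_eq!(code, vec![2u8]);
    println!("ok");
}
```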

@rphmeier rphmeier added A3-inprogress ⏳ Pull request is in progress. No review needed at this stage. M4-core ⛓ Core client code / Rust. labels Apr 6, 2017
@rphmeier rphmeier added A1-onice 🌨 Pull request is reviewed well, but should not yet be merged. and removed A3-inprogress ⏳ Pull request is in progress. No review needed at this stage. labels Apr 10, 2017
@rphmeier rphmeier added A0-pleasereview 🤓 Pull request needs code review. and removed A1-onice 🌨 Pull request is reviewed well, but should not yet be merged. labels Apr 13, 2017
@tomusdrw tomusdrw (Collaborator) left a comment

Couple of nitpicks, will need to dive into the logic deeper.

}

/// Request a canonical block's chain score.
/// Returns the chain score.
pub fn chain_score_by_number(&self, ctx: &BasicContext, req: request::HeaderProof) -> Receiver<U256> {
let (sender, receiver) = oneshot::channel();
pub fn chain_score_by_number(&self, ctx: &BasicContext, req: request::HeaderProof) -> BoxFuture<U256, Canceled> {
let cached = {
let mut cache = self.cache.lock();
cache.block_hash(&req.num()).and_then(|hash| cache.chain_score(&hash))
};

match cached {
Collaborator:

Seems like a lot of repetition with getting stuff from cache, maybe use something like this?

fn from_cache<T, F: FnOnce(&mut Cache) -> T>(f: F) -> Option<BoxFuture<T, Canceled>> {
  //..
}

self.from_cache(|cache| cache.block_hash(&req.num).and_then(|hash| cache.chain_score(hash)))
  .unwrap_or_else(|| self.request(ctx, req)...)

Contributor (author):

All these methods are removed and replaced with a more general answer_from_cache in #5573, would prefer to leave this refactor specifically to that PR.

@@ -57,16 +57,16 @@ pub type ExecutionResult = Result<Executed, ExecutionError>;

impl LightFetch {
/// Get a block header from the on demand service or client, or error.
pub fn header(&self, id: BlockId) -> BoxFuture<Option<encoded::Header>, Error> {
pub fn header(&self, id: BlockId) -> BoxFuture<encoded::Header, Error> {
Collaborator:

Could be futures::Either with a typedef instead of Box everywhere.

@rphmeier rphmeier May 12, 2017

Yep, although I think optimizing this stuff is beyond the scope of the PR. After #5573, most of these RPC helpers won't need to dispatch futures, and will instead just push onto a request builder.

// three possible outcomes:
// - network is down.
// - we get a score, but our hash is non-canonical.
// - we get ascore, and our hash is canonical.
Collaborator:

s/ascore/a score

@@ -350,6 +364,33 @@ impl IncompleteRequest for Request {
Request::Execution(req) => req.complete().map(CompleteRequest::Execution),
}
}

fn adjust_refs<F>(&mut self, mapping: F) where F: FnMut(usize) -> usize {
match *self {
Collaborator:

Wouldn't it make sense to implement a Request::request() method on the enum itself? It seems that pattern matching on every kind is quite common in this file. (Probably the same with Response.)

@rphmeier rphmeier May 12, 2017

Yes, although there are currently tests for the behavior of the individual types as well as their enum-wrapped counterparts, and implementing just for the enum would make those tests impossible. It's kind of a drag to do all the matching, but I think this code could probably be refactored better with macros.

// fill as much of the next request as we can.
if let Some(ref mut req) = self.requests.get_mut(self.answered) {
// fill as much of each remaining request as we can.
for req in self.requests.iter_mut().skip(self.answered) {
Collaborator:

Aren't we filling up the same requests a couple of times? It seems that supply_response is called on every iteration of the loop in respond_to_all. Does req::fill only request the outputs that it doesn't already have?

Contributor (author):

We do iterate over all remaining requests upon each supplied response, but as you guessed it will only request outputs that haven't already been filled.

An optimization would be to just fill the next request upon each supplied response and then do a sweep of all unanswered requests after supplying all responses.
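The idempotent fill behaviour described above can be sketched with a toy request type (hypothetical names and output slots; the real requests track typed outputs, not a string-keyed map): re-visiting the same request on every supplied response does no extra work because only missing slots are pulled.

```rust
use std::collections::HashMap;

// Hypothetical sketch: a pending request with named output slots, where
// `fill` only takes values for slots it is still missing.

struct Pending {
    needed: Vec<&'static str>,
    filled: HashMap<&'static str, u64>,
}

impl Pending {
    fn fill(&mut self, outputs: &HashMap<&'static str, u64>) {
        for key in &self.needed {
            // only request outputs we don't already have
            if !self.filled.contains_key(key) {
                if let Some(v) = outputs.get(key) {
                    self.filled.insert(*key, *v);
                }
            }
        }
    }
}

fn main() {
    let mut req = Pending {
        needed: vec!["hash", "score"],
        filled: HashMap::new(),
    };
    let mut outputs = HashMap::new();
    outputs.insert("hash", 1);
    req.fill(&outputs); // first pass fills only "hash"
    outputs.insert("score", 2);
    req.fill(&outputs); // second pass fills only the missing "score" slot
    assert_eq!(req.filled.len(), 2);
    println!("{}", req.filled.len());
}
```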

@@ -65,44 +73,28 @@ impl RequestBuilder {

/// Requests pending responses.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Requests {
pub struct Requests<T: IncompleteRequest> {
Collaborator:

Why put constraints here?

Contributor (author):

I'll fix that; it's definitely unidiomatic.

}

#[test]
fn detects_hangup() {
Collaborator:

Loving this file, the tests look so clean 👍

// update pending fields and re-queue.
let capabilities = guess_capabilities(&pending.requests.requests()[num_answered..]);
pending.net_requests = builder.build();
pending.required_capabilities = capabilities;
Collaborator:

Shouldn't you update pending.requests here as well (i.e. remove the answered ones)?

Contributor (author):

No, pending.requests is intentionally not shortened. Although that approach would work, I thought it made more sense to keep the original requests batch around and only adjust back-references when building the net_requests, which should lead to slightly less work overall.
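The back-reference adjustment being discussed can be sketched over plain indices. This is a toy version under the assumption (visible in the diff above) that adjust_refs maps each reference through a closure; here, re-queueing after `num_answered` answered requests shifts every remaining reference down by that amount.

```rust
// Toy version of the adjust_refs idea: back-references are indices into the
// request batch, and each one is rewritten through a caller-supplied mapping.
fn adjust_refs<F: FnMut(usize) -> usize>(refs: &mut [usize], mut mapping: F) {
    for r in refs.iter_mut() {
        *r = mapping(*r);
    }
}

fn main() {
    let num_answered = 1;
    // a request referencing outputs of the requests at old indices 1 and 2
    let mut refs = vec![1, 2];
    // after answering the first request, all indices shift down by one
    adjust_refs(&mut refs, |idx| idx - num_answered);
    assert_eq!(refs, vec![0, 1]);
    println!("{:?}", refs);
}
```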

use util::memorydb::MemoryDB;
use util::sha3::Hashable;
use util::trie::{Trie, TrieDB, TrieError};

const SUPPLIED_MATCHES: &'static str = "supplied responses always match produced requests; qed";
Collaborator:

why?

Contributor (author):

https://github.com/paritytech/parity/pull/5419/files/0fd3e36c23123ddd7f33b3891979214e3f23ceb0#diff-abb081d3b17a1ecf465e845b3fef0728R354

this match ensures that a response is invalid if it doesn't have the correct type for the request.

Collaborator:

Might be worth including the function name in the proof then. Something like:
check_request called earlier so supplied response matches produced request; qed

prover.check_response(cache, &res.code).map(Response::Code),
(&CheckedRequest::Execution(ref prover, _), &NetResponse::Execution(ref res)) =>
prover.check_response(cache, &res.items).map(Response::Execution),
_ => Err(Error::WrongKind),
Collaborator:

I think it would be more future-proof to enumerate all CheckedRequest variants (so that when the protocol is extended you immediately see all the places requiring implementation), but I know it would make the code here a bit more ugly (and a macro would be required):

match *self {
  CheckedRequest::Account(..) => match!(NetResponse::Account(), response) {
  }
}

Contributor (author):

honestly, that looks cleaner and I never even considered it :)

@tomusdrw tomusdrw added A8-looksgood 🦄 Pull request is reviewed well. and removed A0-pleasereview 🤓 Pull request needs code review. labels May 16, 2017
@rphmeier rphmeier merged commit 5d973f8 into master May 17, 2017
@rphmeier rphmeier deleted the on-demand-priority branch May 17, 2017 10:28