Conversation
ethcore/sync/src/chain/supplier.rs
Outdated
@@ -238,6 +249,10 @@ impl SyncSupplier {
    if let Some(node) = io.chain().state_data(&r.val_at::<H256>(i)?) {
        data.push(node);
        added += 1;
        // Check that the packet won't be oversized
        if data.len() > payload_soft_limit {
This seems incorrect - it's just a vector of state data objects, not bytes. Maybe worth computing the RLP inline and checking the actual size (maybe every N items to avoid unnecessary work)? It seems that `begin_list(len)` and `begin_unbounded_list()` are exactly the same - the only difference is that you need to call `complete_unbounded_list`, and it will panic if you exceed the declared size (TBH the bounded list API seems completely useless, CC @debris).
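A minimal sketch of that suggestion, assuming the `rlp` crate's `RlpStream` API (`begin_unbounded_list`, `len` and `complete_unbounded_list` are the calls mentioned in this thread); `state_entries`, `CHECK_EVERY` and `payload_soft_limit` are illustrative names, not the actual supplier.rs code:

```rust
use rlp::RlpStream;

fn build_state_data_payload(state_entries: &[Vec<u8>], payload_soft_limit: usize) -> Vec<u8> {
    // Only look at the encoded size every N items to avoid unnecessary work.
    const CHECK_EVERY: usize = 16;

    let mut stream = RlpStream::new();
    stream.begin_unbounded_list();

    for (i, entry) in state_entries.iter().enumerate() {
        stream.append(entry);
        // `stream.len()` is the size of the RLP encoded so far, i.e. the
        // actual payload size rather than the number of items pushed.
        if (i + 1) % CHECK_EVERY == 0 && stream.len() > payload_soft_limit {
            break;
        }
    }

    // Unlike `begin_list(len)`, the unbounded variant doesn't need the item
    // count up front, so we can stop early and close the list here.
    stream.complete_unbounded_list();
    stream.out()
}
```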
Yep sorry, I wasn't careful enough... I'll change this, but I don't think we should try to compute the precise RLP length since we only care about a `soft_limit`, meaning it's OK if it's a bit more than the limit; it would add too much complexity for nothing IMO.
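A rough sketch of the fix being described here, with illustrative names: since `data` in the hunk above holds state entries (not bytes), track the accumulated byte length separately and stop once it passes the soft limit. The final RLP encoding is slightly larger than this running total, which is acceptable because the limit is only a soft limit.

```rust
fn collect_state_entries(entries: Vec<Vec<u8>>, payload_soft_limit: usize) -> Vec<Vec<u8>> {
    let mut data = Vec::new();
    let mut total_bytes = 0usize;

    for node in entries {
        total_bytes += node.len();
        data.push(node);
        // Check that the packet won't be oversized (soft limit, so slightly
        // exceeding it is acceptable).
        if total_bytes > payload_soft_limit {
            break;
        }
    }
    data
}
```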
@@ -215,6 +221,10 @@ impl SyncSupplier {
    if let Some(body) = io.chain().block_body(BlockId::Hash(r.val_at::<H256>(i)?)) {
        data.append(&mut body.into_inner());
        added += 1;
        // Check that the packet won't be oversized
        if data.len() > payload_soft_limit {
Same as with headers, not sure how reliable `RlpStream::len` is.
Here `data` is of type `Bytes`.
@@ -182,6 +183,10 @@ impl SyncSupplier {
    } else if let Some(hdr) = io.chain().block_header(BlockId::Number(number)) {
        data.append(&mut hdr.into_inner());
        count += 1;
        // Check that the packet won't be oversized
        if data.len() > payload_soft_limit {
Why not sum up the raw header bytes that we append to RLP a line earlier?
Do you think it would be more efficient?
Nitpick: this won't include the size of the fork header. We can move this to just before the `if reverse`.
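An illustrative sketch of that idea (not the actual supplier.rs structure): keep a running total of the raw header bytes as they are collected, and make sure the soft-limit accounting also covers any fork header that ends up in the response, before the reversal step. `headers`, `fork_header` and `payload_soft_limit` are assumed names.

```rust
fn collect_headers(
    headers: Vec<Vec<u8>>,
    fork_header: Option<Vec<u8>>,
    payload_soft_limit: usize,
    reverse: bool,
) -> Vec<Vec<u8>> {
    let mut collected = Vec::new();
    let mut total_bytes = 0usize;

    for hdr in headers.into_iter().chain(fork_header) {
        total_bytes += hdr.len(); // sum the raw header bytes we append
        collected.push(hdr);
        if total_bytes > payload_soft_limit {
            break;
        }
    }

    // The size check above already covers every header (fork header included),
    // so the reversal step doesn't change what fits in the packet.
    if reverse {
        collected.reverse();
    }
    collected
}
```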
lgtm
ethcore/sync/src/chain/supplier.rs
Outdated
@@ -266,6 +282,10 @@ impl SyncSupplier {
    added_receipts += receipts_bytes.len();
    added_headers += 1;
    if added_receipts > MAX_RECEIPTS_TO_SEND { break; }
Can we get rid of this? It seems redundant now.
Actually, I kept `MAX_RECEIPTS_TO_SEND`, which as you pointed out is already in bytes (cf. #10042 (comment)).
Right, but do you know why we have this limit in the first place? Maybe let's use the soft limit instead?
Yeah I don't know either, let me do that.
LGTM. The actual size of the encoded RLP is slightly larger than what we're tracking, but it should be fine; I think we should be able to increase the soft limit to half the max payload size. Do you know what limit geth sets?
It seems that when answering a `get_block_bodies`, `get_block_headers`, etc. request, we only check the size of the response payload when sending it, instead of when building it. This means that if a peer asks for 100 big blocks (meaning with a lot of transactions), we would construct the payload of the 100 block bodies even though it would never be sent, because it would exceed the packet size. With this PR, we stop adding blocks to the response when we know the size will exceed a certain limit.
For now, this limit is lower than the official protocol packet limit (set to 2MB), but if needed this can be changed.
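A simplified before/after sketch of that behaviour (illustrative names and values, not the actual supplier.rs code): the 2MB figure is the protocol packet limit mentioned above, and the soft limit used while building the response is assumed here to be some lower value.

```rust
const PROTOCOL_PACKET_LIMIT: usize = 2 * 1024 * 1024;
const PAYLOAD_SOFT_LIMIT: usize = PROTOCOL_PACKET_LIMIT / 4; // assumed value

// Before: build everything, then drop the packet if it turns out oversized,
// so a request for many big blocks gets no answer at all.
fn respond_old(bodies: Vec<Vec<u8>>) -> Option<Vec<u8>> {
    let payload: Vec<u8> = bodies.into_iter().flatten().collect();
    if payload.len() > PROTOCOL_PACKET_LIMIT { None } else { Some(payload) }
}

// After: stop appending once the soft limit is reached, so a (possibly
// partial) answer is always sent.
fn respond_new(bodies: Vec<Vec<u8>>) -> Vec<u8> {
    let mut payload = Vec::new();
    for mut body in bodies {
        payload.append(&mut body);
        if payload.len() > PAYLOAD_SOFT_LIMIT {
            break;
        }
    }
    payload
}
```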
I think it would solve these issues: #9770 and #9726, which would occur when a node is connected to only Parity nodes and asks for big blocks (and thus never receives any answer).
When this PR is merged, I think the next step would be to increase the number of blocks we ask our peers for, since they shouldn't stop when the answer gets too big (Geth already does this).