This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

Remove RefCell from Header #8227

Merged
merged 6 commits into from
Apr 3, 2018

Conversation

tomusdrw
Collaborator

This makes Header Sync.

I've analyzed the actual usage of the Header hash cache and of encoded::Header hashing:

  1. During the initial sync we only work on decoded headers, and we fetch hash() multiple times (and bare_hash() at most once).
  2. After we are fully synced we often request best_block_header and its hash; previously we were usually decoding an encoded::Header to do that.

This PR removes Blockchain::block_header, since it always caused decoding and was mostly used to get the best block header. Instead it introduces Blockchain::best_block_header, which returns a cached, already-decoded header, plus a helper method in Client doing the same.

As a result, even though we are dropping the bare_hash cache, we should not see any performance drawback. We might actually see a small improvement due to less decoding and hashing. This is still being benchmarked.

It will also help with #8074 and the unsafe Sync discussion there.

@tomusdrw tomusdrw added A3-inprogress ⏳ Pull request is in progress. No review needed at this stage. M4-core ⛓ Core client code / Rust. labels Mar 26, 2018
@5chdn 5chdn added this to the 1.11 milestone Mar 27, 2018
Collaborator

@debris debris left a comment


This is definitely a step in the right direction, but eventually I'd love to see a full separation of Header with and without hash :) #1267

@@ -269,7 +269,6 @@ impl<'x> OpenBlock<'x> {
r.block.header.set_author(author);
r.block.header.set_timestamp_now(parent.timestamp());
r.block.header.set_extra_data(extra_data);
r.block.header.note_dirty();
Collaborator


❤️

@@ -374,7 +375,7 @@ impl<'x> OpenBlock<'x> {
s.block.header.set_uncles_hash(keccak(&uncle_bytes));
s.block.header.set_state_root(s.block.state.root().clone());
s.block.header.set_receipts_root(ordered_trie_root(s.block.receipts.iter().map(|r| r.rlp_bytes())));
s.block.header.set_log_bloom(s.block.receipts.iter().fold(Bloom::zero(), |mut b, r| {b = &b | &r.log_bloom; b})); //TODO: use |= operator
s.block.header.set_log_bloom(s.block.receipts.iter().fold(Bloom::zero(), |b, r| &b | &r.log_bloom ));
Collaborator


This will be even more efficient if you use the accrue_bloom method, as it modifies the bloom in place without allocating 256 bytes on every loop iteration.

Collaborator


logged an issue for that :)

@@ -407,7 +409,7 @@ impl<'x> OpenBlock<'x> {
}

s.block.header.set_state_root(s.block.state.root().clone());
s.block.header.set_log_bloom(s.block.receipts.iter().fold(Bloom::zero(), |mut b, r| {b = &b | &r.log_bloom; b})); //TODO: use |= operator
s.block.header.set_log_bloom(s.block.receipts.iter().fold(Bloom::zero(), |b, r| &b | &r.log_bloom));
Collaborator


Same here, please look into accrue_bloom.

@@ -53,19 +53,19 @@ lazy_static! {
// only "first" proofs are such.
struct StateProof {
contract_address: Address,
header: Mutex<Header>,
Collaborator


👍

@debris
Collaborator

debris commented Mar 28, 2018

lgtm, but I'm not changing the label as it is currently marked 'inprogress'

@tomusdrw
Collaborator Author

@debris Marked as in progress because I wanted to benchmark blockchain import speed (to confirm it's not worse).

@tomusdrw
Collaborator Author

Did some performance tests (importing first 2M blocks of eth mainnet):

(old)
2018-03-29 14:23:10  main INFO parity::blockchain  Import completed in 4612 seconds, 2000000 blocks, 433 blk/s, 8057391 transactions, 1746 tx/s, 306703 Mgas, 66 Mgas/s

vs (new)
2018-03-29 11:26:46  main INFO parity::blockchain  Import completed in 4559 seconds, 2000000 blocks, 438 blk/s, 8057391 transactions, 1766 tx/s, 306703 Mgas, 67 Mgas/s

So the difference is insignificant (note I was using my computer during this time, so it's not a comparative benchmark), and the new code doesn't cause any major performance regression during sync. I don't believe sync is a good benchmark for this code either, but at least we know that normal-mode operation is not affected.

@tomusdrw tomusdrw added A8-looksgood 🦄 Pull request is reviewed well. and removed A3-inprogress ⏳ Pull request is in progress. No review needed at this stage. labels Mar 29, 2018
Contributor

@andresilva andresilva left a comment


LGTM

}

impl PartialEq for Header {
fn eq(&self, c: &Header) -> bool {
if let (&Some(ref h1), &Some(ref h2)) = (&self.hash, &c.hash) {
if h1 == h2 {
Contributor


Shouldn't this be return h1 == h2 instead? I.e., if the hash is defined for both, we do the comparison strictly on the hash.

@andresilva
Contributor

There's a conflict that needs fixing.

@debris debris merged commit 9f775a7 into master Apr 3, 2018
@debris debris deleted the td-header branch April 3, 2018 08:01