Proposer LMD Score Boosting #2730
Conversation
Thanks @joachimneu for pointing out that the proposer score boost should allow for more granularity.
nice!
A handful of formatting comments. Logic looks generally good
Co-authored-by: Danny Ryan <[email protected]>
nice! Minor comments, and one substantive one.
Flagging this as "requesting changes" so we don't merge before changing the proposer score boost to 70.
I pushed a commit with configuration invariant checks. (Sorry for breaking it for a couple of minutes.)

I have a question about the `is_before_attesting_interval` calculation.

Generally, the approach looks good to me.
@@ -446,7 +446,7 @@ def get_block_signature(state: BeaconState, block: BeaconBlock, privkey: int) ->

A validator is expected to create, sign, and broadcast an attestation during each epoch. The `committee`, assigned `index`, and assigned `slot` for which the validator performs this role during an epoch are defined by `get_committee_assignment(state, epoch, validator_index)`.

- A validator should create and broadcast the `attestation` to the associated attestation subnet when either (a) the validator has received a valid block from the expected block proposer for the assigned `slot` or (b) one-third of the `slot` has transpired (`SECONDS_PER_SLOT / 3` seconds after the start of `slot`) -- whichever comes _first_.
+ A validator should create and broadcast the `attestation` to the associated attestation subnet when either (a) the validator has received a valid block from the expected block proposer for the assigned `slot` or (b) `1 / INTERVALS_PER_SLOT` of the `slot` has transpired (`SECONDS_PER_SLOT / INTERVALS_PER_SLOT` seconds after the start of `slot`) -- whichever comes _first_.
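For concreteness, a minimal sketch of the timing rule expressed by the new wording, assuming the mainnet values `SECONDS_PER_SLOT = 12` and `INTERVALS_PER_SLOT = 3` (the helper name is hypothetical, not a spec function):

```python
SECONDS_PER_SLOT = 12      # mainnet preset value
INTERVALS_PER_SLOT = 3     # one attesting interval = one third of a slot

def attestation_deadline(slot_start_time: int) -> float:
    """Time (seconds since genesis) by which a validator should attest for its
    assigned slot if no valid block from the expected proposer has arrived."""
    return slot_start_time + SECONDS_PER_SLOT / INTERVALS_PER_SLOT
```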
Nitpicking: we use `SECONDS_PER_SLOT // INTERVALS_PER_SLOT` in code and have set `SECONDS_PER_SLOT % INTERVALS_PER_SLOT == 0`. It could be a bit off when `SECONDS_PER_SLOT % INTERVALS_PER_SLOT != 0`.

Is `SECONDS_PER_SLOT % INTERVALS_PER_SLOT == 0` an invariant that should be checked in `test_config_invariants.py`?
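If such an invariant were added, it might look like the following hypothetical check in the style of `test_config_invariants.py` (the `spec.config.SECONDS_PER_SLOT` and `spec.INTERVALS_PER_SLOT` accessor paths are assumptions):

```python
def check_attesting_interval_is_whole_seconds(spec):
    # Hypothetical invariant: the attesting interval is an exact number of seconds,
    # so `SECONDS_PER_SLOT / INTERVALS_PER_SLOT` (validator guide wording) and
    # `SECONDS_PER_SLOT // INTERVALS_PER_SLOT` (fork choice code) agree.
    assert spec.config.SECONDS_PER_SLOT % spec.INTERVALS_PER_SLOT == 0
```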
Note that this is division and not integer division. I don't think this makes sense as a strict config invariant, but maybe it deserves a note.
EDITED: Sorry, I meant "we use `SECONDS_PER_SLOT // INTERVALS_PER_SLOT` in code". ^^^^^ typo fixed

> note that this is division and not integer division.

Yes. To clarify: the value that the validator guide describes may differ from the value used in the fork choice code when `SECONDS_PER_SLOT % INTERVALS_PER_SLOT != 0`. `INTERVALS_PER_SLOT := 3` is fine for our minimal and mainnet configs.

Another option is to also use integer division here: "... `SECONDS_PER_SLOT // INTERVALS_PER_SLOT` seconds after the start of `slot` ...".
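To make the possible discrepancy concrete (the second configuration below is purely hypothetical):

```python
INTERVALS_PER_SLOT = 3

# Mainnet/minimal: SECONDS_PER_SLOT = 12, so the two readings agree.
print(12 / INTERVALS_PER_SLOT)    # 4.0  -- validator guide wording (true division)
print(12 // INTERVALS_PER_SLOT)   # 4    -- fork choice code (integer division)

# Hypothetical config where the division is not exact: the readings diverge.
print(10 / INTERVALS_PER_SLOT)    # 3.333...
print(10 // INTERVALS_PER_SLOT)   # 3
```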
ah, right. Then it is valuable to have these work well together.
The `on_tick` call seems incorrect for test vectors. Proposed fix that you can review and cherry-pick: https://github.com/ethereum/consensus-specs/compare/proposer-score-boost-test-fix
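Roughly, the point is that the `time` passed to `on_tick` must be an absolute timestamp anchored at the store's genesis time. A hedged sketch of the intended usage (the helper name and keyword argument are illustrative, not the contents of the linked branch):

```python
def tick_to_slot(spec, store, slot, seconds_into_slot=0):
    # Advance the fork choice store to `slot`, optionally part-way into it.
    # `spec.on_tick(store, time)` is the fork choice handler; `time` is an
    # absolute value in seconds, derived from the store's genesis time.
    time = store.genesis_time + slot * spec.config.SECONDS_PER_SLOT + seconds_into_slot
    spec.on_tick(store, time)
```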
One more `on_tick` usage issue; otherwise, it looks good.
Add `proposer_boost_root` field to test vector "checks" step
## Issue Addressed

Resolves: #2741

Includes: #2853 so that we can get ssz static tests passing here on v1.1.6. If we want to merge that first, we can make this diff slightly smaller.

## Proposed Changes

- Changes the `justified_epoch` and `finalized_epoch` in the `ProtoArrayNode` each to an `Option<Checkpoint>`. The `Option` is necessary only for the migration, so not ideal, but it does allow us to default these fields to `None` during the database migration.
- Adds a database migration from a legacy fork choice struct to the new one, searching for all necessary block roots in fork choice by iterating through blocks in the db.
- Updates related to ethereum/consensus-specs#2727.
- We will have to update the persisted fork choice to make sure the stored justified checkpoint is correct according to the updated fork choice logic. This boils down to setting the fork choice store's justified checkpoint to the justified checkpoint of the block that advanced the finalized checkpoint to the current one.
- AFAICT there are no migration steps necessary for the update that allows applying attestations from prior blocks, but I would appreciate confirmation on that.
- I updated the consensus spec tests to v1.1.6 here, but they will fail until we also implement the proposer score boost updates. I confirmed that the previously failing scenario `new_finalized_slot_is_justified_checkpoint_ancestor` will now pass after the boost updates, but haven't confirmed _all_ tests will pass because I just quickly stubbed out the proposer boost test scenario formatting.
- This PR now also includes proposer boosting: ethereum/consensus-specs#2730

## Additional Info

I realized that checking justified and finalized roots in fork choice makes it more likely that we trigger this bug: ethereum/consensus-specs#2727. It's possible that the combination of justified checkpoint and finalized checkpoint in the fork choice store differs from that of any block in fork choice. When this happens, on startup our store's justified checkpoint seems invalid to the rest of fork choice (but it should be valid), we get an `InvalidBestNode` error, and we fail to start up. So I'm including that bugfix in this branch.

Todo:

- [x] Fix fork choice tests
- [x] Self review
- [x] Add fix for ethereum/consensus-specs#2727
- [x] Rebase onto Kintsugi
- [x] Fix `num_active_validators` calculation as @michaelsproul pointed out
- [x] Clean up db migrations

Co-authored-by: realbigsean <[email protected]>
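As a rough Python sketch of the justified-checkpoint migration step described above (the types, the `blocks` mapping, and the function name are all illustrative; the actual Lighthouse code is Rust and differs):

```python
from dataclasses import dataclass
from typing import Dict


@dataclass(frozen=True)
class Checkpoint:
    epoch: int
    root: bytes


@dataclass
class BlockSummary:
    # Checkpoints taken from the block's post-state.
    current_justified_checkpoint: Checkpoint
    finalized_checkpoint: Checkpoint


def migrated_justified_checkpoint(blocks: Dict[bytes, BlockSummary],
                                  store_finalized: Checkpoint) -> Checkpoint:
    """Pick the justified checkpoint of a block whose post-state advanced
    finalization to the store's current finalized checkpoint."""
    for summary in blocks.values():
        if summary.finalized_checkpoint == store_finalized:
            return summary.current_justified_checkpoint
    raise ValueError("no block in fork choice finalizes the store's checkpoint")
```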
This PR implements a fix for the LMD balancing attacks, as described here.
Note: Old PR for this feature was #2353.
Proposer Score Boost
A new "proposer score boost" has been introduced, which boosts the LMD score of blocks that are received in a timely manner. The boost increases the score of the block by `committee_weight // 4` for that slot. If a block for slot `X` is received in the first `SECONDS_PER_SLOT // INTERVALS_PER_SLOT` (= 4) seconds of the slot, then the block receives the boost for the duration of slot `X`. After slot `X` is over, the boost is removed and the usual LMD score calculation, from attestations only, applies.

The boost is not applied to blocks that are not received in the first `SECONDS_PER_SLOT // INTERVALS_PER_SLOT` seconds of their slots, and this PR does not affect LMD score calculations for such blocks.

In the above example:

- `B` is received in a timely manner and qualifies for the boost. `B`'s score is boosted for only slot `X`.
- `C` is not received in a timely manner and does not qualify for the boost.
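A simplified sketch of the scoring rule described above (illustrative only: the real change lives in `get_latest_attesting_balance`, and the ancestry checks are compressed here into a plain root comparison):

```python
def lmd_score_with_boost(attestation_weight: int,
                         committee_weight: int,
                         block_root: bytes,
                         boosted_root: bytes,
                         boost_active: bool) -> int:
    """Return the weight used when comparing candidate heads.

    `boosted_root` is the block received within the first
    SECONDS_PER_SLOT // INTERVALS_PER_SLOT seconds of the current slot (if any);
    `boost_active` is True only while that slot is still in progress.
    """
    score = attestation_weight
    if boost_active and block_root == boosted_root:
        score += committee_weight // 4  # the proposer score boost from this PR
    return score
```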
Summary of Changes

- New parameter `ATTESTATION_OFFSET_QUOTIENT`, which is used to specify the time at which attestations are produced within a slot
- New field in `Store`: `store.proposer_score_boost`, which is a special `LatestMessage` that stores the boost
- Updated `get_latest_attesting_balance` to take into account `store.proposer_score_boost`
- Updated `on_block` & `on_tick` to appropriately set/reset `store.proposer_score_boost` (sketched below)
- Tests for `store.proposer_score_boost` accounting
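A minimal sketch of the set/reset bookkeeping from the `on_block` / `on_tick` bullet above (constants are mainnet values; the helper mirrors the `is_before_attesting_interval` computation discussed earlier in this thread, and the commented handler fragments are illustrative, not the spec text):

```python
SECONDS_PER_SLOT = 12
INTERVALS_PER_SLOT = 3

def is_before_attesting_interval(store_time: int, genesis_time: int) -> bool:
    """Has less than SECONDS_PER_SLOT // INTERVALS_PER_SLOT of the current slot
    elapsed at the store's current time?"""
    time_into_slot = (store_time - genesis_time) % SECONDS_PER_SLOT
    return time_into_slot < SECONDS_PER_SLOT // INTERVALS_PER_SLOT

# on_block (sketch): boost only a block of the current slot that arrives before
# the attesting interval.
#
#   if get_current_slot(store) == block.slot and \
#           is_before_attesting_interval(store.time, store.genesis_time):
#       store.proposer_score_boost = <boost record for this block>
#
# on_tick (sketch): when the slot rolls over, clear the boost so scores fall
# back to attestation weight only.
#
#   if get_current_slot(store) > previous_slot:
#       store.proposer_score_boost = <empty>
```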