Use target state to compute committees #4235

Open · wants to merge 6 commits into base: unstable
Changes from 4 commits
48 changes: 30 additions & 18 deletions beacon_node/beacon_chain/src/beacon_chain.rs
@@ -5463,11 +5463,16 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
     where
         F: Fn(&CommitteeCache, Hash256) -> Result<R, Error>,
     {
-        let head_block = self
-            .canonical_head
-            .fork_choice_read_lock()
-            .get_block(&head_block_root)
-            .ok_or(Error::MissingBeaconBlock(head_block_root))?;
+        let (head_block, target_block) = {
+            let fork_choice = self.canonical_head.fork_choice_read_lock();
+            let head_block = fork_choice
+                .get_block(&head_block_root)
+                .ok_or(Error::MissingBeaconBlock(head_block_root))?;
+            let target_block = fork_choice
+                .get_block(&head_block.target_root)
+                .ok_or(Error::MissingBeaconBlock(head_block.target_root))?;
+            (head_block, target_block)
+        };
 
         let shuffling_id = BlockShufflingIds {
             current: head_block.current_epoch_shuffling_id.clone(),
@@ -5543,28 +5548,35 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         let (mut state, state_root) = if let Some((state, state_root)) = head_state_opt {
             (state, state_root)
         } else {
-            let state_root = head_block.state_root;
+            // There's no reason we'd want to load the shuffling for a cold
+            // state; attestation duties will never be pre-finalized and
+            // attestations to finalized blocks are useless.
+            let split_slot = self.store.get_split_slot();
+            if split_slot > target_block.slot {
+                return Err(Error::RequestedColdShuffling {
+                    split_slot,
+                    request_slot: target_block.slot,
+                });
+            }
paulhauner (Member) commented on lines +5552 to +5561 (conversation marked as resolved):
I'm concerned we have a race condition here that could make this error reachable, resulting in an ugly `error!` log here. The race is:

  1. Verify attestation block isn't finalized (it's not, yet)
  2. Check fork choice, attestation block is present
  3. Finalization updates, and so does the split point
  4. The attestation slot is now prior to the split and triggers this error

Alternatively:

  1. Verify attestation block isn't finalized (it's not, yet)
  2. Check fork choice, attestation block is present
  3. Check attestation slot against split slot, still fine
  4. Finalization migration runs and advances the split
  5. Try to run `get_state` and fail, because the state has been pruned. Or (today) load a freezer state at great expense ( 😱 ).

This second race is related to #4610.

My preference would be to soak #4160 in Hydra and ensure it's solid before making any changes here. Then we can add this change in, maybe with the following defensive changes:


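The suggested snippet itself is not shown above. As a hypothetical sketch (not the reviewer's actual code), one such defensive change could repeat the split-slot check after the state load, using only the `get_state`, `get_split_slot`, and error APIs already present in this diff:

```rust
// Hypothetical sketch: re-check the split slot *after* loading the state, so
// that a finalization migration racing with the load is reported as
// `RequestedColdShuffling` rather than as a pruned-state failure (or an
// expensive freezer-state load).
let state = self
    .store
    .get_state(&state_root, Some(target_block.slot))?
    .ok_or(Error::MissingBeaconState(state_root))?;

// If the split advanced past the target while we were loading, bail out
// with the descriptive error instead of trusting a possibly-pruned state.
let split_slot = self.store.get_split_slot();
if split_slot > target_block.slot {
    return Err(Error::RequestedColdShuffling {
        split_slot,
        request_slot: target_block.slot,
    });
}
```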
+            // Load the state that is the target from the perspective of
+            // `head_block_root`. It has everything we need and is on an
+            // epoch boundary so it should be cheaper to load from the DB.
+            let state_root = target_block.state_root;
             let state = self
                 .store
-                .get_inconsistent_state_for_attestation_verification_only(
-                    &state_root,
-                    Some(head_block.slot),
-                )?
-                .ok_or(Error::MissingBeaconState(head_block.state_root))?;
+                .get_state(&state_root, Some(target_block.slot))?
+                .ok_or(Error::MissingBeaconState(state_root))?;
             (state, state_root)
         };

         /*
          * IMPORTANT
          *
-         * Since it's possible that
-         * `Store::get_inconsistent_state_for_attestation_verification_only` was used to obtain
-         * the state, we cannot rely upon the following fields:
-         *
-         * - `state.state_roots`
-         * - `state.block_roots`
+         * The state that we're using might not be at the same slot as
+         * `beacon_block_root`. It is only guaranteed to be in the same
+         * epoch.
          *
          * These fields should not be used for the rest of this function.
          */
 
         metrics::stop_timer(state_read_timer);
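For context on why loading the target state suffices here: committee shuffling for an epoch is fully determined by data fixed at the epoch's start (the active validator set and the shuffling seed), so any state within the epoch yields identical committees. A minimal sketch of that equivalence, reusing the cache-building APIs exercised by the test removed below; `target_state` is a hypothetical state loaded at `target_block.state_root`:

```rust
// Sketch: committee caches built from two states in the same epoch agree,
// since the shuffling depends only on data fixed at the epoch boundary.
head_state.build_all_committee_caches(&chain.spec)?;
target_state.build_all_committee_caches(&chain.spec)?;
assert_eq!(
    head_state.get_cached_active_validator_indices(RelativeEpoch::Current)?,
    target_state.get_cached_active_validator_indices(RelativeEpoch::Current)?,
);
```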
4 changes: 4 additions & 0 deletions beacon_node/beacon_chain/src/errors.rs
@@ -114,6 +114,10 @@ pub enum BeaconChainError {
         finalized_slot: Slot,
         request_slot: Slot,
     },
+    RequestedColdShuffling {
+        split_slot: Slot,
+        request_slot: Slot,
+    },
     AttestingToAncientSlot {
         lowest_permissible_slot: Slot,
         request_slot: Slot,
44 changes: 0 additions & 44 deletions beacon_node/beacon_chain/tests/store_tests.rs
@@ -444,50 +444,6 @@ async fn forwards_iter_block_and_state_roots_until() {
     test_range(Slot::new(0), head_state.slot());
 }
 
-#[tokio::test]
-async fn block_replay_with_inaccurate_state_roots() {
-    let num_blocks_produced = E::slots_per_epoch() * 3 + 31;
-    let db_path = tempdir().unwrap();
-    let store = get_store(&db_path);
-    let harness = get_harness(store.clone(), LOW_VALIDATOR_COUNT);
-    let chain = &harness.chain;
-
-    harness
-        .extend_chain(
-            num_blocks_produced as usize,
-            BlockStrategy::OnCanonicalHead,
-            AttestationStrategy::AllValidators,
-        )
-        .await;
-
-    // Slot must not be 0 mod 32 or else no blocks will be replayed.
-    let (mut head_state, head_root) = harness.get_current_state_and_root();
-    assert_ne!(head_state.slot() % 32, 0);
-
-    let mut fast_head_state = store
-        .get_inconsistent_state_for_attestation_verification_only(
-            &head_root,
-            Some(head_state.slot()),
-        )
-        .unwrap()
-        .unwrap();
-    assert_eq!(head_state.validators(), fast_head_state.validators());
-
-    head_state.build_all_committee_caches(&chain.spec).unwrap();
-    fast_head_state
-        .build_all_committee_caches(&chain.spec)
-        .unwrap();
-
-    assert_eq!(
-        head_state
-            .get_cached_active_validator_indices(RelativeEpoch::Current)
-            .unwrap(),
-        fast_head_state
-            .get_cached_active_validator_indices(RelativeEpoch::Current)
-            .unwrap()
-    );
-}
-
 #[tokio::test]
 async fn block_replayer_hooks() {
     let db_path = tempdir().unwrap();
35 changes: 0 additions & 35 deletions beacon_node/store/src/hot_cold_store.rs
@@ -537,41 +537,6 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> HotColdDB<E, Hot, Cold>
         }
     }
 
-    /// Fetch a state from the store, but don't compute all of the values when replaying blocks
-    /// upon that state (e.g., state roots). Additionally, only states from the hot store are
-    /// returned.
-    ///
-    /// See `Self::get_state` for information about `slot`.
-    ///
-    /// ## Warning
-    ///
-    /// The returned state **is not a valid beacon state**, it can only be used for obtaining
-    /// shuffling to process attestations. At least the following components of the state will be
-    /// broken/invalid:
-    ///
-    /// - `state.state_roots`
-    /// - `state.block_roots`
-    pub fn get_inconsistent_state_for_attestation_verification_only(
-        &self,
-        state_root: &Hash256,
-        slot: Option<Slot>,
-    ) -> Result<Option<BeaconState<E>>, Error> {
-        metrics::inc_counter(&metrics::BEACON_STATE_GET_COUNT);
-
-        let split_slot = self.get_split_slot();
-
-        if slot.map_or(false, |slot| slot < split_slot) {
-            Err(HotColdDBError::AttestationStateIsFinalized {
-                split_slot,
-                request_slot: slot,
-                state_root: *state_root,
-            }
-            .into())
-        } else {
-            self.load_hot_state(state_root, StateRootStrategy::Inconsistent)
-        }
-    }
-
     /// Delete a state, ensuring it is removed from the LRU cache, as well as from on-disk.
     ///
     /// It is assumed that all states being deleted reside in the hot DB, even if their slot is less