Feat/nakamoto coordinator #4009
Conversation
…s (update the structs and DB tables to reflect this). Also, add top-level Nakamoto block-processing.
…t_canonical_block_header()
…ompatible with both StacksBlock and NakamotoBlock
…th epoch2) and make the remaining directives compatible with epoch3
This looks great to me, just have some review comments.
Great to see this PR!
@jcnelson really appreciate the effort to get early feedback. A lot of devs are really interested in learning more about the Nakamoto implementation, and sharing things early, even when they don't feel fully ready, helps a ton with that 🙏 🙌
pub fn set_nakamoto_signing_key(&mut self, pubkey_hash160: &Hash160) {
    if self.memo.len() < 20 {
        let mut new_memo = vec![0; 20];
        new_memo[0..self.memo.len()].copy_from_slice(&self.memo);
I am a bit confused by this. Not sure I am reading this correctly, but this is what I think is happening:
new_memo is intended to pad self.memo with 0s (if it is less than 20 bytes in length), preserving the original self.memo.len() contents of memo. But then it immediately overwrites this with the contents of the pubkey_hash160. What I don't understand is why bother copying the contents of self.memo back into new_memo, as it will get overwritten on line 93? Can't we just remove line 90 for the exact same result?
I agree this looks overly complicated. It looks like you're just trying to resize self.memo. How about just doing:
self.memo.resize(20, 0);
In Nakamoto, the memo field has been repurposed to contain the hash160 of the miner's public key (which is 20 bytes). I don't think this is enforced by this PR, but I just opened an issue for it. It'll ship with Neon.
// did we already calculate the reward cycle info? If so, then return it.
let first_sortition_id = if let Some(first_sn) = prepare_phase_sortitions.first() {
    if let Some(persisted_reward_cycle_info) =
        SortitionDB::get_preprocessed_reward_set(sort_db.conn(), &first_sn.sortition_id)?
Is it possible that a reward set is preprocessed BEFORE DKG is set and then this call occurs AFTER DKG is set, but the preprocessed reward set is not updated correctly to include this? (Commenting mostly for my PR built on top of this one.)
Yes, that will need to be addressed in your PR. The reward set will need to be updated with the aggregate public key if it has since become available. Perhaps the aggregate public key could be stored in a separate column in the same table? Then you don't have to load the entire reward set to check each time.
I'm going to submit what I have and save the rest of this PR for tomorrow. Most of my comments are pretty minor and about Rust style/safety/optimization. I don't understand the codebase well enough yet to find the sort of logic errors that you and Aaron can.
let new_seed = if let Some(new_seed) = new_seed {
    new_seed
} else {
    // prove on the last-ever sortition's hash to produce the new seed
    let proof = miner
        .make_proof(&leader_key.public_key, &last_snapshot.sortition_hash)
        .expect(&format!(
            "FATAL: no private key for {}",
            leader_key.public_key.to_hex()
        ));

    let new_seed = VRFSeed::from_proof(&proof);
    new_seed
};
Somewhat confusing logic here; consider using unwrap_or_else():
let new_seed = new_seed.unwrap_or_else(|| {
    // prove on the last-ever sortition's hash to produce the new seed
    let proof = miner
        .make_proof(&leader_key.public_key, &last_snapshot.sortition_hash)
        .expect(&format!(
            "FATAL: no private key for {}",
            leader_key.public_key.to_hex()
        ));
    VRFSeed::from_proof(&proof)
});
if *unlock_height >= (v3_unlock_height as u64) {
    v3_unlock_height as u64
I know u32 as u64 is perfectly safe, but IMO it's better to always use from() and try_from() to save developers the cognitive load of having to think about whether any particular use of as is safe:
if *unlock_height >= (u64::from(v3_unlock_height)) {
    u64::from(v3_unlock_height)
This applies to other uses of as in the PR as well. Ideally, we wouldn't have any usage of as in our codebase and could enable the Clippy lint for it.
debug!(
    "Memoized canonical Stacks chain tip is now {}/{}",
    &ch, &bhh
);
Minor style nit, but IMO putting variables inside the format string is cleaner and easier to read:
debug!("Memoized canonical Stacks chain tip is now {ch}/{bhh}");
.optional()
.map_err(db_error::from)?;
map_err() is unnecessary here; ? automatically converts for error types where trait From is implemented:
.optional()?;
sort_db.conn(),
&sns.last()
    .as_ref()
    .expect("FATAL; unreachable: sns is never empty")
Typo?
.expect("FATAL: unreachable: sns is never empty")
};
sort_db
    .get_next_block_recipients(burnchain, sortition_tip, reward_cycle_info.as_ref())
    .map_err(|e| Error::from(e))
.map_err(Error::from)
BurnchainDB::get_burnchain_block(&self.burnchain_blocks_db.conn(), &cursor)
    .map_err(|e| {
        warn!(
            "ChainsCoordinator: could not retrieve block burnhash={}",
Typo, extra space:
"ChainsCoordinator: could not retrieve block burnhash={}",
debug!(
    "Unprocessed burn chain blocks [{}]",
    dbg_burn_header_hashes.join(", ")
);
It's probably more efficient to debug-print the string vector directly:
debug!("Unprocessed burn chain blocks {dbg_burn_header_hashes:?}");
let burn_tip = SortitionDB::get_canonical_chain_tip_bhh(burn_dbconn.conn())?;
let burn_tip_height =
    SortitionDB::get_canonical_burn_chain_tip(burn_dbconn.conn())?.block_height as u32;
u64 as u32 can truncate. Yes, it will take thousands of years at the current Bitcoin block rate, but suppose one day Bitcoin decides on faster blocks, or this code gets used elsewhere (like Subnets, or another chain). Or suppose we add integration tests with unrealistic values for burn_height. This sort of bug is unlikely to occur, but very difficult to track down, so it's better to make it impossible.
u32::try_from(SortitionDB::get_canonical_burn_chain_tip(burn_dbconn.conn())?.block_height)
    .expect("block_height overflow");
}

// tenure-changes must all come first, and must be in order
for i in 0..tenure_change_positions.len() {
Probably better to use iter().enumerate() so you don't do a direct array access.
chain_id: u32,
epoch_id: StacksEpochId,
) -> bool {
    if self.txs.len() == 0 {
NIT: will probably get fixed in a clippy PR, but in general, when doing len() checks against 0, use is_empty() (otherwise silence the clippy warning).
stackslib/src/core/mod.rs
Outdated
pub static STACKS_EPOCH_3_0_MARKER: u8 = 0x0a;
pub static STACKS_EPOCH_2_5_MARKER: u8 = 0x0a;

/// Stacks 3.0 epoch marker. All block-commits in 2.4 must have a memo bitfield with this value
should this say "All block-commits in 3.0 must have a memo..."?
Yes
stackslib/src/clarity_vm/clarity.rs
Outdated
@@ -2471,6 +2610,10 @@ mod tests {
        u32::MAX
    }

    fn get_v3_unlock_height(&self) -> u32 {
Is there a purpose to keeping these unlock-height and activation-height functions separated into v3/v1 and pox_3/pox_4? They always seem to be the same, in which case it makes more sense to keep just a single get_unlock_height and a get_pox_activation_height.
Yes -- they have to be different block heights. Specifically, the new PoX contract needs to be instantiated first, and then after that, the tokens in the old PoX contract must unlock. But, this can't happen in the same block.
@@ -658,7 +658,13 @@ pub struct SchnorrThresholdSignature {
    //pub scalar: wsts::Scalar,
}

/// Reasons why a `TenureChange` transaction can be de
impl SchnorrThresholdSignature {
Oh, I didn't see this when I was building off this branch. Perhaps I should be using this instead of my SchnorrSignature in my own PR.
Is there a reason you don't want to use the wsts::common::Signature directly? Are we going to be implementing a struct wrapper around the wsts point and scalar types to basically add our own custom SchnorrThresholdSignature?
You should do whatever makes the most sense in your PR :) This PR only needs this struct to encode/decode; it doesn't care about validating the contents.
ThresholdSignature is the same as wsts::common::Signature; I defined a struct in this repo so that I could implement StacksMessageCodec for it (can't do that with types defined outside this crate).
@@ -8544,9 +8546,15 @@ pub mod test {
    fn get_v2_unlock_height(&self) -> u32 {
        u32::MAX
    }
    fn get_v3_unlock_height(&self) -> u32 {
Similar comment as before. Not sure why these are separated by version and pox since they are all identical
They're not identical on mainnet. This trait impl is just for testing transaction-handling.
All my comments are nits or about typos really. Don't think I know enough to find logic errors, so take this review with a grain of salt XD
Sooooo
LGTM
/// Find all positionally-valid tenure changes in this block.
/// They must be the first transactions.
/// Return their indexes into self.txs
fn find_tenure_changes(&self) -> Vec<usize> {
I think the caller of this might be simplified if this returned Vec<&TenureChangePayload>, because you wouldn't need to check the payload type again when validating the payloads.
True, but that would also require lifetimes, which could be an issue depending on when/how the result is used. Also, minor nit: using filter_map() would simplify and clarify this logic a bit, and it might make more sense to return an Iterator than a Vec.
}
if let Some(false) = wellformed {
    // block isn't well-formed
    return None;
Should this case error and invalidate the block?
It does, by way of validate_transactions_static(), which gets called by validate_nakamoto_block_burnchain(), which in turn is called by accept_block().
So, an invalid Nakamoto block (i.e. one that's missing a coinbase) will never get stored to the staging DB.
As an added safety check, the case where the call to get_coinbase_tx() returns None causes append_block() to fail with an invalid-block error.
LGTM -- just had a few comments, and one or two questions about behavior.
More comments...
PaidRewards {
    pox: vec![],
    burns: 0,
}
Might be a little cleaner to #[derive(Default)] for this struct and then do:
PaidRewards::default()
.find(|txn| {
    if let TransactionPayload::TenureChange(..) = txn.payload {
        true
    } else {
        false
    }
})
Can simplify this to:
.find(|txn| matches!(txn.payload, TransactionPayload::TenureChange(..)))
let wellformed = self.is_wellformed_first_tenure_block();
if wellformed.is_none() {
    // block isn't a first-tenure block, so no valid tenure changes
    return None;
} else if let Some(false) = wellformed {
    // this block is malformed
    info!("Block is malformed";
        "block_id" => %self.block_id());
    return Some(false);
}
If a function returns an Option, you can use the ? operator with Option types:
if !self.is_wellformed_first_tenure_block()? {
    // this block is malformed
    info!("Block is malformed";
        "block_id" => %self.block_id());
    return Some(false);
}
let wellformed = self.is_wellformed_first_tenure_block();
if wellformed.is_none() {
    // block isn't a first-tenure block, so no coinbase
    return None;
}
if let Some(false) = wellformed {
Simplify:
if !self.is_wellformed_first_tenure_block()? {
self.txs.iter().find(|tx| {
    if let TransactionPayload::Coinbase(..) = &tx.payload {
        true
    } else {
        false
    }
})
Simplify:
self.txs.iter().find(|tx| matches!(&tx.payload, TransactionPayload::Coinbase(..)))
.map(|coinbase_tx| {
    if let TransactionPayload::Coinbase(_, _, vrf_proof) = &coinbase_tx.payload {
        vrf_proof.as_ref()
    } else {
        // actually unreachable
        None
    }
})
.flatten()
Simplify:
.and_then(|coinbase_tx| {
    if let TransactionPayload::Coinbase(_, _, vrf_proof) = &coinbase_tx.payload {
        vrf_proof.as_ref()
    } else {
        // actually unreachable
        None
    }
})
Approving... I left plenty of Rust suggestions, but nothing critical. The only thing you should definitely fix is the truncating as conversions.
block_commit.key_block_ptr as u64,
block_commit.key_vtxindex as u32,
u64::from(block_commit.key_block_ptr),
u32::from(block_commit.key_vtxindex),
    return false;
}
for tx in txs.iter() {
    if let TransactionPayload::Coinbase(_, ref recipient_opt, ref proof_opt) = &tx.payload {
Should this be a match statement rather than a series of if lets?
pub burn_header_timestamp: u64,
/// Size of the block corresponding to `anchored_header` in bytes
Thank you for adding doc comments to struct fields, really helps those of us who are not so familiar with the codebase
match &self {
    StacksBlockHeaderTypes::Nakamoto(ref x) => Some(x),
Lots of unnecessary ref/deref operations in this codebase:
match self {
    StacksBlockHeaderTypes::Nakamoto(x) => Some(x),
.registered_observers
.iter()
.enumerate()
.filter(|(obs_id, _observer)| self.miner_observers_lookup.contains(&(*obs_id as u16)))
usize as u16 can truncate:
.filter(|(obs_id, _observer)| self.miner_observers_lookup.contains(&(u16::try_from(*obs_id).expect("obs_id overflow"))))
    cost: consumed.clone(),
    tx_events,
})
.unwrap();
This probably should never fail, but it's still preferable to use expect() in non-test code:
.expect("Deserializing MinedNakamotoBlockEvent failed");
parent_block_burn_height: parent_block.block_height,
parent_block_total_burn: parent_block.total_burn,
parent_block_burn_height: parent_block_height,
parent_block_total_burn: parent_block_total_burn,
parent_block_total_burn,
let inner_obj = if let Some(inner_obj) = txevent_obj.get("Success") {
    inner_obj
} else if let Some(inner_obj) = txevent_obj.get("ProcessingError") {
    inner_obj
} else if let Some(inner_obj) = txevent_obj.get("Skipped") {
    inner_obj
} else {
    panic!("TransactionEvent object should have one of Success, ProcessingError, or Skipped")
};
inner_obj
    .as_object()
Can be simplified:
txevent_obj
    .get("Success")
    .or_else(|| txevent_obj.get("ProcessingError"))
    .or_else(|| txevent_obj.get("Skipped"))
    .expect("TransactionEvent object should have one of Success, ProcessingError, or Skipped")
    .as_object()
Hi @jbencin, I appreciate your feedback, but I'm going to defer on most of the ergonomics changes for now. I'm already too anxious about this PR blocking other work, and these changes don't affect the behavior or even the legibility of the code. I did take the time to go through all the files I touched and use explicit integer conversions, however.
…s really needs a clippy pass)
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
This is a PR which implements the full Nakamoto coordinator. It makes the following notable changes:
Nakamoto blocks now contain only the consensus hash and parent StacksBlockId to identify their placement in the block history. No more parent consensus hash, and no more burn_view.
It reuses the canonical Stacks tip memoization logic from the 2.x days to enable the Sortition DB to track the canonical Stacks tip as blocks get processed.
It splits the coordinator's main loop into a 2.x and a Nakamoto subroutine. It will use the 2.x coordinator loop up until the PoX anchor block for the first Nakamoto reward cycle is processed. At this point, the Nakamoto coordinator loop will be used.
It prevents the 2.x coordinator logic from processing Stacks and Bitcoin blocks beyond the start of the Nakamoto epoch.
It requires that the Nakamoto PoX anchor block be the parent of the first Nakamoto block mined in the prepare phase. In doing so, it requires that the node calculate the reward set at the beginning of the prepare phase, instead of at the end.
The last bullet point imposes three additional design requirements: the Nakamoto epoch must start ~100 blocks into a reward phase (so Stackers can come online and DKG can happen), the last 2.x reward cycle must be a PoX reward cycle, and there needs to be a "Nakamoto bootstrap" contract deployed to mainnet in which Stackers can register their signing keys prior to the first Nakamoto reward cycle. This contract would be used just for this purpose, and it cannot be a boot contract.
I still need to do a lot of testing on this PR, but I'm posting it as a draft for now to get early feedback on the design.