Feat/multiple commits #599
Conversation
pallets/subtensor/src/lib.rs (outdated)
```diff
 pub type WeightCommits<T: Config> = StorageDoubleMap<
     _,
     Twox64Concat,
-    u16,
+    u16, // netuid
     Twox64Concat,
     T::AccountId,
-    (H256, u64),
-    OptionQuery,
+    BTreeMap<u64, (H256, u64)>, // nonce -> (hash, block_number)
+    ValueQuery,
 >;
```
Does this need a migration?
Done. Can you please review the implementation and tests?
Looks good. You should also (see the sketch after this list):
- put the migration in the `OnRuntimeUpgrade` trait
- add the migration to the runtime
- set the in-code pallet storage version in the pallet (see https://github.com/paritytech/polkadot-sdk/blob/a466a490fb20945b834e6c1983cf37daba6fb3e4/substrate/frame/balances/src/lib.rs#L319)
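For reference, a minimal sketch of what such a migration could look like, assuming the old value type was `(H256, u64)` and assigning migrated entries an initial nonce of `0`; the struct name, target storage version, and weight accounting are illustrative, not taken from this PR:

```rust
use frame_support::{pallet_prelude::*, traits::OnRuntimeUpgrade};
use sp_core::H256;
use sp_std::collections::btree_map::BTreeMap;

pub struct MigrateWeightCommitsToBTreeMap<T>(sp_std::marker::PhantomData<T>);

impl<T: Config> OnRuntimeUpgrade for MigrateWeightCommitsToBTreeMap<T> {
    fn on_runtime_upgrade() -> Weight {
        if Pallet::<T>::on_chain_storage_version() < 1 {
            let mut translated = 0u64;
            // Re-encode each old `(hash, block_number)` value as a
            // one-entry map keyed by the assumed initial nonce 0.
            WeightCommits::<T>::translate::<(H256, u64), _>(|_netuid, _who, (hash, block)| {
                translated = translated.saturating_add(1);
                let mut map = BTreeMap::new();
                map.insert(0u64, (hash, block));
                Some(map)
            });
            // Bump the on-chain version so the migration runs only once.
            StorageVersion::new(1).put::<Pallet<T>>();
            T::DbWeight::get()
                .reads_writes(translated.saturating_add(1), translated.saturating_add(1))
        } else {
            T::DbWeight::get().reads(1)
        }
    }
}
```

The migration would then be registered in the runtime's migrations tuple (the one handed to `Executive`) and the pallet's in-code storage version bumped to match, mirroring the balances example linked above.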
```rust
            ),
            Error::<Test>::InvalidRevealCommitHashNotMatch
        );
    });
}

#[test]
fn test_multiple_commit_reveal() {
```
Also test for out-of-order reveal.
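A self-contained sketch of the property such a test should pin down, written against a plain `BTreeMap` rather than the pallet's mock runtime (the helper and error names here are illustrative, not the pallet's actual API):

```rust
use std::collections::BTreeMap;

// Mirrors the pallet's `nonce > last_processed_nonce` reveal check.
fn try_reveal(
    commits: &mut BTreeMap<u64, (&'static str, u64)>,
    last_processed_nonce: &mut u64,
    nonce: u64,
) -> Result<(&'static str, u64), &'static str> {
    if nonce <= *last_processed_nonce {
        return Err("NonMonotonicNonce");
    }
    let commit = commits.remove(&nonce).ok_or("NoCommitFound")?;
    *last_processed_nonce = nonce;
    Ok(commit)
}

fn main() {
    let mut commits = BTreeMap::new();
    commits.insert(1u64, ("hash_1", 100u64));
    commits.insert(2u64, ("hash_2", 101u64));
    let mut last = 0u64;

    // Revealing nonce 2 first passes a strictly-greater check...
    assert!(try_reveal(&mut commits, &mut last, 2).is_ok());
    // ...but nonce 1 is then permanently skipped: an out-of-order reveal
    // forfeits the earlier commit rather than reordering it.
    assert_eq!(
        try_reveal(&mut commits, &mut last, 1),
        Err("NonMonotonicNonce")
    );
}
```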
pallets/subtensor/src/weights.rs (outdated)
```rust
let last_processed_nonce = Self::get_last_processed_nonce(netuid, &who);
ensure!(nonce > last_processed_nonce, Error::<T>::NonMonotonicNonce);
```
Could you brick your subnet by using a nonce that's u64 max? Should we enforce cur nonce + 1 instead?
You couldn't brick your subnet by doing this; what would happen is that you would brick your validator, as the nonces are per validator, not per subnet.
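To make the trade-off concrete: once `LastProcessedNonce` reaches `u64::MAX`, no nonce can ever be strictly greater, so that validator can never reveal again on that subnet. A sketch of a guard that closes this off (the error names are assumptions, not the pallet's actual variants):

```rust
fn validate_nonce(nonce: u64, last_processed_nonce: u64) -> Result<(), &'static str> {
    // Accepting u64::MAX would make every later `nonce > last` check fail
    // for this (netuid, validator) key, locking the validator out of reveals.
    if nonce == u64::MAX {
        return Err("NonceOverflow");
    }
    if nonce <= last_processed_nonce {
        return Err("NonMonotonicNonce");
    }
    Ok(())
}

fn main() {
    assert!(validate_nonce(5, 4).is_ok());
    assert_eq!(validate_nonce(4, 4), Err("NonMonotonicNonce"));
    assert_eq!(validate_nonce(u64::MAX, 0), Err("NonceOverflow"));
}
```

Enforcing `nonce == last_processed_nonce + 1` instead, as suggested above, would also rule out jumping straight to `u64::MAX`, at the cost of disallowing gaps in the nonce sequence.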
pallets/subtensor/src/weights.rs (outdated)
```rust
/// Get the last processed nonce for a given netuid and account
///
/// # Arguments
///
/// * `netuid` - The network ID
/// * `account` - The account ID
///
/// # Returns
///
/// Returns the last processed nonce for the given netuid and account, or 0 if not found
///
/// # Note
///
/// This function is used to retrieve the last nonce that was processed for a specific account
/// in a specific subnet. It's crucial for maintaining the order of transactions and
/// preventing replay attacks.
pub fn get_last_processed_nonce(netuid: u16, account: &T::AccountId) -> u64 {
    // If no nonce is found, 0 is returned as the default value.
    LastProcessedNonce::<T>::get(netuid, account)
}
```
I think we should just use `LastProcessedNonce::<T>::get(netuid, account)` directly inline in our code, instead of adding all this boilerplate.
We have not decided whether to start accessing storage values directly in code, and this follows the current pattern of creating getters and setters for storage values.
pallets/subtensor/src/weights.rs (outdated)
```rust
/// Set the last processed nonce for a given netuid and account
///
/// # Arguments
///
/// * `netuid` - The network ID
/// * `account` - The account ID
/// * `nonce` - The nonce to set
///
/// # Note
///
/// This function updates the last processed nonce for a specific account in a specific subnet.
/// It's important for maintaining the state of processed transactions and ensuring
/// that each nonce is only processed once.
///
/// # TODO
///
/// Consider adding error handling or logging for cases where the nonce update fails.
pub fn set_last_processed_nonce(netuid: u16, account: &T::AccountId, nonce: u64) {
    // This operation overwrites any existing nonce for the given netuid and account.
    LastProcessedNonce::<T>::insert(netuid, account, nonce);
}
```
I think we should just use `LastProcessedNonce::<T>::insert(netuid, account, nonce)` directly inline in our code, instead of adding all this boilerplate.
Same as above.
Everything looks solid to me.
pallets/subtensor/src/weights.rs (outdated)
```rust
ensure!(!commits.contains_key(&nonce), Error::<T>::DuplicateNonce);

// Check if there are already 10 commits
if commits.len() >= 10 {
```
Do we want to make this configurable, or are we happy with 10?
The commit-reveal interval duration is configurable per subnet, so I think the maximum number of commits within that interval should be customizable as well.
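A hedged sketch of what such a per-subnet hyperparameter could look like in FRAME; the names `MaxWeightCommits`/`DefaultMaxWeightCommits` and the default of 10 are assumptions, not this PR's code:

```rust
#[pallet::type_value]
pub fn DefaultMaxWeightCommits<T: Config>() -> u64 {
    10 // keep the current behaviour as the default
}

#[pallet::storage] // netuid -> maximum unrevealed commits per validator
pub type MaxWeightCommits<T: Config> =
    StorageMap<_, Twox64Concat, u16, u64, ValueQuery, DefaultMaxWeightCommits<T>>;
```

The hardcoded `10` in the length check would then be replaced by `MaxWeightCommits::<T>::get(netuid)`, with an admin extrinsic to tune it per subnet alongside the existing commit-reveal interval setting.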
Bunch of merge conflicts now, btw.
The base branch was changed.
Description
Motivation for change
The current weight commitment system lacks flexibility and doesn't allow validators to queue multiple weight updates. This limitation can lead to inefficiencies in the network, especially during periods of high activity or when validators need to make rapid adjustments to their weights.
Behaviour pre-change
A validator could have at most one pending weight commit per subnet: `WeightCommits` stored a single `(hash, block_number)` pair per netuid and account.
Behaviour post-change
A validator can queue multiple pending commits per subnet (up to 10, managed FIFO), each keyed by a `u64` nonce in a `BTreeMap`. Reveals must use strictly increasing nonces, and duplicate nonces are rejected.
Alternatives considered
Batch commits: We considered allowing validators to submit multiple weight updates in a single transaction. This was rejected due to increased complexity in processing and potential issues with transaction size limits.
Time-based expiry: An alternative to the FIFO system was to expire commits based on time. This was not implemented as it could lead to issues with network congestion and block time variations.
Unlimited commits: We considered allowing unlimited pending commits per validator. This was rejected to prevent potential DoS attacks and to limit storage usage.
Related Issue(s)
Type of Change
Breaking Change
If this PR introduces a breaking change, please provide a detailed description of the impact and the migration path for existing applications.
Checklist
- I have run `cargo fmt` and `cargo clippy` to ensure my code is formatted and linted correctly

Screenshots (if applicable)
Please include any relevant screenshots or GIFs that demonstrate the changes made.
Additional Notes
Please provide any additional information or context that may be helpful for reviewers.