[stacks-core] Update .pox-4 to calculate signer key vote shares #4059
Needs revisiting |
Something related to this that occurs to me is the consequences of participation in the DKG process. The DKG process cannot require 100% participation; otherwise, someone qualifying for a single stacker slot could simply stall the process. If the DKG has 100% participation, then the threshold for signing should be 70%, and the vote threshold should be 100% -- not 70% -- because otherwise a 70% group could construct an aggregate key including only shares for themselves, and then successfully elect that aggregate key. If the DKG has less than 100% participation, then the threshold for signing must be higher (i.e., if only 70% of stackers participate in DKG, then the threshold signature must require 100% of them), and the voting process must enforce this. |
I don't think there was ever a world where the system could only work with 100% Stacker participation in DKG. It needs a way to operate in a degraded mode of operation when less than 100% of Stackers are online. However, I also believe that there is no world in which the system allows blocks to be produced with fewer than 70% of the locked STX signing them. So, that leaves us in a position where if fewer than 100% Stackers participate in DKG, then a greater fraction of the participating Stackers will need to participate in signing rounds so that the absolute 70% minimum is cleared. A naive solution of running DKG once per reward cycle and just doing our best afterwards could put us in a precarious position where there's essentially no room for signers to drop off. For example, suppose only 71% of the Stackers (by stacked STX) participated in DKG -- then, >99% of these Stackers would be required to sign blocks, and the remaining 29% of Stackers don't have anything to do. So, I'd propose two mitigations:
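The back-of-the-envelope arithmetic in that example can be made explicit. A minimal sketch (Python; the function name and the 70% floor as a parameter are illustrative, not from the codebase):

```python
def required_signing_fraction(dkg_participation: float, floor: float = 0.70) -> float:
    """Fraction of DKG participants (weighted by stacked STX) that must
    sign each block so that at least `floor` of ALL locked STX signs.

    Illustrative sketch of the math above, not actual stacks-core code.
    """
    if dkg_participation < floor or dkg_participation > 1.0:
        raise ValueError("participation must be within [floor, 1.0]")
    return floor / dkg_participation

# With 71% DKG participation, roughly 98.6% of the participants must
# sign every block, leaving essentially no room for signers to drop off.
```

This makes the "precarious position" concrete: the closer DKG participation gets to the 70% floor, the closer the required signing rate among participants gets to 100%.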
|
Regarding this point:
This does not have to be a map in the contract. The node already calculates the number of reward slots per signing key as part of calculating the reward set. It only needs to be written into the contract if we need Clarity code to interact with it (and it's not clear to me that we do).

EDIT: actually, we will need to write some of this state to the Clarity contract: we need to be able to validate votes for the aggregate public key. So, we need a data map like this:

```clarity
;; This gets written by the Stacks node once it calculates the reward cycle.
;; It'll get written as part of processing the first tenure-start block after the last
;; tenure-start block in the preceding reward phase, since that's the earliest time at
;; which it can be calculated.
;; Note that this would not be limited to being written in a prepare phase; it simply gets
;; written as part of the block-processing logic for the next tenure after the end of the
;; reward phase (regardless of where it falls in the next reward cycle).
(define-map reward-cycle-signing-keys
    ;; key: (reward-cycle, signer-address)
    {
        reward-cycle: uint,
        signer-address: principal
    }
    ;; value: num-slots
    uint
)
```

We'll also need a data map for registering ballots for multiple votes over the course of the reward cycle:

```clarity
;; This gets updated by a contract-call to vote.
;; Once a vote is cast for a particular (reward-cycle, round) pair, it cannot be cast again.
(define-map aggregate-pubkey-votes
    ;; key: (signer-address, reward-cycle, round-number)
    {
        signer-address: principal,
        reward-cycle: uint,
        vote-round: uint
    }
    ;; value: (vote count, public key)
    {
        num-slots: uint,
        agg-pubkey: (buff 33)
    }
)

;; This is the running tally for each vote round.
;; It gets updated as part of casting a vote.
(define-map aggregate-pubkey-vote-tally
    ;; key: (aggregate pubkey, reward-cycle, round-number)
    {
        agg-pubkey: (buff 33),
        reward-cycle: uint,
        vote-round: uint
    }
    ;; value: total votes
    uint
)

;; This is the last aggregate public key to be chosen.
;; The node writes this directly if it notices that an entry in aggregate-pubkey-vote-tally
;; exceeds 70% of the signing round, has a higher reward cycle (or the same reward cycle
;; and a higher vote round), *and* has a higher total number of votes.
(define-data-var agg-pubkey
    {
        reward-cycle: uint,
        vote-round: uint,
        total-votes: uint,
        key: (buff 33)
    }
    { reward-cycle: u0, vote-round: u0, total-votes: u0, key: 0x }
)
```

There's no upper bound on how many vote rounds there can be. The vote function body will emit an event (via a |
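The node-side supersession check described for agg-pubkey can be sketched as follows (Python; the function, field names mirroring the agg-pubkey tuple, and the integer-math cutoff are assumptions for illustration, not stacks-core internals):

```python
def should_replace_agg_pubkey(current: dict, candidate: dict, total_slots: int) -> bool:
    """Decide whether a tally entry supersedes the stored agg-pubkey.

    `current` and `candidate` are dicts with "reward-cycle", "vote-round",
    and "total-votes" keys; `total_slots` is the number of signer slots in
    the candidate's reward cycle. Hypothetical sketch of the rule above.
    """
    # Must exceed 70% of the signing slots (integer math avoids floats).
    if candidate["total-votes"] * 10 <= total_slots * 7:
        return False
    # Must come from a higher reward cycle, or the same cycle and a
    # higher vote round...
    newer = (candidate["reward-cycle"], candidate["vote-round"]) > \
            (current["reward-cycle"], current["vote-round"])
    # ...*and* carry a higher total number of votes.
    return newer and candidate["total-votes"] > current["total-votes"]
```

Note that under a literal reading of the rule, a later round with fewer total votes never displaces the stored key; whether that is the intended tie-breaking behavior is exactly the kind of detail the thread leaves open.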
Double-checking my understanding here: the source of truth for the "vote-round" will live in agg-pubkey? Also, is there a reason the key for agg-pubkey isn't an optional? Shouldn't it be none until some key reaches >70% consensus? |
Pasting design doc that mentions this issue for reference: https://docs.google.com/document/d/1IFTaHjEGHJkdFiEfKNayfmD0UlseoXpVcoMgJv0VW1g/edit?usp=sharing |
Ah, this data var could have type |
I don't think so. |
Okay -- issue created (#4111), and I dropped some thoughts on it there. |
Some work started in #4116. I am confused about how the work is distributed. |
@jcnelson should we use synthesis_pox_events? or print? |
Given the map of signer key (public key) to amount of stacked STX, calculate the number of vote shares allocated to each signer. Note that vote power should be capped at 25%, and the remaining percentage normalized over the other signers.
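One way to read the capping requirement, as a sketch (Python; the iterative redistribution scheme is an assumption about what "normalized over the other signers" means here, not the implemented algorithm, and real code would use fixed-point rather than float arithmetic):

```python
def vote_shares(stacked_stx: dict, cap: float = 0.25) -> dict:
    """Proportional vote shares capped at `cap`, with the capped excess
    redistributed proportionally among the uncapped signers. Repeats
    until no uncapped signer exceeds the cap. Hypothetical sketch.
    """
    total = sum(stacked_stx.values())
    shares = {k: v / total for k, v in stacked_stx.items()}
    capped = set()
    while True:
        over = [k for k in shares if k not in capped and shares[k] > cap]
        if not over:
            return shares
        excess = sum(shares[k] - cap for k in over)
        for k in over:
            shares[k] = cap
            capped.add(k)
        rest = sum(v for k, v in shares.items() if k not in capped)
        if rest == 0:
            # Everyone is capped; shares sum to len(shares) * cap < 1.
            return shares
        for k in shares:
            if k not in capped:
                shares[k] += excess * shares[k] / rest
```

For example, with five signers stacking 60/10/10/10/10 STX, the 60% signer is capped to 25% and each remaining signer's share rises from 10% to 18.75%, so the shares still sum to 1.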