Txmv2 #15467
Conversation
Looks good, just a few comments
```diff
 			tx.LastBroadcastAt = nil // Mark reorged transaction as if it wasn't broadcasted before
 			unconfirmedTransactionIDs = append(unconfirmedTransactionIDs, tx.ID)
 			m.UnconfirmedTransactions[*tx.Nonce] = tx
 			delete(m.ConfirmedTransactions, *tx.Nonce)
 		}
 	}

-	if len(m.ConfirmedTransactions) >= maxQueuedTransactions {
+	if len(m.ConfirmedTransactions) > maxQueuedTransactions {
 		prunedTxIDs := m.pruneConfirmedTransactions()
```
Why do we need to prune confirmed txs? The maxQueuedTransactions setting was meant to apply only to unstarted txs in TXMv1. Is this a new config for TXMv2? Also, why is it hard-coded to 250?
We need a way to prevent the queues from growing indefinitely (all of them); otherwise they could end up consuming all available memory and crashing the node. maxQueuedTransactions will eventually be replaced by our existing MaxQueued config once I implement the persistent storage. Figuring out the memory-optimal value would require extensive tests, so for now it was easier to start with the hard-coded value we use in most cases, just to be on the safe side.
OK, as long as TXMv2 is not widely used, we are good. But I know CCIP uses its own MaxQueued values and expects only unstarted txs to get pruned.
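For illustration, a minimal sketch of what a pruning pass like pruneConfirmedTransactions could look like, assuming the map-keyed-by-nonce store shape visible in the diff; the stand-in types and the evict-oldest-nonce policy are assumptions, not necessarily what the PR implements:

```go
package txm

import "sort"

// Stand-in types for illustration only; the real TXMv2 types differ.
type tx struct {
	ID    uint64
	Nonce *uint64
}

type inMemoryStore struct {
	ConfirmedTransactions map[uint64]*tx // keyed by nonce, as in the diff
}

const maxQueuedTransactions = 250

// pruneConfirmedTransactions evicts the oldest (lowest-nonce) confirmed
// transactions until the queue is back at the cap, returning the pruned IDs.
func (m *inMemoryStore) pruneConfirmedTransactions() []uint64 {
	overflow := len(m.ConfirmedTransactions) - maxQueuedTransactions
	if overflow <= 0 {
		return nil
	}
	// Sort nonces ascending so the oldest confirmed transactions go first.
	nonces := make([]uint64, 0, len(m.ConfirmedTransactions))
	for nonce := range m.ConfirmedTransactions {
		nonces = append(nonces, nonce)
	}
	sort.Slice(nonces, func(i, j int) bool { return nonces[i] < nonces[j] })

	prunedTxIDs := make([]uint64, 0, overflow)
	for _, nonce := range nonces[:overflow] {
		prunedTxIDs = append(prunedTxIDs, m.ConfirmedTransactions[nonce].ID)
		delete(m.ConfirmedTransactions, nonce)
	}
	return prunedTxIDs
}
```

Whatever policy is used, returning the pruned IDs lets the caller log or audit which transactions were dropped, which is what the prunedTxIDs variable in the diff suggests.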
```diff
 	if err != nil {
-		t.lggr.Errorw("Error when fetching initial pending nonce", "address", address, "err", err)
+		t.lggr.Errorw("Error when fetching initial nonce", "address", address, "err", err)
```
nit: should be warn level. Same for below
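That is, something like the following (a sketch of the suggested change; it assumes lggr exposes the usual Warnw sugar alongside Errorw):

```diff
-		t.lggr.Errorw("Error when fetching initial nonce", "address", address, "err", err)
+		t.lggr.Warnw("Error when fetching initial nonce", "address", address, "err", err)
```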
```diff
 	ctxWithTimeout, cancel := context.WithTimeout(ctx, pendingNonceDefaultTimeout)
 	defer cancel()
 	for {
-		pendingNonce, err = t.client.PendingNonceAt(ctxWithTimeout, address)
+		pendingNonce, err := t.client.PendingNonceAt(ctxWithTimeout, address)
```
Since this function is called repeatedly within broadcastLoop(), it would be more efficient to first check whether a nonce is already set for this address and return early if so.
This is intentional. The nonce won't have been stored unless the TXM has restarted, in which case we actually do want to overwrite it.
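To make the trade-off concrete, here is a self-contained sketch of the fetch loop reconstructed from the hunk above; the setNonce name, the nonces map, the timeout value, and the retry backoff are assumptions for illustration:

```go
package txm

import (
	"context"
	"time"

	"github.com/ethereum/go-ethereum/common"
)

// Stand-in interfaces for illustration only; the real TXMv2 types differ.
type nonceClient interface {
	PendingNonceAt(ctx context.Context, account common.Address) (uint64, error)
}

type logger interface {
	Warnw(msg string, keysAndValues ...interface{})
}

const pendingNonceDefaultTimeout = 30 * time.Second // assumed value

type txm struct {
	client nonceClient
	lggr   logger
	nonces map[common.Address]uint64
}

// setNonce fetches the pending nonce for an address, retrying until the RPC
// responds or the timeout elapses. It deliberately overwrites any previously
// stored nonce: a stored value only exists after a TXM restart, and in that
// case the fresh on-chain value should win.
func (t *txm) setNonce(ctx context.Context, address common.Address) error {
	ctxWithTimeout, cancel := context.WithTimeout(ctx, pendingNonceDefaultTimeout)
	defer cancel()
	for {
		pendingNonce, err := t.client.PendingNonceAt(ctxWithTimeout, address)
		if err == nil {
			t.nonces[address] = pendingNonce
			return nil
		}
		t.lggr.Warnw("Error when fetching initial nonce", "address", address, "err", err)
		select {
		case <-ctxWithTimeout.Done():
			return ctxWithTimeout.Err()
		case <-time.After(time.Second): // assumed retry backoff
		}
	}
}
```

Under the author's reasoning, an early return on an already-set nonce would skip exactly the restart case where a fresh read from the chain is wanted.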
This PR introduces a new, experimental version of the Transaction Manager. Not all of the existing features are implemented yet; they will be rolled out in future PRs. Currently, OEV is using this TXMv2 because of its properties.