fix: block.timestamp is not accurate #3398

Open
wants to merge 19 commits into
base: main
Choose a base branch
from
Open
Changes from 1 commit
41 changes: 25 additions & 16 deletions core/node/state_keeper/src/keeper.rs
@@ -72,6 +72,7 @@ pub struct ZkSyncStateKeeper {
     sealer: Arc<dyn ConditionalSealer>,
     storage_factory: Arc<dyn ReadStorageFactory>,
     health_updater: HealthUpdater,
+    should_create_l2_block: bool,
 }

@@ -89,6 +90,7 @@ impl ZkSyncStateKeeper {
             sealer,
             storage_factory,
             health_updater: ReactiveHealthCheck::new("state_keeper").1,
+            should_create_l2_block: false,
         }
     }

@@ -187,7 +189,10 @@ impl ZkSyncStateKeeper {

         // Finish current batch.
         if !updates_manager.l2_block.executed_transactions.is_empty() {
-            self.seal_l2_block(&updates_manager).await?;
+            // When `should_create_l2_block` is set, the last L2 block has already been sealed.
+            if !self.should_create_l2_block {
+                self.seal_l2_block(&updates_manager).await?;
+            }
             // We've sealed the L2 block that we had, but we still need to set up the timestamp
             // for the fictive L2 block.
             let new_l2_block_params = self
@@ -199,6 +204,7 @@
                 &mut *batch_executor,
             )
             .await?;
+            self.should_create_l2_block = false;
         }

         let (finished_batch, _) = batch_executor.finish_batch().await?;
@@ -585,14 +591,30 @@ impl ZkSyncStateKeeper {
             return Ok(());
         }

-        if self.io.should_seal_l2_block(updates_manager) {
+        if !self.should_create_l2_block && self.io.should_seal_l2_block(updates_manager) {
             tracing::debug!(
                 "L2 block #{} (L1 batch #{}) should be sealed as per sealing rules",
                 updates_manager.l2_block.number,
                 updates_manager.l1_batch.number
             );
-            self.seal_l2_block(updates_manager).await?;
+            self.should_create_l2_block = true;
         }
+        let waiting_latency = KEEPER_METRICS.waiting_for_tx.start();
+        let Some(tx) = self
+            .io
+            .wait_for_next_tx(POLL_WAIT_DURATION, updates_manager.l2_block.timestamp)
Contributor (review comment on the `wait_for_next_tx()` call above):
Sorry for not noticing this before, but the `wait_for_next_tx()` usage is bogus, since `updates_manager.l2_block.timestamp` may refer to the timestamp of the previous block. This timestamp is used in the `MempoolIO` implementation to check the `timestamp_asserter_range` of the transaction.

I think the easiest solution would be to create a tentative block timestamp before calling `wait_for_next_tx()` if necessary. This circles back to the idea we've discussed of creating a block in `UpdatesManager` early on, but not passing it to the VM until the first transaction in the block arrives (and adjusting the block timestamp with the current time before that). The adjustment likely has to be expressed as a `StateKeeperIO` method, since we don't want to rely on any sort of I/O (like getting the wall-clock time) unconditionally; e.g., for `ExternalIO`, block timestamps must never be adjusted.
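
A minimal sketch of that tentative-timestamp idea, assuming a hypothetical trait method (`tentative_next_l2_block_timestamp` is invented here and is not part of the actual `StateKeeperIO` trait):

```rust
use async_trait::async_trait;
use std::time::{SystemTime, UNIX_EPOCH};

/// Hypothetical extension; the real `StateKeeperIO` trait has no such method.
#[async_trait]
trait TentativeBlockTimestamp {
    /// Tentative timestamp of the block that the next transaction would land in.
    async fn tentative_next_l2_block_timestamp(&mut self, prev_block_ts: u64) -> u64;
}

struct MempoolIoLike;

#[async_trait]
impl TentativeBlockTimestamp for MempoolIoLike {
    async fn tentative_next_l2_block_timestamp(&mut self, prev_block_ts: u64) -> u64 {
        // Sequencer-style I/O: adjust with wall-clock time, never going backwards.
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("clock is before the Unix epoch")
            .as_secs();
        now.max(prev_block_ts)
    }
}

struct ExternalIoLike {
    replicated_block_ts: u64,
}

#[async_trait]
impl TentativeBlockTimestamp for ExternalIoLike {
    async fn tentative_next_l2_block_timestamp(&mut self, _prev_block_ts: u64) -> u64 {
        // External-node I/O: timestamps are replicated from the main node
        // and must never be adjusted locally.
        self.replicated_block_ts
    }
}
```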

To make invariants clearer, it may make sense to change StateKeeperIO to have 2 methods to create blocks:

  • Creating an ordinary block, which returns a transaction together with a block.
  • Creating a fictive block.

I think this would make the intended workflow obvious at the type-system level (see the sketch below).
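
For illustration, a rough sketch of that split (type and method names here are placeholders, not the actual zksync-era API):

```rust
use async_trait::async_trait;

// Placeholder types standing in for the real state-keeper types.
struct L2BlockParams {
    timestamp: u64,
}
struct Transaction;

#[async_trait]
trait BlockCreationIo {
    /// Opens an ordinary L2 block: waits for a transaction and returns it
    /// together with the params of the block that should contain it, so a
    /// non-fictive block can never be created without its first transaction.
    async fn wait_for_block_with_first_tx(&mut self) -> Option<(L2BlockParams, Transaction)>;

    /// Opens the fictive (empty) L2 block that finishes an L1 batch.
    async fn fictive_l2_block_params(&mut self) -> L2BlockParams;
}
```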

+            .instrument(info_span!("wait_for_next_tx"))
+            .await
+            .context("error waiting for next transaction")?
+        else {
+            waiting_latency.observe();
+            continue;
+        };
+        waiting_latency.observe();
+        let tx_hash = tx.hash();

+        if self.should_create_l2_block {
+            let new_l2_block_params = self
+                .wait_for_new_l2_block_params(updates_manager, stop_receiver)
+                .await
@@ -605,22 +627,9 @@
             );
             Self::start_next_l2_block(new_l2_block_params, updates_manager, batch_executor)
                 .await?;
+            self.should_create_l2_block = false;
         }
-        let waiting_latency = KEEPER_METRICS.waiting_for_tx.start();
-        let Some(tx) = self
-            .io
-            .wait_for_next_tx(POLL_WAIT_DURATION, updates_manager.l2_block.timestamp)
-            .instrument(info_span!("wait_for_next_tx"))
-            .await
-            .context("error waiting for next transaction")?
-        else {
-            waiting_latency.observe();
-            tracing::trace!("No new transactions. Waiting!");
-            continue;
-        };
-        waiting_latency.observe();
-
-        let tx_hash = tx.hash();
         let (seal_resolution, exec_result) = self
             .process_one_tx(batch_executor, updates_manager, tx.clone())
             .await?;