
interop: Supervisor Drives L1 Data Source #13181

Closed
axelKingsley opened this issue Dec 2, 2024 · 5 comments
@axelKingsley
Contributor

From this document: https://github.com/ethereum-optimism/design-docs/pull/171/files

Decomposed into: https://www.notion.so/oplabs/tasks-to-make-150f153ee162809e981ac7d30c0bfae2?showMoveTo=true&saveParent=true

What

The Supervisor should decide which L1 blocks are processed by itself and its Owned Nodes.

Why

Without this, the owned nodes can diverge on which L1 blocks they process, which leads to database inconsistency in the Supervisor that must then be managed.

How

We should make the following modifications:

Replicate L1 data discovery into Supervisor

The op-node already searches for and identifies the next L1 block to process. Lift these components into the Supervisor as an L1 monitor component.
(The L1 connection should also serve the L1 finality signal to existing Supervisor code paths.)
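A minimal sketch of what such an L1 monitor could look like. All names here (`L1Source`, `L1Monitor`, `OnNewL1`, `OnFinalized`) are hypothetical illustrations, not the actual op-node or Supervisor components; the real implementation would sit on an L1 RPC connection rather than the in-memory fake used below:

```go
package main

import "fmt"

// BlockRef is a minimal stand-in for an L1 block reference.
type BlockRef struct {
	Number uint64
	Hash   string
}

// L1Source abstracts the L1 connection (hypothetical interface).
type L1Source interface {
	Head() BlockRef      // latest L1 head
	Finalized() BlockRef // latest finalized L1 block
}

// L1Monitor polls an L1Source and invokes callbacks when the head or
// finalized block advances, mirroring the discovery the op-node does today.
type L1Monitor struct {
	src           L1Source
	lastHead      BlockRef
	lastFinalized BlockRef
	OnNewL1       func(BlockRef) // e.g. write to DBs, drive owned nodes
	OnFinalized   func(BlockRef) // e.g. feed Supervisor finality code paths
}

// Poll checks the source once and fires callbacks on any progress.
func (m *L1Monitor) Poll() {
	if h := m.src.Head(); h.Number > m.lastHead.Number {
		m.lastHead = h
		if m.OnNewL1 != nil {
			m.OnNewL1(h)
		}
	}
	if f := m.src.Finalized(); f.Number > m.lastFinalized.Number {
		m.lastFinalized = f
		if m.OnFinalized != nil {
			m.OnFinalized(f)
		}
	}
}

// fakeL1 is an in-memory source for demonstration.
type fakeL1 struct{ head, finalized BlockRef }

func (f *fakeL1) Head() BlockRef      { return f.head }
func (f *fakeL1) Finalized() BlockRef { return f.finalized }

func main() {
	src := &fakeL1{head: BlockRef{Number: 5, Hash: "0xabc"}}
	var seen []uint64
	mon := &L1Monitor{src: src, OnNewL1: func(b BlockRef) { seen = append(seen, b.Number) }}
	mon.Poll()
	src.head = BlockRef{Number: 6, Hash: "0xdef"}
	mon.Poll()
	fmt.Println(seen) // [5 6]
}
```

The point of the shape: the Supervisor owns the polling loop, so both the "new L1" signal and the finality signal originate in one place instead of arriving independently from each node.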

Owned nodes no longer look up the “next L1 block number”

The Supervisor indicates the next L1 block to use (if it's not the same as the previous one) via the call to TryDeriveNext.

Owned nodes use the hash given to them to complete derivation as usual.
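The exact TryDeriveNext signature isn't specified in this issue, but the inversion of control it describes can be sketched as follows. `OwnedNode`, `L1Ref`, `L2Ref`, and `stubNode` are all hypothetical names for illustration; only the call name `TryDeriveNext` comes from the issue text:

```go
package main

import "fmt"

// L1Ref and L2Ref are minimal stand-ins for block references.
type L1Ref struct {
	Number uint64
	Hash   string
}
type L2Ref struct {
	Number uint64
	Hash   string
}

// OwnedNode is a hypothetical view of what the Supervisor calls on a node
// it manages. Instead of the node looking up the next L1 block itself, the
// Supervisor hands it the L1 ref to derive from.
type OwnedNode interface {
	// TryDeriveNext derives the next L2 block using the given L1 block.
	// The Supervisor passes a new L1 ref only when L1 has advanced.
	TryDeriveNext(l1 L1Ref) (L2Ref, error)
}

// stubNode derives one L2 block per call, recording which L1 it was given.
type stubNode struct {
	lastL1 L1Ref
	l2Num  uint64
}

func (n *stubNode) TryDeriveNext(l1 L1Ref) (L2Ref, error) {
	n.lastL1 = l1
	n.l2Num++
	return L2Ref{Number: n.l2Num, Hash: fmt.Sprintf("l2-%d", n.l2Num)}, nil
}

func main() {
	var node OwnedNode = &stubNode{}
	// The Supervisor drives derivation: it chooses the L1 block, and the
	// node completes derivation as usual from the hash it was given.
	l2, err := node.TryDeriveNext(L1Ref{Number: 100, Hash: "0xaaa"})
	fmt.Println(l2.Number, err)
}
```

The design choice this illustrates: the node never asks "what is the next L1 block number?" on its own, so every owned node is guaranteed to be deriving against the same L1 view the Supervisor holds.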

Write to the database on new L1

Whenever a new L1 block is discovered, we should immediately write the new L1 with the existing L2 to all fromDA databases. This is a new behavior suggested by @protolambda: including this record ensures that only one head increases at a time.

We can write this value as soon as the L1 block is discovered.

(L2 blocks discovered through the TryDeriveNext calls will also load into the database after this point; not tracked in this ticket.)
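The "only one head increases at a time" invariant can be sketched with an append-only log of (L1, L2) pairs. `FromDADB`, `Pair`, and `RecordNewL1` are hypothetical names for illustration; the real databases and schema live in the Supervisor codebase:

```go
package main

import "fmt"

// Pair records that a given L1 block was seen while a given L2 block was
// the latest derived one.
type Pair struct {
	L1Num, L2Num uint64
}

// FromDADB is a hypothetical per-chain "derived-from" database: an
// append-only log of (L1, L2) pairs.
type FromDADB struct {
	rows []Pair
}

// LatestL2 returns the L2 number of the most recent row (0 if empty).
func (db *FromDADB) LatestL2() uint64 {
	if len(db.rows) == 0 {
		return 0
	}
	return db.rows[len(db.rows)-1].L2Num
}

// RecordNewL1 writes the newly discovered L1 against the existing L2 head:
// the L1 column advances while the L2 column stays put, so only one head
// increases at a time.
func (db *FromDADB) RecordNewL1(l1Num uint64) {
	db.rows = append(db.rows, Pair{L1Num: l1Num, L2Num: db.LatestL2()})
}

func main() {
	// One database per chain the Supervisor tracks.
	dbs := map[uint64]*FromDADB{900200: {}, 900201: {}}
	// A new L1 block is discovered: write it to all fromDA databases at once.
	for _, db := range dbs {
		db.RecordNewL1(42)
	}
	fmt.Println(dbs[900200].rows) // [{42 0}]
}
```

Later TryDeriveNext results would then append rows that advance only the L2 column, keeping every row a single-head step.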

@axelKingsley axelKingsley self-assigned this Dec 2, 2024
@protolambda protolambda transferred this issue from ethereum-optimism/op-geth Dec 3, 2024
@axelKingsley
Contributor Author

Daily update:

So far I have written an L1 Processor which:

  • Establishes L1 connection
  • Attempts to load the data into the database
  • Stubs out where it would command the "Owned Node" to update because that path isn't established yet.

Right now I've got everything wired up, starting and running as a processor/worker similar to the other chain processors. I am now working on figuring out why it is generating conflicting data. My guess is that the new "L1 writes once with the existing L2" behavior is bucking some logic elsewhere, but I'm still digging into it.

    /Users/axelkingsley/Workspace/optimism/op-e2e/interop/l1_processor.go:132: WARN [12-03|15:41:31.771] Failed to get latest derived from to insert new L1 block role=supervisor "!BADKEY"=l1 chain=900200 err="found BlockSeal(hash:0x1432b7e7d31d898679eb222990af84b8f95dfaa3d8a270bdb277e5973656e453, number:2, time:1733262079), but expected 0xd7d836c3db266a9c830f85561f02e768633f5c87cd80c815909c2432e6899978:2: conflicting data"
    /Users/axelkingsley/Workspace/optimism/op-e2e/interop/l1_processor.go:104: WARN [12-03|15:41:31.771] Failed to process L1 role=supervisor "!BADKEY"=l1 err="found BlockSeal(hash:0x1432b7e7d31d898679eb222990af84b8f95dfaa3d8a270bdb277e5973656e453, number:2, time:1733262079), but expected 0xd7d836c3db266a9c830f85561f02e768633f5c87cd80c815909c2432e6899978:2: conflicting data"

I'll upload whatever I have to a PR by the end of my workday and will update here as well.

@axelKingsley
Contributor Author

#13206

The PR is now ready for review; it tracks the L1 chain and inserts each newly discovered L1 block into the databases. Some basic unit tests are written, and I observed good log output.

@axelKingsley
Contributor Author

Things which connect to this PR include:

  • turning off updates from Nodes to the Supervisor's UpdateLocalSafe
  • calling some sort of derivation stepping when a new L1 is discovered (Owned Node orchestration)
  • Finality tracking in the L1 Processor (instead of listening to nodes for it)

These will build on the above PR and connect into the RPC work @protolambda is doing.

@axelKingsley
Contributor Author

Finality Tracking: #13274

@axelKingsley
Contributor Author

L1 and Finality are both independently tracked by the Supervisor 👍

@github-project-automation github-project-automation bot moved this from In progress to Done in Interoperability Dec 12, 2024