diff --git a/All Core Devs Meetings/Meeting 110.md b/All Core Devs Meetings/Meeting 110.md
new file mode 100644
index 00000000..02fab93e
--- /dev/null
+++ b/All Core Devs Meetings/Meeting 110.md
@@ -0,0 +1,419 @@
# All Core Devs Meeting 110
### Meeting Date/Time: April 16th, 2021, 14:00 UTC
### Meeting Duration: 90 mins
### [GitHub Agenda](https://github.com/ethereum/pm/issues/293)
### [Audio/Video of the meeting](https://www.youtube.com/watch?v=-H8UpqarZ1Y)
### Moderator: Tim Beiko
### Notes: Joel Cahill

## Decisions Made
| Decision Item | Description | Video ref |
| ------------- | ----------- | --------- |
| **1** | 3198: BASEFEE opcode being included in London | [1:19:53](https://youtu.be/-H8UpqarZ1Y?t=4793) |
| **2** | Newly planned dev call next week | [1:35:03](https://youtu.be/-H8UpqarZ1Y?t=5703) |


## Berlin Updates

**Tim Beiko**
[8:00](https://youtu.be/-H8UpqarZ1Y?t=482)

* Berlin happened yesterday; there was an issue with OpenEthereum

**Karim Agha**

* Bug in the way OpenEthereum implemented EIP-2929; the gas cost calculation was incorrect
* Thanked everyone from the other client teams and the community for help with identifying and fixing it; a fix was pushed in 3.2.3
* They will publish a written postmortem for the community with conclusions and findings next week

**Martin Holst Swende**

* The passing and reference cases both failed to find the error. The only way he can think of to find this kind of error is if the Hive chain config had been identical to the mainnet chain config with the only difference being the fork numbers, but otherwise it's hard to figure out a process to better avoid this kind of thing.
* To debug things when they happen, it's good to have tools that enable a node to forcibly import and trace a non-canonical block that was previously rejected

**Karim**

* Create a working group to define a set of integration tests so that all clients can talk to each other through a language-agnostic API, have a staging environment to swap clients before releasing, and a certification process before marking a client ready for a fork, so that if something like this happens it can become a test case for all clients

**Tim**

* How is that similar/different to what we have with Hive?

**Karim**

* Not familiar with Hive

**Martin**

* [Hive](https://hivetests.ethdevops.io/) - It takes 4 clients and executes state tests on them – start the client, import the blocks (RLP), check that the latest block after import is what's expected, eth protocol, GraphQL.
* What you are describing sounds like something that can be done in Hive, although I am not sure if you were talking more specifically about debug analysis or testing in general

**Karim**

* More about testing in general. Each client has its own set of tests, and we would rather combine all tests into one repository

**Martin**

* Yeah, that's basically what Hive does.
Feel free to reach out off-chat for any help

**Tomasz**

* Whichever environments we define will always end up having some configurations inconsistent with mainnet. What they did after an issue with a config file was add internal Nethermind-specific tests for the config file formats, and they have added that for all the config files they use, but it's hard to do something like this for all the clients because there is always going to be something client-specific

**James Hancock**

* Not ideal for bugs/issues to pop up during a hardfork, but more importantly the issue was handled respectfully and responsibly, which is something to admire about the group and its resiliency; congrats to the OpenEthereum team for how they handled it. Dragan echoed this sentiment

**Tim**

* Should we have Hive run with mainnet configs? Should we look at this or take a next step on it?

**Martin**

* It's more a note to self for the future to try to make the Hive specs more closely aligned with the mainnet specs

**Dragan**

* Bug fixed; recommend using OpenEthereum version 3.2.4

**Tomasz**

* Could we put effort into setting up part of Hive to test the encoding of all the types of devp2p messages? We currently only test this in Hive in the synchronization tests, but we could test every single message separately for encoding/decoding in each client.

**Martin**

* Unsure if it is fully tested in Hive yet, but if not, it should be added; the goal is to have all the network packets tested in Hive.

**Lightclient in chat:**

* It would be nice to put more effort into a standard tracing format, because different clients use different formats and it can be hard to find the divergence in outputs

**Tim**

* Something to investigate more offline

## Updates for London
[33:10](https://youtu.be/-H8UpqarZ1Y?t=1990)

**Marek**

* From the Nethermind team: We are ready with our client and are in sync with the London devnet.

**Tim**

* What is the next best step here?
JSON RPC handling? On previous calls we talked about potentially having another devnet once 1559 was fully implemented and spec'd out? What do we think the next step should be?

**Lightclient**

* Are the RPC changes fully defined now? My understanding is most things will continue to be optional fields; the one question was how to bring the effective gas price to the transaction object, and either have effective gas price as an element or just replace gas price with it.

**Tomasz**

* It would be reasonable to suggest some dates in July for testnets, or even start in June and set a July date for London, and let everyone work against those dates with clear planning for when everything needs to be implemented.

**Lightclient**

* Why not do that and continue working towards these EIPs to see if they can be implemented by then

## EIP 3403

**Tim**
[38:10](https://youtu.be/-H8UpqarZ1Y?t=2290)

* EIP 3403, which disables refunds, has been discussed as needing to go along with EIP 1559 for security reasons. Is that still the case, and should we bring it in now so that people know they need to implement it?

**William**

* If we believe miners will push an infinite gas limit, then it will not matter if the elasticity is 2x or 4x. If this is a major security concern, we should revisit the hardcap, though I would recommend a much higher limit than the original proposal. On the other hand, if we assume miners want to prevent each other from submitting DoS blocks, the base fee would be more secure with a possibility of 2x rather than 4x.

**Tomasz**

* Miners are showing great care for network security and are using MEV now, which is offsetting potential losses from 1559. We heard threats from miners that they would take aggressive actions against the chain, but we have not seen this behavior from miners.

**Martin**

* I'm in favor of EIP 3403; not sure if it is a security requirement for 1559, but I think it probably is.
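William's 2x-vs-4x point can be made concrete with a back-of-the-envelope sketch (my illustration, not from the call): before London, the refund for a transaction is capped at half the gas consumed, so a block can execute up to twice the gas it is billed for; stacked on EIP-1559's 2x elasticity (blocks may be up to twice the target size), the worst-case executed gas is 4x the target, while fully removing refunds brings it back to 2x.

```python
# Back-of-the-envelope worst-case executed gas relative to the 1559 target,
# given a refund cap expressed as a fraction of gas consumed.
# If `consumed` gas is executed and `consumed * cap` is refunded, the block
# is only billed `consumed * (1 - cap)`, so executed/billed = 1 / (1 - cap).

ELASTICITY = 2  # EIP-1559: the block gas limit is 2x the target


def worst_case_vs_target(refund_cap_fraction: float) -> float:
    """Worst-case executed gas as a multiple of the 1559 target block size."""
    return ELASTICITY / (1 - refund_cap_fraction)


print(worst_case_vs_target(1 / 2))  # pre-London refund cap of 1/2 -> 4.0x target
print(worst_case_vs_target(0))      # refunds fully removed        -> 2.0x target
```

This is the sense in which refunds turn 1559's nominal 2x elasticity into an effective 4x worst case when reasoning about safe gas limits.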
**Micah**

* Is gas pricing for storage based on long-term state growth cost, or just the read/write-to-disk cost?

**Vitalik**

* The cost of an SLOAD was intended to represent the short-term operation cost; the cost of an SSTORE was intended to represent the combination of the short-term operation cost and the growth in storage for an archive node.

**William**

* I agree the long term was the major component of SSTORE, and for that reason we should consider keeping the stipend to incentivize good behavior and design. 3 protocol engineers concurred in the Ethereum Magicians forum that this proposal punishes state cleanup, and would therefore make state cleanup worse.

**Tim**

* Should we think through this stuff async before the next call before deciding, since there is a lot to consider? Take 2 weeks to think before coming to a decision.

**Martin**

* Sounds like this could be a contentious thing. Debate in an offline forum is nice, but it does not look like it will be resolved with everyone agreeing, and we will have to make a call at some point.

**Artem**

* What we are seeing is the gas token businessmen who do not want to see their old business model turned to garbage, but we should not pay attention to that and proceed as planned

**Martin**

* I agree

**Ansgar**

* I tend to agree with what was just said. If we do not come to a decision today, what does it mean for the devnet? Does it make sense to include it in the devnet for London with 1559 before the more optional EIPs?

**Vitalik**

* Just want to remind everyone that whatever choice is made on 3403 is not inherently permanent, because of the state expiry roadmap, which will be one of the top priorities after the merge and which will reform state into a structure where state being removed won't be a concept that exists; so the cost of making the wrong choice for 3403 isn't that large.

**James**

* We've discussed some of these same ideas in previous calls, and going through permutations of the same ideas won't get us anywhere helpful. I'd rather go ahead with it from the perspective that the design isn't working as intended. We should go with putting it in for the devnet and London, and revisit if needed after implementation and the fork.

**Tomasz**

* We're generally in favor, just not for London

**Tim**

* Turbo-Geth and Geth are strongly in favor

**James**

* Piper has said he is strongly in favor in previous conversations

**Dragan**

* OpenEthereum is in favor of EIP 3403; maybe plan for London, but if something happens a delay is okay

**James**

* Micah's point is that we can include it in the devnet and not have it be a part of London

**Martin Koppelmann**

* Do not have a strong opinion, but from a smart contract developer's point of view, developing/auditing/bug-bountying contracts takes a long time, so he expects to see more and more contracts that make heavy use of deleting unnecessary storage

**Martin Swende**

* Do not want to suggest contract developers are developing incorrectly; it's more an issue of various gas tokens that use the state as a battery to charge up and draw from later on

**Gary Schulte**

* If we include it in the devnet but not London, are we not opening the door to another BLS-precompile type of consensus issue?

**Artem**

* If we are not aiming for London, then this EIP will be effectively buried. We must aim for London.

**Tomasz**

* This is why we need a clear timeline for the merge and EIP upgrades. If EIPs are not included in London, it could be 6-9 months before the next opportunity for them to be implemented.

**Danno**

* We should step back and see if there is not a compromise to be made. What if we just apply the refund to the account, have the total gas used in the block always go up, and never let there be a net gas benefit from refunds?
This can in some use cases reduce the cost of heavy DeFi operations, while at the same time preserving the market for the utility the contracts get from refunds.

**Vitalik**

* I've thought about things like that a bit, and I think the challenge is that making refunds anything other than globally scoped introduces more complexity and edge cases that would have to be tested, so it would complicate and bloat the whole thing significantly.

**Rai**

* Do gas futures, with the introduction of the base fee, help with there being less of this battery usage of the state? Or is it equivalent in some sense, with just as much state claimed?

**William**

* The motivation for using gas tokens is to increase throughput during congestion, so if you use them in that context then the futures market won't help you.

**James**

* Tomasz's plan works if Shanghai is an EIP fork and happens in the fall, followed by the merge. But if Shanghai is the merge and not an EIP fork in the fall, then we should expand the scope of London. I am okay with either of those plans, but we should decide on one

**Vitalik**

* There is also a third track of being willing to wait a year for EIPs, because none of them are that critical, and the Ethereum ecosystem has been willing to wait a year for an EIP for pretty much all its history.

**Artem**

* We shouldn't focus on the merge too much, because there isn't a lot that is concrete or finalized on how things will look. We should introduce one hard fork between London and the merge, and if we want to stagger EIPs for London that is fine.

**Ansgar**

* Timeline: With London being in July because of the difficulty bomb in August, the community sentiment seems to be merge as soon as possible, so the community might not support a merge delay to prioritize an EIP hardfork before it.

**Tomasz**

* Adding more EIPs to London can cause a delay to the merge

**Tim**

* The main challenge with the merge is that there is a lot of work to specify it; core devs need to spend attention on it and spec it out

**Micah**

* Last week we talked about getting a rough estimate on the difficulty of the proposed EIPs; did we ever get that made? I feel like that is a significant component here

**Tim**

* Two things worth considering here: what's the expected amount of work to implement something, and would it be the end of the world if we waited a year to have it?
* Is there anything from this list: EIP-3403 #277, EIP-3198 #270, EIP-3074 #260, EIP-2537 #269, EIP-2677 #271 that would be the end of the world if we waited a year?

**Lightclient**

* Unless it is a security issue, I don't think it's ever the end of the world, but I would like to say 3074 users are spending about 25 million a month on token approvals, and 3074 will reduce that by at least 30%, so it will save end users millions of dollars per month in extra gas fees. That is only a small aspect of 3074, so we should not push it out a year

**Tim**

* We've talked about 3403, and 3198 (the BASEFEE opcode) being the smallest amount of work, with 3074 being pushed for by the community. 2537 (the BLS precompile) and 2677 (capping the size of init code) haven't had as much advocacy for them.

**Martin**

* Doesn't see 2677 as something that needs to be done right now

**James Prestwich**

* 2537 has been ready to go since YOLO v1 with no issues, so there is not really an update from him

**Danno**

* I would like BLS in the next plan if possible; I see utility in it

**Tim**

* Hypothetical timelines for London: say the latest possible date we can have it is August 1st. That means we want to have the last testnet fork 3-4 weeks before then, by early July, and the first testnet fork a month before that, so when do we need client versions out?
So in 2 months from now we need client releases out with full support for London.

**Tomasz**

* Question to the OpenEthereum team: how much work is left on the 1559 implementation?

**Dusan**

* Our implementation is 90% finished; we are in sync with the devnet. We have additional work on the RPC implementation and improvements to the transaction pool, but for the consensus part of the implementation we are good.

**Tim**

* Did you also implement the BASEFEE opcode? That is the smallest amount of work to be implemented. Do you have any issue implementing that for London?

**Dusan**

* No, we can do it.

**Tim**

* Can we agree to bring it into London? (BASEFEE opcode)
* **DECISION 1:** Okay, BASEFEE will be in London [1:19:53](https://youtu.be/-H8UpqarZ1Y?t=4793)
* What's the next best step for 3403, 3074, 2537, 2677?

**James**

* It might make sense for 2537 to be a part of the merge as well

**Micah**

* I would love to see OpenEthereum's rough time estimates for each of the EIPs, because historically they have been a little slower implementing changes, being a newer team taking on an existing codebase, which is difficult. I suspect in the end they will be the decision maker on what can be included regarding difficulty and timing. This can be done async.

**Tim**

* That will be valuable, and I can follow up with each of the client teams over the next 2 weeks and chat about these things.

## EIP 2935

**Tomasz**

* I believe we can skip the discussion for now. Let's put it after London.

## EIP 3436

**Danno**

* What I propose for clique, and clique only, is a few specific fork choice rules that apply when you see two equally valuable heads. Of course, the first rule is to pick the chain head with the highest total difficulty. The second rule is to pick the one that is the shortest. The third rule is to decide based on which is either closest to in-turn or furthest from it — it doesn't really matter, as long as everyone picks the same side. And the fourth is to take the hash of the block, convert it to a uint256, and pick the block with the lowest number. This should prevent all chain halts at this point. I encourage people to go to the Ethereum Magicians thread for feedback.

**Tomasz**

* Strong support for what Danno suggested.

***Zoom audio cuts out during Peter Szilagyi's comment and Danno's response, from 1:30:45 – 1:33:45***

**Tim**

* Timeline for London: If we want to aim for mid-July, we need a client release for the fork by May 15th, and the next call in 2 weeks will be right before then, so we absolutely need to make a call about the EIPs on the next call. Thoughts about an off-schedule call next week to discuss the EIPs? Or do them async?
* **DECISION 2:** I'll organize a sync call next week and follow up with the different client teams about the effort/time/value ratio for various EIPs, and we can discuss that next call. [1:35:03](https://youtu.be/-H8UpqarZ1Y?t=5703)

## EIP 3074

**Sam Wilson**

* Our second 3074 testnet is going with geth, and we are working on our OpenEthereum implementation

**Lightclient**

* We have been arranging an audit to look at the specification as well as do analysis on how it will impact contracts that are already on mainnet. We have gotten a couple of proposals and are optimistic about the people doing the audits, and the timeline can be completed by the end of May. I think our team is committed, along with other premier teams, to funding or providing developers to make 3074 happen for London, with the goal of minimizing the amount of work current client teams need to do.

**Tim**

* Are people opposed to putting 3074 into a devnet before the results of the audit? It seems like the audit will come in after everything will have to be done. We cannot wait for it to come in, digest it, and then implement.
We might have to implement it, then rip it out if the audit shows a severe issue.

**Micah**

* The tricky part with 3074 is going to be getting everyone on board with it. Most are skeptical at first, and the tricky part will be getting the mindshare with every core developer to convince them that it is okay.

**Tim**

* The goal of the audit is in part to show that it's safe and alleviate/satisfy some concerns.

**Ansgar**

* This EIP has the potential to make a big difference on the application side, so if there is some chance to include 3074 we should try to take it, assuming the concerns are alleviated in time.

**Tim**

* To wrap this up, I will talk to client teams specifically about 3074 and get an idea of whether we can technically/time-wise include this with London, assuming the audit is good.

**James**

* If we had a fork in the fall, 3074 would fit in easily, but getting it into July could be cutting it close. Having it pushed back to mid next year would severely impact usability, so it falls into the category of not wanting to wait a year but being tough to get into July.

**Lightclient**

* We have talked to a few dozen of the most prevalent DeFi/Ethereum UX tool teams; pretty much every single one of them is positive on it and will be raising money to show they are serious about it.
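Returning to EIP 3436: the tie-breaking rules Danno described can be sketched as an ordered comparison key (an illustrative sketch only — the `Head` fields here are hypothetical, not from any client, and rule 3 just fixes an arbitrary in-turn preference per the "as long as everyone picks the same side" caveat):

```python
# Illustrative sketch of the EIP-3436 tie-break order for clique:
# 1. highest total difficulty, 2. shortest chain, 3. a fixed in-turn/out-of-turn
# preference, 4. lowest block hash interpreted as a uint256.
from dataclasses import dataclass


@dataclass
class Head:
    total_difficulty: int
    block_number: int   # proxy for chain length
    in_turn: bool       # whether the sealer was in-turn for this block
    block_hash: bytes   # 32-byte hash


def choice_key(head: Head):
    # Python compares tuples element by element, so min() over these keys
    # applies the four rules in order; every node computes the same winner.
    return (
        -head.total_difficulty,                   # rule 1: highest TD first
        head.block_number,                        # rule 2: shortest chain
        0 if head.in_turn else 1,                 # rule 3: fixed side
        int.from_bytes(head.block_hash, "big"),   # rule 4: lowest hash as uint256
    )


def choose_head(heads):
    return min(heads, key=choice_key)
```

For example, given two heads equal on the first three rules, every node deterministically keeps the one whose hash is numerically lower, which is what prevents the chain halts the proposal targets.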
+ +------------------------------------------- +## Attendees +- Tomasz Stancsak (Nethermind) +- Tim Beiko +- Jochen +- Rai +- Micah Zoltu +- Karim Agha +- Trenton Van Epps +- hexzorro +- Dusan +- lightclient +- Mikhail Kalinin +- William Morriss +- James Hancock +- Pooja Ranjan +- Ansgar Dietrichs +- Paul D +- Gary Schulte +- Marek M +- Mojtaba Tefagh +- James Prestwich +- Kev +- Alex Vlasov +- John +- Martin Holst Swende +- Danno Ferrin +- Peter Szilagyi +- Tukasz Rozmej +- Sajida Zouarhi +- Vitalik +- Dragan Rakita +--------------------------------------- +## Next Meeting +April 23, 2021 diff --git a/All Core Devs Meetings/Meeting 112.md b/All Core Devs Meetings/Meeting 112.md new file mode 100644 index 00000000..a9c34608 --- /dev/null +++ b/All Core Devs Meetings/Meeting 112.md @@ -0,0 +1,387 @@ +# All Core Devs Meeting 112 +### Meeting Date/Time:Friday, 30 Apr 2021 +### Meeting Duration: 1:35:21 +### [GitHub Agenda:London Updates](https://github.com/ethereum/pm/issues/302) +### [Audio/Video of the meeting](https://youtu.be/_QLDhNMwoe4) +### Moderator: Tim Beiko +### Notes:David Schirmer + +## Decisions Made +| Decision Item | Description | Video ref | +| ------------- | ----------- | --------- | +| **1** | 3529 being included in London | [4:40](https://youtu.be/_QLDhNMwoe4?t=271) | +| **2** |3238 Being re-written this week | [40:30](https://youtu.be/_QLDhNMwoe4?t=2433) | +| **3** |3451 considered for London inclusion | [44:26](https://youtu.be/_QLDhNMwoe4?t=2666) | +| **4** | Baikal devnet launched | [1:16:00](https://youtu.be/_QLDhNMwoe4?t=4611) | +| **5** | New London infrastructure call | [1:24:00](https://youtu.be/_QLDhNMwoe4?t=5074) | + + +#### Moderator: +* Tim Beiko: +Hello everyone, Welcome to All Core Devs number 112. Yeah we have a lot of London stuff on the agenda today as well as a few new EIP’s. 
First thing: on the last call we spent some time discussing EIP 3403, and Martin and Vitalik said they had a new proposal for it that could be something we'd want to do in London. We said we wanted to review it this call and hopefully make a decision about it today; it has now been formally proposed as EIP 3529.

##### 3529 being included in London

* Vitalik:
Sure. The core idea behind the EIP is basically that instead of completely removing gas refunds, it reduces gas refunds in most cases from 15,000 to 4,800. The two exceptions are the self-destruct refunds, which are still completely removed, and the refund for a storage slot going back to 0 when it was set from 0 earlier in the same transaction, which got increased — I think to 19,900. The core idea is that we reduce refunds to such a level that in order to get a refund you have to have spent the same amount of gas on that same slot at some point earlier in the transaction. Basically it means there's no way to get more gas out of a particular storage slot in a transaction than you put in, so gas tokens stop working, and the maximum amount of extra gas that you can get out of execution is also much lower. So it still satisfies both of the original objectives of removing or restricting refunds, but it has the benefit that it still maintains a very substantial incentive to actually clear storage, and not just replace values with 1 instead of 0.
* Beiko:
Got it, thanks. Anyone have thoughts/comments on the EIP they wanted to share? No strong opinions, I guess. On the last call it seemed like people were generally favorable towards this idea going into London — is that something people would still want to do? I don't know if some of the teams have thoughts on this.
* Piper Merriam:
Yes, in favor. I was just saying — not a client team, but in favor, yes.
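Vitalik's description maps onto the EIP-3529 constants; a simplified sketch of the per-transaction refund cap (ignoring the warm/cold access-list details) illustrates why gas tokens stop working:

```python
# Simplified sketch of the EIP-3529 numbers described above (omits warm/cold
# access-list details). Refunds are capped at gas_used // 5, clearing a slot
# that was nonzero at the start of the transaction refunds 4,800 (down from
# 15,000), and SELFDESTRUCT refunds are removed entirely.

MAX_REFUND_QUOTIENT = 5         # EIP-3529 (the pre-London cap divisor was 2)
SSTORE_CLEARS_SCHEDULE = 4_800  # was 15_000
SELFDESTRUCT_REFUND = 0         # was 24_000


def gas_charged(gas_used: int, refund_counter: int) -> int:
    """Gas actually billed after the refund counter is clipped to the cap."""
    refund = min(refund_counter, gas_used // MAX_REFUND_QUOTIENT)
    return gas_used - refund


# Even a transaction that racks up a huge refund counter recovers at most
# 20% of the gas it consumed, so it can no longer "charge up" cheap gas in
# one transaction and release more than it paid for in another.
print(gas_charged(100_000, 50_000))  # -> 80000: refund clipped to 20,000
```

Under the old 1/2 cap the same transaction would have been billed only 50,000, which is the headroom gas tokens exploited.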
* Tomasz Stanczak:
I would try to avoid it for London unless Martin is seriously suggesting this is needed for security. I like it just for the sake of removing gas tokens, but I don't see the need for that now. If it's important for the security of the — sorry, the gas limit elasticity and the unpredictable block sizes, then I would go for it.
* Martin Swende:
So I can't really say how important this is for block elasticity and how bad denial-of-service issues could be with the doubled capacity. I don't want to cry wolf too much, but I do think it's a concern; I assume there would be denial-of-service issues for the clients. It's good to do this as a conservative gesture, but I also want to lay out my personal main motivation for why I think this is important: I think it's good if we get rid of gas tokens, because I believe gas tokens are minted instead of picking up some cheap transactions, and this drives up transaction prices and causes bad UX for users. That's really my primary motivation. I'm for it; I don't want to speak for the Geth team though. Peter, what do you think?
* Peter: Unintelligible
* Tim:
Peter, you sound kind of like a robot; your audio is choppy.
* Vitalik:
I'm still getting echoes from someone in there, by the way.
* Rai:
It's Peter, I think.
* Tomasz:
I think Greg is the only person not muted.
* James Hancock:
Don't gas tokens also contribute to state bloat the longer they're there? When we don't need them, wouldn't that be adding additional pressure to the state that we don't need?
* Martin:
Yes, that is one of them. It's a UX problem — I think it drives gas prices up — but it's also a state management problem.
* Piper:
Seems like we've got two decent reasons here for going ahead and including it in London, so I just wanted to add some more.
Another problem is that miners can just decide not to include transactions and mint gas tokens instead, and this is actually a self-reinforcing problem: block congestion goes up, which drives gas prices up, and at the same time it becomes worthwhile to mint gas tokens because you can extract more value from the network — by raising the gas prices your gas tokens become even more valuable — so it's kind of a perverse thing to leave in. All in all, I think the biggest problem (unintelligible) block elasticity, having the potential 2x blocks, and it's hard for me to reason about it. For example, if somebody were to ask whether we can raise the gas limit, we always have to think about the worst-case scenario, and since refunds allow a 2x of the gas limit it doesn't make sense — we can't raise it to 20 million simply because that would mean 40 million. It's much easier to have these conversations about raising the gas limit if you know that the gas limit means a certain thing, and not potentially 2x. Last but not least, I'll go one step further and say that, as we've hinted quite a bit over the past years, the Ethereum network isn't particularly safe from a determined denial-of-service attack. Berlin included EIPs that made those attacks very improbable, and we've got snapshots included, which again made those attacks super improbable — details which we'll hopefully publish in about two weeks. In essence, we've been pushing towards making block sizes deterministic and state access meaningfully priced for a specific reason; it wasn't just randomly trying to change things. Personally, I would really strongly suggest we go towards this, but we can get you more information in two weeks and then you can make up your minds.

* Tomasz:
There is one thing that I wonder if you take into account.
So currently a lot of gas minting happens at zero gas price, as the miners fill their blocks with gas minting — and any gas minters have to do it at low gas prices. Which means that after we introduce EIP 1559, which is in London anyway, we'll have the base fee, so minting gas will no longer be viable, because you'll be burning the base fee. Miners will no longer be able to simply fill blocks with gas minting, because they would have to pay the base fee, and doing so would generally raise the base fee, so their revenue would probably fall. Also, I'm not even sure there is enough gas stored in gas tokens to sustain any substantially long attack on the blocks. So I think it might not be necessary at all, because of EIP 1559, and basically gas tokens will simply die out.
* Piper:
Tomasz, I'm still having trouble understanding the perspective here. You're wanting to just do this later — is that the intention?
* Tomasz:
No, I think that it may not be required if that is the motivation.
* Piper:
What about state bloat?
* Tomasz:
If gas minting stops being viable after EIP 1559 because of the burning of the base fee, then we don't have the state bloat, because it simply doesn't happen anymore.
* Piper:
What about cleaning up the EVM in general, and just being able to reason about the gas limit better? Gas tokens are an unfortunate side effect of refunds, refunds aren't an effective mechanism, and getting rid of them in this context makes the EVM easier to understand and reason about.
* Tomasz:
Well, that was not raised as a motivation so far. Obviously for the future implementation that sounds good, but I think there will be lots of other cleaning and changes that will happen as part of the merge that we cannot really predict, and this particular change is slightly rushed for London, taking into account all the other work on stateless Ethereum.
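Tomasz's economic point can be illustrated with a toy model (all numbers hypothetical, my sketch rather than anything from the call): before London, a miner minting gas-token storage into its own blocks pays no fee at all, whereas after EIP-1559 even the miner's own transactions burn the base fee, which cannot be recouped.

```python
# Toy model of gas-token minting cost per storage slot, before and after
# EIP-1559. Hypothetical numbers; "gas_per_slot" stands in for whatever a
# given gas-token contract spends to claim one slot.

def mint_cost_per_slot(gas_per_slot: int, base_fee_gwei: int,
                       is_miner: bool, pre_1559: bool) -> int:
    """Burned cost (in gwei) to mint one gas-token slot."""
    if pre_1559 and is_miner:
        return 0  # miner includes its own zero-gas-price transaction for free
    # Post-1559 the base fee is burned, not paid to the block producer,
    # so even self-mined transactions have a real, unrecoverable cost.
    return gas_per_slot * base_fee_gwei


# A 20,000-gas slot at a 50 gwei base fee: free for a miner pre-London,
# 1,000,000 gwei of burned ETH per slot afterwards.
print(mint_cost_per_slot(20_000, 50, is_miner=True, pre_1559=True))
print(mint_cost_per_slot(20_000, 50, is_miner=True, pre_1559=False))
```

This is the sense in which 1559 alone removes the zero-cost minting loophole, independent of what EIP 3529 does to the refund itself.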
* Piper:
Last question: is your opposition strong enough that you want to change all of our minds, or are you voicing your opposition but okay with things going whichever way?
* Tomasz:
No, it's slightly more towards neutral. As I said, I would listen: if you think this is needed for security reasons, then I'll definitely be convinced. What I'm not convinced by is the targeting of gas tokens — I'm not convinced all of this is needed for that. If it's for security purposes, and we can analyze and show how the current behavior leads to security issues, then definitely I would like it included.
* Peter:
I really think that rather than leaving this for the merge, we should do these changes incrementally.
* Piper:
It sounds like we don't have a strong point of contention here. There is definitely a worst-case denial-of-service scenario where it is possible for this to muck with the total amount of gas used in a block, and I don't think anybody made this specifically to get rid of gas tokens — it's more focused on the effect gas tokens can have — and then there are a bunch of other beneficial side effects. So, just contextualizing this discussion: it sounds like we don't have a strong disagreement, and we do have people who are strongly in favor of it.
* Tim:
Danny, I think you had your mic unmuted for a while?
* Danny:
So there is a relatively new issue with miners minting zero-fee gas tokens, and yes, that would change after 1559 because they would have to burn the base fee — it's a relatively new and actually probably pretty dangerous exploit — but I don't think it changes the other mechanics of 1559; the potential for it to stay profitable in that use case feels low.
* Vitalik:
Even today, when miners are making zero-fee transactions, they're not really zero fee.
They are paying opportunity cost so they're willing to pay opportunity costs down they will be willing to pay the base fee when its low. +James: + That line will exist somewhere even if it might be less or more than we think it might be when the base fee happened. I think that the economic argument against the base fee being reason not to make (unintelligible) since I don't think that'll hold up very well. +* Beiko: + I'm curious if I don't know anyone on the base team or open Ethereum team has an opinion on this? +* Rai: + I like Piper's point about just keeping in mind and the relative ranking of the reasons for and against that the primary 1 is that might eclipse all others is just to think about the reasoning of the max block size estimate and then beyond that if the set on that then you can kind of use the other is the tiebreakers. +* Beiko: + anyone from open Ethereum and if not though I’ll go to the raise hands. +* Jochen: +I think it's only me from the open Ethereum today. it's really that simple of a change it would not make to worried. +* Beiko: +Ansgar and then Thomas +* Ansgar Dietrichs: +yes so I was basically just curious if you like do we all expect that the that would EIP would have a significant impact on the safe gas limit or naively to me that it seems like at least to the extent that we are limited by peak throughput. EIP would allow for up to a 2x increase of the same gas limit because we would only kind of like a peak of 2x and not 4x and of course we might be limited by other things. I’m just wondering because to me if it really makes a big difference for the safe gas limit then I personally would be strongly in favor of it but if it is more complicated and maybe doesn’t have any impact on the gas limit then. I’m curious if people generally agree that it has an impact on the safe gas limit of if there are some reasons why it might not? +* Beiko: +Thomas? 
* Thomasz:
Just to come back to some arguments from when we were talking about cleaning up the EVM: this particular EIP doesn't clean up the EVM, because it just changes the parameters to reduce the refunds, which means that cleaning of the EVM doesn't happen at all with this EIP. So, Piper, maybe?
* Piper:
It's a precursor to cleaning. The reason that the EIP was changed to not fully remove them is so that we don't introduce a perverse incentive for smart contract devs to set storage to 1 instead of zero. The intent is that when we move to state expiry, we fully remove refunds; this was sort of a compromise to make sure that smart contract developer incentives stay correctly aligned.
* Thomasz:
So I think I can only really talk about this particular EIP, and it doesn't clean up the EVM. It's like the conversation we had about "a step toward something" with EIP 2315 — if the later step doesn't actually happen, you don't get that much change from just the first of two steps. On the difficulty of the attack: I also assume that if miners mint a lot of gas tokens nowadays, then for those attacks on blocks with the double block limit, it is not only opportunity cost — because of the base fee rising, their revenue will fall significantly after such an attack. Because of the base fee growing, their opportunity cost will be huge in the case of such an attack, and the attack should dissipate quite quickly. That's my intuition.
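Thomasz's intuition about the attack dissipating can be illustrated with the EIP-1559 base fee update rule: every consecutive full (2x-target) block raises the base fee for the next block by 1/8, so the cost of sustaining oversized blocks compounds quickly. A minimal sketch — the constant names follow the EIP, but the loop and the starting numbers are purely illustrative:

```python
# EIP-1559 base fee update: a completely full block (gas_used = 2x target,
# given the elasticity multiplier of 2) raises the base fee by 1/8 (12.5%).
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8
ELASTICITY_MULTIPLIER = 2

def next_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    """Base fee for the next block (increase case of the EIP-1559 rule)."""
    delta = gas_used - gas_target
    return base_fee + base_fee * delta // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR

# An attacker sustaining max-size blocks sees the base fee compound
# at ~12.5% per block, so the attack gets expensive fast.
gas_target = 15_000_000
base_fee = 100  # gwei, illustrative starting point
for _ in range(20):
    base_fee = next_base_fee(base_fee, ELASTICITY_MULTIPLIER * gas_target, gas_target)
print(base_fee)  # roughly 10x the starting base fee after 20 full blocks
```

This is why sustained double-size blocks are self-limiting: after a few minutes of full blocks, the base fee (which is burned, not paid to the attacker-miner) has grown by an order of magnitude.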
* Vitalik:
So one way to contextualize this discussion: maybe there's, let's say, a 30% chance that the block size variance issue is really important, and maybe a 70% chance it doesn't matter much. There's also maybe a 30% chance that getting rid of gas tokens is going to be really valuable, and a 70% chance it's not. And I have not heard any arguments that this could be actively harmful. So two 30% chances of solving significant security issues is still worth the fairly small number of lines of code that the EIP contains — at least I think so.
* Beiko:
Peter, I know you came off mute a minute or two ago — did you have something to add?
* Peter:
Yes, I just didn't want this discussion to move tangentially over into managing miners minting gas tokens — that's not the priority. I'm looking to keep the size of the block deterministic and not allow it to suddenly double; that's the primary goal here.
* Piper:
Yeah, it's worth saying that you can do that without gas tokens.
* Thomasz:
I mean, you would have to have significant storage to unlock to execute such an attack for a longer time. I see clearly massive support for this one, so it's totally fine. Just to restate the arguments against — why am I even talking about it and potentially even objecting: there are two reasons. First, I think there will be some additional specification and testing effort, which may lead to a potential delay of London, making the network upgrade slightly harder to ship. Second, historically, changes to the gas calculations were potentially risky in terms of consensus splits, which I think is also the kind of risk that we want to avoid.
So these were the only real arguments I had against it. There is just a question of whether the proposed reasoning in favour is calculated correctly in detail, but I agree with the statement that if there is a 30% risk of these negative consequences, then yes, it should be introduced. That risk exists now — there is the claim that EIP 1559, because of block elasticity, can cause some damage, but in the end, in the case of a longer attack, it is resolvable by miners decreasing the block sizes. So, sorry, I won't be making more statements about this one — lots of the things said here really do make sense.
* Beiko:
Thanks for sharing. Alex, is your hand up related to this EIP?
* Alex B:
Yes, just a quick question — maybe this has been discussed. Since the refund for selfdestruct is removed, that means, at least I believe, that the Chi and GST2 gas tokens would make no sense to selfdestruct anymore. Assuming we accept the EIP today, will the time between now and the London hardfork itself be enough? I guess people will keep using the gas tokens, but at some point maybe they will start destroying them in order to reclaim the refunds before the hardfork happens. I wonder whether this is actually going to happen, and whether the time is enough, or whether we will end up having a lot of stuck gas tokens because they are not economical to redeem anymore — and whether the goal is that state expiry is going to deal with those remnants?
* Martin:
I don't know the burn rates of these. I would expect them to mostly not be minted anymore, but mainly only burned, once we decide to go ahead; but I don't know the burn rate, and it's not necessarily the case that they will stay around forever. It might be worthwhile for someone to just pay and get rid of them — I don't know, that depends on how many there are. I think Marius checked into this a couple of weeks ago.
There is some public Chi token tracker and, if I recall correctly, it's on the order of three or six million such contracts right now. If anyone else has more info about how many there are and what the burn rate is, feel free to jump in.

* Piper:
On the other part of your question — do we expect state expiry to clean this up — yes. In theory, state expiry makes it not matter at all whether they selfdestructed or not; it shouldn't make a difference either way.
* Alex B:
Yeah, I don't have anything to add — it was really just a question to understand this part.
* Beiko:
Got it. The other argument that you mentioned, Thomasz, was around testing, so there are two things I would want to check: in terms of implementation, would all client teams feel like this is something they can implement, and in terms of testing, is this something we think we can test properly over the coming weeks, so that we feel comfortable including it in London?
* Martin:
If I can just speak quickly about testing: we have the tests written for 1283 and 2200, which did test gas changes; we have quite a lot of reference tests covering modifications to storage; and we have particular fuzzers written to try out various combinations of storage changes and calls, which were written or reused when we worked on EIP 2929. So I think the coverage is pretty good, and it's pretty easy to just reuse what we have with the new rules; I don't think there are any explicit new test cases needed.
* Beiko:
Got it, thanks. And does anyone on the client side think they could not implement this in time for the London timeline that we had?
* Artem:
I don't think that it is hard to implement.
* Beiko:
Then in the chat, Micah has a comment that the Chi mint site says it would take 80 to 320 days to get rid of them — is that based on the historical burns per day?
* Micah:
Yeah, just looking at that first one, it looks like on any given day the economy burns 5K to 20K — very rough ballpark.
* Beiko:
So at 20K, that's 80 days from now — basically July 19th, which is, you know, five days after when we said we wanted to fork on mainnet. And I guess if people really wanted to burn them, they could probably have the rates go up.
* James:
Are there days where it is higher than that? If that happens, then that could be the general rate without being the limit.
* Micah:
Keep in mind that the people who are burning these are almost exclusively bots — bots and 1inch users — and fairly stable. The bots come and go in terms of their volume based on what opportunities are available, and those days where there's lots of burning are usually because there's some MEV opportunity the bots are trying to leverage heavily. So I don't know that the broader community actually has much control over the burn rate; it's more that when the opportunity presents itself they burn, and when it doesn't, they don't — they're not just burning all the time for fun.
* Beiko:
Got it, well, yeah.
* Peter:
I guess your opportunity cost will go up as the gas tokens go down in value — eventually it will be worthwhile to always burn.
* Micah:
I'm not convinced of that, because the bots generally will always burn Chi: for their high-value transactions, when they're in gas bidding wars and putting gas prices at thousands, they want those to have the lowest gas possible, and they're willing to pay whatever the going rate in tokens is for that transaction. The cost doesn't matter to them; what matters is getting their gas down.
* Peter:
Got it. So by opportunity cost — if the chain is idle, I mean there aren't insane trades to be made, then essentially the bots will be idle.
* Micah:
Exactly, they're not going to be doing anything.
* Beiko:
Ansgar, you have your hand up?
* Ansgar:
I just quickly wanted to ask — and this could be naïve — but one possible alternative to the EIP that I just wanted to articulate: what if, after EIP 1559, the refunded gas still counts against the block limit, so refunds can't have any elasticity properties anymore? Say you have a transaction with 200K gas usage but 100K of refunds: then, instead of only counting as 100K of block space, it accounts for the full 200K, and it has to pay the tip part of the transaction fee on the full 200K in order to make miners indifferent to including it — but it still gets a discount on the base fee, so it would only have to pay, in this case, 100K worth of base fee. That would be in case people are still feeling like maybe they want to think about this EIP more and only include it in a future fork. It might be a very simple change that gets rid of the elasticity problems while keeping most of the properties of refund savings intact. I could be missing something here, but maybe this would be a practical interim solution while we discuss a good long-term one.
* Peter:
I don't see a problem with nuking gas tokens. I don't see that as a problem needing a solution.
* Piper:
Yeah, it doesn't seem like we need an alternative solution, but if we do, we can look at this route.
* Vitalik:
I feel like we've been looking at routes for long enough and we need to just decide.
* James:
This is the fourth iteration of the EIP already, which has been great — the progression has been awesome.
* Micah:
What do you think our timeline on state expiry is, since that seems to be interrelated? Are we talking 2 years, or 6 months after the merge?
* Vitalik:
1.5 to 2, I don't know.
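For concreteness, Ansgar's 200K/100K example works out as follows. This is a hypothetical sketch of his proposed accounting — not an implemented rule in any client — with illustrative fee numbers:

```python
# Hypothetical accounting for Ansgar's alternative: refunded gas still
# counts against the block gas limit and still pays the miner tip in
# full, but the refunded portion is discounted from the base fee.
def tx_accounting(gas_executed: int, refund: int, base_fee: int, tip: int):
    block_space_used = gas_executed                      # refunds give no block space back
    miner_tip_paid = gas_executed * tip                  # tip paid on the full execution gas
    base_fee_paid = (gas_executed - refund) * base_fee   # base fee discount = the refund
    return block_space_used, miner_tip_paid, base_fee_paid

# Ansgar's numbers: 200K gas executed, 100K refunded.
space, tip_paid, base_paid = tx_accounting(
    gas_executed=200_000, refund=100_000, base_fee=100, tip=2)
print(space, tip_paid, base_paid)
# occupies all 200K of block space, tips the miner on 200K,
# pays base fee on only 100K
```

The point of the construction is that the miner is indifferent (full tip) and elasticity is untouched (full block space), while users still capture most of the refund's economic benefit through the base fee discount.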
* Beiko:
So another thing worth noting is that we have to make a call today, I think, otherwise we're going to be pushing back the London timeline. Yes, there might be another feature fork after London, but we also said on the last call that if the merge was ready we would do the merge before that feature fork. Given that this is a solution that helps with block size consistency — and this is also something we'll want for Eth2 — it might actually be the last chance we get to make block sizes a bit more predictable before we have to merge. If this is something we need, and it solves two or possibly more 30% risks, I would favor including it today, because otherwise we might just miss the London deadline — and if we miss the London deadline, then we might delay the merge too, and this is probably not the type of change you want to include right when you're doing the actual consensus swap.
* Piper:
So maybe the open question here is: would anybody else like to argue against including this in London?

##### 3238 Being re-written this week

* Beiko:
So if there is no opposition, we include it. Last chance for anyone to step up. Okay, great, so it's included. I know this has been a pretty long discussion, but it was the biggest thing on the agenda — the other things should be simpler to get through. So, moving on to the next one: the difficulty bomb delay. We agreed on the previous call to target around December 1st, but I asked for the EIP to be modified and the champion is on vacation, so I don't know what's the best way to go about this. Can somebody else submit a PR to the EIP? Should we just have a different difficulty bomb EIP that sets the date to December 1st? I just want to make sure we don't forget about this and get to the point where we need to actually set it on mainnet and we don't have a number.
* Micah:
Is this person permanently on vacation or temporarily on vacation?
* James:
I pinged them; they didn't answer — which is reasonable because they're on vacation — but I don't know.
* Pooja:
It seems like he's unwell, and that's the reason he is unavailable to answer.
* Danny:
I would expect maybe not to be able to get in touch with them for now, and maybe an easy solution is to copy the EIP with the new date and put him as a co-author on the new one.
* Micah:
Yeah, I think that would make sense — just have a new EIP we can merge, obviously giving credit as an author, targeted at December 1st. Does anyone want to do that?
* James:
Has the number already been calculated for what we would need to do for that? That still needs to be done?
* James:
So you need to calculate basically by how much we push back the bomb so that it goes off around December 1st.
* Beiko:
If no one is interested, I'm happy to take a stab at it and work with folks to get the number.
* James:
I can look at it, and I'll ping you if I have trouble. Before next week is over I'll look at it, and if I'm having trouble I'll make sure to reach out.
* Beiko:
Let's do that then, James.
* James:
By the end of next week — like Friday — or the week after.
* Beiko:
Okay, sounds good. And Alex, you have a comment in chat about whether December isn't too ambitious. We discussed this on a previous call, and the reason was that we expect to have either the merge or Shanghai ready before December — we still don't know which one will come first — and we wanted to keep the bomb as a forcing function for that.
* James:
Basically, we would have the fork before then.

##### 3541 considered for London inclusion

* Beiko:
Exactly, firmly before then. And we have also agreed that nobody wants to have a fork over the holidays, so if you put it on January 1st it's basically the same thing, because we're not realistically going to fork on December 20th or something like that.
Okay, so you will follow up with a new EIP that supersedes the previous one. Next up was EIP 3541, which we were discussing a bit on the coredevs chat this morning. Alex, do you want to give a quick overview of the EIP?
* Pawel:
We agreed I would try to do the overview, if that's fine with everyone. The bigger picture here is the Ethereum Object Format, which is proposed as EIP 3540. It introduces a kind of container for EVM code — it adds some structure, so code is not just a sequence of bytes anymore. The important things about it: it starts with a magic sequence of bytes, which is the prefix of this container, and the second field is the version number, so the first feature it provides is versioning of EVM code. It has some benefits over previous proposals targeting the same thing, because it doesn't require a change to the account — all of the information about the version is kept inside the bytecode. The second most important thing is that together with this object format we also introduce validation of the code: all code deployed on the blockchain, starting from the EIP that introduces the object format, will be validated, and we get the guarantee that an object in the state is already valid. One more thing about the first version of the Ethereum Object Format, which is also part of the EIP: in the first version we want to introduce code and data separation, which is a kind of alternative to the BEGINDATA instruction proposed in other places. I should mention that the exact feature set of the first version is up for discussion and tuning. So that's the Ethereum Object Format as the goal we're aiming for — and now, what is proposed for London.
For London, we propose starting with the piece that gives these guarantees about contract code validation at deploy time. We discovered a way of doing that in the most backward compatible way, by splitting the deployment into two hardforks. We need to come up with the magic sequence of bytes forming the EOF prefix, and in the first hardfork what we want to do is reserve the first byte of it: no new contracts starting with this byte are allowed to be deployed after the first hardfork. That first hardfork is what is proposed for London, which is EIP 3541, and after it happens we will be able to pick the remaining bytes of the prefix, because the search space is frozen at that point. Hopefully I've more or less explained the situation — that's why we want to include the first change as soon as possible, so the later deployment with the full feature set isn't blocked by this dependency. EIP 3541, we think, is relatively simple: when you create a contract, you need to check whether the code to be deployed starts with this special byte — which is currently not used — and if the first byte has this value, you fail the contract creation. That's mostly the whole change that has to be implemented. Lastly, I did a proof-of-concept implementation, and based on that we generated consensus tests in the format of the official tests repo. That's all from me on this subject, unless anyone else who worked on this wants to add something.

* Vitalik:
Yeah, just one quick question, or request for confirmation: this EIP that you're proposing for London does not require any kind of agreement on what the structure of this structured EVM code is actually going to be?

* Pawel:
Yes. I think Martin had a valuable comment about that today in the chat.
It doesn't even have to be EOF — there are many other options it can go to later on.
* Vitalik:
Right. My longer-term strategic concern is that we are going to — or at least I think we want to — do code merkleization at some point, and code merkleization introduces some new criteria in terms of what it's good to optimize the structure of EVM code for, and that's something the structured format should be designed around. So, even if just for that reason, it's probably good not to rush into making decisions on the code format that we can't go back on too quickly.

* Martin:
Yeah, those are good points, and this first step is a good first step if you want any kind of structure, no matter what it ends up looking like.
* Piper:
Got it — I'm definitely in support of this. I'm wondering if you've given much thought to how this ends up interplaying with testnets, and whether or not you see us ending up with different prefixes for testnets, or things like that.
* Martin:
Yeah, so it's just a matter of, after this has been rolled out on all the testnets that we know of, we find the best prefix. At that point we can choose to say, well, we extend it with three bytes or even four bytes because someone created 4 million contracts on Ropsten, or we may choose to say let's screw Ropsten over and start a new testnet. We can choose later on whether we want to make the magic longer, or we can live with some testnet being wonky. And we can ask whether there are any public or private networks that have any concerns about particular values — it's something we need to solve later.
* Piper:
I don't see it as a strong concern, I was just wondering if there'd been consideration put towards it. Thanks.
* Beiko:
Paul, you've had your hand up for a while?
* Paul:
In general I'm in favor of the big picture. The code merkleization that Vitalik mentioned brings in some global properties, so there have to be designed interactions with the various features that are coming up, including code merkleization — it's fine if everything is local and all decisions are made locally, but when we have global properties like versioning and offsets and things like this, then this has to be resolved soon. The other interaction we have is with address extension to 32 bytes, which gives us versioning for free — so this could become redundant. I don't know, but I would like some mention somewhere of how this interacts with address extension to 32 bytes. That's it.
* Beiko:
Alex, you had your hand up?
* Alex B.:
Yeah, I wanted to respond to what Vitalik said regarding merkleization — merkleization was actually considered. The currently proposed format, which is not even the part proposed for London, has the headers in front, so the first chunk would contain the headers. And one of the reasons we prefer deploy-time — I mean contract-creation-time — validation versus execution-time validation is exactly code merkleization: with execution-time validation you would always need to have all the chunks, which would render merkleization moot.
Regarding address extension: address extension was actually one of the motivations for this whole work. With the state expiry proposal, the idea is that it would be easy to disallow old legacy code for new addresses altogether. There is a good interaction between these proposals, but, as was stated, address extension and state expiry might take upwards of two years, and it would be nice to make some progress on this topic before that.
* Beiko:
I'll be curious to hear from the different client teams. Obviously we just barely got another EIP into London, and I understand this is a small change, but it feels like we are adding small change after small change. I'm curious how the different client teams feel about including this in London — is it small, valuable enough, and I guess future-proof enough that it's worth it? Does anyone have a strong preference either way, or are people mostly neutral?
* Martin:
So I have been a part, a little bit, of defining this, so I'm partial — but I am a proponent. I don't know if Peter is as well.
* Rai:
Can it be a nice-to-have? Is it something that we really need for London, or could we at least have it in the backlog — like a signal to the client teams: okay, work on these other ones, and then if there's time, work on this one?
* Beiko:
I don't think so, because assuming we don't want to delay London, we basically need to choose fork blocks on the next call, and ideally we probably want the implementations to be pretty finalized before then. So I suspect we need to tell people either it's in today and you have two weeks to implement it, or it's not in — unless we delay London, but I think we've signaled pretty strongly that we don't want to do that.
* Micah:
Do any clients think that implementing this will be challenging in any way?
* Rai:
I think it should probably be fine.
* Martin:
I believe so too, because, as opposed to a lot of other changes, there are only very few places where creation actually happens, so this change is really localized.
* Peter:
Yeah, and I guess the other thing is that in order to continue defining and working on these specs, you kind of need to reserve the byte — so I'd put some urgency on getting it into London, because any follow-up EIP is only enabled if this one is in London.
* Beiko:
Any thoughts?
* Thomasz:
No comments — I didn't have enough time to analyze this in detail.
* Rai:
So is it the end of the world if it doesn't go into London? I agree that it is a good step and it makes sense, but could it not just go into the next one?
* James:
I think the nuance there is that for most EIPs, "the next one" means the feature would just move to the next fork, but for this one we would actually be pushing it two forks further, because we still have to have this interim step.
* Rai:
Yeah, I understand — my question still stands.
* Paul:
I think that if it doesn't get into London, then a case could be made later that this is redundant with address extension, because address extension gives us versioning for free, so I vote in favor of getting it into London — otherwise there are going to be interactions, I think. It's good anyway, even if there's redundancy: it's good to decouple addresses from the bytecode, so you just have a piece of bytecode and know how to execute it without having an address. So I think it's useful anyway, and I vote for it in London — but I know that my opinion doesn't mean anything.
* Rai:
You can still decide to decouple them down the line — you can understand that address extension can get you it for free and choose not to leverage that.
* Alex:
If you allow a comment on the address extension: it means that intrinsically you can introduce new versions with each epoch, and you would tie the allowed versions to epochs — I guess that's the way it would work. But the problem, as you mentioned, is that you need to know the address in order to execute the code; you cannot execute code without knowing the address.
* Micah:
It doesn't have to be tied to the epoch — address extension leaves some space for bytes that are unused, reserved for future use. We could use one of those; we don't have to piggyback on that stuff.
* Alex:
Yeah, I guess it would be less complexity if it's just tied to epochs, right?
* Micah:
Yeah, sure, that would definitely be an option.
* Beiko:
Ohh, Greg, you had some comments.
* Greg:
Okay, I was muted. I haven't had time to follow this one in any detail. I'm partly concerned because there was a lot of discussion — and Martin probably remembers — way back on EIP 615, which required something similar, and all kinds of issues came up: code that creates other code that needs to keep working and can't be changed, contracts that aren't really code but are actually data... there were just a lot of issues, and I don't know for sure whether this one is going to bump into those sorts of issues. Also, to recall Christian's attitude about BEGINDATA: at some point he just said, I don't care how you do it, but there has to be a way for Solidity to hide the data that it needs without playing the tricks it currently has to play to force the data to not be accessible as code — he has to do ugly things. So I'd be a little happier if I was sure that this didn't run into those same problems, and if it could get a little bit more fleshed out so that it solves the BEGINDATA problem, at least.
* Martin:
Yes, I also recall all those discussions way back. A lot of people were trying to devise a scheme to have this kind of opt-in validation — saying "hey, I want to
play by these rules" — and also these kinds of validation or certification rules that meant the EVM could know: when I run this code, I don't have to do any jumpdest analysis, because it has already been validated. But we never could get there, because we would have to modify the accounts in the state — we'd have to modify the state — and we ran into problems like: how does this kind of code behave if it creates something else; does that become tainted, or does it follow the same old rules, or whatever. There were all these problems, and that's why I'm so optimistic about this first step: I think it's a really clever step that sidesteps all these issues. And as for Christian's BEGINDATA, and also the proposal about the format — those are kind of the next step. This first step makes those things possible in a good way. That's my opinion on that.
* Greg:
It makes them possible, but the next steps could get so delayed — with the way our forks are getting laid out, the way the merge is interacting, and the uncertainties there — that we could make this step and then not be able to make the next step for another year or more.
* Beiko:
That's true of every EIP, though, to be clear. I think we're at a spot where, no matter what, whatever is not in London could be delayed another year. We had a similar conversation about 3074, and it seemed like people were comfortable that we can aim to have another fork after London — and if the merge happens first, it happens first. We don't have that certainty; I think that's just our situation right now, but this kind of makes it possible. Micah, you have your hand up?
* Micah:
I think what Greg is alluding to is that we don't know what the second half of this looks like, and we might not know for years. So we could put this in now, and then, by the time we actually know what the second half looks like, realize it doesn't actually line up the way we want. An alternative path would be: we don't do the first hardfork until we know what the second hardfork looks like, so we can guarantee that they come one after another — hardfork and then hardfork — rather than hardfork, two years, and then hardfork.
* Beiko:
Alex?
* Alex:
A couple of comments. Maybe to Greg: if I understand correctly, you're also concerned about what Christian's opinion on this is — Christian has actually been following and interacting with this proposal, so he's fully aware of it, and there's an actual implementation in Solidity. By "proposal" I mean 3540, which is not planned for London. And a comment to you, Micah: we do have a good idea of what the next step would be, which is 3540, but as Martin said in the chat, even if we end up not doing 3540 — which I hope we will — this first step could still be used to introduce something like BEGINDATA.
* Beiko:
There's a comment in the chat I'll just highlight, from Ansgar: the EIP is low risk because it only reduces functionality, and it could always be reverted in the future if we don't want it. It seems like people are generally in favor — would anyone have a strong objection to putting it in London? Okay, well, I guess if no one disagrees, yep, we include it in London. Last chance to voice your disagreement.
* Greg:
When's the last chance for us to actually make the decision?
* Beiko:
Last call, literally.
* Greg:
We've learned that the last chance to make a decision is very much later than that.
* Piper:
Let's just make the decision.
* James:
Let's skip the comments on previous politics.
* Greg:
So this isn't a political thing — there's some uncertainty here. Can this wait one meeting, for people to look at it a little bit more closely and know whether there really are objections or not, or is waiting one more meeting going to make it too late to make the decision?
* Martin:
Sorry if I'm jumping in — I would propose instead, if it seems like we are leaning towards it, that we decide to do it, and then at the next meeting, if the opposition has grown stronger, we revisit that decision and potentially pull it out again. In the meantime, I think it's good to signal that we want to do this, and implementers should implement it, etc.
* Beiko:
So maybe one thing we can do based on that — Thomasz has a comment that the next meeting is 3 days before clients should be locked for London — is, instead, to make it Considered for Inclusion for London. We're going to have a conversation about the devnets right after this, so we can potentially add it to the next devnet, and if, in the next two weeks, it's implemented, it's on a devnet or close to being on a devnet, and no objections have come up, then we officially move it into the hard fork — and otherwise we don't. At least we can move forward with the implementations. And I agree with some comments in the chat that this EIP hasn't even been presented before — it's the first time it's been brought up on core devs — so there's some uneasiness about including it directly. So at the very least we can move it to CFI, add it to the devnets, and if it's ready by the next call we can decide to include it in London then; we don't have to make that call right now, and it'll still get implemented in clients.
* Alexey A.:
May I just comment — I joined specifically to make this comment about this EIP. This specific suggestion about banning the contracts with this particular starting
code. We could have presented it maybe months ago, but the reason it wasn't presented months ago is specifically what people are now criticizing it for: because we don't know what will come next. Actually this whole month has been spent trying to present, in these two EIPs (there's a lot of text there), what will happen next, and that's unfortunate. We could have just put in a very short phrase like "after the fork we do this and that" and it would have been done months ago, but unfortunately that time was spent trying to flesh out what will happen next, and that got us into this very tight spot.
+* Martin:
So I think, Alexey, we talked about it when you were not present. Would you now like to voice your official opinion?
+* Alexey:
Yes, I am for it. It's a very useful thing, and I also suggested the idea of this first step, because once it gets through, it basically forces you to stop procrastinating. What Micah is suggesting is: okay, let's figure out what will happen next, all the details. This is what we tried to do for years, in fact, and it never worked, because we never made the first step. But this first step basically gets rid of all procrastination, because once it's done and it's provable, you can figure out a way to work around anything that happened before the fork. In terms of who wants to deploy this and that, we can always choose the magic which will defeat any adversary that tries to stop us from doing what we're doing.
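For context, the rule being debated is tiny. A minimal sketch of the EIP-3541 check (the function name is illustrative, not any client's actual code): newly deployed contract code may not begin with the 0xEF byte.

```python
# Sketch of the EIP-3541 rule: reject newly deployed contract code whose
# first byte is 0xEF. The function name is illustrative, not from a client.
EOF_MAGIC = 0xEF  # reserved first byte under EIP-3541

def deposit_allowed(code: bytes) -> bool:
    """True unless the to-be-deployed runtime code starts with 0xEF."""
    return len(code) == 0 or code[0] != EOF_MAGIC

assert deposit_allowed(bytes.fromhex("6001600155"))  # ordinary bytecode: fine
assert not deposit_allowed(b"\xef\x00")              # 0xEF prefix: rejected
assert deposit_allowed(b"")                          # empty code is unaffected
```

Contracts already deployed with a 0xEF prefix are untouched; the rule only applies at deployment time, which is what makes this "first step" small enough to ship ahead of the full EVM-versioning work it reserves space for.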
+

##### Baikal devnet launched

+* Beiko:
I guess based on Martin's and Alexey's comments, and a bunch of comments in the chat about the bad process of including it today: does anyone disagree with making it Considered for Inclusion today, adding it to the devnets (which we need to basically restart anyway because of the other refund EIP), and making the final call on the next call, two weeks from now? That actually leaves time for people to digest it and to bring up objections, and not just people on this call. If we included it today, a lot of people would probably be surprised that something which showed up a few days ago is scheduled for London. So, any objections to making it CFI and adding it to the next iteration of the devnets? Okay, I guess we'll go with that then. That kind of brings us to the devnets. I know there's been a lot of work done on Aleut in the past two weeks. Does anyone want to give a quick summary of where things are at? I know there was a lot of discussion even this morning prior to the call.
+* Jochem B:
Hey guys, I'm Jochem from the JavaScript team, and I've spent the past week sending a lot of transactions to this testnet. I noticed that when I started syncing this network there were only 20 transactions or so, and most of them were just legacy transactions. I found some quirks, and as general comments: I think the testnet needs a lot more attention, because there were only 20 transactions up to the point where I started sending things. In the next testnet we also need to pre-fund the precompile accounts, because that is not the case on this testnet. And I also think that if we are going to use clique again, then we should use multiple signers, because I just noticed this afternoon that I cannot send access-list transactions for some reason.
Well, I can send them, but they do not get mined, and that might be because Besu is not including them. That's not very nice, because I want to test that access lists also work with the new gas prices and so on. Those are the three points I wanted to raise.
+* Alexey:
I just wanted to say thank you for doing what you have been doing. It's great to just find all these issues.
+* Jochem:
Thank you, it's a lot of fun to try to nuke this testnet.
+* Rai:
Reach out to me offline and we'll discuss the Besu issue with access lists.
+* Beiko:
Right, great job, thank you for doing that; it's really valuable to have people poking at it. So, because we have two new EIPs that we want to test, the refund one and 3541, I guess we should start a new devnet with those, and ideally follow the two suggestions here: pre-fund the precompile addresses and have more than one signer. How do people feel about that?
+* Alexey:
That definitely needs to be done. Now, essentially, do we want to wait until somebody implements these two things, or do we want to do them one by one?
+* Beiko:
We should probably have both in. I suspect if we can't get both in, it's kind of a signal that we probably can't get both into London. So I would favor leaving the current one up, waiting until someone has the two EIPs implemented, and then starting a new network with all of the EIPs that were in the previous one plus the two we agreed on today.
+* Alexey:
But if we need multiple signers, then we will need multiple clients implementing these, right?
+* Martin:
Well, not necessarily. We just need to make it so that we can have multiple signers, and then we can run them. If Geth is the first one ready, then we can run three Geth signers, and whenever someone else is ready we can just give them one of the keys.
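Running "three Geth signers" the way Martin describes comes down to the clique genesis extraData layout from EIP-225: 32 vanity bytes, then the concatenated 20-byte signer addresses, then 65 bytes of zeroed seal. A sketch with made-up addresses, not a real genesis file:

```python
# Sketch of a clique (EIP-225) genesis extraData with multiple signers.
# Addresses are made up; a real devnet would use the signers' actual keys.
VANITY_LEN, SEAL_LEN, ADDR_LEN = 32, 65, 20

def clique_extra_data(signers: list[bytes]) -> bytes:
    assert all(len(s) == ADDR_LEN for s in signers)
    # Signers are listed in ascending order in the genesis block.
    return b"\x00" * VANITY_LEN + b"".join(sorted(signers)) + b"\x00" * SEAL_LEN

extra = clique_extra_data([b"\x01" * 20, b"\x02" * 20, b"\x03" * 20])
assert len(extra) == VANITY_LEN + 3 * ADDR_LEN + SEAL_LEN  # three signers
```

Handing one of the keys to another client later, as Martin suggests, doesn't change the extraData at all: any client holding a listed key can seal blocks in its turn.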
+* Beiko:
Okay, so I guess we can coordinate offline on the details, but at a high level: multiple signers, pre-funded precompiles, and I think it would be 5 EIPs, whatever was in Aleut plus the two today. The only EIP that would be in London but not in the devnet is the difficulty bomb one. I think lightclient had proposed a name for it on the Discord: Baikal, I think, was the second fault line that we had after Aleut, so we can use that as a name. I'll put together a spec for it today and share that.
+* Alexey:
Can I just make another suggestion to simplify this fork? I know we had this conversation before and we can take it offline, but I do believe that we do not need to reset the difficulty bomb but simply remove it, because both of the reasons why we keep resetting it do not apply right now. The first reason was that the miners, under a hybrid PoS approach, would stall the migration, which is not the case anymore because the merge will happen regardless of what they want. And the second reason was to prevent stasis in development, which is exactly the opposite of what is happening right now. So I don't actually see a very big reason to keep pushing the bomb back.
+* Beiko:
Can we discuss that on the next call?
+* Alexey:
Yes, we can do it offline, because I can see Afri is on holiday; we can't calculate the... I mean, we can, but it's a bit like: why do we create more complexity for ourselves? Just remove it.
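For reference, the "reset" both sides are arguing about is just a delay subtracted from the block number inside the exponential bomb term. A rough sketch of that arithmetic (not a client implementation; the default offset is the 9,700,000 figure discussed later in these notes):

```python
# Sketch of the difficulty-bomb term: an exponential that doubles every
# 100,000-block period, delayed by subtracting an offset from the real
# block number ("fake block number"). Resetting the bomb = raising `delay`;
# removing it, as Alexey proposes, = dropping the term entirely.
def bomb_term(block_number: int, delay: int = 9_700_000) -> int:
    periods = max(0, block_number - delay) // 100_000
    return 2 ** (periods - 2) if periods >= 2 else 0

assert bomb_term(9_700_000) == 0        # freshly delayed: no bomb at all
assert bomb_term(10_700_000) == 2 ** 8  # ten periods later it is ramping up
```

At roughly 13-second blocks, each 100,000-block period is about two weeks, so a 700,000-block bump to the delay buys roughly four months.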
+* Beiko:
So we discussed it, I think, two or three calls ago, and people were strongly in favor of keeping it. If we don't have time in the next six minutes to go over this and we want to discuss it again on the next call, that's fine, because it's a small technical change and we don't need it for the devnets. Does that make sense?
+* Alexey:
Yes, thank you. I just wanted to bring up that, per that comment, this discussion needs to be reopened.
+* Beiko:
Let's try to be as async as possible, because this could easily take up 90 minutes two weeks from now. Just in terms of process: if you want to bring this up on the next call, open an issue on the Ethereum pm repo and I'll make sure to link that issue in the next call's agenda.
Back to the devnet: I can put together a spec today and post it, but clients will need time to implement these two EIPs, so I suspect we probably won't get it up next week, maybe the week after. I'll make sure to follow up a week from now to see what the status is for the different teams and what we can do with regard to starting the devnet. And two more things we had on the agenda, more announcements than anything else: next week, at the same time this meeting starts, we plan to have the infrastructure-readiness breakout room for London. There's been a lot of talk on the Discord about having infrastructure providers ready to support London, and about clients enabling them with things like the JSON-RPC APIs. So if you are an infrastructure provider that is affected by London, this would be the right place to show up with your concerns or questions.
I'll post a link in the chat here; it's also on the pm repo. Hopefully we can get the different teams working on infrastructure there. We don't need all of the client devs to attend, it's not a mandatory call or anything, but I wanted to highlight it so people are aware of it. Trenton, I know you've been working on that; do you have any other thoughts you wanted to share?
+* Trenton:
That was the big thing. There's a document, similar to how EIP and client readiness are typically tracked in that same place, that will also be tracking libraries, tooling, and other infrastructure providers leading up to the fork: which EIPs they have implemented and their general readiness. I will drop the link in the chat.

##### New London infrastructure call

+* Beiko:
Last quick announcement: we discussed on the last call potentially picking blocks for London today. That still feels a bit premature given today's discussion, but if we do want to have a client freeze on the next call, then we should pick some blocks by then. So if people want to look at the dates and propose blocks over the next two weeks, that would be really valuable, so that, assuming nothing changes, we can just agree to them on the next call and have clients add them to their configs. That's all I had. Anything else anyone wants to bring up in the last 2 minutes? Great, well, thanks everybody, appreciate you all coming out.
-------------------------------------------
## Attendees
- Tomasz Stanczak (Nethermind)
- Tim Beiko
- James Hancock
- Rai
- Pooja Ranjan
- Trenton Van Epps
- Lightclient
- Martin Holst Swende (Geth)
- Danny
- Marek Moraczynski
- Pawel Bylica
- Vitalik
- Paul D
- Alex Vlasov
- Alex B.
+- Ansgar Dietrichs
- Jochem Brouwer
- Dankrad Feist
- Piper Merriam
- Sam Wilson
- Greg Colvin
- Peter Szilagyi
- Lukasz Rozmej
- Jason Carver
- Gary Schulte
- Micah


---------------------------------------
## Next Meeting
May 14, 2021

diff --git a/All Core Devs Meetings/Meeting 113.md b/All Core Devs Meetings/Meeting 113.md
new file mode 100644
index 00000000..d7b1ea54
--- /dev/null
+++ b/All Core Devs Meetings/Meeting 113.md
@@ -0,0 +1,922 @@

# All Core Devs Meeting 113

 ### Meeting Date/Time: **May 14th, 2021, 14:00 UTC**
 ### Meeting Duration: **90 mins**
 ### [GitHub Agenda](https://github.com/ethereum/pm/issues/309)
 ### [Audio/Video of the meeting](https://youtu.be/H_T2nNrTuWQ)
 ### Moderator: **Tim Beiko**
 ### Notes: **Kenneth Luster**

-----------------------------------------------

 # Contents

- 1. [London Updates](#1-london-updates)
  - i [Baikal Status & Next steps](#baikal-status--next-steps-840)
  - ii [EIP-3541](#eip-3541--1310)
  - iii [EIP-3554 (EIP-3238 alternative)](#eip-3554-difficulty-bomb--1408)
  - iv [JSON RPC Spec Naming](#json-rpc-naming-convention--1752)
  - v [Block Numbers](#block-number-discussion--3100)
- 2. [Other Discussion Items](#2-other-discussion-items)
  - vi [Merge/Rayonism updates](#merge-and-rayonism-update-5326)
  - vii [1559 UI Call Announcement](#1559-ui-call-announcement--5732)
- [Next Meeting Date/Time](#next-meeting-datetime)
- [Attendees](#attendees)
- [Zoom chat](#zoom-chat)

-----------------------------------------------

 # **Summary**

 ## **DECISIONS & ACTION ITEMS**
 | Decision Item | Description | Video ref |
 | --------------- | ------------- | ---------- |
 | **113.1** | **Baikal Devnet** will stay up until the first testnet is forked | [8:40](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=520s) |
 | **113.2** | **EIP-3541** will be added to **London's spec** |
[13:10](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=587s) |
 | **113.3** | **EIP-3554: Difficulty Bomb** delayed to roughly December 1st, 2021; the difficulty bomb calculation should be reviewed 3-4 weeks down the line | [14:08](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=848s) |
 | **113.4** | **JSON RPC** naming convention for the various **EIP-1559** fields will be the same as in the EIP | [17:52](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=1072s) |
 | **113.5** | **Block Numbers and Dates** restated | [31:00](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=1860s) |


 # 1. **London Updates**

 ## **Baikal Status & Next steps** [8:40](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=520s)


 **Tim Beiko**

 And, we are live. So good morning or evening, everybody. Welcome to All Core Devs number 113. We have mostly London stuff on the agenda today; there's been a lot of work on that over the past couple of weeks. No background today; I guess I can blur it or put up my Ethereum background if people prefer. Cool, okay. So, for London, first thing: every team, I think, was syncing to **Baikal** this week, which is the new devnet. Does someone want to give a quick summary of where things are at with the network?


 **Marek Moraczynski**

 I can give the **Baikal** status. We have five **nodes**: two Geth, two **Nethermind**, and one **Besu**. They are all in sync. As far as I know TurboGeth is in sync too; I'm not sure about **Open Ethereum**. In **Nethermind** we implemented the refund EIP (3529), the latest **1559** changes, and **EIP-3541**. All clients seem to be working fine, but it would be good to test it the same way Jochem from the **EthereumJS team** tested the previous network. So you all can feel free to do that. That's all, I think.
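Testing Baikal "the same way" mostly means hand-building typed transactions and pushing them at a node. A sketch of a dynamic-fee (type 0x2) transaction object using the EIP-1559/EIP-2930 field names; every value and address below is made up:

```python
# Sketch of an EIP-1559 (type 0x2) transaction object as it appears in
# JSON-RPC. All values and the recipient address are made up.
tx_1559 = {
    "type": "0x2",                             # dynamic-fee transaction
    "to": "0x" + "11" * 20,                    # hypothetical recipient
    "value": hex(10 ** 15),                    # 0.001 ETH in wei
    "maxPriorityFeePerGas": hex(2 * 10 ** 9),  # 2 gwei tip cap
    "maxFeePerGas": hex(100 * 10 ** 9),        # 100 gwei total cap
    "accessList": [],                          # EIP-2930 list, empty here
}

# A sanity check a node would also enforce: the tip cap can never
# exceed the overall fee cap.
assert int(tx_1559["maxPriorityFeePerGas"], 16) <= int(tx_1559["maxFeePerGas"], 16)
```

Sending variations of this (legacy, access-list, and dynamic-fee types, with fee caps straddling the base fee) is what surfaces the kind of inclusion bugs Jochem found on Aleut.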
+


 **Tim Beiko**

 Yeah, is anyone from **Open Ethereum** on the call to give a quick update of where they are? I thought I saw that they posted a **bootnode**? Anyone from the team want to share where you're at?


 **Dusan**

 Yeah, we have updated the issue on GitHub; we are still missing some of the **EIP** implementations for **Baikal**, so we are not able to sync it at the moment.


 **Tim Beiko**

 Okay, so those are the last things you have to implement.


 **Dusan**

 Yes.


 ## **EIP-3541** [13:10](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=587s)


 **Tim Beiko**

 Okay, got it.
 So, what do people feel makes sense in terms of next steps for **Baikal**? My personal preference is probably to keep it up and running basically until the **fork**, and the reason for that is it gives tooling and whatnot a network they can use that's already up, if they want to play with **1559** or things like that.


 **Tim Beiko**

 Does anyone disagree with that? Do people think there are other things we should do with the network?


 **Martin Swende**

 I think it sounds good. I don't know how many transactions have been sent over it; I personally have not done anything. It would be good to keep it up so that other people can experiment more with their code, and with going up and down on the gas limit. There were some changes made to the **1559 spec** regarding the mechanics of how the gas limit can vary up and down, so it would be good if that is also tested, but I'm not sure that has been covered; I suspect not.


 **Tim Beiko**

 Got it, yeah, I think I agree that makes sense. I know we had built a **tool** that we could use to spam
the networks when we were **developing 1559**. I suspect we should be able to use that on **Baikal** as well, assuming there's an address with enough **Eth**. So, in general: keep the network up, obviously give **Open Ethereum** time to sync up to it, have both manual transactions and people playing around on it, and then try to test the limits of the gas limit moving up and down. That seems reasonable. Anything else on **Baikal**?

 **Tim Beiko**

 Okay, so next up on the agenda I had **EIP-3541**, the **EIP** by **Axic**, which has been implemented on **Baikal**. We didn't want to make a decision about inclusion in **London** last time, because it was kind of the first time it was brought up on the call. I'm curious how people feel about including it in **London** now. It seems everybody has it implemented. Any thoughts, objections, support?


 **Martin Swende**

 I'm in support.


 **Artem Vorotnikov**

 Let's include it.


 **Tim Beiko**

 Cool.

 ## **EIP-3554: Difficulty Bomb** [14:08](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=848s)


 **Tim Beiko**

 Anyone disagree with that? Okay, I feel much better, because when we take stuff out at the last minute, that's usually a bit risky. So let's include **3541** in **London**; I'll update the spec right after this call.
 Similarly, two calls ago I think we agreed to move the **Difficulty Bomb** back to roughly December 1st, rather than Q2 as originally proposed in **EIP-3238**. James has been working on an alternative **EIP**, **3554**, which pushes back the **Difficulty Bomb** so that the first increase would happen around December 7th, I believe you said, James?


 **James**

 Yep


 **Tim Beiko**

 So, do you want to take a minute to walk us through it? I know you worked on some back-tests to make sure it lined up right.
+


 **James**

 Yeah, so there's a script in the **EIP** itself that you can run to check this. It looks at the difficulty adjustment coefficient of the bomb term, based on the current epoch: the thing that pushes up the difficulty so block times increase. I went back and looked at the last three times we first saw the **Difficulty Bomb** go off, and all of them were right as this ratio hit 0.1. If we use an offset of 9,700,000, then 0.1 is reached on December 7th, which is when the epoch (every 100,000 blocks) next switches over. So it looks pretty good. I don't know if anyone else looked at it, but I went down as many avenues as I could think of to double-check, so at this point I'm pretty confident about it. The only risk is that if the difficulty on the network changes significantly, the point where that 0.1 ratio is hit could come earlier or later.


 **Tim Beiko**

 Yeah, I also looked at the numbers. The current delay, which we added in Muir Glacier, is going to run out soon, and basically we're adding an extra 700,000 blocks to it, which is roughly four months; so end of July plus four months is roughly the end of November. That was my very low-tech way of eyeballing it. I think I saw **Geth** already has a PR open for this?


 **Martin Swende**

 Uhm, yeah. We actually merged the original number, and we have a PR for the second one: so we have 9.5 million merged, and an open PR for 9.7 million.


 **Tim Beiko**

 Okay, does anyone else have thoughts about this?
 Sorry, there was a comment by James in the chat: July plus four months is November, but the bomb was going off at the end of July, not the beginning, so it's basically the end of November, not November 1st.
Cool. Is everyone okay with moving this into **London**, and updating the spec to have **3554** instead of **3238**?


 **Tim Beiko**

 No objections?


 **Martin Swende**

 Yes


 ## **JSON-RPC Naming Convention** [17:52](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=1072s)


 **Tim Beiko**

 Yep, last call. Okay, so that's London; now on to something I think might take a little more time on the call: **JSON-RPC** naming. I was hoping we could resolve this async, but it seems like it's an impossible problem. Basically...


 **James**

 Tim, can I say one thing about the bomb first?


 **Tim Beiko**

 Yep, go for it.


 **James**

 I think if there's some way to schedule this, we should come back in two months and have someone rerun those numbers to check that the ratio hasn't changed at all. So, four or five **All Core Devs** from now.


 **Tim Beiko**

 Yes, I will absolutely do that.


 **James**

 Sweet


 **Tim Beiko**

 Yeah, good call. Cool: **JSON-RPC** naming. I'll try to summarize where things are at, and hopefully we can come to a decision now. The main reason it would be really good to decide now is that we're building these **testnets** for **infrastructure providers**, and the naming of the fields is the main thing blocking people from playing around with this. Obviously they could support it now and change names later, but that's a bad experience. So, two weeks ago the **Geth team** put up a **gist** about the **JSON-RPC** renaming and the header-field renaming, and we got pretty quick consensus there on how we would rename the headers.
+

 But for the **JSON-RPC**, the argument from **Geth** was that we should use variable names that are shorter than the ones in the **EIP** (the **EIP** uses maxPriorityFeePerGas and maxFeePerGas) and more aligned with the other naming conventions used in **JSON-RPC**. The two that were proposed were gasTipCap and gasFeeCap, which obviously align with gas limit, gas used, gas price. Then we had this long conversation on Discord with a vote, and it seemed people liked baseFeePerGas for the base fee, priorityFeePerGas for the priority fee, and feeCapPerGas for the fee cap.
 One problem with that is that priorityFeePerGas doesn't make it clear it's a maximum value: it's not the value you actually pay, but the maximum you're willing to pay. The obvious fix is to change it to maxPriorityFeePerGas, but then you're back to a spot where two of the three terms have the same name as the **EIP**, and it would be weird not to just switch back to the **EIP** names entirely, whereas **Geth's** suggestion was precisely to move away from the **EIP** terms. The biggest concern people seemed to have with **Geth's** suggestions was the "fee" term; instead of that we could use gasPriceCap. One challenge with gasPriceCap is that it's obviously very close to gasPrice, which might be more error-prone. People also don't like the "tip" term, and an easy fix there is gasPriorityCap. So that's where things are at.
 If people have opinions or thoughts, this is the time.


 **Martin Swende**

 Yeah, sorry for asking this right when you've summarized everything, but is there anywhere a concrete summary of what the current, or most recent, proposal...


 **Tim Beiko**

 Yeah


 **Martin Swende**

 ...is?
+


 **Tim Beiko**

 I just posted it on **GitHub**; I added a comment to **Peter's gist** yesterday to **summarize** it. As I understand it, not everybody agrees on this, obviously, but baseFeePerGas seems universally agreed upon, and the two that I think could work for the other fields would be gasPriceCap and gasPriorityCap.


 **LightClient**

 So, I don't know, it's kind of hard to really bikeshed the specifics of the naming just by voice.


 **Tim Beiko**

 It's kind of hard with text as well.


 **LightClient**

 Yeah. I personally prefer not to have the "per gas" postfix; I'd rather have the "gas" prefix and then describe the value. I think that lends itself to shorter names, and it's similar to how we already describe the gas price.


 **Martin Swende**

 I was leaning towards that earlier, for the reasoning that "gas price" already means per gas, but then I read what Micah wrote. It's a different thing, because with gas price it's kind of obvious from the connotations of "price" that it's what you pay per unit, whereas for the others I actually think it's clearer with "per gas" than with a "gas" prefix. So I'm personally more in favor of "per gas", as it is more explicit.


 **Artem Vorotnikov**

 I'm sorry, this is just about the naming right now?


 **Martin Swende**

 Yes


 **Tim Beiko**

 Yes. It might seem like it's a waste of time, but we tried...


 **Artem Vorotnikov**

 But I think, I think nobody gives a shit.


 **Martin Swende**

 I think you are wrong there, there are people


 **James**

 You are very wrong


 **Tim Beiko**

 My experience is people have pretty strong opinions about it, and it's hard to get consensus on it
async, yeah.


 **Martin Swende**

 Yeah, the thing to bear in mind is that we make this choice once, and it's going to be a pain to change it later. If we make a bad choice, it means the UX is going to suck, it's going to be confusing, and people are going to shoot themselves in the foot: they're not going to understand that this is actually not the absolute value, that the cost is going to be multiplied by, say, 25,000 because it's per gas. So if we can avoid that, I do think it's important.


 **Tim Beiko**

 Yeah, and I think Micah was the one who mentioned on Discord that a lot of apps will just pass these parameters through to their users: they'll take whatever comes from **JSON-RPC** and literally expose that. So yeah, I agree that if we can have names that are more descriptive, that probably makes sense.


 **Rai**

 Matt, did you have another reason to prefer the gas-prefixed ones, other than the shortness and consistency, I guess?


 **LightClient**

 No, I think those are the main reasons. Like, if we start doing maxPriorityFeePerGas, now you have 50% or more of the name just setting up what this even means: you're saying "max", you're saying "per gas", you're saying "priority fee". Whereas you could just say gasFeeCap or gasTipCap, and I find that easier to reason about. And I don't really agree so much with Micah's reasoning that "price" is what conveys the per-unit-of-gas part; I think "gas" is what says this is per gas, because you could have a TX price, and that would be per TX, not per gas.
 So, I think I'm one of the few people at this point still on the gas-prefix train.
I'm not gonna die on this hill, but I prefer it. Maybe my preference is unfounded, because I have spent the last few months staring at these names, and the thought of having to type 2x more characters is probably not something we should use to decide how everyone else is going to interact with it. But those are my thoughts.


 **Micah Zoltu**

 I will personally buy you a text editor that has autocomplete.


 **LightClient**

 I thought you were personally going to hire someone for me, just to fill out the remaining characters.


 **Laughing**


 **Micah Zoltu**

 Excuse me, I need to swap places with you. Okay, go ahead.


 **Rai**

 He will, he will write a macro so that those names are just one keystroke.


 **LightClient**

 We can fill out a grant for that, I think.


 **Laughing**


 **Tim Beiko**

 So, aside from LightClient, is anybody else strongly in favor of the gas prefix? Because I think, Martin... Peter's not on the call, but he was also in favor of that.


 **Martin Swende**

 Yes. I was just gonna say, I don't speak on behalf of the whole **Geth team**, but personally...


 **Tim Beiko**

 So, nobody else is willing to defend the gas prefix. In that case, it feels like there's more clarity in just using the same terms as the **EIP**: we would have maxPriorityFeePerGas, maxFeePerGas, and baseFeePerGas, so we basically do not need additional names for **JSON-RPC**.
 Does anyone oppose that? Last chance. If not, I will let the folks working on the **JSON-RPC** spec know. Oh, somebody's speaking: **Ansgar**, he is on the call.


 **Ansgar Dietrichs**

 Oh yeah, I think I'm weakly in agreement with LightClient, but I don't have any strong opinion.
I personally don't have a problem with "priority fee" either, but I think the proper place to discuss that would be the EIP itself.


 **Tim Beiko**

 And luckily, after the merge we will need to make some changes to **1559**, so we can reopen all these cans of worms.


 **Laughing**


 **Micah Zoltu**

 Regarding "priority fee": we've gone through, I think, six different words in the **EIP** for that value, trying to find a solution. If someone has something novel and new, we can give it a try; everything is problematic. I think the core reason we're struggling is that that particular value means two different things to different people. If you are gas-warring, it is the thing that gets you to the front of the line; if you are just a regular user under **1559**, it is the thing that gets you into the block. So it's serving dual purposes, sort of, and finding a name that satisfies both is very hard. We ended up swapping between naming it to favor the one thing, then naming it to favor the other, back and forth. If anybody comes up with a word that handles both, please share it.


 ## **Block number discussion** [31:00](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=1860s)

 **Tim Beiko**

 And we will use that word, yeah. Okay, let's just stick to using the terms that are in the **EIP** and expose those in the **JSON-RPC**, and hopefully we'll have the **JSON-RPC** spec ready within the next week or so.
 Anything else on **JSON-RPC**?
 Okay, the last thing I had on **London** is trying to figure out how people feel with regard to timing for the upgrade. I believe everybody aside from **Open Ethereum** has the **EIPs** fully implemented. A couple of, what is it, months or calls ago,
+
 we had this tentative timeline where we would try to agree to a client freeze today, which is I think where we're at, so that teams would have another two weeks to release a London-compatible client; then we could have our first **testnet fork** on June 9th and the **mainnet fork** on July 14th.
 How do people generally feel about that schedule? Does it feel realistic, or is it something we want to push back a bit?


 **Tim Beiko**

 Yeah, any thoughts there?


 **Martin Swende**

 I mean, I think it is a bit optimistic, and I have the feeling this might be the most **YOLO hard fork** we've done so far. But still, I think maybe we should just bite the bullet and do it anyway, because we need to get the next **hard fork** out and we've been working on **1559** for a long time. I think the big problem for the **Geth team** is that, well, the consensus changes are one thing, but there are a lot of things that need to be touched in the transaction-pool logic, a lot of changes needed for the miner, and in various other subsystems. So it's a big upgrade, and we're not going to be able to do a client freeze anytime soon, I think, because even if we have the base functionality (we don't even have that much yet), there would be another two or three follow-up PRs to add the other stuff.


 **Tim Beiko**

 Unless


 **Martin Swende**

 I think we can live with the dates, but I'm just throwing it out there that we need to do a lot of testing.


 **Tim Beiko**

 Is this something where changing it by two weeks would help a lot, or is it something where, in a perfect world, you'd have two extra months to do the testing?
Sorry, the reason I ask is that this date was mostly set because of the **Difficulty Bomb**, and there's been an increase in **Hashrate** on the network.
+ So I suspect we probably have a few weeks of leeway, if that makes a big difference for **Clients**.
+
+
+ **Martin Swende**
+
+ So
+
+
+ **Tim Beiko**
+
+ We definitely don't have months of leeway, so that's kind of it.
+
+
+ **Martin Swende**
+
+ So yeah, for me personally, I always think that **Testnets** ultimately are there to prepare for **Mainnet**. So I don't think we should postpone **Testnet deployments**; if anything, we should do them sooner, so that we have more time to actually test everything on the **Testnet** before it hits **Mainnet**. But I know that other people feel differently about **Testnets**.
+
+
+ **Tim Beiko**
+
+ Got it. How do other client teams feel?
+
+
+ **Tomasz Stanczak**
+
+ We are generally okay with the timings. I agree that we want the community to move ahead with the **Tooling** and experimentation, and the earlier they have things to test, the better. That date was announced a while ago and we haven't changed it in a month or so. If we see any problems whatsoever on the **Testnets**, then we should review and consider pushing **Mainnet** a bit further out, but for now I would stick to this mid-July date.
+
+
+ **Rai**
+
+ Yeah, I agree that we shouldn't postpone the **Testnets**. Also, I don't know whether we're ready for a code freeze yet. We definitely have the meat of the **EIP**s all in, but similar ancillary logic, like mining and the transaction pool, we need to double-check.
+
+
+ **Tim Beiko**
+
+ **Open Ethereum**?
+
+
+ **Dusan**
+
+ Yeah, I agree with the last statement.
Yeah, we're not fully prepared for the freeze, and we're already a bit late for that, but in general for July 14th I think that will be a problem.
+
+
+ **Tim Beiko**
+
+ Okay, so right now the first **Testnet** fork would be on June 9th, which is three-ish weeks away, is that right?
+ Yeah, three and a half weeks.
+ Martin seemed to feel that keeping it close is better. Does everybody else agree with that? Because we could also push the **Testnet** fork back one week if that made a difference, but then we get less time on **Testnets** before we go to **Mainnet**. If people want to push back the **Testnets** to get more time before the **Client Freeze**, now's the time to speak up; otherwise, we can keep the first one on June 9th.
+
+
+ **Martin Swende**
+
+ Which is the first one?
+
+
+ **Tim Beiko**
+
+ I had **Ropsten** as the first one, just because that's what we did for **Berlin**.
+ So **Ropsten, Gorli, Rinkeby**; we can absolutely change the order if there is a reason to.
+
+
+ **Artem Vorotnikov**
+
+ And the **Mainnet** date would be?
+
+
+ **Tim Beiko**
+
+ The **Mainnet** date is July 14th as of now, so the **Ropsten Testnet** fork would be live five weeks before the hard fork, then four weeks, then three weeks between the last **Testnet** and **Mainnet**. Obviously, if anything goes wrong on the **Testnets**, we can push that back, but assuming everything goes smoothly, that would be the schedule.
+
+
+ **Martin Swende**
+
+ I think that sounds okay.
+
+
+ **Tim Beiko**
+
+ Yeah, James has a comment that if we push **Mainnet** two weeks back, we could get five weeks on the **Testnet**. Okay.
+ So basically, let's do that.
I had proposed some blocks for those dates in the **GitHub** issue, so people can put them in the **Clients** now:
+ **June 9th** on **Ropsten** would be **10399301**
+ **June 16th** on **Gorli** would be **4979794**
+ **June 23rd** on **Rinkeby** would be **8813188**
+ The **Mainnet Fork Block** on **July 14th** would be **12833000**
+ Unless anything is wrong with those blocks (I double-checked them yesterday), I propose we go with them, and this way **Clients** can start putting them in whenever they're ready and working on their release.
+ Does that make sense?
+
+
+ **James**
+
+ This might be
+
+
+ **Yuga**
+
+ Yeah
+
+
+ **James**
+
+ Harking back to an earlier conversation, but what if we just set the **Testnet Blocks**, and then didn't set the **Mainnet Block** until we know a little more about how the **Testnets** go? Or do we want to do them all now-ish?
+
+
+ **Yuga**
+
+ I guess one question I'd love to get a sense of is: if we run the **Testnet** and find we need to push **Mainnet** out by a week, would that be a big deal, or is that kind of okay?
+
+
+ **Tim Beiko**
+
+ So, I think a handful of weeks is okay. The only challenge is basically the same reason you'd want to hard-code all the blocks at first: some users might download a version which has **London** enabled only for **Testnets**, and think they've upgraded, but there isn't actually a block number in for **Mainnet**. So you get a similar risk if we do push back the **Fork Block**: it's not the end of the world, but some people who don't read the blog posts or announcements may think they've upgraded while running a version which has the wrong **Fork Block** for **Mainnet**.
It's not something I think we should do unless we find a major issue, or we realize we're absolutely not ready, but it's also not impossible.
+
+
+ **James**
+
+ Yeah, and the other side is: if we have the **Mainnet Blocks** in and everyone's installed the **Clients**, and then we need to delay two weeks, there could be an important part of the network that splits off two weeks earlier than the rest if not everyone changes the client they have.
+
+
+ **Tomasz Stanczak**
+
+ Yeah. We usually avoid hard-coding **Mainnet** blocks together with the **Testnet** blocks. So we add the **Testnet** block numbers first, and then, after the first **Testnet** fork goes successfully, release the next version with the **Mainnet** block set. Historically, the **Mainnet** block number has changed as late as the last weeks, and we didn't want to take that risk: not including the block felt less risky to us than including the wrong block and trying to revert it.
+
+
+ **Tim Beiko**
+
+ Yeah, that's totally something we can do for **London**. If people are more comfortable with that, we can wait until the **Ropsten fork**, keep the current block tentatively, and if everything goes well, use that one. So, if people want to wait to see that the **Ropsten** and maybe the **Gorli forks** go smoothly before we hard-code the **Mainnet** block in **Clients**:
+ Do people prefer that?
+
+
+ **Martin Swende**
+
+ Yeah, that's probably what we've done historically in **Geth** as well, I think.
+
+
+ **Tim Beiko**
+
+ Okay
+
+
+ ## **Speeding up transactions by clients/wallets** [43:42](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=2622s)
+
+ **Tim Beiko**
+
+ Okay, let's basically use the current **Testnet** blocks that were proposed.
I see Thomas has a comment about the **Mainnet** block, to add one more zero; we can bikeshed around that one offline. But yeah, let's use the current **Testnet** blocks for **Ropsten, Gorli, and Rinkeby**, and assuming the **fork** goes well, we'll have an **All Core Devs** call right after the **Ropsten fork** where we can decide if we feel comfortable setting the **Mainnet Block**.
+
+
+ **Tim Beiko**
+
+ Yeah, cool.
+
+
+ **Micah Zoltu**
+
+ Before we move on from the **London stuff**:
+ Someone brought up that in order for wallets to correctly do **transaction speed up**, they need to know what the **Clients** are going to accept and gossip. So there is value in coming to some general consensus on what each client requires for speeding up a transaction, that is, replacing it by fee. So I guess the first question is: have the various **Clients** decided what you are going to do for that yet?
+
+
+ **Tomasz Stanczak**
+
+ Yeah, we calculate the miner's fee, meaning the payment to the miner, as the selection criterion for which transactions to evict and which to keep.
+
+
+ **Micah Zoltu**
+
+ Okay, so you calculate how much is going to the miner specifically, and then you sort by that, and then you kick out the... okay.
+
+
+ **Tim Beiko**
+
+ Anyone else?
+
+
+ **Ansgar Dietrichs**
+
+ For the **Geth Implementation** specifically, which LightClient and I helped with, we had a bit of an internal debate. I personally have a preference, specifically for replacement (not for general eviction), so for replacement from the same sender with the same nonce.
+ I think there are two alternative approaches: you can either enforce a bump of both the fee cap and the tip, or you can just require a bump of the tip, as long as the tip remains smaller than or equal to the fee cap. I think both basically work; all clients should just do the same thing, because otherwise the pool gets fractured, which is not ideal.
+
+
+ **Micah Zoltu**
+
+ For the latter, if you just bump the miner's portion, doesn't that allow someone to spam a transaction that they know won't get mined? Because you set your fee cap to zero, and then you can just bump the miner fee over and over again.
+
+
+ **Ansgar Dietrichs**
+
+ So, we
+
+
+ **Rai**
+
+ We don't allow transactions with a tip greater than the fee cap.
+
+
+ **LightClient**
+
+ In the **Mempool**.
+
+
+ **Ansgar Dietrichs**
+
+ Exactly. It's basically the same situation that we have today without **1559**: we enforce a minimum tip, just as today there is a minimum gas price. So there is a minimum for how costly the first bump is, and each subsequent bump will be more costly.
+ Similarly, today, if the gas price is, I don't know, 60 gwei or 100 gwei or something, and your transaction right now has a gas price of one, then you can bump a couple of times before you get close to the inclusion zone. That property will basically be the same afterwards: given that your total fee cap must be at least the tip, every time you bump the tip you also have to bump the total cap, so you're getting closer and closer to inclusion. It's basically the same as it is today.
+
+
+ **Micah Zoltu**
+
+ Okay. I think the key there is that you do not gossip transactions that have a tip higher than the cap. Is that correct, under any situation?
+
+
+ **Ansgar Dietrichs**
+
+ That's correct.
+
+
+ **Unknown Speaker**
+
+ Yes
+
+
+ **Ansgar Dietrichs**
+
+ I think technically they are includable in a block, but at least we, the **Geth Implementation**, right now would not gossip them.
+
+
+ **Micah Zoltu**
+
+ Okay. Whatever we decide on, I definitely do think we should make that available to wallets as soon as possible. So once each of the teams has decided what your strategy is going to be, please share it somewhere; it can be in the **1559** channel in the **R&D Discord** or somewhere we can get those collated and out to wallets. They will need to use the lowest-common-denominator strategy for bumping if they want to be able to gossip across the whole network: whatever the most strict client does is what they'll have to follow.
+
+
+ **Tim Beiko**
+
+ Yeah, on that: Trent, I don't know if Trent's on the call. Ah yes, he is. Trent is going to be working on a sort of cheat sheet for **Wallets** regarding **1559**, so if people can just drop that into Discord, Trent and I can definitely keep track of the responses and share them out with wallets.
+
+
+ **Ansgar Dietrichs**
+
+ Maybe as a follow-up on this: I don't think there was anything left to discuss specifically on replacement, but as a follow-up, a couple of months ago we talked about the rules around inclusion in general as well, and I looked into that too.
And I think while it's not consensus critical, it's also valuable to have this be in sync between different **Clients**. Otherwise, again, there's this fractured situation where different **Clients** keep different transactions in the **Mempool**, which is just really inefficient, because you might re-gossip some transactions a lot. So I just wanted to ask what the best process would be, maybe offline or something, to double-check that **Clients** ideally do the same thing, and if they don't, maybe come to an agreement.
+ Like, what would be the best way of reaching out to the other **Clients**?
+
+
+ **Tim Beiko**
+
+ Maybe we can discuss this on **Discord** in the **1559 Dev Channel**; I think some folks are actually discussing this right now. And Ansgar, I shared the writeup you had done that explains in more detail what you basically just went over. So perhaps it's useful to have people look at that and explain how they differ or don't differ from it. We can definitely document the differences.
+
+
+ **Ansgar Dietrichs**
+
+ Okay, sounds good. I'll look over it again today and just make sure it's still in sync with what the **Geth** implementation is at least doing today.
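To make the replacement rules discussed above concrete, here is a minimal sketch in Python. The function names and the 10% minimum bump are assumptions chosen for illustration, not values taken from any client; actual client thresholds and logic differ.

```python
# Hypothetical sketch of 1559 transaction replacement (same sender, same nonce).
# The 10% bump threshold is an assumption for illustration only.
REPLACEMENT_BUMP_PERCENT = 10

def is_gossipable(max_fee, max_priority_fee):
    """A transaction whose tip exceeds its fee cap is not gossiped."""
    return max_priority_fee <= max_fee

def can_replace(old_fee, old_tip, new_fee, new_tip):
    """Replacement must bump BOTH the fee cap and the tip by the minimum
    percentage, and the new transaction must remain internally valid."""
    if not is_gossipable(new_fee, new_tip):
        return False
    bump = lambda x: x + x * REPLACEMENT_BUMP_PERCENT // 100
    return new_fee >= bump(old_fee) and new_tip >= bump(old_tip)
```

Under these assumed rules, each bump raises the sender's worst-case cost, so the fee-cap-zero spam attack Micah describes is ruled out: the tip can never exceed the cap, and the cap must rise with each replacement.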
+
+
+ **LightClient**
+
+ If I can also just make one last comment: the way it sounds like several **Clients** have implemented it is, I think, the most correct way. You use the effective gas price of the transaction, so you subtract the base fee and determine how much the miner is going to earn, and that is sort of what the network deems the best transaction. But since the base fee is constantly moving, that needs to be recalculated for each transaction every block, and it's not a linear relationship: as the base fee goes past a transaction's fee cap, that transaction becomes invalid and needs to be removed, so you have to recalculate the whole ordered list every block. Whereas if you use the fee cap, which is not changing, as your ordering, then you don't need to reorder all transactions every block. The way we're doing that in **Geth** is with a heap of transactions, and you only re-heap once the heap has seen some number of new transactions and structurally needs to be re-heaped. So it's not clear to me whether we can allow the re-sorting on every block. Generally, I'm trying to avoid any degradation of performance, so I can run a benchmark to compare how those would look, but that's the main difference between the approaches.
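The two candidate orderings being compared can be sketched as follows. The structure and numbers are illustrative assumptions, not client code: the effective tip is what the miner earns per gas at the current base fee (so it moves as the base fee moves), while the fee cap is a static property of the transaction.

```python
# Illustrative comparison of the two mempool orderings discussed.
def effective_tip(max_fee, max_priority_fee, base_fee):
    """Miner revenue per gas at the current base fee; a negative value
    means the transaction is not currently includable (cap < base fee)."""
    return min(max_priority_fee, max_fee - base_fee)

base_fee = 100
pool = [
    {"max_fee": 90,  "tip": 5},   # cap below base fee: not includable
    {"max_fee": 105, "tip": 10},  # includable, effective tip capped at 5
    {"max_fee": 200, "tip": 2},   # includable, effective tip is 2
]

# Sorting by effective tip must be recomputed whenever base_fee changes;
# sorting by the static fee cap never needs re-sorting between blocks.
by_effective_tip = sorted(
    pool, key=lambda t: effective_tip(t["max_fee"], t["tip"], base_fee),
    reverse=True)
by_fee_cap = sorted(pool, key=lambda t: t["max_fee"], reverse=True)
```

Note how the two orderings disagree: the effective-tip ordering ranks the cap-105 transaction first, while the fee-cap ordering ranks the cap-200 transaction first, which is exactly the trade-off debated here.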
+
+
+ **Ansgar Dietrichs**
+
+ I would take issue with saying that the effective tip is the most correct way of doing it. The main consideration that went into recommending the fee cap instead of the current effective tip as a criterion is that, under most normal conditions, we expect the vast majority of the **Mempool** under **1559** to be below the current base fee, because only about one block's worth of transactions sits above the includability zone. And especially for eviction, we are most interested in the least valuable transactions, which will almost always be below includability, so the effective tip right now will always be zero for them. Just sorting by the nominal tip generally doesn't work well either: if the tip is large but the fee cap barely exceeds the base fee, you'll barely be able to get in, so the effective tip you end up paying will be much smaller than the nominal tip. So I think the fee cap is actually the most correct criterion for sorting, not the effective tip and not the tip. But again, this can differ per client, and it's not consensus critical, so it's not critical to have it in sync in time for the **Testnet**.
+
+
+ **Tim Beiko**
+
+ It's also something we can update once **1559** is live, right? Obviously, we want the best behavior that we know of now, but once we actually see usage on the network and how the **Mempool** is working, we can definitely change how transactions are sorted based on that.
+
+
+ **Ansgar Dietrichs**
+
+ Yes, I think that's correct.
+
+
+ # **2. Other Discussion Items**
+
+
+ ## **Merge** and **Rayonism** Update [53:26](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=3206s)
+
+ **Tim Beiko**
+
+ Anything else anybody wanted to bring up on **1559** or **London** in general?
+ If not, LightClient had asked for an update on **Rayonism** and where the merge is at. I see we have Danny on the call.
+ I know you've been on top of that, and I know a lot of the actual **Client Teams** have been working on this too.
+ So, does anyone want to walk through what happened with **Rayonism** over the past couple of weeks, and where the work related to the merge is at?
+
+
+ **Danny**
+
+ Yeah, I can give a quick high level. So there was the **Rayonism Nocturne Devnet** that launched, I believe, a couple of days ago. There's a block explorer up and a fork monitor up, if you want to check it out. I believe all **12 client** combinations are working on it, meaning the four **Eth2 Clients** and the three **Eth1 Clients**, and you can mix and match all of them; they're all running there and running validators, which is very exciting. This was definitely a major success, but also definitely in the prototyping zone: we did not test the fork transition and we are not testing historic sync, which are two critical things. We're definitely in the wrapping-up phase; we validated all the things that we wanted to, but now I think it's time for production engineering on the final things for **London** and the **Altair** fork. At the same time, we're working on specifying a couple of last things and greatly enhancing testing on the merge spec, based on some of the stuff we did here and some of the stuff we've committed to do. So I think the idea is to shift back towards other production engineering, get the merge spec to its next iteration, and then, once we get the **Altair** and **London** releases out, shift back into production engineering here. So, the Devnet's up.
The Devnet went really well; it will probably go down early next week, and we'll shift back into other things. Post-**Altair**, post-**London**, we'll do some more **multi-client** Testnet stuff and probably have much more of a conversation here about all of it, and over the next couple of **All Core Devs** calls we can talk more about planning. Client teams here, if y'all have questions, please ask; otherwise, I can help if anybody wants to dig deeper on that. Cool. Another thing to note is we're doing some bikeshedding on **API, Transport, and Format**, if you'd like to jump in.
+
+
+ **Micah Zoltu**
+
+ For all those that enjoyed the naming discussions so much, go join the consensus client naming discussion.
+
+
+ **Danny**
+
+ And a quick announcement:
+ I think this was shared in another channel, but the **Merge Calls** will now be on the same week as the **Eth2 Calls**, on **Thursday** at the same time, and we're going to be doing a three-week break rather than a two-week break. This is to help some of the folks that are up pretty late for the **Eth2 Call** and the **Merge Call** by stacking them together, so we'll do the **Merge Call** and then immediately the **Eth2 Call**. The **Eth2 calls** usually aren't very long, usually 30 or 45 minutes depending on what's going on, so it shouldn't be too bad. We're going to try that out.
+
+
+ **Tim Beiko**
+
+ Thanks for the update. Any **Clients** have anything to add?
Okay, that's... oh, Trent, you had the
+
+
+ **Danny**
+
+ I just want to give a huge shout-out to Proto.
+
+
+ **Tim Beiko**
+
+ Yes
+
+
+ **Danny**
+
+ Proto incepted this **Rayonism** idea and did a lot of work. If you were involved in it, you know that **Proto** has been out there late at night making this thing happen. The same goes for all the engineers, but thanks, **Proto**.
+
+
+ ## **1559 UI Call Announcement** [57:32](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=3452s)
+
+
+ **Tim Beiko**
+
+ Sweet. Yeah, Trent, you wanted to talk about the **1559 UI Call**.
+
+
+ **Trenton Van Epps**
+
+ Yeah, thanks Tim. I know you mentioned it earlier, but I can go over it again and reiterate what you already said. Similar to the **London** readiness call we had a week or so ago, we're going to be doing something focused on people that work on wallets and interfaces; this would be **MetaMask, Argent, Rainbow, Status**, things like that. So if anybody's listening to the call and you work on a wallet, please reach out. We're going to do two things: put together a cheat sheet of basically what you need to know (and hopefully keep it updated as things become more solidified), and solicit resources that devs could look to. Then there will be a call, I think about one or two weeks from now. We don't have a time yet, but we'll try to pick a slot that works for everybody that's interested in being involved. We'll go over what people have been thinking about so far with regards to how they're presenting these new transaction choices to users, and hopefully get people on the same page about best practices. So yeah, like I said, please reach out and let me know if you'd like to be added to that; I'll be sending out an email probably early next week to figure out a time.
+
+
+ **Tim Beiko**
+
+ Great
+
+
+ **Trenton Van Epps**
+
+ That's it
+
+
+ ## **Core Dev Apprenticeship Program** [59:03](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=3543s)
+
+
+ **Tim Beiko**
+
+ In the chat, there was one more comment. Piper has been working on a **Core Dev Apprenticeship Program** to get folks who want to start working on core development for Ethereum to work on it over the summer and receive a stipend for that work. There's a blog post that went out on the Ethereum blog yesterday, so if you go to **blog.ethereum.org**, it's the most recent post, called **Core Dev Apprenticeship**. If anyone listening is interested, all the information about how to apply is in the post, and **Piper** can answer all of your questions about the program.
+
+
+ **Tim Beiko**
+
+ Cool, anything else anybody wanted to discuss? Yeah, give it a shout.
+
+
+ **James**
+
+ I wanted to say one thing.
+
+
+ **Tim Beiko**
+
+ Go
+
+
+ **James**
+
+ So, I've been slowly handing things over to Tim over the last couple of months for the **Hardfork Coordinator** role. I've done it for about a year, almost two years it feels like, and I'll be moving on to some other things. This will probably be my last call in that role, and I'll be leaving the **EF** as well. I don't know exactly what I'm going to do next, but part of it's probably going to be **EIP** stuff, because I keep getting drawn into it, and I like working with you guys, so it has been a pleasure.
+
+
+ **Martin Swende**
+
+ It's been a pleasure having you
+
+
+ **Tim Beiko**
+
+ Yeah
+
+
+ **Tomasz Stanczak**
+
+ Thanks James
+
+
+ **Tim Beiko**
+
+ Thanks for all your work.
+
+
+ **Rai**
+
+ Thanks James
+
+
+ **Tim Beiko**
+
+ Yeah, and yes, there's definitely more than enough work on the **EIP** side if you're not sure what to do.
+
+
+ **James**
+
+ I'm going to try and wait at least four weeks before jumping into things, but I can already tell I'm excited about stuff.
+
+
+ **Tim Beiko**
+
+ Yeah, that's a good call, to take some time off.
+ Cool, anything else anybody wanted to bring up?
+ Okay, well, thanks everybody.
+ I can't believe we finished half an hour early given everything that was on the agenda.
+ So yeah, I appreciate it. I will see you all in two weeks.
+
+
+ **Multiple Participants**
+
+ Thanks Everyone
+ Thank you
+ Cheers
+ Thanks
+
+
+
+ ## Date and Time for the next meeting
+
+ **May 28th, 2021, 14:00 UTC**
+
+
+ ## Attendees
+
+ - **TIM BEIKO**
+ - **TRENTON VAN EPPS**
+ - **POOJA RANJAN**
+ - **JAMES HANCOCK**
+ - **MARTIN SWENDE**
+ - **SASAWEBUP**
+ - **ANSGAR DIETRICHS**
+ - **ALEX STOKES**
+ - **PRESTWICH**
+ - **TOMASZ STANCZAK**
+ - **KENNETH LUSTER**
+ - **LIGHTCLIENT**
+ - **JOCHEN**
+ - **ARTEM VOROTNIKOV**
+ - **ALEX B. (AXIC)**
+ - **GARY SCHULTE**
+ - **MAREK MORACZYNSKI**
+ - **SAJIDA ZOUARHI**
+ - **MICAH ZOLTU**
+ - **DANKRAD FEIST**
+ - **PAWEL BYLICA**
+ - **KEVAUNDRAY WEDDERBURN**
+ - **LUKASZ ROZMEJ**
+ - **YUGA**
+ - **PAUL D**
+ - **RAI (RATAN SUR)**
+ - **JOHN**
+ - **DANNY**
+ - **ALEX VLASOV**
+ - **DUSAN**
+
+
+
+ ## Links discussed in the call (zoom chat)
+ - **Ansgar Mempool write up:** https://hackmd.io/@adietrichs/1559-transaction-sorting-part2
+ - https://gist.github.com/karalabe/1565e0bc1be6895ad85e2a0116367ba6
+ - https://gist.github.com/karalabe/1565e0bc1be6895ad85e2a0116367ba6#gistcomment-3740453
+ - https://github.com/ethereum/pm/issues/245#issuecomment-832122309
+ - https://blog.ethereum.org/2021/05/13/core-dev-apprenticeship/ diff --git a/All Core Devs Meetings/Meeting 114.md b/All Core Devs Meetings/Meeting 114.md new file mode 100644 index 00000000..619bcf90 --- /dev/null +++ b/All
Core Devs Meetings/Meeting 114.md @@ -0,0 +1,613 @@ +# All Core Devs Meeting 114
+### Meeting Date/Time: Friday, 28 May 2021
+### Meeting Duration: 1:37:45
+### [GitHub Agenda: London Updates](https://github.com/ethereum/pm/issues/321)
+### [Audio/Video of the meeting](https://www.youtube.com/watch?v=7MSYLbn-Xro&ab_channel=EthereumFoundation)
+### Moderator: Tim Beiko
+### Notes: David Schirmer
+
+## Agenda Items
+| Agenda Item | Description | Video ref |
+| ------------- | ----------- | --------- |
+| **1** | Malicious bloated 1559 transactions ethereum/go-ethereum#22963 | [9:14](https://youtu.be/7MSYLbn-Xro?t=554) |
+| **2** | Baikal status & next steps | [41:55](https://youtu.be/7MSYLbn-Xro?t=2515) |
+| **3** | Testnet Fork Blocks confirmation | [52:20](https://youtu.be/7MSYLbn-Xro?t=3140) |
+| **4** | Ropsten Stress Test | [55:41](https://youtu.be/7MSYLbn-Xro?t=3339) |
+| **5** | Gas: DoS protection & decoupling worst/average performance | [1:03:45](https://youtu.be/7MSYLbn-Xro?t=3821) |
+| **6** | EIP-3584 | [1:19:00](https://youtu.be/7MSYLbn-Xro?t=4741) |
+| **7** | Gas API Call | [1:02:11](https://youtu.be/7MSYLbn-Xro?t=3731) |
+
+## Decisions Made
+| Decision Item | Decision Made |
+| ------------- | ------------|
+| **1** | Clients agreed to add consensus rules: maxFeePerGas < 2²⁵⁶, maxPriorityFeePerGas < 2²⁵⁶, maxFeePerGas >= maxPriorityFeePerGas, sender.balance >= gasLimit * maxFeePerGas (capped at both the consensus level and the mempool level) |
+| **2** | Calaveras, a new devnet, will be spun up to test the new changes. |
+| **3** | Assuming the testnet forks smoothly, a fork block for mainnet will be proposed. |
+| **4** | Once the fork on the testnet is complete, stress testing will continue. |
+| **5** | Alexey's proposal to separate the transaction pool in clients to defuse execution bombs is a promising area of research. Moved to Eth R&D to discuss further. |
+| **6** | Addition of access lists to blocks may complicate things.
Continued discussion on Eth R&D channel. |
+| **7** | Call scheduled for June 5, 14:00 UTC to discuss the impact on gas price oracles. |
+
+### Malicious bloated 1559 transactions
+Tim Beiko:
+* Hi everybody, welcome to All Core Devs 114. A couple of things on the agenda today: lots of London updates, and if we have time, there are two other items on the list. First on the list: Martin, you identified an issue with 1559 yesterday. Do you want to take a few minutes to walk through, at a high level, what it was? And Micah, I don't know if he is on the call, had put together a set of potential ways we can fix it.
+
+Martin Swende:
+* Yes. The issue is that a 1559 transaction has two new fields which are integers, and like any other integers in the protocol, they are arbitrarily large. The same is true for the previous gas price field, but in practice you cannot set an absurdly high gas price, since you have to be able to pay it. Of these two new fields, we only use one of them, and the price that you are paying is based on the minimum of the two. So it is perfectly fine to set a 1 MB large integer into the other field, which means you can create some nasty transactions which do not cost more; you do not pay for the extra size. This is ugly. It also turned out that Nethermind, and also the C++ implementations, are actually capping these fields to 256 bits, which is not according to the specification, and would lead to consensus issues if such a transaction were included in a block. So it's a consensus problem already. Are there any questions?
+
+
+Tim:
+ * Looks like everyone gets it. A few options on the table. One, these fields can be constrained to 256 bits or some other size; 64 bits is probably too small, but we can talk about that. Another option is that we can say that the premium must be less than or equal to the max, because it doesn't make sense to have your premium higher than your max.
Then we could separately say that you must have the max fee times the gas limit in your account. The last option: is there any reason we don't just say, across the entire protocol, numbers will always be 256 bits or less? Do we have any use for anything bigger than 256 bits?
+
+Vitalik:
+* I think there wasn't really a good reason for allowing arbitrarily large integers; the beacon chain protocol is constraining everything to 64 bits anyway. I do think that a change of that scale will take longer than London, so if we go down that direction, we might need to make it a special case for now and then do something more comprehensive for the next fork.
+
+ Tim:
+* Yeah, sure. Ansgar, you had your hand up?
+
+Ansgar:
+ * I was wondering what the arguments are for or against requiring that the balance of the sender is sufficient to cover the maximum gas cost, because in some sense it feels like the right thing to do regardless; it's somehow a mis-specified transaction if you can't pay for it.
+
+Micah:
+* So I think the point of contention is this. The first rule would be that the max fee per gas should be at least the max priority fee per gas: the total should be larger than the part of the total, right? I think people agree on that. The other one says that the sender balance should be at least the gas limit times the max fee per gas, whereas the actual cost is the gas limit times the effective fee per gas, which might not be the max fee per gas. So this rule adds a harder requirement than the actual execution: it requires you to have more money in your account than you will be charged. That's the non-intuitive part of the last rule, but I think it still makes sense to have it.
+
+Ansgar:
+* Would it also simplify mempool handling? Because otherwise, you might let a transaction into your mempool that, at the time of inclusion, all of a sudden can't pay for itself.
+
+Micah:
+* I think it will simplify many things, and I don't really see a downside.
+
+Rai:
+* Yeah, there are edge cases, like the sender's balance varying to accommodate, then not accommodate, the max fee, but it does sound like an edge case.
+
+Alexey:
+* I would say that it makes sense to limit these numbers by the formula binding them to the account balance, but I don't think it should be the only protection. I think there should be two places where protection is in place. For these fields, we should at least introduce a limit on how big they are, and then, if we want to, on top of that we can introduce a limit tied to the balance, because it is much easier to reason about the first limit than about the second.
+
+Micah:
+* So let's talk about the consensus changes. Were you talking about non-consensus changes, Alexey?
+
+Alexey:
+* No, I assume that EIP-1559 has to be modified to address these issues, and I suggest that when we modify it, we introduce two rules. The first would be a consensus rule requiring that these specific fields, and potentially all new fields that EIP-1559 adds (for now we don't want to go through the whole protocol), be limited to 256 bits. Then, on top of that, a non-consensus requirement to not allow transactions that can't pay for themselves; but I can see that that one is not normative, it's some sort of optimization.
+
+Martin:
+* From the implementation side, it matters a lot what we do here, because someone could mine a block containing a transaction where the gas limit times the max fee per gas is larger than the sender's balance, and then it does matter whether the clients have implemented this check in consensus or not.
We need to be really careful here about what is consensus and what is not.

Alexey:
* I would suggest a consensus change that simply limits the size of the numbers, because it's the simplest thing to reason about.

Martin:
* I mean, it's pretty simple to reason about, in my opinion, whether the sender balance is above the gas limit times the max fee per gas, and whether the max fee per gas is larger than the priority fee per gas. The latter places a constraint on the priority fee per gas; the balance rule places a constraint on the max fee per gas, which implicitly makes them both less than 256 bits, because the balance cannot be above 256 bits. In the suggestion I put on the agenda I posted four rules, but the first two are implied by the last two. They can be made an early, cheap check before accessing the state, or they could be omitted if you enforce the other two rules.

Tim:
* Am I right in saying the clients already have those first checks? I believe Nethermind, and I think OpenEthereum, said they're already doing this check, so it's almost like they have this rule even though it's not explicitly documented.

Martin:
* Yes, that's what I said; it's a consensus issue right now.

Lucasz:
* To clarify, we're not doing the check, we are just using a type that won't accept anything else, so we will get an error when we try to parse something bigger.

Martin:
* Yeah, except you won't accept the transaction, so the effect is that you will reject it.

Tim:
* So is anyone actually opposed to putting in all four rules? It seems like the first two are fairly trivial and the last two are kind of needed, just for maximum clarity and alignment between implementations.
Is everybody fine putting in those four checks, the ones Martin posted on the ACD issue: max fee per gas and max priority fee per gas smaller than 256 bits, then the max fee bigger than the max priority fee, and the balance bigger than the gas limit times the max fee per gas?

Vitalik:
* Both times you say bigger, you mean bigger or equal, right?

Tim:
* Yes, sorry, bigger or equal.

Alexey:
* I would put the first three in as consensus rules. The fourth one, which is the sender balance one, I don't know if we should, because it will be enforced anyway in a different way, right?

Martin:
* No it won't, if we don't explicitly specify it.

Vitalik:
* Right, the idea is that if you have a transaction with an insanely high max fee and you don't have enough money for it, then if we do nothing, the transaction would be valid as long as the base fee doesn't get that high, but it would get excluded if the base fee does get that high. Whereas with this proposal it would always be excluded.

Tim:
* Yeah. And if we did not have the last one, are there weird things MEV could do? For example, I have a transaction in slot one that sends money to an account that had one of these funky transactions, which only becomes valid right after the first transaction. I don't know.

Thomasz:
* Yeah, I just wanted to say that any check that happens in the transaction pool is up to the clients and is not part of the consensus. If we are checking whether the balance is greater than max fee per gas times gas limit, it's done just before the transactions are executed, as I understand it; this rule is proposed to be verified just before the transaction is executed. I'm starting to think I need more info on why we would need to enforce this extra check. It might be that you want to keep a transaction alive. Does it open an attack on the transaction pool if we base eviction on the max fee per gas?
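The four checks Tim just summarized can be sketched in a few lines. This is a hedged sketch only: the `Tx` container and the function name are illustrative, not taken from any client or from the EIP's reference code.

```python
from dataclasses import dataclass

MAX_UINT256 = 2**256 - 1

@dataclass
class Tx:  # illustrative container, not a real client type
    gas_limit: int
    max_fee_per_gas: int
    max_priority_fee_per_gas: int

def is_valid_1559_tx(tx: Tx, sender_balance: int) -> bool:
    """Apply the four consensus checks discussed on the call."""
    if tx.max_fee_per_gas > MAX_UINT256:           # rule 1: fee cap fits in 256 bits
        return False
    if tx.max_priority_fee_per_gas > MAX_UINT256:  # rule 2: tip fits in 256 bits
        return False
    if tx.max_fee_per_gas < tx.max_priority_fee_per_gas:  # rule 3: cap >= tip
        return False
    # rule 4: balance covers the worst case, gas_limit * max_fee_per_gas,
    # even though the effective charge at inclusion time may be lower
    if sender_balance < tx.gas_limit * tx.max_fee_per_gas:
        return False
    return True
```

As Martin points out, since an account balance always fits in 256 bits, rules 1 and 2 are implied by rules 3 and 4; keeping all four just makes for cheap early checks before touching state.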
You could push a lot of transactions with a high max fee per gas from accounts with a low balance, and they would just stay in the pool for a long time because we think they are valuable, right?

Rai:
* Well, is anyone prioritizing by something other than the effective max fee? Because of the way we do eviction, if you did send those really high max fee per gas, low balance transactions, they're not going to get evicted.

Thomasz:
* Yeah, but the max fee per gas is one of the components of your effective max fee per gas.

Rai:
* Okay, got it.

Ansgar:
* How about not enforcing it on the consensus side, but still recommending that clients don't accept those transactions into the mempool?

Martin:
* I think we should discuss the consensus rules first. With the consensus rules settled, we can also discuss the transaction pool, how it should behave and what we recommend, but we should try to determine what consensus rules we should have.

Lightclient:
* So we agreed on the first three requirements that you posted and we are on the fourth one now, or are there still questions about the first three?

Thomasz:
* I agree with the first three and I would prefer the fourth not to be included for now.

Micah:
* We don't actually need the third one; the first two would be sufficient, just a 256-bit check, if we are looking for the absolute minimum change.

Martin:
* If we need to change this at this point, whether we have two checks or three doesn't matter a lot, and this check makes sense; it's a sensible check and I don't think it adds much overhead. I would prefer to have all three of them.

Lightclient:
* Do you prefer to have the fourth one as well?

Martin:
* Personally, yes.

Lightclient:
* So it seems like the last check is mainly to avoid some free call data. This is bound to 256 bits. I don't know what the average size of the fee cap will be.
Maybe it's like 16 bits or something, but the rest of it could potentially be free call data?

Martin:
* I would totally exploit that, because when we were doing MEV stuff, every bit counts since you want to minimize your gas cost. I would definitely bit-pack stuff into the max fee per gas: if you are doing MEV stuff and you want to be first, your max fee is whatever, so just bit-pack that together with all of your data.

Micah:
* But you can't access it on-chain, can you?

Lightclient:
* You can't access it now, but in the future there might be an opcode to access it. I was assuming that we would add an opcode for that, so I would protect it.

Alexey:
* I would also add that, at the moment, there is an implicit consensus rule regarding the gas price, namely the gas limit multiplied by the gas price, because by the rules, before the transaction gets executed it has to purchase the gas: the balance gets subtracted by this amount, and then if there's anything left it gets returned. This is important because during the execution of the transaction, if the transaction observes the sender balance, it will be without the Ether that has been used to purchase gas. Now, if we introduce the rule that the sender balance must be at least the gas limit multiplied by the max fee per gas, but we purchase the gas using the effective fee, then we are using a different formula, and it is a bit more confusing if you observe the balance. I think if we introduce this requirement we should also change the way gas is purchased: it would have to be purchased in the amount of the gas limit multiplied by the max fee, because then it would make sense as a consensus rule.

Micah:
* Just another weak argument for number four: generally, when building things that are security-critical, I'm a big fan of having as many assertions as possible. It makes software easier to think about and reason about if you know that certain constraints are in place.
In this case it doesn't make sense to allow the sender balance to be less than the max, and users and wallets can easily make sure that's true. It gives us one more thing we can assume while we are writing and working on the software. This isn't a strong argument, but engineering tends to be easier if you have more constraints, because there's less to think about.

Alexey:
* What I am suggesting is that if we do introduce this constraint, we also change the logic of the gas purchase, so it will be consistent. If we already require that the balance is enough, why don't you just purchase that much gas? Then the restriction would automatically be implemented and it would be consistent: you wouldn't have a difference between the amount you purchase the gas for and a separate constraint. I think it would be cleaner to do that.

Martin:
* What you're describing sounds to me like a very large change to the mechanics of EIP-1559.

Alexey:
* I think it's the same level of change as the fourth constraint, and actually equivalent in terms of complexity, because all you need to change is the buy-gas function (it has a different name in each client). By changing this function to purchase gas differently, you're implicitly introducing restriction number four.

Lightclient:
* I think we would need to change how we refund gas? The actual cost, I think, is the same either way. I'm not sure which would be more complicated; my intuition is that the check itself is just a one-line check somewhere, and then you do the gas purchasing as it is today, but they're equivalent.
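Alexey's equivalence argument can be illustrated with a sketch of the two approaches. The helper names and numbers here are hypothetical; this is not any client's actual code.

```python
def check_then_buy(balance: int, gas_limit: int,
                   max_fee_per_gas: int, effective_gas_price: int) -> int:
    """Rule-four style: assert the balance covers the worst case, then
    purchase gas at the effective price as clients do today."""
    assert balance >= gas_limit * max_fee_per_gas, "rule 4 violated"
    # balance observed by the transaction during execution:
    return balance - gas_limit * effective_gas_price

def buy_at_max_fee(balance: int, gas_limit: int, max_fee_per_gas: int) -> int:
    """Alexey's alternative: purchase at gas_limit * max_fee_per_gas, so the
    balance check falls out implicitly (refund logic would need reworking)."""
    cost = gas_limit * max_fee_per_gas
    assert balance >= cost, "cannot purchase gas"
    return balance - cost
```

Both variants reject the same transactions up front; they differ in the balance a transaction would observe mid-execution, which is exactly the source of the confusion Alexey describes.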
Micah:
* If we did that, it would have an unintentional effect somewhere down the line if we ever implemented a transaction type that lets you execute as the EOA, because in such transactions you hypothetically would be able to check your balance from the EOA context, and this would have an impact. If you want to sweep your account, for example, being able to do that would be much harder if your fee per gas depends on what the base fee is going to be in the future block. Again, this is just trying to future-proof things, because we have talked about adding that type of transaction at some point.

Ansgar:
* Isn't the only reason you're charged the full gas limit in advance right now just to ensure that you are actually able to pay? I don't see a reason to do that with the full fee cap, because you know you will only be charged the actual base fee. It seems artificial.

Lightclient:
* I think the only reason to do it is to avoid people using this as free call data, and it's not useful right now because there's no way to access it, but it might be in the future.

Micah:
* So you could use this for bit-packing for something that looks at call data, like a layer 2 solution that's using call data as its mechanism for data storage.

Martin:
* Basically, you cannot put 1 MB in it; bounding it to 256 bits fixes that. But you could still pack 256 bits into a field that will not be validated and will just tag along with the original transaction. If we bound it by the balance, there is a lot less freedom, which is why it's my favorite. I also didn't think it was good when Micah first suggested it. It doesn't matter to the actual calls; I will not die on this hill, but I prefer the fourth rule as well.

Micah:
* I am in favor of capping it.

Lucasz:
* Martin and Micah, do you want to cap it at the mempool level or the consensus level? So, reject blocks?

Martin:
* Yeah, I am speaking about the consensus rules here.
Tim:
* It's worth noting that if we cap just in the mempool, especially now with things like mev-geth, most of the miners will be modifying the mempool implementation, so I doubt it will make a big difference to what actually gets into the block if someone really wants it there.

Lightclient:
* What's the argument for not doing this check?

Ansgar:
* Well, you could have a situation where, let's say, I am sending two transactions and they are both barely able to pay for themselves, but the first one goes in at a higher price than expected, so I've basically slightly misspecified the second one: I couldn't pay at the highest possible price, but if the base fee is low enough, I can pay for it. So what's the reason it should not just go in? It seems like it's not a malicious transaction; it's a transaction that does everything right, it's just misspecified for the highest possible price. I think this could organically happen.

Lightclient:
* I just think that would be an incredibly uncommon experience, whereas this is likely to be exploited by many people on a regular basis if there is no check.

Martin:
* Yeah, it kind of feels like a scenario where, in an auction, you make a high bid and it turns out you don't have to pay the highest bid, you have to pay the second highest. So you made it and no one called your bluff, but the bid shouldn't have been accepted in the first place, because you made a bid you couldn't cover. That's one way to see it, maybe.

Peter:
* How easy would it be to remove this kind of requirement in the future? If we say EIP-1559 is new and we want to be as restrictive as possible in the beginning, but someone really feels strongly about it, they could just create an EIP in the future to remove the constraint?

Micah:
* It is generally easier to remove constraints than to add them.

Peter:
* That might be the most practical way forward, then.
Peter:
* So, Thomasz, I think you were weakly against the fourth one. Would you be okay if we went with it and then removed it if we see a valid reason to?

Thomasz:
* Yeah, weakly against means that if others think it's worth doing, then I am totally fine.

Peter:
* Is anyone strongly against going with the fourth constraint from Martin? I won't try to describe it again or I'll make mistakes; sorry, it's on the ACD issue for this call.

Lucasz:
* So the fourth constraint as Alexey explained it? When we are reserving gas for the transaction before the actual execution, would we just reserve more, or how would that work?

Alexey:
* No, you would still reserve what you reserve now, but additionally you also require that the sender balance is at least that amount. I think what we are trying to agree on is not what I suggested, but what Martin originally proposed, so there will be different numbers that you have to reserve and that you have to check against.

Martin:
* It'll just be one assertion added to the code that asserts this is true at this point; technically two assertions if we go with all four. The minimum change for any client is just two assertions. Of course, we will specify it and get it into the EIP if we agree, so we can pin down where they go.

Micah:
* I'm thinking about whether there are any potential issues if we don't specify exactly when the assertion happens.

Martin:
* Yeah, as you are processing a block, you process the transactions one by one, and for every transaction you check its validity against the existing constraints (for example, the gas limit must cover the intrinsic gas, and the balance must cover the gas limit times the gas price); these are just two more of those rules, validated during block processing.

Lucasz:
* You would also have to validate it during block production, when including the transaction.

Martin:
* Yeah, when you try to add it to the block you are building, you would do the same thing.
Ansgar:
* Would clients be expected to make sure that these transactions never make it into the mempool?

Martin:
* Yes, from one perspective. I think the clients have a pretty clear distinction between what is consensus and what is mempool; those two don't really share the same rules. The mempool, for example, can be more restrictive about things: it can throw out transactions with a gas price of zero, whereas in consensus you have to accept zero-fee transactions. Otherwise you break consensus.

Ansgar:
* Makes sense.

Tim:
* Okay, so it seems like we are good with all four rules. Does anyone want to voice a final disagreement? If we decide the fourth rule isn't needed in the future, we can submit an EIP to remove it. Obviously, there are more checks, more code, more complexity; that's the trade-off. Okay, no blockers, let's go with the four consensus rules. Does anyone want to submit a PR against 1559, either today or Monday?

Martin:
* I can.

Tim:
* So, Martin, once your PR is there, we will need an author to merge it; either Vitalik or I can ping Abdel. Once that PR is merged, I will make sure to update the London spec with the latest commit.

Vitalik:
* Sounds good.

### Update on Baikal

Tim:
* Cool. Anything else on that issue? Okay, if not, the next thing I had was a quick update on Baikal. There has been testing on it in the past couple of days, and there seem to be some issues with some signers not including all transactions. Does anyone want to recap the testing that has happened and the issues seen?

Martin:
* Nobody knows what's going on, right?

Tim:
* From the outside, it seems like it was Karim and somebody else who were trying to spam the network with transactions.

Karim:
* Yes, I can say I did some spamming on the testnet. I sent some 1559 and legacy transactions. I tried to send the transactions directly.
It seemed to be working fine; sometimes Nethermind would not completely fill the block, and I was sending the transactions directly to Geth. So today I tried another thing, sending the transactions to another node, and it seems better: Geth fills the block. I think when I did the test Nethermind was not running, so I don't really know the status for Nethermind; yesterday it had some issues. Maybe the Nethermind team has more context, but for me Geth seems okay. I can continue; I have a test to do with different fee caps, so I will run more tests in the coming days.

Thomasz:
* Yeah, I see that the transaction pool is misbehaving slightly. We are in the process of merging some immediate solutions to the transaction pool in Nethermind, and we are also rewriting some of the basic transaction pool code, so a lot of the instability is still being caused internally as well, but we had more discussion today and yesterday about it.

Tim:
* Was OpenEthereum on Baikal yet?

Sunce86:
* Not officially.

Tim:
* Did you see any issues? Nodes working fine?

Sunce86:
* Until today we were in sync.

Tim:
* On the Turbo-Geth side, did you see anything special?

Alexey:
* No, we just had one issue where I forgot to activate the EIP with the refunds. I think it's interesting that it occurred five days in, so I think that was the first transaction that exercised this kind of thing, but now it is fixed. It's called Erigon now, not Turbo-Geth, by the way.

Tim:
* Given there are still some finalizations we need to do with Baikal, and we will have some changes to 1559, what do people feel are the best ways to test those changes? Do we want to hard fork Baikal? Do we want to write tests for it and test on the proper testnets? We could spin up a new devnet, though it's a bit more complicated to do that. How do people think we should go about testing 1559 and the issues on Baikal?
Peter:
* I don't think you want to fork Baikal, because that means you need to define the fork, and you essentially have one set of fork rules on top of another. The consensus rules changed, and Baikal was always meant to be disposable, so just nuke it.

Tim:
* Is it possible to just restart Baikal, where we just change the genesis but all the current infrastructure stays in place? Or should we have another network with another name to make it simpler? Or are people comfortable saying we implement these things, write tests for them, and then fork Ropsten? That's the part I don't have an intuition for.

Peter:
* We changed the EIP rules, so we need to do another round of testing.

Tim:
* So in that case, should we create a third one?

Peter:
* I don't know who created Baikal and how much effort it was, so I cannot say. I personally don't mind just nuking the chain and starting a new genesis.

Martin:
* It's pretty simple.

Tim:
* To create a new one? Does anyone oppose creating a new one?

Alexey:
* Will it have the same name?

Tim:
* Does that make a difference to people?

Alexey:
* I suppose it would create confusion if it has the same name.

Tim:
* Will we have to change those things if we create a new network, or will they work if we just reset the genesis?

Alexey:
* Do we have transactions that are breaking those rules?

Martin:
* I would suspect that we have transactions breaking the fourth rule, possibly the third rule as well. I know someone mentioned experimenting with stuffing junk into these two fields; I think it was Jochem.

Alexey:
* One option would be to shut Baikal down right now and examine whether we had any rule-breakers. If we didn't, then we simply restart it with the new rules; if we did, then we would have to start a new network.

Martin:
* We don't have to stop it. If we just add these checks and try to sync, we can check in parallel.
Alexey:
* Right, optimistically we should try to salvage Baikal, and if that doesn't work, then we launch a new one.

Tim:
* I like that approach. Does anyone disagree?

Alexey:
* Nah, makes sense.

Tim:
* Okay, so we add the new checks to 1559, try to sync Baikal, and if it works, great, we just update the Baikal spec with the new commit. If it doesn't work, then we will start Calaveras, which will be a copy of Baikal but with the 1559 changes. Nobody please break Baikal in the next couple of hours.

Martin:
* It is not possible to break the first two rules unless you can hack the signer. I'm fairly certain they have already broken rule three or four.


### Test net block confirmation

Tim:
* Okay, let's try that. It's worth it; after the call we can ping him on Discord and ask him directly. Obviously, this invalidates the potential testnet blocks we had for June 9th. I'm curious what the teams feel is the best approach to get to testnets. Do we want to do this and then potentially set the blocks on the next call, if people are willing to look at potential blocks that are a week or two out from then? I don't know what is easier in terms of knowing that these changes work and managing the release cycle for the clients. Do we know enough to set a new block for clients to test?

Martin:
* My five cents is that I still have some work to do. It's going to be pretty hard to get everything done in time, but on the other hand I am personally not too scared about breaking or screwing up the testnets a little bit; we will be fine. Other people may have other opinions.
Alexey:
* We are planning to modify, heavily modify, the transaction pool. I know people would say this is not critical for the testnet, but we still have some work to do there. As a result, if we do go ahead with this testnet, it's likely that the first testnet we fork will not look anything like mainnet running the final code, which will be very different regarding the transaction pool. I don't know if anyone is actually planning to test transaction pool changes. Is there any plan for that, or are we going to just do some unit tests and other things?

### Ropsten Stress Test

Tim:
* One thing that we should do, at the very least on the testnets, is have a period where we spam them and send a very high number of transactions, and make sure that everything works: that the base fee is working and that clients and miners are including transactions. Beyond that, I don't know.

Alexey:
* Do we have tools for this sort of thing? Somebody said, Karim I think, that he is spamming. Do you use some sort of tools that you created? Could they be used on the testnet? Is it easy to interpret the results?

Tim:
* Yes, we have the tool. Basically, we just have to send a very large number of transactions, and I think the only constraint is that we need a large Ether balance on that network. So for Ropsten specifically, I think we would need to find some Ropsten whale.

Alexey:
* A Ropsten whale, no?

Tim:
* I have been looking for one this week, so I will ask him.

Alexey:
* I think he might have some Ropsten Ether.

Tim:
* It's because the base fee increases exponentially; the amount that you are going to burn during this, for one hour, is very high. Aside from that, I don't think we have any tooling in the infrastructure to test more complex things for the transaction pool.
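The exponential growth Tim refers to comes from the EIP-1559 base fee update rule: every completely full block raises the base fee by up to 1/8 (12.5%), so sustained spam compounds quickly. A rough sketch follows; the update rule is from the EIP, while the gas numbers and block count are purely illustrative.

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # per EIP-1559

def next_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    """Move the base fee toward equilibrium by at most 1/8 per block."""
    if gas_used == gas_target:
        return base_fee
    delta = (base_fee * abs(gas_used - gas_target)
             // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    return base_fee + max(delta, 1) if gas_used > gas_target else base_fee - delta

# Spamming completely full blocks (gas_used == 2 * gas_target) compounds the
# base fee by +12.5% per block; over roughly an hour of 15-second blocks:
fee = 10**9  # start at 1 gwei
for _ in range(240):
    fee = next_base_fee(fee, gas_used=30_000_000, gas_target=15_000_000)
# fee is now about 1.125**240 times larger, i.e. trillions of times the start
```

This is why a one-hour spam run needs a whale-sized balance: the burn per block grows geometrically for as long as blocks stay full.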
Alexey:
* Another thing: it would be easier to test something like this on a very constrained network with a very low gas limit, because then you hit the exponential part quicker. At the same time, on public testnets that are heavily used, you don't really want to constrict things too much.

Tim:
* I agree with that. My preference is to do Ropsten and then Goerli, so we can do this spam test on Ropsten, which doesn't have a ton of real-world usage, and then a smaller one on Goerli, which does have actual users. I think we should go from the most artificial to the most used network.

Alexey:
* With this spam test we might price everybody out for whatever the duration is.

Tim:
* For like an hour, yeah. So clearly Geth/Erigon still has some work to do; I don't know how the other teams feel. Thomasz, you mentioned you have some transaction pool work that you are doing. It seems to me like people will probably be much more confident in the state of things two weeks from now. To be realistic, do we want to wait until then and potentially set a block fairly close to the next call? The constraint is that we don't want the difficulty bomb to go off on mainnet, and we want a decent amount of buffer before that. A more concrete way of asking this: if we come to the next call and things are generally good with the clients, are a few days sufficient for clients to put in the fork blocks and put out a release? If on the next call we decide we're forking the testnet in, say, 10 days, can clients have a release out 2-3 days after that, so that we can then advertise those releases and ask people to update? Okay, I am not hearing any objections, so let's do that then.
Let's take the next two weeks to do the changes to the spec and test things on the devnets, and make sure the transaction pool work is done. On the next call, assuming everything goes right, we can pick a block; it doesn't have to be too far out in the future for the first testnet, and teams should probably expect that we will be putting out releases with the fork blocks a couple of days after the next call. Okay, anything else people wanted to discuss for London specifically? Okay, if not, we do have some extra time. Alexey, I know you had something you wanted to talk about regarding gas pricing and denial-of-service protections?

Martin:
* Really quick, just the gas API call. Do you want to mention it?

### Gas API Call #328

Tim:
* Sure, I was going to at the end. This week, we had a discussion with wallet providers about UI changes for 1559 and setting defaults for users. One thing that came up during that call is that all of the wallet providers rely on gas estimation oracles/APIs like ETH Gas Station, and how those implement their predictions for gas prices will matter a lot. So I'm organizing a call next week, next Friday at All Core Devs time, with these gas API providers to discuss the best way to provide these estimates post-1559. The link is shared in the agenda; obviously, anyone who is affected is welcome to join.

Trenton:
* Just send me an email with anyone who would be interested in joining.

Tim:
* Lightclient found a transaction that invalidates the fourth rule on Baikal already, so we will have to restart it. In that case, we will go with Calaveras. I will put the spec together today as soon as the new PR on 1559 has been merged, and we can use that and stand it up next week. Anything else on London? Okay, Alexey, over to you.

### Gas: DoS protection and decoupling worst/average performance


Alexey:
* Okay, I'm going to introduce this topic briefly, just to make sure people are aware of it.
It's not a call to action; it's just for your awareness of what we are planning to experiment with. To explain: when I talk about decoupling worst-case performance and average performance, this is what I mean. The context is essentially the question of the safe gas limit: how is it determined, and what is the correct or good way of determining it? As far as I understand, the current safe gas limit is determined by a couple of things, one of them being a kind of DoS limit: what is the worst run time of the worst-constructed transaction that would consume this entire limit? As we saw in research that was recently published, there used to be some really simple but very potent transactions that could cause very large run times, mostly based on state access, which was underpriced. Now, even if we do reduce the state access cost and add all the other mitigations, the next bottleneck to surface will be the precompiles. The precompiles will be the second target because of how they are currently priced: when we price precompiles, we target a certain number of megagas per second; I don't remember what that number is right now, maybe 25, maybe 40. The repricing of the precompiles actually used this kind of made-up number as the target for the safe gas limit, and different precompiles were computed using different targets. What this means is that there could be a worst-case transaction that targets those precompiles; there are lots and lots of precompiles, and even if you optimize state access, you're going to hit those things. It doesn't mean that we are completely constrained by them; actually, I think we're not.
My idea, which I want to make people aware of and which we are going to experiment with, is to try to stop what I call execution bombs. By analogy, this is a transaction that carries an explosive payload that is really hard to execute. Currently, it does not get stopped anywhere, because transactions are usually not executed on the way to the miner node; the bomb goes straight into the core and just explodes there. Different types of nodes have different implications from this: for example, miners may start mining empty blocks, and other nodes might stop processing things, and so on. The idea I am going to look at is to try to stop those transaction bombs before they reach the core, to form some kind of protective perimeter around the nodes. That implies that the components forming the protective perimeter need to be able to verify, check, or try to figure out whether a particular transaction is actually going to explode. That's the main idea. The way we are going to experiment with this is that, according to our architecture plan, at some point we will split out the transaction pool component behind some interfaces, and we are going to experiment with the transaction pool component actually trying to execute transactions, catching those execution bombs, and defusing them before they reach the core. The reason you have to separate it is that you might want to have multiple transaction pools around a node, so that if one of them is being slowed down by a bomb, the others are still working, and so forth; it's about creating a flexible architecture and flexible deployment. That is the crux of the idea; I just wanted to introduce it and let you know that we are going to experiment with it. Anybody who wants to contribute is obviously welcome.

Lucasz:
* I have a comment.
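A first approximation of the perimeter Alexey describes (speculatively executing a transaction under wall-clock and state-access budgets, and aborting when either is exceeded) might look like the following. This is entirely hypothetical: the interface, names, and limit values are mine, not Erigon's.

```python
import time

class ExecutionBomb(Exception):
    """Raised when a speculative run exceeds the perimeter's budgets."""

def prescreen(execute, tx, time_budget_s=0.05, state_access_budget=10_000):
    """Speculatively run `execute(tx, meter)` before admitting tx to the pool.
    The interpreter is expected to call `meter()` on every state access; we
    abort on either a wall-clock timeout or too many state touches."""
    deadline = time.monotonic() + time_budget_s
    accesses = 0

    def meter():
        nonlocal accesses
        accesses += 1
        if accesses > state_access_budget or time.monotonic() > deadline:
            raise ExecutionBomb("aborting speculative execution")

    execute(tx, meter)  # raises ExecutionBomb if tx exceeds the budgets
```

A benign transaction passes through untouched, while a bomb that loops over state gets defused in the pool instead of exploding in the core. As raised later in the discussion, the hard part is choosing budgets that do not also catch legitimately heavy, non-malicious transactions.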
If you are protecting transaction pools, you are not really protecting the chain much, because someone can just mine this block and put it directly on the chain.

Alexey:
* I didn't go into details on that, but if you go deeper you notice there are two cases to think about. The first case is an attack performed by a non-miner, who simply puts the transactions into the pool; the second type of attack is made by a miner itself. You need to look at them differently, and they might need different protections, because I think miners are generally not incentivized to bomb the other miners unless they are in the majority, whereas people who are not mining might have completely different incentives.

Dankrad:
* Isn't gas our protection against these DoS attacks?

Alexey:
* Yes, however gas is a very blunt instrument unless we sort it out. We actually do need to modify gas costs quite frequently, and a lot of the time we have to keep the real reason for a gas modification secret, because we cannot disclose that it was prompted by some kind of vulnerability. So either you have to change it in a rush, as was done in 2016, or you keep it a secret, maybe an open secret, and try to introduce the change under a different pretext. What I'm suggesting does not negate the role of gas as the protection; it creates an additional layer of protection which allows us to be a bit more relaxed about those vulnerabilities. If they are found, we can fix them in good time, and we can be much more professional about it, rather than keeping secrets and so on.

Dankrad:
* I feel it would be very lucky if you can find some sort of metrics that work well. The concern is: what about the transactions that are caught by your metrics but are not malicious, and just happen to have very high resource consumption?

Alexey:
* I would like to hear from the couple of other people who raised their hands, if you don't mind, and then we can return to this discussion.

Martin:
* I think it is an interesting idea and worth pursuing. I don't think it changes the threat model, because I do think that miners have an incentive to bomb other miners: once you start building on top of a block it is hard to just throw it away and try another one. Once you have spent a minute importing that block, why not build upon it? So it doesn't change anything intrinsically, but it will be interesting to see what you come up with.

Mikhail:
* I was going to ask what metrics you have thought of. The first obvious one is the execution time of the transaction, but it is too subjective. What else could we use? CPU cycles could be difficult to measure per transaction. What are your thoughts on this?

Alexey:
* My thoughts go very far on this. In a first approximation we could simply use some kind of timeout and some other limits: when we execute transactions inside the perimeter, we can apply physical constraints, such as how long a transaction is allowed to run and how much state access it is performing, and abort it once it is past these limits. In a second iteration, which might come later, I want to try running some preprocessing on the transaction, to try to figure things out before even running it on a specific state.
The goal is to predict whether it is going to hit a lot of state, for example. That would catch the DoS attacks that were published in the paper, and lots of others, including the ones we found ourselves that were specifically targeting Erigon. All the classes of attack I could think of could be eliminated by static analysis and abstract interpretation, but obviously that is a bit further away, in iteration 2 or 3 of this project.

Mikhail:
* I was asking because after the merge we will be operating in a time-restricted environment, where this probably matters more than it does today. It could be a simple but effective protection for block proposers, who would see that a block they are about to publish takes a lot of time to execute.

Alexey:
* Yes, and you just reminded me of another connection I made yesterday. I think this goes in a similar direction to another trend going on right now, which is MEV and Flashbots trying to democratize the MEV system. Apparently the way it works is that independent Flashbots searchers run essentially their own transaction pools to construct bundles, so the separation of transaction pools from the core has already happened; it is already happening in the Flashbots/MEV world, and I think it is only natural to follow in that direction as well.
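The first-iteration screening described above, running a candidate transaction inside the perimeter under hard resource limits, could be sketched like this. The limit values and the callback interface are hypothetical, purely for illustration:

```python
import time

class BombSuspected(Exception):
    """Raised when a transaction exceeds the perimeter's resource budget."""

# Hypothetical limits for the "protective perimeter": wall-clock time
# and number of state accesses per transaction.
MAX_EXEC_SECONDS = 0.5
MAX_STATE_ACCESSES = 10_000

def screen_transaction(execute_fn):
    """Run a candidate transaction under resource limits before admitting
    it to the pool. `execute_fn` performs the execution and reports each
    state access via the callback it is given."""
    accesses = 0
    start = time.monotonic()

    def on_state_access():
        nonlocal accesses
        accesses += 1
        if accesses > MAX_STATE_ACCESSES:
            raise BombSuspected("too many state accesses")
        if time.monotonic() - start > MAX_EXEC_SECONDS:
            raise BombSuspected("execution timed out")

    try:
        execute_fn(on_state_access)
        return True          # admit to the pool
    except BombSuspected:
        return False         # "defuse": drop before it reaches the core

# A benign transaction touches little state...
assert screen_transaction(lambda cb: [cb() for _ in range(3)]) is True
# ...while a "bomb" hammering state gets intercepted.
assert screen_transaction(lambda cb: [cb() for _ in range(20_000)]) is False
```

A real pool would run this ahead of admission, so a bomb slows down one perimeter pool rather than the core, which is exactly the deployment flexibility the multiple-pool design aims for.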
I also think that these special checks for transactions entering the pools will increase the latency of transaction propagation, especially for transactions that are a bit strange, where it is hard to figure out what they do; it obviously takes time as they hop through those perimeters. But that is okay: straightforward transactions will go through very quickly, because you can simply see that they are fine, while the strange ones will move through the pool more slowly and reach the miners later. And if a miner wants to take the risk of including such things, it can simply do so via Flashbots. So it furthers the idea of two lanes: a fast lane where people take risks, and a slower lane where the public lives, with all of these protections where the bombs get intercepted. That is the vision of the future I have when considering this problem together with Flashbots. Thanks for listening, and I appreciate the feedback so far.

### EIP 3584

11crypt:
* Hey guys, I have co-written this draft EIP with Piper. It builds on the transaction-level access lists which clients will be generating for EIP-2930; what we are doing is collecting these access lists at the block level. We want to present the EIP and get some feedback; this is not something we want to put on the table for the next fork as of now. The EIP says that the block-level access list is sorted by accessed address, and for each address it records which transaction numbers accessed it and, for each storage slot consumed, the indices of the transactions that touched that slot.
That is the sort of access list we are trying to build at the block level, together with a construction for serializing it. For this list to have any meaning as an index into the block, so that miners or people who want to validate blocks can do optimizations (you can look at the block-level list, see what is accessed, and create computation chains over the transactions; there are a couple of ways the access list can be used, and any of them can be valid), we are saying that a hash of the access list needs to be included. For that we have to have a canonical definition of how to serialize the access list and how to hash it. In our proposal it can be something straightforward: serialize it in the normal way, with RLP encoding, and hash it with keccak-256. But really we are specifying two things, the construction for hashing and the serialization format. The serialization format could also be SSZ, and the hashing could also be a Merkle tree; either is a valid way to go. The point is that once we have a canonical hash, we have a path to evolving the use of these access lists over time without changing anything in the block-level list itself.

Piper:
* Yeah, so at a high level the details of serialization and hashing are up for grabs. The general thing we are going to propose is putting this in a subsequent hard fork: getting a new field into the header that represents the canonical hash of this access list, so that there is a mechanism for us to begin experimenting with block witnesses at a verifiable level. That is the general gist. Not the upcoming hard fork; I would be looking at adding this after that hard fork, to get a new header field for it.
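A toy sketch of the construction under discussion: fold per-transaction, EIP-2930-style access lists into one deterministically ordered block-level list, then hash a canonical serialization. The JSON-plus-SHA-256 encoding here is only a stand-in, since the draft deliberately leaves RLP vs SSZ and keccak vs a Merkle root open:

```python
import hashlib
import json

def canonical_block_access_list(per_tx_lists):
    """Merge per-transaction access lists (address -> storage slots) into
    one block-level structure: address -> slot -> sorted tx indices."""
    merged = {}
    for tx_index, access_list in enumerate(per_tx_lists):
        for address, slots in access_list:
            for slot in slots:
                merged.setdefault(address, {}).setdefault(slot, set()).add(tx_index)
    # Deterministic ordering is what makes the hash canonical.
    return [
        (addr, [(slot, sorted(merged[addr][slot])) for slot in sorted(merged[addr])])
        for addr in sorted(merged)
    ]

def hash_access_list(block_access_list):
    # Stand-in serialization and hash; any agreed canonical encoding works.
    payload = json.dumps(block_access_list, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

txs = [
    [("0xaa", ["0x01", "0x02"])],              # tx 0
    [("0xaa", ["0x01"]), ("0xbb", ["0x03"])],  # tx 1
]
bal = canonical_block_access_list(txs)
assert bal[0] == ("0xaa", [("0x01", [0, 1]), ("0x02", [0])])
# Reordering inputs inside each transaction yields the same hash:
txs2 = [[("0xaa", ["0x02", "0x01"])], [("0xbb", ["0x03"]), ("0xaa", ["0x01"])]]
assert hash_access_list(bal) == hash_access_list(canonical_block_access_list(txs2))
```

Because the ordering is canonical, any party can recompute the same 32-byte commitment from the same accesses, which is what would let a header field commit to the list.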
Anyone have feedback on this?

Martin:
* So if I understand correctly, it is not just about taking the access lists and collecting them into a big heap; it is about taking the access lists that were generated during execution, is that correct?

Piper:
* That is correct, yes. The reference to 2930 is that we already have an access list format.

Martin:
* My concern is with something that is not explicitly spelled out: you want to use this in some way as verification for witnesses. What I think is problematic is that under the rules of 2929, if execution calls into a scope, that scope accesses something and then something else, and then it reverts, those access-list touches become undone; they do not leave a footprint on the global access list. But if you want to execute this statelessly and use it to generate global things, then as soon as you enter that scope which will revert, you find yourself making an access for which you do not have the data, because it was not present in the global list. It becomes kind of hard to execute from that point on: even though you can infer that, yes, this probably reverts, you cannot really verify it, because you do not have all the data. I think that might be a problem with this.

Piper:
* So maybe there is a miscommunication here. In my mental model of this construction, storage slots that were accessed in call frames that ended up reverting would need to be included in the access list.

Martin:
* Ah, but then you are talking about either adding a new type of global access list or modifying the existing one, which means this cannot simply reuse the existing 2929 framework.

Piper:
* Got it. That was not clear to me, so that is something we will have to dig into. How hard of a blocker is that for you? Is it significant?

Martin:
* It is actually significant, and there is a reason we made it this way. It would have been simpler to record any access attempt, even one that runs out of gas, and just let it sit there, instead of keeping this journal. But then you could call into something (having already paid for the call), which tries to touch something else costing 2600, does not have the gas for it, maybe only 100, and reverts. Now the call you made succeeded, you only paid the 100 gas and not the extra 2600 for the access that failed, but somehow that other thing still ends up in the access list. That would have been a back door for putting anything into the access list cheaply.

Piper:
* Got it. So the naive approach would be to have separate tracking for this; anyways, that is just me solutioning off the top of my head. I can see that this does not easily piggy-back on the existing framework, so thanks for letting me know.

Micah:
* Do we need to add this to the block header, or can we just add a key under the state root, essentially?

Piper:
* I'm not following what you mean by that.

Martin:
* I think Micah is suggesting that we just put this thing under the state root. Is that what you meant?

Piper:
* To be clear, I am not suggesting we add a big payload of data to the block header. I am suggesting we add a 32-byte hash of some sort to the block header, representing a canonical serialized form of the access list, so that out of band somebody could receive an access list, or a witness from which they construct an access list, and then verify that yes, this is the access list for a specific block. Otherwise you have griefing vectors as soon as you start trying to build anything that really matters off of witnesses, because right now you cannot verify a witness until you actually do the execution.
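Martin's journaling point, that warm-access entries recorded inside a reverting call frame are rolled back and leave no footprint, can be illustrated with a minimal sketch (the class and method names here are invented for illustration, not any client's actual API):

```python
class JournaledAccessSet:
    """Warm-access tracking in the spirit of EIP-2929: accesses made
    inside a call frame that reverts are rolled back."""

    def __init__(self):
        self.warm = set()
        self.journal = []          # stack of per-frame checkpoints

    def checkpoint(self):
        self.journal.append(set())

    def touch(self, key):
        if key not in self.warm:
            self.warm.add(key)
            if self.journal:
                self.journal[-1].add(key)

    def commit(self):
        added = self.journal.pop()
        if self.journal:           # fold into the parent frame's journal
            self.journal[-1] |= added

    def revert(self):
        self.warm -= self.journal.pop()

acc = JournaledAccessSet()
acc.checkpoint()                   # outer call frame
acc.touch("A")
acc.checkpoint()                   # inner frame that will revert
acc.touch("B")
acc.revert()                       # inner frame reverts...
assert "B" not in acc.warm         # ...so its touch is undone
assert "A" in acc.warm             # the outer frame's touch survives
```

The rollback is what closes the cheap back door Martin describes: a failed, partially-paid access cannot leave a warm entry behind, which is also why a block-level list that must keep reverted accesses cannot simply reuse this structure.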
There is also a benefit to existing clients right now: if access lists become something that is circulated independently, clients can prefetch them to pre-load state and speed up the execution of blocks.

Alexey:
* I have a comment on this, but it is a bit more general. With the advent of access lists we got different rules for computing gas costs for different operations, which I never really liked, and I have been thinking about how I would fix this; I will start writing it up soon. We are trying to introduce something called TEVM, which is basically a version of the EVM that would not have those things: the self-destruct list, the access list, and all the other structures that sit on top. Essentially, if you look at the EVM now, there is a clean core EVM with a certain set of resources: it has the stack, it has memory, it has I/O and storage operations, and that is fine; those resources are manipulated by opcodes. But on top of that you have ever-growing ephemeral structures which also modify the behavior and are not really treated as resources of the EVM: for example the access list, the self-destruct list, now a proposal for another one, and all the different caches of state which affect the cost of SSTORE. My idea is to propose a modification of the EVM, in the form of TEVM, which brings those ephemeral structures into the light: it assigns them a specific resource, say an associative memory with proper opcodes to operate on it, rather than building them as an add-on to the logic, so that they are cleanly implemented in the virtual machine. I know it is not a small project, but I do not like those add-ons. The more add-ons we bring in, the less cleanly specified it all becomes.
I noted in some of my talks last year that most consensus issues actually happen in these add-ons, not in the EVM itself, because the add-ons are harder to specify.

Piper:
* I don't think this has any effect on execution. There is no proposal in here that would modify execution in any way. This would be the result of the execution; it is more like metadata about execution.

Alexey:
* So if it does not modify execution, then Martin's comments about backdooring the access list do not apply?

Piper:
* I think Martin's argument was that our proposal assumed a framework that was already in place in clients, and what Martin pointed out is that what clients are doing is not sufficient for what we want, so we cannot piggyback on top of it.

Alexey:
* In that case I agree: if what you are suggesting does not in any way affect the behavior of the EVM, then of course it is not an add-on; it is simply something else that you are adding to the consensus fields.

Piper:
* Yes, it is more akin to the bloom filter.

Micah:
* The bloom filter does affect the EVM, in the sense that it is during EVM execution that you have to record that metadata. Your EVM module is where you gather the data, and I think that is what Alexey is getting at: those types of add-ons add work to EVM execution, and we are getting a lot of complexity issues in the clients from them.

Piper:
* Correct, but this is pure overhead: it does not affect gas prices, it does not affect execution. It is pure record-keeping during execution; afterward you serialize it, hash it, and stick it in the header, similar to how you record bloom filter entries and stick them in the header. There was a question in the chat about "why", but I lost it.

Micah:
* It was probably mine; I asked just before I disconnected.
I was asking why not include this in the state root: have a well-defined path that everyone knows, so that if you go to that path under the state root you find the access list. Is there a situation where we would want to validate the access list without the ability to validate the state root?

Piper:
* Yes: when you are fetching state on demand and you do not have that state. There is maybe an argument that says if you can fetch data on demand then you can fetch state on demand. But suppose someone hands you a witness and you want to verify that they did not give you the wrong witness, or extra data, or that they are not griefing you in some way. Having the access list is a mechanism for verifying that, without binding the protocol to the witness format itself. That is kind of why we do it as an access list.

Micah:
* When that person hands you the witness bundle, wouldn't it include a proof of the access list along with all the proofs of the state they are giving you?

Piper:
* I will say that my preference for the header is mere convenience, and I recognize that changing the block header is complex, so I am willing to fold that question in with whether we use SSZ or a tree to hash the access list; I am not married to one or the other. What I am more curious about is whether I am going to get resistance to this because we do not have statelessness yet. Does someone want to make the argument that we should not do this because we cannot use it yet? My argument is that we need to start using this stuff so that we can start understanding how we pass witnesses around, and start being able to build things ahead of formal protocol support for statelessness.

Tim:
* We just have a minute left, so I guess it makes sense to move this conversation elsewhere.

Piper:
* Totally, we do not have to resolve this.
I am glad I got to put this out for everyone, and the link is out there, so let me know what you think.

Micah:
* What channel?

Piper:
* Witness is a good one, and R&D.

Tim:
* Cool. Just to summarize the London items for the viewers, in case that was not in the recording: we are updating the EIP-1559 spec, and we are going to start the new devnet, Calaveras, which will have the fix for 1559. The focus over the next two weeks is for the client teams to finish their implementations, especially regarding the transaction pool, and then depending on how that goes we can figure out the public testnets and when we want to fork them. That's about it. Thanks everyone for joining, and I will see you in two weeks.

-------------------------------------------
## Attendees
- Martin Holst Swende
- Micah Zoltu
- Tim Beiko
- Rai
- Trenton Van Epps
- Karim T.
- Gary Schulte
- Piper Merriam
- ECH-Pooja
- Lightclient
- Ansgar Dietrichs
- Mikhail Kalinin
- o_O
- Sam Wilson
- Marek Moraczynski
- Artem V.
- Eugene D.
- Sunce86
- Alex Vlasov
- Tomasz S.
- Dankrad Feist
- Lucasz R.
- jochen
- Boris Petrov
- Alexey A.
- 11crypt
- Pamel B.
- Peter S.
- Vitalik

---------------------------------------
## Next Meeting
April 30, 2021

diff --git a/Fee-Market/mainnet-readiness.md b/Fee-Market/mainnet-readiness.md
index 1cde4184..1976fc8b 100644
--- a/Fee-Market/mainnet-readiness.md
+++ b/Fee-Market/mainnet-readiness.md
@@ -1,27 +1,17 @@
 # EIP-1559 Mainnet Readiness Checklist
-This document is meant to capture various tasks that need to be completed before EIP-1559 is ready to be considered for mainnet deployement. This list is a work in progress and tries to aggregate known requirements. More things may be added in the future and checking every box is not a guarantee of mainnet deployement.
-
-Tasks that are normally part of the "AllCoreDevs process" are not listed.
In other words, this list is what should ideally be done _before_ moving EIP-1559 through the regular network upgrade process. This list is not exhaustive. A full list of 1559 resources is available [here](https://hackmd.io/@timbeiko/1559-resources).
+This document was originally meant to capture various tasks that need to be completed before EIP-1559 is ready to be considered for mainnet deployment. EIP-1559 is now included in the London upgrade, so this document should serve as a historical reference. Sections may be updated if they are still the main source of truth for the efforts.

## Implementation

### Client Implementation Status

-- [ ] **Geth**
-  - [WIP implementation led by Vulcanize](https://github.com/vulcanize/go-ethereum/tree/1559_test)
-- [ ] **Besu**
-  - [WIP implementation](https://github.com/hyperledger/besu/labels/EIP-1559)
-- [ ] **Nethermind**
-  - [WIP implementation](https://github.com/NethermindEth/nethermind/pull/2341)
-- [ ] **Open Ethereum**
-  - ⭐️ [Hiring an implementer](https://boards.greenhouse.io/gnosis/jobs/4978262002?t=addc4e802) ⭐️
-- [ ] **TurboGeth**
-  - N/A
+
+- See the [London specification](https://github.com/ethereum/eth1.0-specs/blob/master/network-upgrades/mainnet-upgrades/london.md#client-readiness-checklist).

### Client-level Open Issues

-- [ ] DoS risk on the Ethereum mainnet
-  - Discussed in the [AllCoreDevs call #77](https://github.com/ethereum/pm/blob/master/All%20Core%20Devs%20Meetings/Meeting%2077.md#eip-1559) and [#97](https://github.com/ethereum/pm/pull/214/files?short_path=4d89329#diff-4d893291250cf226c77e67ad708be6f2) EIP-1559's elastic block size effectively doubles the potential effect of a DoS attack on mainnet. Solutions to this are outside the scope of this EIP and include things like [snapshot sync](https://blog.ethereum.org/2020/07/17/ask-about-geth-snapshot-acceleration/) and [EIP-2929](https://eips.ethereum.org/EIPS/eip-2929).
+- [x] DoS risk on the Ethereum mainnet + - Discussed in the [AllCoreDevs call #77](https://github.com/ethereum/pm/blob/master/All%20Core%20Devs%20Meetings/Meeting%2077.md#eip-1559) and [#97](https://github.com/ethereum/pm/pull/214/files?short_path=4d89329#diff-4d893291250cf226c77e67ad708be6f2) EIP-1559's elastic block size effectively doubles the potential effect of a DoS attack on mainnet. Solutions to this are outside the scope of this EIP and include things like [snapshot sync](https://blog.ethereum.org/2020/07/17/ask-about-geth-snapshot-acceleration/) and [EIP-2929](https://eips.ethereum.org/EIPS/eip-2929), which was deployed in Berlin. - [Write up](https://notes.ethereum.org/@vbuterin/eip_1559_spikes) by Vitalik about why this is perhaps solved once EIP-2929 is live. - Because EIP-1559's `BASE FEE` rises based on block gas utilisation, a DoS on the network would either have an exponentially increasing cost (assuming no collaboration from miners and other transactions are allowed to go through), compared to a constant cost today, or it would be costly to miners (who would need to pay the `BASE FEE` or censor the chain long enough to drop it to 0), compared to effectively free today. - [X] Performance overhead for clients @@ -30,6 +20,7 @@ Tasks that are normally part of the "AllCoreDevs process" are not listed. In oth - [Large State Performance Testing](https://hackmd.io/@timbeiko/1559-perf-test) - [X] Transaction Pool Management - Good approaches to transaction pool management have been put forward. [First write up](https://hackmd.io/@adietrichs/1559-transaction-sorting), [Second write up](https://hackmd.io/@adietrichs/1559-transaction-sorting-part2). 
+ - [Alternative approach suggested by @zsfelfoldi](https://gist.github.com/zsfelfoldi/9607ad248707a925b701f49787904fd6) - [x] Transaction Encoding/Decoding - EIP-1559 transactions will be encoded using [EIP-2718](https://eips.ethereum.org/EIPS/eip-2718), by adding 1559-style transactions as a new type of transaction. - [X] Legacy transaction management in transaction pool @@ -43,26 +34,21 @@ Tasks that are normally part of the "AllCoreDevs process" are not listed. In oth ### Testing -#### EIPs & Reference Tests - -- [ ] Reference / Consensus Tests - - See https://github.com/ethereum/tests/issues/789 -- [ ] EIPs that return block or transaction data need to be updated to support EIP-1559/2718 style transactions, specifically: - - [ ] `eth_getTransactionByBlockNumberAndIndex` - - [ ] `eth_getTransactionByBlockHashAndIndex` - - [ ] `eth_getTransactionByHash` - - [ ] `eth_getTransactionReceipt` - - [ ] `eth_getUncleByBlockNumberAndIndex` - - [ ] `eth_getBlockByHash` ([EIP-3041](https://eips.ethereum.org/EIPS/eip-3041)) - - [ ] `eth_getBlockByNumber` ([EIP-3044](https://eips.ethereum.org/EIPS/eip-3044)) - - [ ] `eth_getUncleByBlockHashAndIndex` ([EIP-3045](https://eips.ethereum.org/EIPS/eip-3045)) - - [ ] `eth_getUncleByBlockNumberAndIndex` ([EIP-3046](https://eips.ethereum.org/EIPS/eip-3046)) +#### Reference Tests + +- [x] Reference / Consensus Tests + - In progress, see https://github.com/ethereum/tests/issues/789 + +#### JSON RPC Support + +- [x] EIPs that return block or transaction data need to be updated to support EIP-1559/2718 style transactions. Updates are being made to the JSON RPC specification [here](https://github.com/ethereum/eth1.0-specs/pull/47). 
#### Community testing

-- [ ] JSON-RPC or equivalent commands that applications and tooling can use to interact with EIP-1559
+- [x] JSON-RPC or equivalent commands that applications and tooling can use to interact with EIP-1559
 - [x] [EIP-1559 Toolbox](http://eip1559-tx.ops.pegasys.tech/)
-- [ ] Public testnet that applications and tooling can use to test EIP-1559.
+- [x] Public testnet that applications and tooling can use to test EIP-1559.
+  - [x] See the [Baikal devnet](https://github.com/ethereum/eth1.0-specs/blob/master/network-upgrades/client-integration-testnets/baikal.md)

### Testnets

diff --git a/Meeting 113.md b/Meeting 113.md
new file mode 100644
index 00000000..db00b62e
--- /dev/null
+++ b/Meeting 113.md
@@ -0,0 +1,640 @@
# All Core Devs Meeting 113
### Meeting Date/Time: May 14th, 2021, 14:00 UTC
### Meeting Duration: 90 mins
### [GitHub Agenda](https://github.com/ethereum/pm/issues/309)
### [Audio/Video of the meeting](https://youtu.be/H_T2nNrTuWQ)
### Moderator: Tim Beiko
### Notes: Kenneth Luster

## Decisions Made
| Decision Item | Description | Video ref |
| ------------- | ----------- | --------- |
| **1** | Introduction / Musical Intro Presentation | [00:00](https://youtu.be/H_T2nNrTuWQ) |
| **2** | Start and Baikal discussion | [8:40](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=520s) |
| **3** | EIP-3541 discussion | [13:10](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=587s) |
| **4** | EIP-3554 discussion | [14:08](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=848s) |
| **5** | JSON-RPC naming convention discussion | [17:52](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=1072s) |
| **6** | Block number discussion | [31:00](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=1860s) |
| **7** | Speeding up transactions by clients/wallets | [43:42](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=2622s) |
| **8** | Merge and Rayonism update | [53:26](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=3206s) |
| **9** | 1559 UI call announcement | [57:32](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=3452s) |
| **10** | Core dev apprenticeship program | [59:03](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=3543s) |

**Tim Beiko**
And we are live, so good morning or evening, everybody. Welcome to All Core Devs #113. We have mostly London stuff on the agenda today; there has been a lot of work on that over the past couple of weeks. No background today; I guess I can kind of blur it, or put up my Ethereum background if people prefer that. Cool, okay. So, for London, first thing: every team I think was syncing to Baikal this week, which is the new devnet. I don't know if someone wants to give a quick summary of where things are at with the network.

**Marek Moraczynski**
I can give a Baikal status. At the moment we have five nodes: two Geth, two Nethermind, and one Besu, and they are all in sync. As far as I know TurboGeth is in sync too; I am not sure about Open Ethereum. In Nethermind we implemented the EIPs; the latest 1559 spec update did not need changes on our side, and EIP-3541 is in. All clients seem to be working fine, but it would be good to test the network in the same way that Jochen from the EthereumJS team tested the previous one, so you should all feel free to do that. That is all, I think.

**Tim Beiko**
Yeah. Is anyone from Open Ethereum on the call to give a quick update on where they are? I thought I saw that they posted a boot node. Anyone from the team want to share where you are at?

**Dusan**
Yeah, we have updated the issue; we are still missing the last of the three EIP implementations for Baikal, so we are not able to sync at the moment.

**Tim Beiko**
Okay, so this is the last one you have to implement.

**Dusan**
It is.

**Tim Beiko**
Okay, got it. So, what do people feel makes sense in terms of next steps for Baikal?
My personal preference is to keep it up and running basically until the fork, and the reason for that is that it gives tooling and infrastructure providers a network that is already up, if they want to play with 1559 or things like that.

**Tim Beiko**
Does anyone disagree with that? Do people think there are other things we should do with the network?

**Martin Swende**
I think it sounds good. I don't know how many transactions have been sent over it; I personally have not done anything. It would be good to keep it up so other people can experiment more with their code, and with going up and down on the gas limit. There were some changes made to the 1559 spec regarding the mechanics of how the gas limit can vary up and down, so it would be good if that also got tested; I'm not sure whether that has been covered, and I suspect not.

**Tim Beiko**
Got it, I agree that makes sense. We built a tool that we could use to spam the networks when we were developing 1559, and I suspect we should be able to use that on Baikal as well, assuming there is an address with enough ETH. So in general: keep the network up, obviously give Open Ethereum the time to sync up to it, have both manual transactions and people playing around with it, and try to test the limits of the gas limit moving up and down. That seems reasonable. Anything else on Baikal? Okay, next up on the agenda I had EIP-3541, the EIP by Axic which has been implemented on Baikal. We didn't want to make a decision about inclusion in London last time, because it was the first time it was brought up on the call. I'm curious how people feel about including it in London now; it seems everybody has it implemented. Any thoughts, objections, support?

**Martin Swende**
I'm in support.

**Artem Vorotnikov**
Let's include it.

**Tim Beiko**
Cool. Anyone disagree with that? Okay, I feel much better, because when we take stuff out at the last minute that is usually a bit risky. So let's include 3541 in London; I'll update the spec right after this call.
Similarly, two calls ago I think we agreed to move the difficulty bomb back to roughly December 1st, rather than Q2 as originally proposed in EIP-3238. James has been working on an alternative EIP, 3554, which pushes back the difficulty bomb so that the first increase would happen around December 7th, I believe you said, James?

**James**
Yep.

**Tim Beiko**
Do you want to take a minute to walk us through it? I know you worked on some back-tests for it, to make sure it lined up right.

**James**
Yeah, there is a script in the EIP itself that you can run to check this. It looks at the difficulty adjustment coefficient based on the current period: what the bomb would add, which pushes up the difficulty so that block times increase. I went back and looked at the last three times we first saw the difficulty bomb go off, and all of them were right as this ratio hit 0.1. If we use 9,700,000, then 0.1 is reached on December 7th, which is when the period (the bomb steps every 100,000 blocks) next switches over. So it looks pretty good; I don't know if anyone else looked at it, but I went down as many avenues as I could think of to double-check, so at this point I am pretty confident about it. The only risk is that if the difficulty on the network changes significantly, the point where that 0.1 ratio is hit could come earlier or later.

**Tim Beiko**
Yeah.
I just eyeballed the numbers: the current delay we added in Muir Glacier, where the bomb was going to go off soon, and basically we're adding an extra 700,000 blocks to it, which is roughly four months, so July plus four months was December. That was my very low-tech way of eyeballing it. I think I saw Geth already has a PR open for this?

**Martin Swende**
Yeah. We actually merged the original number, and we have a PR for the second, so we have 9.5, but an open PR for 9.7.

**Tim Beiko**
Okay, does anyone else have thoughts about this? Sorry, there was a comment by James in the chat that July plus four months is November, but the bomb was going off at the end of July, not the beginning, so it's basically the end of November, not November 1st. Cool, is everyone okay with moving this into London, and updating the spec to have 3554 instead of 3238? No objections?

**Martin Swende**
Yes.

**Tim Beiko**
Yep, last call. Okay, so that's settled; this is going very quickly. Now to something I think might take a little more time on the call: JSON RPC naming. I was hoping we could resolve this async, but it seems like it's an impossible problem. Basically...

**James**
Tim, can I say one thing about the bomb first?

**Tim Beiko**
Yep, go for it.

**James**
I think if there's some way to schedule this, we should come back in two months and have someone rerun those numbers to check that the ratio doesn't change at all. So, like four or five All Core Devs from now.

**Tim Beiko**
Yes, I will absolutely do that.

**James**
Sweet.

**Tim Beiko**
Yeah, good call, cool.
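As a rough illustration of the mechanism James is back-testing, the bomb delay can be sketched as below. This is a hedged sketch, not the script from the EIP: the 9,700,000 offset and 100,000-block period come from EIP-3554 and its predecessor delay EIPs, but the function names and the exact definition of the "ratio" here are assumptions.

```python
# Hedged sketch of the difficulty bomb's additive term, using the
# "fake block number" delay mechanism (EIP-649/2384/3554 style).
# Constants are from the EIPs; bomb_ratio is an illustrative guess at
# the kind of ratio discussed on the call, not the EIP's exact script.

EIP_3554_DELAY = 9_700_000   # blocks subtracted before the bomb is computed
PERIOD = 100_000             # the bomb exponent grows once per period ("epoch")

def bomb_term(block_number: int, delay: int = EIP_3554_DELAY) -> int:
    """Additive difficulty contribution of the bomb at a given block."""
    fake_block = max(block_number - delay, 0)
    exponent = fake_block // PERIOD - 2
    return 2 ** exponent if exponent >= 0 else 0

def bomb_ratio(block_number: int, block_difficulty: int) -> float:
    """Bomb term relative to block difficulty: one plausible reading of
    the 0.1 'ratio' mentioned above (the real script may define it differently)."""
    return bomb_term(block_number) / block_difficulty
```

The key property is that the bomb term is exactly zero until the fake block number clears the delay, and then doubles every 100,000 blocks, which is why a fixed delay translates so directly into a target calendar date.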
JSON RPC naming. I'll try to summarize where things are at, and hopefully we can come to a decision on it now. The main reason it would be really good to decide now is that we're building these testnets for infrastructure providers, and the naming of the fields is the main thing blocking people from playing around with this. Obviously they could support it and then change names in the future, but that's kind of a bad experience. So, I think it was two weeks ago that the Geth team put up a gist about the JSON RPC renaming and the header field renaming, and there was pretty quick consensus on how we would rename the headers.
But for the JSON RPC, the argument from Geth was that we should use variable names shorter than the ones in the EIP, so the EIP uses max priority fee per gas and max fee per gas, and instead use something aligned with the other naming conventions in the JSON RPC. The two that were proposed were gas tip cap and gas fee cap, which obviously align with gas limit, gas used, gas price. Then we had this long conversation on Discord with a vote, and it seemed people liked base fee per gas to specify the base fee, priority fee per gas for the priority fee, and fee cap per gas for the fee cap.
One problem with that is that priority fee per gas doesn't make it clear it's a maximum value. It's not actually the value that you pay, but the maximum that you're willing to pay, so the obvious suggestion there is to change it to max priority fee per gas. But then you're basically back to a spot where two of the three terms have the same name as the EIP.
At that point it would be weird to not just switch back to using the EIP names, though obviously Geth's suggestion was moving away from the terms of the EIP. The one concern people seemed to have with Geth's suggestion, or at least the biggest one, was that people didn't like the fee term; instead of that we could use gas price cap. One challenge with gas price cap is that it's obviously very close to gas price and might be more error prone. People also don't like the tip term, and an easy fix there is gas priority cap. So that's kind of where things are at. If people have opinions or thoughts, this is the time.

**Martin Swende**
Yeah, sorry for asking this right when you've summarized everything, but is there anywhere a concrete summary of what the most recent proposal is?

**Tim Beiko**
I just posted it on GitHub. I added a comment to Peter's gist yesterday to summarize it. As I understand it, not everybody agrees on this obviously, but base fee per gas seems universally agreed upon, and the two that I think could work for the other fields would be gas price cap and gas priority cap.

**LightClient**
So, I don't know, it's kind of hard to really bikeshed the specifics of the naming just over voice.

**Tim Beiko**
It's kind of hard with text as well.

**LightClient**
Yeah. I personally prefer to not have the per gas postfix; I'd rather have the gas prefix and then the description. I think that lends itself to shorter names, and it's similar to how we already describe the gas price.

**Martin Swende**
I was leaning towards that earlier, on the reasoning that gas price also means per gas, but then I read what Micah wrote.
Yeah, it's a different thing, because with gas price it's kind of obvious from the connotations of price that it's the price you pay per unit, whereas for the others I think it's actually clearer with a per gas suffix than with a gas prefix. So I'm personally more in favor of per gas, as it is more explicit.

**Artem Vorotnikov**
I'm sorry, so this is just about the naming right now?

**Martin Swende**
Yes.

**Tim Beiko**
Yes. It might seem like a waste of time, but we tried to resolve it async.

**Artem Vorotnikov**
But I think nobody gives a shit.

**Martin Swende**
I think you are wrong there, there are people.

**James**
You are very wrong.

**Tim Beiko**
My experience is that people have pretty strong opinions about it, and it's hard to get consensus on it async.

**Martin Swende**
Yeah. The thing to bear in mind is that we make this choice once, and it's going to be a pain in the ass to change it later. If we make a bad choice it means the UX is going to suck, it's going to be confusing, and people are going to shoot themselves in the foot: they're going to not understand that this is actually not the total value, that the cost for them is going to be multiplied by 25,000 because it's per gas and not an absolute. If we can avoid that, I do think it's important.

**Tim Beiko**
Yeah, and I think Micah was the one who mentioned on Discord that a lot of apps will just pass these parameters through to their users; they'll take whatever's in the JSON RPC and literally expose it. So yeah, I agree, if we can have names that are more descriptive, that probably makes sense.

**Rai**
Matt, did you have another reason to prefer the gas prefixed ones, other than the shortness and consistency, I guess?

**LightClient**
No, I think those are the main reasons. If we start doing max priority fee per gas, now 50% or more of the name is just setting up what this even means: you're saying max, you're saying per gas, you're saying priority fee. Whereas you could just say gas fee cap or gas tip cap, and I find that easier to reason about. And I don't really agree with Micah's reasoning that price is what conveys the per-unit-of-gas part. I think it's gas that says this is per gas, because you could have TX price, and that would not be per gas, it would be per TX.
So I think I'm one of the few people at this point still on the gas prefix train. I'm not going to die on this hill, and maybe my preference is unfounded because I have spent the last few months staring at these names, and the thought of having to type twice as many characters is probably not something we should use to decide how everyone else is going to interact with it. But those are my thoughts.

**Micah Zoltu**
I will personally buy you a text editor that has autocomplete.

**LightClient**
I thought you were personally going to hire someone for me, just to fill out the remaining characters.

**Laughing**

**Micah Zoltu**
Excuse me, I need to swap places with you. Okay, go ahead.

**Rai**
He will write a macro so that those names are just one keystroke.

**LightClient**
We can fill a grant out for that, I think.

**Laughing**

**Tim Beiko**
So, aside from LightClient, is anybody else strongly in favor of the gas prefix? Martin? Peter's not on the call, but I think he was also in favor of that.

**Martin Swende**
Yes. Yeah.
I was just going to say, I don't speak on behalf of the whole Geth team. Personally...

**Tim Beiko**
So, nobody else is willing to defend the gas prefix. In that case, it feels like there's also more clarity if we use the same terms as the EIP, right: we would have max priority fee per gas, fee cap per gas, and base fee per gas, so we basically don't need additional names for JSON RPC.
Does anyone oppose that? Last chance. If not, I will let the folks working on the spec for JSON RPC know. Oh, somebody's speaking: Ansgar, he is on the call.

**Ansgar Dietrichs**
Oh yeah, I think I'm weakly in agreement with LightClient, but I don't have any strong opinion. I also have a bit of a problem with priority fee, but I think the proper place to discuss that would be the EIP itself.

**Tim Beiko**
And luckily, after the merge we will need to make some changes to 1559, so we can reopen all these cans of worms.

**Laughing**

**Micah Zoltu**
Regarding priority fee, the word priority there: we've gone through, I think, six different words in the EIP trying to find a solution. If someone has something novel and new, we can give it a try; everything is problematic. I think the core reason we're struggling is that that particular value means two different things to different people. If you are gas warring, it is the thing that gets you to the front of the line. If you are just a regular user with 1559, it is the thing that gets you into the block.
So it's serving dual purposes, and finding a name that satisfies both is very hard, so we ended up changing it back and forth, naming it to favor one meaning, then the other.
If anybody comes up with a word that handles both, please share it.

**Tim Beiko**
And we will use that word too. But okay, let's stick to using the terms that are in the EIP and expose those in the JSON RPC, and hopefully we'll have the JSON RPC spec ready within the next week or so.
Anything else on JSON RPC?
Okay, I guess the last thing I had on London is trying to figure out how people feel about timing for the upgrade. I believe everybody aside from Open Ethereum has the EIPs fully implemented. A couple of, what is it, months or calls ago, we had this tentative timeline where we would try to agree to a client freeze today, which I think is where we were at, so that teams would have another two weeks to release a London-compatible client, and then we could have our first testnet fork on June 9th and the mainnet fork on July 14th.
How do people generally feel about that schedule? Does it feel realistic, or is it something we want to push back a bit? Any thoughts there?

**Martin Swende**
I mean, I think it is a bit optimistic, and I have the feeling this might be the most YOLO hard fork we've done so far. But I still think maybe we should just bite the bullet and do it anyway, because we need to get the next hard fork out and we've been working on 1559 for a long time. I think the big problem for the Geth team is that the consensus changes are one thing, but there are a lot of things that need to be touched in the transaction pool logic, a lot of changes needed in the miner, and various other subsystems. So it's a big upgrade, and I don't think we're going to be able to do a client freeze anytime soon, because even if we have the base functionality, and we don't even have all of that, there would be another two or three follow-up PRs to add the other stuff.

**Tim Beiko**
Unless...

**Martin Swende**
I think we can live with the dates, but I'm just throwing it out there that we need to do a lot of testing.

**Tim Beiko**
Is this something where changing it by two weeks would help a lot, or is it something where, in a perfect world, you'd have two extra months to do the testing? The reason I ask is that this date was mostly set because of the difficulty bomb, and there's been kind of an increase in hashrate on the network. So I suspect we probably have a few weeks of leeway, if that makes a big difference for clients. We definitely don't have months of leeway.

**Martin Swende**
So, for me personally, I always think that testnets are ultimately there to prepare for mainnet. So I don't think we should postpone testnet deployments; if anything we should do them sooner, so that we have more time to actually test everything on the testnets before it hits mainnet. But I know that other people feel differently about testnets.

**Tim Beiko**
Got it. How do other client teams feel about the timing?

**Tomasz Stanczak**
We are generally okay with the timings. I agree that we want the community to move ahead with tooling and experimentation, and the earlier they have the testnets, the better. That date has been announced for a while and we haven't changed it in a month or so. If we see any problems whatsoever on the testnets, then we should review and consider pushing mainnet a bit further down, but for now I would stick to this mid-July date.

**Rai**
Yeah, I agree that I don't think we should be postponing the testnets.
Also, I don't know whether we're ready for a code freeze yet. We definitely have the meat of the EIPs all in, but we need to double-check similar ancillary logic, like mining and the transaction pool.

**Tim Beiko**
Open Ethereum?

**Dusan**
Yeah, I agree with the last statement. We're not fully prepared for the freeze, and we're already a bit late for that, but in general, for July 14th, I think that will be a problem.

**Tim Beiko**
Okay, and so right now the first testnet fork would be on June 9th, which is three-ish weeks away, is that right? Yeah, three and a half weeks.
Martin seemed to feel that keeping it close is better. Does everybody agree with that? We could also push the testnets back one week if that made a difference, but then we get less time on testnets before we go to mainnet. If people want to push back the testnets to get more time for the client freeze, now's the time to speak up; otherwise we can keep the first one on June 9th.

**Martin Swende**
Which is the first one?

**Tim Beiko**
I had Ropsten as the first one, just because that's what we did for Berlin. So Ropsten, Goerli, Rinkeby. We can absolutely change the order if there's a reason to.

**Artem Vorotnikov**
And the mainnet date would be?

**Tim Beiko**
The mainnet date is July 14th as of now, so the Ropsten fork would be live about five weeks before the mainnet hard fork, then four weeks for Goerli, then three weeks between the last testnet and mainnet. Obviously, if anything goes wrong on the testnets we can push that back, but assuming everything goes smoothly, that would be the schedule.

**Martin Swende**
I think that sounds okay.

**Tim Beiko**
Yeah. James has a comment that if we push mainnet back two weeks, we could get five weeks on the last testnet. Okay.
So basically, let's do that. I proposed some blocks for those dates in the GitHub issue, in case people want to put them in the clients now:
June 9th on Ropsten would be block 10399301.
June 16th on Goerli would be block 4979794.
June 23rd on Rinkeby would be block 8813188.
The mainnet fork block on July 14th would be 12833000.
Unless anything is wrong with those blocks, and I double-checked them yesterday, I propose we go with those, and that way clients can start putting them in whenever they're ready and working on their releases. Does that make sense?

**James**
This might be...

**Yuga**
Yeah.

**James**
...harping back to an earlier conversation, but what if we just set the testnet blocks, and didn't set the mainnet block until we know a little more about how the testnets go? Or do we want to do them all now-ish?

**Yuga**
I guess one question I'd love to get a sense of is: if we run the testnets and it turns out we need to push mainnet out by a week, would that be a big deal, or is that kind of okay?

**Tim Beiko**
So, I think a handful of weeks is okay. The challenge is basically the same as the reason you'd want to hard code all the blocks at first: some users might download a version which has London enabled only for testnets. They think they've upgraded, but there actually isn't a block number in for mainnet. So you get a similar thing if we do push back the fork block: it's not the end of the world, but you risk having some people who think they've upgraded, who don't read the blog posts or the announcements, end up on a version which has the wrong fork block for mainnet. It's not something I think we should do unless we find a major issue or realize we're absolutely not ready, but it's also not impossible.
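For context, a back-of-the-envelope way to project fork blocks like these is to extrapolate from a target date at a constant average block time. This is a hedged sketch with an assumed 13-second block time and illustrative names; it is not how the numbers above were actually chosen, which involved manual double-checking.

```python
# Hedged sketch: project a future fork block from a target date, assuming
# a constant average block time. AVG_BLOCK_TIME_S and the function name
# are illustrative assumptions, not values from the call.
from datetime import datetime, timezone

AVG_BLOCK_TIME_S = 13.0  # rough mainnet-ish average; an assumption

def estimate_fork_block(current_block: int,
                        now: datetime,
                        target: datetime) -> int:
    """Estimate the block height expected at `target`, given `current_block` at `now`."""
    remaining_s = (target - now).total_seconds()
    return current_block + round(remaining_s / AVG_BLOCK_TIME_S)
```

Because hashrate (and thus block time) drifts, such an estimate can be off by thousands of blocks over a few months, which is one reason the chosen blocks are rechecked by hand and left open to adjustment.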

**James**
Yeah, and on the other end, if we have the mainnet block in and everyone's installed the clients, and then we need to delay two weeks, there could be an important part of the network that splits off two weeks early if not everyone changes the client that they have.

**Tomasz Stanczak**
Yeah. We usually avoid hard coding mainnet blocks together with the testnet blocks. So we add the testnet block numbers first, and then release a version with the mainnet number set after the first testnet fork goes successfully. Historically the mainnet block number could keep changing until the last weeks, and we didn't want to risk that; not switching felt less risky to us than switching on a wrong block and trying to revert it.

**Tim Beiko**
Yeah, that's totally something we can do for London, if people are more comfortable with that. We can wait until the Ropsten fork, keep the current mainnet block tentatively, and if everything goes well, use that one. Do people prefer waiting to hard code it in clients until we see that the Ropsten, and maybe the Goerli, forks go smoothly?

**Martin Swende**
Yeah, that's probably what we've done historically in Geth as well, I think.

**Tim Beiko**
Okay, let's basically use the current testnet blocks that were proposed. I see Tomasz has a comment about the mainnet block, to add one more zero; we can bikeshed that one offline. So let's use the current testnet blocks for Ropsten, Goerli, and Rinkeby, and assuming the forks go well, we'll have an All Core Devs call right after the Ropsten fork where we can decide if we feel comfortable setting the mainnet block.
Yeah, cool.

**Micah Zoltu**
Before we move on out of the London stuff.
So, someone brought up that in order for wallets to correctly do transaction speed-up, they need to know what the clients are going to accept and gossip. For transaction speed-up there is value in coming to some general consensus on what each client requires for speeding up a transaction, that is, replacing it by fee. So I guess the first question is: have the various clients decided what you're going to do for that yet?

**Tomasz Stanczak**
Yeah, we'll be calculating the miner's fee, the payment to the miner, as the selection criterion for which transactions to evict and which to keep.

**Micah Zoltu**
Okay, so you calculate how much is going to the miner specifically, and then you sort by that, and then you kick out the... okay.

**Tim Beiko**
Anyone else?

**Ansgar Dietrichs**
I think that for the Geth implementation specifically, which LightClient and I helped with, we had a little bit of an internal debate. Specifically for replacement, not for general eviction, but for replacement from the same sender with the same nonce, I think there are two alternative approaches: you can either enforce a bump of both the fee cap and the tip, or you can just require a bump of the tip, as long as, of course, the tip remains smaller than or equal to the fee cap. I think both basically work; clients should just do the same thing, because otherwise the pool gets fractured, which is not ideal.

**Micah Zoltu**
With the latter, if you just bump the miner's portion, doesn't that allow someone to spam a transaction that they know won't get mined, because you set your fee cap very low and then just bump the miner fee over and over again?

**Ansgar Dietrichs**
So, we...

**Rai**
We don't allow transactions with a tip greater than the fee cap.

**LightClient**
In the mempool.

**Ansgar Dietrichs**
Exactly. It's basically the same situation that we have today without 1559: we enforce a minimum tip, like today's minimum gas price, so there is a minimum for how costly the first bump is, and each subsequent bump will be more costly.
Similarly, today, if the gas price is, I don't know, 60 gwei or 100 gwei or something, and your transaction right now has a gas price of one, then you can bump a couple of times before you get close to the inclusion zone. This property will basically be the same afterwards: given that your total fee cap must be at least the tip, every time you bump the tip you also have to keep bumping the total cap, so you're getting closer and closer to inclusion. So it's basically the same as it is today.

**Micah Zoltu**
Okay. I think the key there is that you do not gossip transactions that have a tip higher than the cap. Is that correct, under any situation?

**Ansgar Dietrichs**
That's correct.

**Unknown Speaker**
Yes.

**Ansgar Dietrichs**
I think technically they are includable in a block, but at least the Geth implementation right now would not gossip them.

**Micah Zoltu**
Okay. Whatever we decide on, I definitely do think we should make it available to wallets as soon as possible. So once each of the teams has decided what your strategy is going to be, please share it somewhere: it can be in the 1559 channel in the R&D Discord, or somewhere we can get those collated and out to wallets. They will need to use the lowest common denominator strategy for bumping if they want a transaction to be gossiped across the whole network; whatever the most strict client does is what they'll have to follow.

**Tim Beiko**
So, yeah, on that: Trent, I don't know if Trent's on the call.
Ahh yes, he is. Trent is going to be working on a sort of cheat sheet for wallets regarding 1559, so if people can just drop that into Discord, Trent and I can definitely keep track of the responses and share them out with wallets.

**Ansgar Dietrichs**
Maybe as a follow-up on this, if there's nothing left to discuss on replacement specifically: I think a couple of months ago we talked about the rules around inclusion in general as well, and I looked into that too. While it's not consensus critical, it's also valuable to have this be in sync between different clients, because otherwise, again, there's this fractured situation where different clients keep different transactions in the mempool, which is just really inefficient, because you might re-gossip some transactions a lot. So I just wanted to ask what the best process would be, maybe offline or something, to double-check that clients ideally do the same thing, and if they don't, maybe come to an agreement. What would be the best way of reaching out to the other clients?

**Tim Beiko**
Maybe we can discuss this on Discord, in the 1559 dev channel; I think some folks are actually discussing this right now. And Ansgar, I shared the writeup you had done, which explains in more detail what you basically just went over. So perhaps it's useful to have people look at that and explain how they differ, or don't, from it, and we can definitely document the differences.

**Ansgar Dietrichs**
Okay, sounds good. I'll look over it again today and just make sure it's still in sync with what the Geth implementation is doing as of today.
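The replacement rule discussed a moment ago, requiring both the fee cap and the tip to be bumped, and never gossiping a transaction whose tip exceeds its cap, can be sketched as below. This is a hedged illustration: the 10% minimum bump and all the names here are assumptions for the sketch, not values or identifiers agreed on the call.

```python
# Hedged sketch of same-sender, same-nonce replacement ("speed up"):
# both the fee cap and the tip must rise by a minimum bump, and a tip
# above the fee cap is never accepted. MIN_BUMP_PERCENT is illustrative.
from dataclasses import dataclass

@dataclass
class Tx1559:
    max_fee_per_gas: int           # total fee cap, per gas
    max_priority_fee_per_gas: int  # tip cap, per gas

MIN_BUMP_PERCENT = 10  # assumption; each client picks its own threshold

def acceptable(tx: Tx1559) -> bool:
    """Don't accept/gossip transactions whose tip exceeds the fee cap."""
    return tx.max_priority_fee_per_gas <= tx.max_fee_per_gas

def allows_replacement(old: Tx1559, new: Tx1559) -> bool:
    """Replacement requires bumping BOTH the fee cap and the tip."""
    if not acceptable(new):
        return False
    bumped = lambda x: x + x * MIN_BUMP_PERCENT // 100
    return (new.max_fee_per_gas >= bumped(old.max_fee_per_gas) and
            new.max_priority_fee_per_gas >= bumped(old.max_priority_fee_per_gas))
```

Requiring both values to rise closes the spam avenue Micah raised: with the tip capped by the fee cap, repeated bumps necessarily push the transaction towards includability.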

**LightClient**
If I can also just make one last comment: the way several clients have implemented this, which is, I think, the most correct way, is to order by the effective gas price of the transaction. You subtract the base fee and determine how much the miner is going to earn, and that is sort of what the network deems the best transaction. But since the base fee is constantly moving, that needs to be recalculated every block, and it's not a linear relationship: as transactions become invalid, when the base fee goes past their fee cap, you need to start removing them, so you have to recalculate this ordered list every block. Whereas if you use the fee cap, which is not changing, as your ordering, then you don't need to reorder all transactions every block. The way we're doing that in Geth is there's a heap of transactions in memory, and we only re-heap once the heap has seen some number of new transactions and structurally needs it. So it's not clear to me whether we can allow the resorting on every block; generally I'm trying to avoid any degradation of performance. I can run a benchmark to compare how those would look, but that's the main difference between the approaches.
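The two orderings being compared, effective miner revenue at the current base fee versus the static fee cap, can be sketched as follows. This is a hedged illustration: the tuple layout and function names are assumptions for the sketch, not Geth's actual data structures.

```python
# Hedged illustration of the two mempool orderings discussed above.
# Transactions are (fee_cap, tip_cap) pairs, both per gas; names are
# illustrative, not real client code.

def effective_tip(fee_cap: int, tip_cap: int, base_fee: int) -> int:
    """What the miner earns per gas at the current base fee.
    Negative means the fee cap is below the base fee (not includable)."""
    return min(tip_cap, fee_cap - base_fee)

def order_by_effective_tip(txs, base_fee):
    # Depends on base_fee, so it must be recomputed whenever it moves.
    return sorted(txs, key=lambda t: effective_tip(t[0], t[1], base_fee),
                  reverse=True)

def order_by_fee_cap(txs):
    # Static: the ordering survives base fee changes unchanged.
    return sorted(txs, key=lambda t: t[0], reverse=True)
```

The trade-off in the discussion is visible here: the first ordering tracks actual miner revenue but is invalidated every block, while the second is stable but only approximates revenue.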

**Ansgar Dietrichs**
I would take issue with saying that that would be the most correct way of doing it, because I think the main consideration that went into recommending the fee cap instead of the current effective tip as a criterion is just that, under most normal conditions with 1559, we expect the vast majority of the mempool to be below the current base fee, because only about one block's worth of transactions sits above the includability zone. Especially for eviction, we're most interested in the least valuable transactions, and those will almost always be below includability, so their effective tip right now will always be zero. Sorting by the tip alone generally doesn't work well either, because the effective tip you end up paying will usually be much lower than the tip cap: if the tip cap is large but the fee cap barely gets you in, the effective tip will be small. So I actually think the fee cap is the most correct sorting criterion, not the effective tip and not the tip. But again, I think it's probably up to each client, and it's not consensus critical, so it's not critical to have it in sync in time for the testnet.

**Tim Beiko**
It's also something we can update once 1559 is live, right? Obviously we want the best behavior that we know of now, but once we actually see usage on the network and how the mempool is working, we can definitely change how transactions are sorted based on that.

**Ansgar Dietrichs**
Yes, I think that's correct.

**Tim Beiko**
Anything else anybody wanted to bring up on 1559 or London in general?
If not, LightClient had asked for an update on Rayonism and where the merge is at. I see we have Danny on the call; I know you've been on top of that, and I know a lot of the client teams have been working on this as well.
So, does anyone want to walk through maybe the past couple of weeks of what happened with Rayonism, and where the work related to the merge is at?

**Danny**
Yeah, I can give a quick high level. There was the Rayonism Nocturne devnet that launched, I believe, a couple of days ago; there's a block explorer up and a fork monitor up if you want to check it out. I believe all 12 client combinations are working on it, meaning the four eth2 clients and the three eth1 clients, and you can mix and match all of them; they're all running there and running validators, which is very exciting. This was definitely a major success, but also definitely in the prototyping zone: we did not test the fork transition, and we are not testing historical syncing, which are two critical things. We're definitely in the wrapping-up phase; we validated all the things that we wanted to, but now I think it's time for production engineering on finalizing things for London and the Altair fork. At the same time we're working on specifying a couple of last things and greatly enhancing testing on the merge spec, based off some of the stuff we did here and some of the stuff we've committed to do. So I think the idea is to shift back towards other production engineering, get the merge spec to its next iteration, and then once we get the Altair and London releases out, shift back into production engineering here. So the devnet's up, the devnet went really well, it will probably go down early next week, and we'll shift back into other things. Post-Altair and post-London we'll do some more multi-client testnet stuff, and probably have much more of a conversation here; I guess over the next couple of All Core Devs we can talk more about planning. If the client teams here want to add anything, please do, and otherwise I can help with any questions if anybody wants to dig deeper.
Cool. Another thing to note is we're doing some bikeshedding on API, transport, and format, if you'd like to jump in.
+
+**Micah Zoltu**
+For all those that enjoyed the naming discussions so much, go join the consensus client naming discussion.
+
+**Danny**
+And a quick announcement.
+I think this was shared in another channel, but the merge calls will now be on the same week as the Eth2 calls, on Thursday at the same time, and we're going to be doing a three-week break rather than a two-week break. This is to try to help some of the folks that are up pretty late for the Eth2 call and the merge call by stacking them together, so we'll do the merge call and then immediately the Eth2 call. Eth2 calls usually aren't very long, usually 30 or 45 minutes depending on what's going on, so it shouldn't be too bad; we're going to try that out.
+
+**Tim Beiko**
+Thanks for the update. Any clients have anything to add? Okay. Oh, Trent, you had the...
+
+**Danny**
+I just want to say a huge shout out to Proto.
+
+**Tim Beiko**
+Yes
+
+**Danny**
+Proto incepted this Rayonism idea and did a lot of work. If you were involved in that, you know Proto has been out there at night making this thing happen. Same to all the engineers, but thanks, Proto.
+
+**Tim Beiko**
+Sweet. Yeah, Trent, you wanted to talk about the 1559 UI call.
+
+**Trenton Von Epps**
+Yeah. Thanks Tim.
I know you mentioned it earlier, but I can go over it again and reiterate what you already said. Similar to the London readiness call we had a week or so ago, we're going to be doing something similar but focused on people that work on wallets and interfaces. This would be MetaMask, Argent, Rainbow, Status, things like that, so if anybody's listening to the call and you work on a wallet, please reach out. We're going to do two things: put together a cheat sheet of basically what you need to know, and hopefully keep it updated as things become more solidified, and solicit resources that devs could look to. Then there will be a call, I think about one or two weeks from now; we don't have a time yet, but we'll try to pick a slot that works for everybody that's interested in being involved. We'll go over what people have been thinking about so far with regards to how they're presenting these new transaction choices to users, and hopefully get people on the same page with what the best practices are. So, like I said, please reach out and just let me know if you'd like to be added to that; I'll be sending out an email probably early next week to figure out a time.
+
+**Tim Beiko**
+Great.
+
+**Trenton Von Epps**
+That's it.
+
+**Tim Beiko**
+In the chat, there was one more comment: Piper has been working on a Core Dev Apprenticeship program to get folks who want to start working on core development for Ethereum to work on it over the summer and receive a stipend for that work. There's a blog post that went out on the Ethereum blog yesterday, so if you go to **blog.ethereum.org**, it's the most recent post, called Core Dev Apprenticeship. If anyone listening is interested, there's all the information in the post about how to apply, and Piper can answer all of your questions about the program.
+
+**Tim Beiko**
+Cool, anything else anybody wanted to discuss? Yeah, give it a shout.
+
+**James**
+I wanted to say one thing.
+
+**Tim Beiko**
+Go
+
+**James**
+So, I've been slowly handing things over to Tim over the last couple of months for the hardfork coordination role. I've done it for about a year, almost two years it feels like, and I'll be moving into some other things, so this will probably be my last call in that role. I'll be leaving the **EF** as well. I don't know exactly what I'm going to do next, but part of it's probably going to be EIP stuff, because I keep getting drawn into it and I like working with you guys. It has been a pleasure.
+
+**Martin Swende**
+It's been a pleasure having you.
+
+**Tim Beiko**
+Yeah
+
+**Tomasz Stanczak**
+Thanks James
+
+**Tim Beiko**
+Thanks for all your work.
+
+**Rai**
+Thanks James.
+
+**Tim Beiko**
+Yeah, and there's definitely more than enough work on the EIP side if you're not sure what to do.
+
+**James**
+I'm going to try and wait at least four weeks before jumping into things, but I can already tell that I'm excited about stuff.
+
+**Tim Beiko**
+So yeah, that's a good call, to take some time off.
+Cool, anything else anybody wanted to bring up?
+Okay well, thanks everybody.
+I can't believe we finished half an hour early given everything that was on the agenda.
+So yeah, I appreciate it. I will see you all in two weeks.
+
+**Multiple Participants**
+Thanks Everyone
+Thank you.
+Cheers
+Thanks
+
+**End of Meeting**
+
+# **Summary:**
+
+If we push mainnet two weeks back, we could get five weeks on the testnets.
+I had proposed some blocks for those dates in the GitHub issue, so people can put them in the clients now if they want.
+Basically a block on **Ropsten, Goerli, Rinkeby**:
+**June 9th** on **Ropsten** would be **10399301**
+**June 16th** on **Goerli** would be **4979794**
+**June 23rd** on **Rinkeby** would be **8813188**
+The **Mainnet fork block** on **July 14th** would be **12833000**, unless anything is wrong with those blocks.
+
+I propose we go with those, and this way clients can start putting them in whenever they're ready and working on their release.
+
+
+
+## **ACTIONS REQUIRED**
+
+
+
+## **DECISIONS MADE**
+| Decision Item | Description | Video ref |
+| ------------- | ----------- | --------- |
+| **1** | Start and Baikal discussion | [8:40](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=520s) |
+| **2** | EIP-3541 discussion | [13:10](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=587s) |
+| **3** | EIP-3554 discussion | [14:08](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=848s) |
+| **4** | JSON-RPC naming convention discussion | [17:52](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=1072s) |
+| **5** | Block number discussion | [31:00](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=1860s) |
+| **6** | Speeding up transactions by clients/wallets | [43:42](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=2622s) |
+| **7** | Merge and Rayonism update | [53:26](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=3206s) |
+| **8** | 1559 UI call announcement | [57:32](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=3452s) |
+| **9** | Core dev apprenticeship program | [59:03](https://www.youtube.com/watch?v=H_T2nNrTuWQ&t=3543s) |
+
+
+---------------------------------------------------------------------------------
+
+
+Moderator:
+**Tim Beiko**
+
+# 1. Agenda Item1
+º Baikal discussion
+
+# 2. Agenda Item2
+º EIP-3541 discussion
+
+# 3. Agenda Item3
+º EIP-3554 discussion
+
+# 4. Agenda Item4
+º JSON-RPC naming convention discussion
+
+# 5. Agenda Item5
+º Block number discussion
+
+# 6. Agenda Item6
+º Speeding up transactions by clients/wallets
+
+# 7. Agenda Item7
+º Merge and Rayonism update
+
+# 8. Agenda Item8
+º 1559 UI call announcement
+
+# 9.
Agenda Item9
+º Core dev apprenticeship program
+
+
+## Date and Time for the next meeting
+**May 28th, 2021 14:00 UTC**
+
+
+## Attendees
+**TIM BEIKO**
+**TRENTON VAN EPPS**
+**POOJA RANJAN**
+**JAMES HANCOCK**
+**MARTIN SWENDE**
+**SASAWEBUP**
+**ANSGAR DIETRICHS**
+**ALEX STOKES**
+**PRESTWICH**
+**TOMASZ STANCZAK**
+**KENNETH LUSTER**
+**LIGHTCLIENT**
+**JOCHEN**
+**ARTEM VOVTNIKOV**
+**ALEX B. (AXIC)**
+**GARY SCHULTE**
+**MAREK MORACZYNSKI**
+**SAJIDA ZOUARHI**
+**MICAH ZOLTU**
+**DANKRAD FEIST**
+**PAWEL BYLICA**
+**KEVAUNDRAY WEDDERBURN**
+**LUKASZ ROZMEJ**
+**YUGA**
+**PAUL D**
+**RAI (RATAN SUR)**
+**JOHN**
+**DANNY**
+**ALEX VLASOV**
+**DUSAN**
+
+
+## Links discussed in the call (zoom chat)
+º https://gist.github.com/karalabe/1565e0bc1be6895ad85e2a0116367ba6
+º https://gist.github.com/karalabe/1565e0bc1be6895ad85e2a0116367ba6#gistcomment-3740453
+º https://github.com/ethereum/pm/issues/245#issuecomment-832122309
+º **Ansgar Mempool write up:** https://hackmd.io/@adietrichs/1559-transaction-sorting-part2
+º https://blog.ethereum.org/2021/05/13/core-dev-apprenticeship/
+
+
+
+
+
+
diff --git a/Merge/Meeting 02.md b/Merge/Meeting 02.md
new file mode 100644
index 00000000..26c7f04d
--- /dev/null
+++ b/Merge/Meeting 02.md
@@ -0,0 +1,663 @@
+# Merge Implementers' Call #2 Notes
+
+### Meeting Date/Time: Thursday 2021/4/15 at 13:00 UTC
+### Meeting Duration: 90 minutes
+### [GitHub Agenda](https://github.com/ethereum/pm/issues/299)
+### [Audio/Video of the meeting](https://youtu.be/ODcNpWiLASk)
+### Moderator: Mikhail Kalinin
+### Notes: Santhosh(Alen)
+
+# Agenda
+- New terminology
+  - ethereum/eth2.0-specs#2319
+  - https://hackmd.io/@n0ble/the-merge-terminology
+- Execution-layer discussion
+  - Communication protocol
+  - Fork choice and chain management
+  - State and block sync
+  - Gas limit/target voting
+  - Slot clock ticks
+-
Consensus-layer discussion
+  - Improved transition process: set TRANSITION_TOTAL_DIFFICULTY at TRANSITION_EPOCH
+  - Consider max block size in relation to max size of ExecutionPayload (transactions max size is 16GB)
+  - Consider Union type for transaction list with a single OPAQUE_SELECTOR for first merge fork
+  - Consider eliminating uint256 requirement on beacon-chain side
+- Rayonism updates ☀
+- Open discussions
+
+# Intro
+**Mikhail Kalinin**
+Welcome to the Merge Implementers' Call #2
+
+**Mikhail Kalinin**
+* So, while some Ethereum core developers might not be able to join this call, let's just go over the agenda and discuss the things that we can without them.
+* To begin, we have this new terminology. The key replacement is that we replaced the application term with the execution one, so there is the execution layer instead of the application layer. This is to not confuse people with smart contracts and the applications using them, i.e. applications built on top of mainnet; that is the purpose of it.
+* The term layer is arguably not the best one for execution and consensus, because they are not really layered, and we will think about it more here.
+* I don't want to spend too much time on this, but it's probably best to name them subsystems or engines or whatever, and if people have any suggestions, just drop them in Discord and we'll address it offline. So, that's the terminology; any questions here?
+
+# Terminology
+  - Consensus (eth2), Execution (eth1), Application (dapps)
+  - Engine or Sub-system, not layer
+  - We can discuss this further offline in Discord.
+
+# Execution discussion
+
+**Mikhail Kalinin**
+* I was just going to go through the main parts of the execution stuff and ask for any updates or understanding, possibly queries from the Ethereum developers. That's the initial concept, so we can probably do this anyway. Any questions on the communication protocol?
+
+**Danny**
+* Where can I find the most recent updated link? Rayonism is using the communication protocol spec that you are maintaining, Mikhail.
+
+**Mikhail Kalinin**
+* Yeah, right, that's the one. I have put the link to the previous one, and the new Rayonism link is at the top of the previous document, so that's the latest one. Anyway, this is JSON-RPC for Rayonism, but it's probably not going to be production, so we can get to that discussion later. Okay, so who has reviewed it, or who has any thoughts or suggestions regarding it?
+
+**Nethermind**
+* By communication protocol, do you mean eth1 to eth2 communication by RPC, or anything else?
+
+**Mikhail Kalinin**
+* So, yeah, that's it.
+* Okay, nothing in particular, but if you have any questions, please let me know.
+
+**Nethermind**
+* I have a query, because I'm not completely sure how to handle potential problems and errors in this protocol. For example, if the assemble block fails, the new block fails, or the set head fails because the payload is incorrect or any internal error occurred, I'm not sure how to handle it. The specification makes no mention of error handling.
+
+**Mikhail Kalinin**
+* Yeah, there are statuses for finalize block and set head, and obviously for new block; I mean, you can return false if it wasn't done correctly. But you're right, assemble block doesn't have any. Yeah, that's a good question. I think we should add some kind of status there as well, so it'll be an item with a status alongside it.
+
+**Danny**
+* Right, particularly because you can specify a parent hash now; you might point at anything, something terrible or non-existent, so there's certainly a failure case there.
+
+**Mikhail Kalinin**
+* Also, the other alternative is to use the errors in JSON-RPC that we have today, right? You mean the result, right? Is it in the spec? Okay, I see the issue.
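The fallback Mikhail mentions, reusing standard JSON-RPC errors, could look roughly like this minimal Python sketch. The handler, method behavior, and error code here are hypothetical illustrations, not anything from the merge spec:

```python
import json

def rpc_error(request_id, code, message):
    # Standard JSON-RPC 2.0 error envelope; -32001 below is an
    # illustrative server-defined code, not a spec-mandated one.
    return {"jsonrpc": "2.0", "id": request_id,
            "error": {"code": code, "message": message}}

def handle_assemble_block(request_id, parent_hash, known_blocks):
    # Hypothetical handler: fail when the parent hash points at a
    # non-existent block, the failure case Danny notes above.
    if parent_hash not in known_blocks:
        return rpc_error(request_id, -32001, "unknown parent hash")
    return {"jsonrpc": "2.0", "id": request_id,
            "result": {"parentHash": parent_hash}}

resp = handle_assemble_block(1, "0xdead", known_blocks=set())
print(json.dumps(resp))
```

The advantage of this shape is that callers already expect the `error` / `result` split from every other JSON-RPC method, so no extra per-method status field is needed.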
+
+**Nethermind**
+* One thing, because I'm not sure whether it's specifically defined: assemble block doesn't have it, so after assemble block, will new block be called, or do we assume that new block won't be called and the block can be added just by set head?
+
+**Mikhail Kalinin**
+* We actually, in general, expect that new block will be called. There is a state transition occurring on the consensus side when the block is assembled and proposed, and when the state transition is triggered, it will activate the call to the new block method. So it's assemble block, then new block.
+
+**Danny**
+* I would assume that, like get work today, assemble block does not add the block to the block tree; only if a solution is found does something get added to the block tree.
+
+**Tomasz Stanczak**
+* So it's kind of similar logic, yeah, but in a proof of authority chain you'd create the block and add it right away, so I understand your argument here, because there's a difference in time. When you keep preparing blocks with assemble block, you can theoretically call it several times with the same parent, right? Can you?
+
+**Danny**
+* You might, presumably, but there isn't an immediately clear use case for that.
+
+**Mikhail Kalinin**
+* You may want to give up a block and stop repeating the same transactions.
+
+**Danny**
+* Right. Yeah, you can imagine doing it slightly early to get something ready to broadcast, and then doing it again quite close to the time of broadcast to see whether you've got a better coinbase output on the MEV side, but even then, I don't know that that's an obviously good strategy, just a possible one. Is it actually useful, is it worth the complexity?
+* In having the ability to point to something arbitrary to build on, rather than just the head. I mean, presumably the beacon node keeps the execution engine in sync with what it thinks is the current head, so if there was a reorg, you can trigger that and then call assemble block, primarily when you definitely know the head.
+* And you can set the parent hash to build on, but that opens up a design constraint on the execution engine to be able to build on arbitrary heads, which I'm not sure is worth the complexity.
+
+**Mikhail Kalinin**
+* That's a good question, because it might be the case that an arbitrary block becomes the head afterwards. I can imagine a bit of racing between the new head and assemble block. So, what if the head has changed while the block is being assembled? What could happen here? How can the beacon node manage this?
+
+**Danny**
+* I mean, at some point the beacon node has to make a decision on what it thinks the head is and assemble the block based on that. But the concern is that another subsystem triggers that there's a new head halfway through me assembling a block: I ask the execution engine for the transaction payload while it has received a trigger from somewhere else that there's a new head, and I'm now out of sync on that, and this protects against that.
+
+**Mikhail Kalinin**
+* Yes, some degree of consistency is needed.
+
+**Protolambda**
+* Uh, sir, go ahead.
+
+**Danny**
+* No, please, please.
+
+**Protolambda**
+* These consistency cases are extremely important in general, but remember the situation where there are many beacon nodes referring to the same thing, if you're not sure.
+
+**Nethermind**
+* Okay, so I'd say we should think about any concurrent calls to those RPCs. For example, if we have one set head and a second set head, we should probably queue them so that the last one wins, or something like that, in the implementations. Finalize block is probably not significant, and new block is probably not important.
+
+**Danny**
+* But the relationship between set head and some of the other calls may be important.
+
+**Mikhail Kalinin**
+* Yes, my feeling is that all of these messages should be processed sequentially. New set head and new block are causally related, so they must be processed sequentially, but others may be processed concurrently; I'm not sure whether this is true in all situations, though.
+
+**Danny**
+* Yeah, I can see how assemble block and set head might get out of sync, depending on whether different subsystems in the beacon chain are out of sync, and therefore the parent hashes. Processing sequentially is a nice simple fix without having to worry about things deeper, but it could open up complications on the execution engine side; I'm not sure.
+
+**Mikhail Kalinin**
+* Okay, so the assemble block should have some status; we started from the error message. Okay, let me think about it and we'll continue, fine, probably.
+* If there is nothing else here, we will proceed to fork choice and chain management.
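The sequential-processing idea discussed above can be sketched as a single worker consuming engine messages in arrival order, so that two set head calls can never race and "last set head wins" is well defined. This is purely illustrative, not any client's actual design:

```python
import queue
import threading

calls = queue.Queue()
state = {"head": None}

def engine_worker():
    # All engine messages are handled one at a time, in arrival order,
    # so causally related calls can never be reordered.
    while True:
        method, arg = calls.get()
        if method == "stop":
            break
        if method == "set_head":
            state["head"] = arg

worker = threading.Thread(target=engine_worker)
worker.start()
for h in ("0xaa", "0xbb", "0xcc"):
    calls.put(("set_head", h))
calls.put(("stop", None))
worker.join()
print(state["head"])  # 0xcc: the last set_head wins
```

Allowing independent calls (e.g. finalize block) to bypass the queue would be the concurrent variant Mikhail is unsure about.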
+
+**Danny**
+* Yeah, I just want to highlight that with that parent hash in there, and no real bounds on it, an assemble block might trigger an arbitrary, not reorg, because it wouldn't be changing the head, but an arbitrary attempt: you have to go and put yourself into this different state to build a block, and so there could be complexity.
+* It's worth people looking into that over the next week or two so we can talk about it again the next time we meet.
+
+**Protolambda**
+* For the time being, I'll simply raise an error if the consistency check fails, and then we can adjust implementations to actually handle the situation.
+
+**Nethermind**
+* So, if we have a finalized block, it probably affects what parent hashes can be supplied to the new block, doesn't it? We can't reorg finalized blocks, so we do have some constraints on this parent hash.
+
+**Danny**
+* Right, correct. So they are arbitrary in the sense of the sub-block-tree since finality, yes, but.
+
+**Mikhail Kalinin**
+* I would not impose these checks on the execution engine, because this is the duty of consensus, and in some situations consensus may move from one finalized checkpoint to a concurrent one, which is similar to some forks or whatever.
+
+**Danny**
+* Even locally you'd never revert finality; it can't happen locally, even if there was enough attestation weight.
+
+**Dankrad Feist**
+* Is it possible with manual intervention? You might have ended up on the wrong fork and then changed it, but the node will never do that by itself.
+
+**Nethermind**
+* Yeah, so I have a question about finalize block: how much height of the chain may not be finalized yet? I'm not aware of that, and it's important for state management, pruning, implementation, things like that. That's a problem that arises here from the eth1 side, as far as I'm aware: how big can this unfinalized chain get in practice?
+
+**Danny**
+* In regular operation, two epochs is the depth, so in the happy case you get pruning at fair depths, but you can't actively prune if you're in a time of non-finality, and you know you might go days without finality, so there's certainly a variance that has to be managed on pruning.
+
+**Mamy Ratsimbazafy**
+* The risk with non-finalized state is what happened during Medalla: for a couple of days the chain didn't finalize and we had many, many forks, and in that case you can theoretically store all the forks in your client in case they become legitimate, but if one fork just has a few votes, it might not be worth it.
+
+**Dankrad Feist**
+* The issue is that if a new block builds on one of those forks, you have to validate that block, so you later need to see if attestations to that block are valid, which I believe is the issue. But I can say that on mainnet we should definitely be prepared for longer non-finality periods, though hopefully not days, so maybe we can get a more reasonable compromise; days would be pretty extreme, and if we ran into that, it would be a pretty insane failure.
+
+**Mikhail Kalinin**
+* Okay, so anything else about the communication protocol between consensus and execution? Fine.
+
+# Let's just move to the fork choice (time: 20:18)
+
+**Mikhail Kalinin**
+* On chain management: I know that people have begun to investigate how difficult it will be to make the fork choice pluggable and how much of an effect it has on changing the chain management of their clients. I just wanted to ask about any updates and thoughts here on how it could be handled, from any point.
+
+**Nethermind**
+* Maybe I'll start. From our side it's actually fairly easy; we had it fairly well broken down already. The problem, which I haven't investigated that much, might be later syncing the network up to the head and then starting, so integrating the syncing and the fork choice management itself might be harder; but starting from the head, like we want for the hackathon, is fairly easy.
+
+**Danny**
+* Yeah, since we're hacking, we just have these absolute difficulty rules for the beginning, and there's progress on Geth, yeah.
+
+**Proto**
+* If no one from Geth is on the call, I suppose I can give an update.
+
+**Danny**
+* Currently, Peter has been muted and unmuted a couple of times, so we can't hear you if you've been speaking.
+
+**Peter Szilagyi**
+* So, if the question was what Geth's progress is on these matters: we had a meeting with Proto, I believe yesterday or two days ago, and he went through the different stuff. I suppose the conclusion was that if you need anything for Monday, the closest we can give you is Guillaume's PR, which just kind of hacks things into place.
+* We started working on an essentially new consensus engine that does the whole new fork choice rule, but we haven't integrated it yet because, as far as I know, the PR only explicitly inserts into the blockchain and hacks through all the internals.
+* It's not finalized yet, and I've also started working on the synchronization, but I'm a little sidetracked because, in order to make the synchronization work, I also need to change some other parts of production Geth, and I'm not super keen on hacking stuff together in production parts, so I want to do it properly, which means it's going to take a little longer.
+
+**Mikhail Kalinin**
+* Great, thanks Peter.
+
+**Nethermind**
+* If I could ask about the PR you're talking about: the one I've seen is following the old spec; the JSON-RPC API is different from the new spec. It's not a big deal.
+
+**Guillaume**
+* So it's not following the old spec, no; it needs some modifications, but it'll be finished after this call.
+
+**Nethermind**
+* Okay.
+
+**Mikhail Kalinin**
+* Yeah, any questions about chain management and fork choice? Okay. So, the sync process: there is just a high-level proposal in the design doc we discussed on the previous call on how to download the state and do the block sync, and if people have any opinions on whether it's viable or not, or any other inputs, that would be great.
+
+**Danny**
+* I assume we'll get to that, but some people haven't quite made it there yet, so we should probably bring it up again as well.
+
+**Mikhail Kalinin**
+* I agree; let's just assume for the time being that it would work. There was a question in the chat, in the Discord, I don't remember where exactly, on which part will decide on the gas limit and target voting after the merge. My basic thought is that it doesn't change: the execution engine has this voting mechanism, and every proposer uses it.
+
+**Danny**
+* I'd say by default it remains the same, which is the block producer, regardless of whether it's a miner, proposer, or validator; it does it similarly to how, post-merge, the 1559 block producer will be responsible for the base fee for transactions and figuring that out. I don't know, on eth1 clients today, how does one access that?
+
+**Mikhail Kalinin**
+* Geth has a flag with a number on it, which is the target for the gas limit, and it will be adjusted according to the gas limit formula per block, from what I recall.
+
+**Danny**
+* So, well, the feature should stay stable and work well.
+
+**Peter Szilagyi**
+* I mean, we can always add methods to adjust it because it's such a small thing, but I don't think people want to keep changing it at runtime. If there's a need to be able to tweak the limits at runtime, it's more than trivial to just change it.
+
+**Mikhail Kalinin**
+* Understood correctly, so if a miner wants to do anything like raise the gas cap, it simply restarts the node with the new parameter.
+
+**Peter Szilagyi**
+* Yes, but if you look at mainnet generally, miners still run with the maximum gas cap that was considered secure for the network, and it's modified maybe once every half-year or so, so it's not like you have to constantly adjust it, right?
+
+**Mikhail Kalinin**
+* I believe that following a consensus update, there should be no need to change this part.
+
+**Danny**
+* Okay.
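The per-block adjustment Mikhail and Peter describe can be sketched as follows: the producer nudges the limit toward its configured target, while the protocol bounds each step to roughly a 1/1024 fraction of the parent limit (clients use a slightly stricter bound in practice). The function name is illustrative, not any client's API:

```python
def next_gas_limit(parent_limit: int, target: int) -> int:
    # The producer moves the limit toward its target, but consensus
    # only allows a step of about parent_limit // 1024 per block.
    max_step = parent_limit // 1024
    if target > parent_limit:
        return min(target, parent_limit + max_step)
    return max(target, parent_limit - max_step)

# A miner restarted with a higher target converges gradually:
limit = 12_500_000
for _ in range(3):
    limit = next_gas_limit(limit, target=15_000_000)
print(limit)  # 12_536_655: three small steps toward the target
```

This is why restarting the node with a new target flag, as discussed above, is sufficient: the chain then drifts toward the new value over many blocks rather than jumping.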
+
+**Mikhail Kalinin**
+* So, the next item is slot clock ticks. I suppose this has been missing on the previous call and in the doc, but I think it might be relevant, because the consensus component has the slot clock, and these ticks could be propagated to the execution engine, because the tick's timestamp goes into the next block, and it's probably necessary for transactions that use the timestamp opcode to be up to date with this kind of detail, so some additional message or call might be required.
+
+**Danny**
+* So you're saying that transactions in the mempool might be invalidated, or that there's logic that's based on them?
+
+**Mikhail Kalinin**
+* Yes, they could adjust the execution flow inside a smart contract method they call, and it might be necessary for the pending block functionality, because you have to restart the pending block any time a new timestamp is observed.
+
+**Peter**
+* Could you elaborate on this a little bit? What exactly is this notion that I'm missing?
+
+**Danny**
+* Proof of stake blocks only have a timestamp determined by the slot, and a slot is only every 12 seconds, so there's no granularity of time like you'd find today for transactions reaching timestamp opcodes that aren't on those 12-second boundaries. The execution engine can either know the time and determine where it is and use that, or it can be told the time and use that, okay?
+
+**Peter**
+* But then, basically, this would mean that the eth1 blocks could also reach the same twelve seconds, right? So, when you call produce block or whatever it's called, you'd specify the timestamp to produce it at, right?
+
+**Danny**
+* Right. Okay, this is more of a... I think Mikhail is concerned about systems that are maybe dependent on timestamps that aren't right at the granularity of produce block, like managing the mempool and the pending block, which is good because it gives you a deterministic outcome.
+
+**Peter Szilagyi**
+* Okay, so I think the only thing I wanted to emphasize is that everything that transaction execution depends on needs to be stuffed into the block header, because otherwise we won't be able to synchronize past blocks.
+
+**Danny**
+* Right.
+
+**Peter Szilagyi**
+* So, we discussed a few days ago that the original RPC APIs already had this random thing plus some second field, right, which at least in the past API were only passed along as two more fields independent of the block, and I just wanted to ask that if we ever want to add those fields back, then we probably need to have them integrated into the header. And because we've nuked out three or four fields, for example the mix digest and others, we can still repurpose them if we want to get them in with minimal harm to the eth1 clients.
+
+**Danny**
+* Correct. There isn't really a need for another field, because the timestamp field's consistency with the slot can be checked on the consensus side, so I don't think you really need it. I think Mikhail is more concerned about the execution engine knowing what slot it is without the context of a new block being called.
+
+**Mikhail Kalinin**
+* So, my question was about how mempool transactions are executed, and on which block: the pending block that is generated and restored each time after a new block is obtained and imported from the wire.
+
+**Peter Szilagyi**
+* I'm not sure what the question you're asking is.
+
+**Mikhail Kalinin**
+* The question is, before propagating the transaction to the wire, you must check and confirm it.
+
+**Peter Szilagyi**
+No, you just check if the sender has enough balance to cover the transaction when you receive it.
+
+**Mikhail Kalinin**
+* Yeah, I get it, and it does matter which timestamp is used for the pending block, right?
+
+**Peter Szilagyi**
+* So, for the pending block, I guess the question is: if you want to implement this 12-second thing, it would make sense to add a rule into the consensus engine that the timestamps for the pending block are also on this 12-second boundary, but I guess that's an important spec question.
+
+**Danny**
+* I mean, calls to assemble block will only ever be on that 12-second boundary, so anything opportunistic, like the pending block, should respect that. Then there's the question of whether the execution engine can just use its local time to derive these 12-second boundaries, or if it needs to be explicitly told on, say, a tick from the beacon node, "okay, new slot", so it doesn't have to worry about time sync issues.
+
+**Peter Szilagyi**
+* No, I think it's better to just let the pending block use local time. I mean, you don't really care what the real-world time is, you just care that it's in line with your 12-second tick, and the pending block is either way just some opportunistic "let's try to execute a batch of transactions and see what happens", but it's not a guarantee.
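The 12-second boundary arithmetic in this exchange can be sketched as below. `SECONDS_PER_SLOT = 12` matches the beacon chain; the genesis value and function names are illustrative only:

```python
SECONDS_PER_SLOT = 12  # beacon chain slot length

def slot_timestamp(genesis_time: int, slot: int) -> int:
    # Post-merge, a block's timestamp is fully determined by its slot.
    return genesis_time + slot * SECONDS_PER_SLOT

def current_slot(genesis_time: int, now: int) -> int:
    # How an execution engine could derive the slot from local time,
    # e.g. to stamp a pending block on the 12-second boundary.
    return (now - genesis_time) // SECONDS_PER_SLOT

genesis = 1_606_824_023  # mainnet beacon chain genesis, for illustration
print(slot_timestamp(genesis, 5))           # genesis + 60 seconds
print(current_slot(genesis, genesis + 61))  # 5: one second into slot 5
```

Danny's alternative, an explicit "new slot" tick from the beacon node, would replace `current_slot` with a pushed value, trading a local-clock dependency for an extra message.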
**Danny**
* So the concern would be: if I have the beacon node and the execution engine on different machines, and the pending block ends up, say, one second off — and so in a slightly different slot — then when I actually call assemble block, the pending block isn't as useful to me. That would be the reason for the beacon node ticking on that boundary.

**Peter Szilagyi**
* Yeah. In Geth, if you are not mining, you are creating these pending blocks; if you are mining, you are not creating pending blocks but rather mining blocks, which are a little different and done differently.
* So regular nodes will just guess the next timestamp and won't care, because they won't ever be asked to finalize anything. And for miners — well, I guess for miners you won't really poke at the pending block, because you just want to wait for the next thing, right?

**Danny**
* What is the aim of the pending block for non-mining nodes?

**Peter Szilagyi**
* To be honest, I think it's pointless, yeah.

**Mikhail Kalinin**
* I was under the impression that miners used the pending block.

**Peter Szilagyi**
* The reason I say it's useless is that you have 4,000 transactions in the pool — maybe even more if you count the larger pools — and miners can pick a few. So your local node sees 4,000 transactions, picks 200 to execute, and then you can check the result. But even if you swap just two of them that are doing some Uniswap things, you will get wildly differing results.

**Danny**
* How is this being made available to users today?

**Peter Szilagyi**
* With the pending block you can query the pending state, so instead of querying a balance at the network's current state, you can query the balance in the pending state. But as I said, it's not very useful right now.
* Sorry, one more thing: the only reason we didn't really press for getting rid of the pending block is that it serves as a nice little caching layer. I keep a list of transactions that I believe will be included in the network.
* I choose the best 200 and run them as a pending block, and there's a good chance that only 150 of those 200 will actually land in the next block. So by the time I'm executing those 150, all of the storage slots they hit are already hot in memory.
* So we'll keep the pending block — it saves you some cache warming.

**Danny**
* It keeps the cache a little hotter, got it. If you want to keep the functionality, we pretty much just need the execution engine to respect mod-12-second timestamps, and then I think you get most of today's functionality. And even if you didn't, you'd probably still get most of it, because most things probably aren't calling into the execution engine for this.

**Peter Szilagyi**
* Yeah, so the only request I have is that if there is this particular behaviour — that any block will be on a 12-second mark — then it should be written into the spec that this is to be anticipated, and that pending blocks should behave accordingly.

**Mikhail Kalinin**
* Exactly, yeah. I was also thinking that the pending block could be useful for applications that send transactions and then read the pending, state-dependent results from the nodes they are hosted on. Okay, anyway — by the way, what functionality is used by miners? Is it just creating a block?

**Peter Szilagyi**
* Geth currently recreates a block several times during a single mining cycle: it first creates an empty block, then it fills it, and then it tries to build better blocks with different transactions, all of which can be mined.
* So, on the proof-of-work network, you simply build the block whenever a request for a block comes in.
* From the eth1 viewpoint, one alternative is to simply wait for the eth2 client to request a block and then run the transactions; however, this will take half a second or longer —
* however long it takes to generate a block from scratch. The other option is to try to prepare a few blocks in advance by guessing the timestamp, and then when you request one, we simply give you the best one and return instantly.

**Mikhail Kalinin**
* Correct. I believe slot-time ticks could be used as input for this type of optimization as well.

**Danny**
* Yeah, either works. If there's a half-second delay to be expected, then the proposer will essentially call it early so that it can broadcast right at the boundary with a packed block; but if the engine is doing the pre-packing, it can be called later.

**Mikhail Kalinin**
* Yeah, I was thinking about sending not only the current timestamp but also the timestamp of the next slot, to support this kind of functionality that prepares the block in advance. Okay, nice — I'll think about it more and probably add this to the specification as a separate message.

**Peter Szilagyi**
* Why would you need a separate message when you're giving us new blocks anyway? The new blocks are supposedly on the right time slot, so I can just add 12 seconds to that.

**Mikhail Kalinin**
* That's not necessarily the case — the next slot isn't always the latest block's timestamp plus 12 seconds.
**Peter Szilagyi**
* Oh yeah, but if the eth2 chain correctly keeps to the 12-second marks — every block is on a 12-second mark — then I can just compute the next 12-second mark from my chain head and the current time, so I don't think that's an issue. The issue is when you give me a produce-block request and I have to remake the block.

**Protolambda**
* It will most likely work if you account for skipped slots as well.

**Danny**
* Yeah, it will work, depending on the time.

**Mikhail Kalinin**
* I mean, I'd like to add a separate message that just carries this time update.

**Peter Szilagyi**
* You can just extrapolate, because in the first round you will most likely not even attempt to be clever: whenever a client requests a block, I can simply create one, and a waiting time of 500 milliseconds is reasonable. But is that acceptable? If a client requests a block, what is the protocol, what is the timeout, and how do I proceed?

**Danny**
* There is the planned propagation time, formation time, and so on, and sometimes there's a little bit of pre-work done because you know you're about to propose; propagation can then happen in that sub-second window under regular service.

**Peter Szilagyi**
* So if there were known latencies in producing a block, you'd just start your work a little bit earlier. But say it takes me half a second, or say it takes me one second, to produce a block — what effect does this have on the eth2 consensus? Does it matter whether it takes one second or not?

**Danny**
* If I wait until the slot boundary and it takes one second, then as long as I only have one to two seconds of propagation for the entire network, it's fine.
You're aiming for something like a sub-four-second time from when I start my job to when you get maximum propagation, but:
* If there were delays in getting the block that took, you know, a second, then as a block proposer I would just start my job early, so that I have the block ready at the start of the slot rather than waiting for the slot and then not having the block ready until one second later.

**Peter Szilagyi**
* So I don't think it's a good idea to make the eth2 client smarter about this. What I mean is that it takes around one second depending on how many transactions I cram in — it might take less or more — so I'm just wondering about the worst case: if I take one second, what happens? Does that break consensus or block production, or is it just a bit unpleasant?

**Danny**
* One second is most definitely fine. If you're taking two or three seconds, it's no longer appropriate.

**Dankrad Feist**
* Why would you not — I mean, my assumption is that what a miner does is continuously make new blocks, and whenever they have a block available they start mining on it. Can't you do a similar approach, where you start making blocks maybe four seconds before your slot time, and whenever you're done you start making the next block with the latest information, and send the current one to the beacon node so that it can immediately make a block?

**Peter Szilagyi**
* Yeah, but if I make a block with a specific timestamp and it turns out that the timestamp the validator actually requests from me is different, then I have to remake the block.

**Dankrad Feist**
* No, but the validator will still request the block with the timestamp of the time when it actually has a slot — that's deterministic, like you already said.

**Danny**
* Imagine the time sync between the beacon node and the execution engine being off by three seconds or something.
**Dankrad Feist**
* Right — so you can just tell it what timestamp it needs.

**Danny**
* Well, yes, but if the execution engine was opportunistically generating blocks for a slightly incorrect timestamp, and thus the incorrect slot, then once you ask...

**Dankrad Feist**
* No, it shouldn't do that. My idea is that when a beacon node sees it has a slot coming up, it notifies the execution engine, say, six seconds before, and then the execution engine begins making blocks with that timestamp — which is still a few seconds in the future, but that doesn't matter.

**Danny**
* So in the current functionality, you would just call assemble block several times leading up to the slot and take the best — the last — one you can get.

**Dankrad Feist**
* That would potentially give you more fees, but I don't think we need that.

**Danny**
* Oh, per time slot.

**Mikhail Kalinin**
* Yeah, I get it, so that's fine. There are certainly things to optimize here and to think about in the transition context.

**Nethermind**
* Yes, so what I meant to say is that I wouldn't put too many constraints in the spec on how long it should take to produce the block. Of course we can put in some max value that we expect, but I would consider this an implementation detail that can also vary — for example, depending on your hardware it can take longer or shorter to produce a block.

**Peter Szilagyi**
* Instead of using a single method saying "give me a block" and then scrambling to make one, you can split it into two methods: first a call to "prepare block", which says "I'm going to ask for a block with this specific timestamp within the next so-many seconds", so that the eth1 client can try to make the best block possible; and then, when you actually request the block, I will give you back the best one I have.
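Peter's two-call proposal could be sketched roughly as below. This is an illustrative mock-up, not an actual engine API: the method names `prepare_block`/`get_block` and the block shape are assumptions made for the example.

```python
# Hypothetical two-call block-production flow: `prepare_block` tells the
# execution engine to start optimizing for a given timestamp; `get_block`
# returns the best candidate so far and lets the engine discard scratch work.
class ExecutionEngine:
    def __init__(self):
        self._best = {}          # timestamp -> best block assembled so far
        self._preparing = set()  # timestamps we are still optimizing for

    def prepare_block(self, timestamp: int) -> None:
        # First call: "a block with this timestamp will be requested soon."
        self._preparing.add(timestamp)
        self._best[timestamp] = self._assemble(timestamp)

    def get_block(self, timestamp: int) -> dict:
        # Second call: hand back the best candidate and stop optimizing,
        # so the engine knows it can throw away its scratch work.
        self._preparing.discard(timestamp)
        return self._best.pop(timestamp, self._assemble(timestamp))

    def _assemble(self, timestamp: int) -> dict:
        # Placeholder for selecting and executing the best transactions.
        return {"timestamp": timestamp, "transactions": []}
```

The design point is the explicit stop signal: with pure polling the engine never knows whether another, better request is coming; with two calls, `get_block` marks the end of optimization for that slot.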
**Danny**
* Right — instead of polling, you just say, "Start working, and I'll ask you in a minute."

**Peter Szilagyi**
* The issue with polling is that when you ask for a block, I'm not sure whether I should keep making better ones or stop. Will you request once, twice, or 300 times? Polling is a little unpredictable, whereas if you make two calls, then at least I know: okay, I gave you my best block, I can throw away all the scratch work because it won't be used anymore.

**Danny**
* Yeah, that's interesting, and I think it's fair.

**Dankrad Feist**
* I think you can replicate that with polling as well: the eth2 node just polls, and whenever it gets a block it starts the next request, and uses the last one it got from that sequence.

**Danny**
* Yes, but then the execution engine doesn't know when to quit optimizing.

**Dankrad Feist**
* Well, it would, because you stop asking — I guess it potentially produces one more block than necessary, but that doesn't seem like a big deal.

**Peter Szilagyi**
* I don't believe so.

**Danny**
* But it's a constant check, right?

**Peter Szilagyi**
* What Geth currently does is: when I start mining on a proof-of-work network, I create a block and give it to the miners to start crunching on, but then some more transactions arrive and I assemble a new, better block, so I give that new block to the miners; then more transactions arrive and I create a third block, and I'll keep doing that.

**Danny**
* All right, so it's a continuous optimization, not just a discrete one.

**Peter Szilagyi**
* Basically, any time a transaction arrives there's a chance that I can make a better block, so I need a signal to stop making new blocks. Perhaps the signal could be a new head as well.
**Danny**
* Also, if the execution engine is more than a slot past the last call for a slot, you know that no one's going to ask for it — but then you're starting to make assumptions about time and the relationship between the two, which is probably not great.

**Peter Szilagyi**
* I suppose this is an open question for the spec, but I'd say: just ask for it once, and if I have a block I'll give it to you; if not, I'll make one and give it to you; and as long as it's quick enough, it shouldn't be a problem. All right, okay, cool.

**Mikhail Kalinin**
* Well, it's much clearer now, at least for me, so I think we can move on to consensus.

# [Consensus discussion](https://www.youtube.com/watch?v=ODcNpWiLASk&t=3190)

**Mikhail Kalinin**
* So yeah, on to the consensus side. I have a few things to address and some updates. The first thing is that there is an idea for an enhanced transition mechanism, which works basically like this:
* We have a transition epoch, and when that epoch occurs, the consensus node decides on the transition total difficulty. This could be done by taking the difficulty of the most recent block, multiplying it by, say, 10, setting this as an offset, and computing a total difficulty that will be reached at some point in the future.

**Danny**
* Take the most recent eth1 data, because it is understood to be available on the client right now.

**Mikhail Kalinin**
* That's what I was going to ask — which block to use. If we took just "the most recent block", it would have to be decided upon by everyone somehow, which requires an additional agreement mechanism; but we already have eth1 data voting, so that might be the right input for when the transition epoch happens, yeah.
* So we have the eth1 data that is in the beacon state, right — we can use its block hash to get the difficulty, and add the offset to the total difficulty of that most recent block, probably. The reason this is a good idea is that we get an exact point in time regardless of what the difficulty on the network turns out to be, and we preserve the total-difficulty mechanism, which has its benefits.

**Danny**
* And the transition epoch is basically a beacon chain fork, because that's the stage at which you modify the data structures to hold the execution payload (even though it is null at first), and then there's a fork that actually occurs.
* When the fork occurs, the actual change to the consensus code happens with a lead time before the actual transition and puts the new code in place; then the transition occurs. Doing it dynamically, as a function of that epoch, makes sense, because it also removes another thing miners can potentially play with — say if 75% of the miners go offline.

**Mikhail Kalinin**
* So the open question here is how to compute this transition total difficulty — what inputs to use — so we can think about it and come back to this discussion. I will also think about potential ways of doing it, in relation to the inputs we already have in the beacon state and the beacon block, and those we can get from the beacon node.

**Danny**
* Yeah. I guess the actual worst-case scenario with hard-coding it, rather than deriving it at the transition epoch, is this: say you set the total difficulty three months ahead, and miners actually sped things up — which is obviously difficult and unlikely — so that the transition total difficulty was reached prior to the actual forking of the code. Deriving it at the beacon chain fork prevents that kind of crazy case from happening.
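The dynamic derivation discussed above can be illustrated with a short sketch. The multiplier of 10 comes from Mikhail's example; the function names and the idea of deriving the value at the transition epoch are assumptions for illustration, not a finalized spec.

```python
# Illustrative sketch: derive the transition total difficulty dynamically at
# the transition epoch from the most recent known eth1 block, instead of
# hard-coding a value months in advance.
DIFFICULTY_MULTIPLIER = 10  # example offset factor from the discussion

def compute_transition_total_difficulty(latest_total_difficulty: int,
                                        latest_block_difficulty: int) -> int:
    """Extrapolate a total difficulty that will be reached in the future."""
    offset = latest_block_difficulty * DIFFICULTY_MULTIPLIER
    return latest_total_difficulty + offset

def transition_reached(current_total_difficulty: int,
                       transition_total_difficulty: int) -> bool:
    # Note: the only operation the consensus side ever needs on these
    # values is a comparison — no other arithmetic.
    return current_total_difficulty >= transition_total_difficulty
```

Because the offset is anchored to the chain state at the fork, miners speeding up (or slowing down) beforehand cannot make the transition point land before the consensus code is in place.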
**Mikhail Kalinin**
* Any questions about the transition process? Well, fine. The other thing to talk about is:

# [Execution Payload discussion](https://www.youtube.com/watch?v=ODcNpWiLASk&t=3463)

**Mikhail Kalinin**
* The execution payload size. The largest field here is the transactions list, which has a theoretical max size of up to 16 gigabytes at the moment, so we have to handle two separate situations: a few transactions with massive transaction data, and a lot of transactions with little transaction data.
* The reason is that there are two constraints — the number of bytes in each transaction and the number of transactions — and combining their maximums is why this 16 gigabytes is technically possible.

**Danny**
* To add some context: SSZ lists have to have a max size, because it comes into play in the merkleization rules and the structure of the tree. So when you take the max byte payload times the max number of transactions currently allowed, you get some crazy numbers.

**Peter Szilagyi**
* So, as far as I know, the devp2p network has a message size cap of 16 megabytes, but Geth restricts eth subprotocol packets to 10 megabytes, which implies that if anyone mines an 11-megabyte block, Geth would be unable to propagate it.
* If someone mines a 20-megabyte block on eth1, clients will be unable to propagate it under the current specs. That doesn't mean we can't upgrade, patch, or expand it; this is just a mental note, okay?

**Mikhail Kalinin**
* Well, I think the way to restrict this kind of thing is on the network layer, by simply restricting the size of gossip messages.

**Danny**
* Yeah, on the beacon block gossip you can have gossip validation requirements, and you can certainly manage it there — based, maybe, on a function of the gas cap and so forth.
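The gossip-layer restriction being discussed can be sketched minimally. The 10 MB figure mirrors Peter's note about Geth's eth-protocol limit; the constant name and function are illustrative assumptions, not an actual gossip spec.

```python
# Sketch: reject oversized payload messages at the gossip layer, before
# decoding or executing them, so a pathological multi-gigabyte message is
# never downloaded in full. The cap value here is an assumption.
MAX_GOSSIP_MESSAGE_BYTES = 10 * 1024 * 1024  # 10 MB

def validate_gossip_message(raw: bytes) -> bool:
    """Cheap pre-validation: drop messages over the size cap unconditionally.
    Gas-based checks can only happen later, after the body is decoded."""
    return len(raw) <= MAX_GOSSIP_MESSAGE_BYTES
```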
**Mikhail Kalinin**
* We already have these kinds of boundaries in the gossip, if you know what I mean.

**Danny**
* We do have validation criteria, and you could easily add this.

**Peter**
* Another thing to bear in mind: at least on the eth1 mainnet we've found that unless you have a very beefy link (e.g., on Amazon), you can run into trouble. For snap sync we're using half-megabyte packets, and I can request packets from quite a few peers simultaneously — and we've actually managed to overwhelm the local node with requests.
* So we've had timeouts not because the remote node isn't sending us the data fast enough, but because we simply overwhelm our own inbound bandwidth and it takes too long to bring it all in. In essence, once you get to this half-megabyte message size, things start to get weird.
* So, once again, I'm not sure what the long-term targets are for scaling, but we should definitely keep in mind that network messages should stay modest in size.

**Mikhail Kalinin**
* Okay, so the approach is to limit it on gossip; even the gas limit should work. But the gas limit can only be checked after the message is received — if there is a 16-gigabyte message, nobody wants to download it — so it makes a lot of sense to just refuse this kind of thing on the gossip network itself.

**Danny**
* Agreed.

**Mikhail Kalinin**
* Okay, the next thing is specific to the structure of the execution payload. We are going to have multiple transaction types on mainnet — or rather, we already have them since Berlin — so the default option for the consensus side is to not deal with these different transaction types and just use the opaque-transaction approach, which represents a transaction as an RLP string; that works from a consensus standpoint.
* It's just a string of bytes. This is what's already done, but we could also introduce a union type with a type selector, which for now would allow just this one variant — the string of bytes — but would give us forward compatibility with future upgrades, when we decide to move away from opaque transactions and represent them explicitly in the execution payload.

**Danny**
* Yes, that is the plan. The idea is that when you incorporate structured transaction types in the SSZ payload, you get a somewhat nicer proof structure than with only the opaque RLP payload; but for convenience we can do the opaque selector for now and then deprecate it in favour of structured union selectors in the future. I believe this is an idea from Proto — do you have anything to add?

**Protolambda**
* So the current SSZ spec defines a union type. We don't use the union type yet, but we can adopt it by defining a single prefix byte on the transaction, with a single selector for the opaque transaction covering all of the current types in their encoded form.
* I'm talking about the envelope here, which includes the inner selector that applies to the eth1 data. Beyond that, we'd like structured data for nice Merkle proofs, and for that we'd define further options in the union that are structured in SSZ; then we get this second byte, which is also a kind of selector, applying to all the new transaction types after the merge.

**Mikhail Kalinin**
* I think we should just do this at some point, so I don't think there's much to discuss here — if anyone has an opinion, let's discuss it offline.
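The opaque-selector idea above can be sketched very simply. The encoding here is an illustrative assumption — it shows only the one-byte-selector concept, not the actual SSZ union wire format.

```python
# Sketch of the "opaque transaction" union: a one-byte selector in front of
# the transaction payload. Selector 0 means "opaque bytes" (the RLP-encoded
# transaction as-is, including any EIP-2718 type prefix); future selectors
# could introduce structured transaction variants.
OPAQUE_SELECTOR = 0

def encode_opaque_transaction(rlp_bytes: bytes) -> bytes:
    """Wrap an RLP-encoded transaction in the union envelope."""
    return bytes([OPAQUE_SELECTOR]) + rlp_bytes

def decode_transaction(payload: bytes):
    """Dispatch on the selector byte; only the opaque variant exists for now."""
    selector, body = payload[0], payload[1:]
    if selector == OPAQUE_SELECTOR:
        return ("opaque", body)  # passed through to the execution engine
    raise ValueError(f"unknown transaction selector: {selector}")
```

The consensus side never needs to parse the inner bytes — that is exactly the forward-compatibility property being discussed: new selectors can be added later without changing how selector 0 is handled.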
# [Consider eliminating uint256 requirement on beacon-chain side](https://www.youtube.com/watch?v=ODcNpWiLASk&t=3970s)
**Mikhail Kalinin**
* The last thing is the uint256 in the beacon chain spec, which is used for total difficulty — currently around 72 bits, I don't remember exactly, which just exceeds uint64 — so we need to use something larger. The first option is to keep it, because it is not used in any arithmetic, only for comparison.
* The spec simply compares whether the transition total difficulty has already been reached or not, and yes, that could be done. The alternative would be to denominate it with lower precision somehow, but that would most likely necessitate a corresponding change on the execution engine side, because it is the one returning the total difficulty. It isn't likely to fail, but it does require additional work. I'm not sure which approach is preferable.

**Proto**
* I believe I missed the fact that you're looking for an encoding for a big integer in eth2.

**Danny**
* Basically, we've avoided bigint arithmetic in eth2 on the node side so far. Right now there would be a big integer, yes, but it's not going to be encoded — it's just received from the execution engine and compared to the constant.

**Mikhail Kalinin**
* No, it's not going to be encoded in SSZ structures.

**Nethermind**
* I'm sorry, but I don't understand why the execution engine returns some total difficulty to the consensus engine. Is this needed for the transition procedure?

**Mikhail Kalinin**
* Yes — the transition occurs after a certain total difficulty is reached.

**Danny**
* Right now the beacon node simply does not have bigint arithmetic, so the alternative would be for the total difficulty to be denominated in a uint64 with a bunch of the precision removed — and you'd still need a function that returns it with that lesser precision.
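Since the total difficulty is only ever compared — never used in arithmetic — a beacon node without bigint support can compare fixed-length big-endian encodings lexicographically. This is a sketch of that idea; the 32-byte width (uint256) and helper names are assumptions.

```python
# Sketch: comparing total difficulty values without bigint arithmetic.
# Fixed-length big-endian byte strings compare lexicographically in exactly
# the same order as the unsigned integers they encode.
TD_BYTES = 32  # uint256

def encode_td(value: int) -> bytes:
    """Encode a total difficulty as fixed-width big-endian bytes."""
    return value.to_bytes(TD_BYTES, "big")

def td_reached(current: bytes, transition: bytes) -> bool:
    """Byte-wise comparison stands in for integer >= on equal-length inputs."""
    return current >= transition
```

Note the caveat from the discussion: the value arrives from the wire in JSON, so it must first be normalized to this fixed-width form (e.g., from a hex string) before the byte comparison is valid.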
**Mikhail Kalinin**
* Yeah, so the question is how difficult it would be to implement uint256 on the beacon chain side, and if it's not too difficult, I'd like to leave it there.

**Danny**
* If any of the clients want to speak up, please do so.

**Terence**
* It's not too complicated for us to shift, and we do use bigint in some places.

**Meredith Baxter**
* Yeah, same as Terence — I don't see it being difficult.
* We already have bigint for eth1, so we can change it.

**Danny**
* Great. Let's ask the Lighthouse people as well, but until we hear otherwise, let's just act as though we can do a bigint comparison for this one little thing.

**Mikhail Kalinin**
* Yep, great. So let's just leave it as is, and if there's a problem, we can change it.

**Jacek Sieka**
* Okay — if it's only for ordering, you can do a byte-by-byte comparison of the encodings in contrast with comparing proper integers.

**Mikhail Kalinin**
* Oh yeah, right — but you will receive it from the wire in JSON format, I think. But yes, once it's encoded you can compare it like that.

**Mamy Ratsimbazafy**
* A lexicographical comparison — that works with a fixed-width hex type, but otherwise it's more difficult.

**Mikhail Kalinin**
* Okay, good. Anyway, we have about 15 minutes left — let's go to the Rayonism update. Proto, do you want to start?

# [Rayonism discussion](https://www.youtube.com/watch?v=ODcNpWiLASk&t=4269s)
**Proto**
* Sure. In the last week or so we've had a couple of these office-hours calls, which are more casual calls where you can stay on the cutting edge of Rayonism.
* We looked at the first devnet — how to plan the genesis — and also chatted with a few clients about how we move forward with the RPC, and now we have a genesis tool ready to go to prepare a test network.
* We have a guide for anyone who wants to set up their own testnets using this kind of tooling, and I think we should basically concentrate on the RPC — on upgrading to the new spec — and then we'll be ready for the first prototype devnet.

**Mikhail Kalinin**
* Thanks. I'd just like to go through client updates on where everyone is with regard to Rayonism. Maybe we can start with Geth.

**Peter Szilagyi**
* So the decision for the first version was that we're going to keep Guillaume's API — I mean, it will be tweaked and updated to whatever spec the new API is — but otherwise it will still be focused on directly injecting data into the chain, nothing else.

**Mikhail Kalinin**
* Yep, that's fine.

**Nethermind**
* So, we have an initial implementation that I am currently testing. I expect to finish testing and stabilizing it by tomorrow, and if any of the eth2 clients would like to engage in testing integration with the RPC, please contact me — I would be very happy to work on something like that, for example, tomorrow.

**Mikhail Kalinin**
* Cool, great. Teku should be ready tomorrow as well, so I think we can experiment with Catalyst and with Nethermind — just reach out. Thanks. Nothing from OpenEthereum or TurboGeth; Besu is starting to work on this spec as well. Okay, cool.

# [Consensus-layer discussion](https://www.youtube.com/watch?v=ODcNpWiLASk&t=4521s)
**Mikhail Kalinin**
* So let's go to the consensus clients. As I said, Teku should be ready by tomorrow, I guess; we'll test with Catalyst first, then try Nethermind. Does anyone else want to share their status?
**Terence**
* Yeah, I'm still not making much progress on the API side from my end — I'm still reviewing the changes, so I think once the API becomes more formalized I'll pick it up; it'll probably take me a while to catch up, but that's not too bad. Other than that, we built a faucet for Rayonism: it's fully configurable, it comes with ready React and Angular projects as a reference, and it's also dockerized.

**Mikhail Kalinin**
* Yeah, thank you very much for this faucet — it's all integrated, I think, in the first devnet; I'm just dropping in the guide on how to run the faucet you mentioned. Okay, Nimbus folks, do you have any updates?

**Zahary Karadjov**
* We're working on Rayonism now — we have a PR, and at this stage we're playing with Catalyst, but it's not there yet. We'll be ready for the first testnet, that's our target; we still have a little bit more work to do on the RPC interface between Nimbus and Catalyst.

**Mikhail Kalinin**
* Okay, great. Anyone from Lighthouse? Nobody's here. Anyone else want to give an update? Okay, great, thanks everybody.

**Nethermind**
* I have a question rather than an update: if possible, could you provide a rough estimate of the dates and plans for the devnet?

**Proto**
* Sure. The original idea was to start the devnet sometime in the first week of the hackathon, just as an experimental, short-lived thing — it's okay whether you join later or not — but this is the kind of opportunity where you just look at: can we try the RPC in something more like a shared devnet? So I'd just like to try and spin up some kind of prototype in the next week.
* I have this example configuration for the first devnet up in the Rayonism repository, which I'll share again in the chat. There I specify Monday as the eth1 genesis, and then later the actual eth2 genesis, because there's this delay in knowing the exact genesis state of eth1 — from there you can compute the genesis for eth2, and so on.
* I'd like to confirm this, and I'll probably wait for one or two more office hours to hear about client readiness.

**Nethermind**
* Thank you very much.

**Mikhail Kalinin**
* Any other discussions, questions, or announcements before we wrap up? I'm sorry for the trouble with this Zoom connection. Okay, thank you so much for coming — see you tomorrow, next week, next month, every time. Bye, everyone.


-------------------------------------------
## Speaking Attendees
**Mikhail Kalinin**
**Proto**
**Zahary Karadjov**
**Terence**
**Peter Szilagyi**
**Danny**
**Dankrad Feist**
**Guillaume**
**Mamy Ratsimbazafy**

-------------------------------------------

diff --git a/Merge/Meeting 04.md b/Merge/Meeting 04.md new file mode 100644 index 00000000..8f9ec2ab --- /dev/null +++ b/Merge/Meeting 04.md @@ -0,0 +1,529 @@

# Merge Implementers' Call #4 Notes

### Meeting Date/Time: Thursday, May 13th, 2021 at 13:00 UTC
### Meeting Duration: 60 minutes
### [GitHub Agenda](https://github.com/ethereum/pm/issues/316)
### [Audio/Video of the meeting](https://youtu.be/uzjhLPtvTMQ)
### Moderator: Mikhail Kalinin
### Notes: Santhosh (Alen)

# Agenda
- Rayonism updates ☀️
  - Nocturne devnet
  - Wrapping up Rayonism discussion
- Research updates
- Spec discussion
  - Consensus API standard
    - https://eth.wiki/json-rpc/API
    - https://ethereum.github.io/eth2.0-APIs
  - Execution
    - Different blocks with the same state root
  - Consensus
- Open discussions
  - Proposal to move the
call to an hour before the Eth2 Implementers call

# Intro
**Mikhail Kalinin**

Welcome to the Merge Implementers' Call #4.
* The first item on the agenda is the Rayonism updates, and we are currently running the Nocturne devnet.

**Mikhail Kalinin**
* The Nocturne devnet, which started yesterday, has reached finality and looks stable.
* There have been a few edge cases which we saw on this devnet, and there is also an issue with deposits — in particular with eth1 deposit voting. We are close to solving these issues and seeing the deposits processed.
* 8 teams are running a validator, and several community members are trying to break the devnet by submitting bad blocks and so forth — it is shaping up great.
* A couple of questions about the Nocturne devnet:
  * We've planned testing of transaction propagation — is anyone from the go-ethereum team on the call?

**Proto**
* It is a holiday in Germany, so most of the team will be offline.

**Mikhail Kalinin**
* Proto, you might know whether the PR that enables transaction propagation is about to be merged or has already been merged.

**Proto**
* I believe there is one PR from Gary that improves on some of these things, but I'm not sure about transaction propagation. The testnet will continue to run for a couple more days, so we can try again later.

**Mikhail Kalinin**
* Okay, that makes sense.
* The other question was about state sync, but I guess that was also addressed to the go-ethereum team, so let's just keep it. If someone wants to join Nocturne, you're free and welcome to do so. You can reach out to Proto or just drop a message in the Rayonism Discord channel and request ETH for a deposit — but we need the deposit issues to be resolved first.
* So that's the update for Nocturne. All right, Proto, do you want to add something about Nocturne?

**Proto**
* Well, about Rayonism in general, maybe — I think that's like the next step.
Let's [start talking about this](https://youtu.be/uzjhLPtvTMQ?t=222)
+* So with Rayonism, I think we should basically wrap up the hackathon kind of things and think of the merge more as something we are going to work towards for production. This basically means we want to do the rebase, as I'd like to call it if you're playing with git terminology: we have Altair and London first, this missing functionality which has been developed in parallel, but now it's time to try to layer the merge work on top of these upgrades and then implement the new API.
+
+**Mikhail Kalinin**
+* That sounds like the rough plan: wrap up Rayonism and then, while client implementers are focusing on Altair and London,
+we will continue to do some spec and research work, such as determining the transition phase. We'll do some proof-of-concept research on top of the infrastructure we got from Rayonism.
+* Thanks a lot to Proto for doing a tremendous amount of work on it.
+* We'll be back once Altair and London are almost finished.
+* We'll likely spawn another merge testnet, hopefully with state sync and with the new consensus API, which is going to be discussed and specced out during this period of a month or two. That's my understanding, and I think this kind of plan makes a lot of sense.
+
+**Danny**
+* Yeah, I think we'll even expand the consensus test vectors for the merge; there's a ton of work on the spec tests right now, and it'll definitely be ready for the next wave of development.
+
+**Mikhail Kalinin**
+* Yes, it had been planned to deploy devnets and focus on sharding during Rayonism. This work will continue post-Rayonism because it is not finished, and hopefully, as I already said, we now have all the tooling and infrastructure: block explorer, scripts, dockers, to spin up devnets and testnets easily.
+* Anything else regarding Rayonism?
+
+**Danny**
+I agree with Mikhail that Proto deserves kudos for putting in so much work, as do all the other contributors. It's great to see the devnet up and running.
+
+**Mikhail**
+* Yeah, we'll have seven clients that have implemented the initial merge stack, which is an awesome outcome.
+* Which clients are not present? OpenEthereum is missing, and I'm guessing TurboGeth is as well. They could catch up with the changes from go-ethereum, but I'm not sure if that's possible right now.
+* Micah asked a question; I'm not sure how to react to it: will OpenEthereum be able to do the merge?
+
+**Micah Zoltu**
+* It's fine if no one knows the answer. I'm just curious if anyone has any clues.
+
+**Tim Beiko**
+* I believe they are still making a decision about it, and I do not wish to speak on their behalf.
+
+**Micah Zoltu**
+* Okay
+
+**Tim Beiko**
+* Yeah, if someone has learned something it makes sense to share here, but if not, let's skip this.
+
+**Mikhail**
+* So, I think that's it for Rayonism. We're now moving on to research updates. (9:37)
+
+* One update from my end: I was supposed to start working on the transition process, but I didn't have enough time to create any readable spec or do the research I was supposed to do; I was actually a bit busy with Rayonism and other stuff, but I guess we'll start next week.
+Is there any other research news?
+
+**Danny**
+* Mikhail, the main piece of this is to adjust the transition mechanism to use a dynamic total difficulty, dependent on the fork?
+
+**Mikhail**
+* That's it. I was going to look into how difficulty could change over the voting period and what would be the correct way to extrapolate the total difficulty that we could anticipate.
+
+**Danny**
+* Got it! Yeah, let me know when you open that up so I can assist you.
+
+**Mikhail**
+* Sure. So it's reasonable to use the eth1 data voting to get the block hash that we'll use for extrapolation, because otherwise we'd need to reach consensus on this block hash first, which doesn't make much sense, but we'll see.
+
+**Mikhail**
+* Well, that's fine. Are there any other research updates?
+* Withdrawals, perhaps?
+
+**Dmitry Shmatko**
+* Yeah, I could say a few things. I received positive reviews from the previous call, and I made some changes, editing the partial withdrawals section. It appears viable, but it will be restricted to validators with BLS withdrawal credentials, so it's very limited for use in shared pools. I think we cannot do anything on-chain interfering with it, but something like Shamir's secret sharing over BLS could work in off-chain pools. You could check the updated doc with the rewards withdrawals section and provide me with some feedback, thanks.
+
+**Danny**
+Will do
+
+**Mikhail**
+* Anything else before we move on?
+* Let us now move on to the spec discussion
+
+# [Spec Discussion](https://www.youtube.com/watch?v=uzjhLPtvTMQ&t=796s)
+**Mikhail**
+* The first item is the consensus API standard, and I think it's a good time to open this can of worms and start the conversation. I'd just like to share my opinion off the top of my head.
+* On that, we'd like to have a discussion about how the consensus API will be supported by execution engines and which underlying protocol will be used, and once we have decided on this protocol, we will be free to design the specific endpoints and move forward. So far we have the JSON-RPC API, which I believe most people here are familiar with, and the eth2 API.
The beacon node API is a REST API over HTTP. In general I'm leaning towards the REST API: it's convenient, it has a lot of tools, it can be secured, and so forth. The argument for using JSON-RPC is that it's already implemented in all of the eth1 clients and we would only need to reuse the code. But because of the close relationship between the consensus layer and the execution layer, I believe that implementing this from the ground up with a REST approach makes sense, also to avoid bugs in a re-implementation that could damage security. Anyhow, let's just discuss it: any opinions on whether we should use JSON-RPC for this consensus API?
+
+**Lukasz Rozmej**
+* I have a question: can you give me any more specific examples of what we gain if we re-implement it? Simply pointing at tooling doesn't really tell us anything, so let's concentrate on what it can get us and then decide whether or not to do it.
+
+**Mikhail**
+* Yeah, that's fair. Danny?
+
+**Danny**
+* I'm going to pull up an old comment from when Peter and Martin were debating the API between a beacon node and a validator client: Peter jumped in and gave a long argument for using RESTful HTTP instead of JSON-RPC, and regretted the choice of JSON-RPC in current eth1 clients. Here it is; I won't go through it all here, but if you're interested, take a look.
+* That, I believe, is extremely relevant when making these kinds of decisions. Obviously, one of the key disadvantages of changing this sort of thing is adding support for another API type on clients that already serve JSON-RPC.
+
+**Micah Zoltu**
+* You just said RESTful HTTP; did you say that, or is that a mismatch? Are we talking about HTTP, which means websockets are out, or is REST over websocket still considered REST?
Websockets are already on the table in this situation.
+
+**Danny**
+* Well, I mean REST as the design pattern.
+
+**Micah Zoltu**
+* I'm curious because I'm a big fan of REST, but I'm also a big fan of websockets, especially for what's essentially going to be a long-lived connection like this. Websockets make more sense in my opinion, so I'd be a huge advocate for REST over websocket, where I'd be a much weaker advocate for doing all the work to do REST over HTTP.
+
+**Danny**
+* I'm not going to respond because I don't have enough context.
+
+**Mikhail**
+* Could you explain why websockets make so much sense in this case, Micah?
+
+**Micah Zoltu**
+* Because, correct me if I'm wrong, there will be a fair amount of traffic over this channel, and we want to make sure we're not being inundated by HTTP overhead. With websockets you spin up the connection once at the start and leave it open, so the overhead per message is very low compared to HTTP, while with HTTP you can end up with more overhead from the headers than from the actual payload.
+
+**Mikhail**
+* Yes, but we have a lot of tooling that actually works over HTTP.
+
+**Danny**
+I think you're overestimating the amount of communication and overhead there; the number of requests that have to be sent and the payloads are probably pretty small.
+
+**Dankrad Feist**
+* I mean, it doesn't seem like the headers should be anywhere near as big as, say, one block.
+
+**Micah Zoltu**
+* So, for one, I'd vote for HTTP because it's just so easy to do things like curl requests and stuff like that, while every other API style has far stronger blockers if you just want to experiment and do some quick stuff.
+
+**Mikhail**
+* You meant to say you're on the side of REST, that you're in favor of REST? Since JSON-RPC is also served over HTTP.
+
+**Dankrad Feist**
+Yes, I agree with HTTP REST.
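To make the trade-off concrete, here is a rough sketch of what the same call looks like in the two styles being debated. The method name `consensus_newBlock` and the REST path are purely illustrative (the actual endpoints were still being designed at the time of this call); the point is only the shape of each request.

```python
import json

def as_json_rpc(method: str, params: dict, request_id: int = 1) -> str:
    """Wrap a call in a JSON-RPC 2.0 envelope, POSTed to a single endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": [params],
    })

def as_rest(resource: str, params: dict) -> tuple[str, str, str]:
    """Express the same call as a REST request: verb + resource path + body.

    The path prefix is hypothetical, loosely modeled on the eth2 API style.
    """
    return ("POST", f"/eth/v1/consensus/{resource}", json.dumps(params))

block = {"parentHash": "0xabc", "transactions": []}

rpc_payload = as_json_rpc("consensus_newBlock", block)
verb, path, body = as_rest("blocks", block)
print(rpc_payload)
print(verb, path, body)
```

In JSON-RPC the routing lives inside the payload (`method`, `id`), which is what lets it run unchanged over HTTP, websockets, or an IPC socket; in REST the routing lives in the verb and path, which is what makes curl-style experimentation and standard HTTP status codes natural.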
+
+**Lukasz Rozmej**
+* If we want to do REST and websockets together, we may have to simulate some parts of REST in websockets, such as routing, which requires some extra encoding and so on, since REST was designed primarily as an HTTP API, if I'm correct?
+
+**Mikhail**
+* Yes. And Micah, sorry.
+
+**Micah Zoltu**
+* Yes, I'm going to back off websockets if there's not a lot of traffic, and I wouldn't be surprised if I'm overestimating the volume of traffic here. One of the arguments for JSON-RPC is that it allows clients to reuse code, but they'll need to open a server on a different port; does that change how much code they're able to reuse?
+* Have our clients been built in such a way that it's easier to spin up another copy of the same type of server inside the client, or will it be just as easy to spin up a different kind of server?
+
+**Paul Hauner**
+* Oh yeah, I wasn't trying to answer that question; I put my hand up before it, but I'm guessing you're right.
+
+**Lukasz Rozmej**
+* So, to answer Micah's query: it's very simple for us to spin up a second port; we're already doing it for websocket communication, so simply adding another one will suffice.
+
+**Micah Zoltu**
+* Is it substantially easier for Nethermind to spin up another JSON-RPC server, or is it just as easy to spin up a REST server inside Nethermind?
+
+**Lukasz Rozmej**
+* A little easier to spin up only the second port. I don't think doing REST will be that difficult, but with REST you have to, for example, correctly use HTTP status codes for communication; that's part of REST, carefully designing the responses, error responses, etc., which are more or less specified in JSON-RPC already.
+
+**Mikhail**
+* Paul, do you have any other thoughts?
+
+**Paul Hauner**
+* So, as far as I can tell, one of the things we haven't figured out about the interactions between the consensus and execution clients is how to deal with them syncing with each other. For example, if your consensus client is long-running and you wipe the database of your execution client, how do we get them back in sync with each other? Has this been fleshed out somewhere? It seems like it could be one of the most important factors in determining the communication style we use, and I'm especially interested because REST can be restrictive at times; I need to think it through, but I'm not sure whether we'd start to run into problems with RESTfulness here.
+
+**Mikhail**
+* In this situation, I don't think REST adds much more overhead in terms of sync than standard JSON-RPC, but that's just my view.
+
+**Micah Zoltu**
+* Is the relationship between the two stateful at all? Could you hypothetically have three execution clients on the back end talking to one consensus client and everything would be fine, or is there some kind of assumed state?
+
+**Mikhail**
+* Whether it's stateful depends on the design of the execution client or the execution engine. If we have three servers in front of the execution engine that processes blocks, that's one design; if we have the monolithic architecture that we have today, that's another. So there can be a one-to-one relationship, or one-to-many, or many beacon nodes to one engine.
+
+**Dankrad Feist**
+* It doesn't have to be stateful; the only state would be whether or not the execution node has obtained the block, but once it has, I believe that should be the only state.
+
+**Danny**
+* The execution engines rely on a notion of the current head for a lot of things. That could be changed, and you could instead provide a more dynamic representation of the block tree and several different potential heads, but currently, when you set the head, there are some things that are optimized in terms of what state is available and which pending blocks are being created, that kind of stuff.
+
+**Micah Zoltu**
+* Is it possible to design it in a different manner?
+* It would be fantastic if this could be a stateless relation; for example, can we make it so that when the consensus client makes a request of the execution engine, it gives the execution engine all the state it needs to respond correctly at that point in time?
+
+**Danny**
+* I mean, you certainly can, and I believe inserting a block carries the state that it requires, i.e., you either have the previous block or you don't. And assembling a block, I believe, right now tells you the head you want to assemble on, so the information is there again. But there are still likely some optimizations and reuse of how these things work today around what the head becomes; the other methods I think would work fine, but it doesn't reuse existing code quite the same.
+
+**Micah Zoltu**
+* Okay, so if a consensus client asks an execution client to build it a block, it will give it enough information to either get a correct block or an error saying "I can't build that because I don't know about this head you're talking about," but it won't give back an incorrect block.
+
+**Mikhail**
+* Okay, so let's say there are three execution engines and one consensus client in front of them. In order to stay in sync and maintain the full state and execution chain, this consensus client will have to feed all three with new blocks and any other information required.
+
+**Micah Zoltu**
+* So, effectively, whatever routing you have there will have to do a broadcast: it receives a new block or a new set-head from the consensus client and then broadcasts it down to all of its related execution clients so that they all update themselves, right?
+
+**Mikhail**
+* Yeah, like set head and new block.
+
+**Danny**
+* I believe one of Paul's concerns is: if consensus says "insert block" but the execution engine doesn't already have the parent, what is the communication protocol to recover from that? Does the consensus just walk backwards until the execution engine has what it's supposed to have and then insert from there, or is there some other more dynamic recovery? I don't think we've quite worked that out, and that's the "how are these two things kept in sync" question: what happens if one shuts down and then comes back up without a database? That kind of stuff we haven't worked through.
+
+**Mikhail**
+* Yeah. Actually, before an assemble block with some parent hash is sent, we should have seen a new block with that parent hash, right? If that wasn't the case, then there is an inconsistency between the beacon chain and the execution chain, at least if we're talking about one consensus engine and one execution engine. If this is the infrastructure where you have several beacon nodes and a few execution engines or something like that, then yeah, probably that could be the case.
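The fan-out Micah describes could be sketched roughly as below. This is a toy model, not any client's actual code: the `new_block`/`set_head` method names and the drop-on-failure policy are illustrative stand-ins for whatever RPC and recovery rules the consensus API ends up specifying.

```python
class EngineStub:
    """Stand-in for one execution engine behind the proxy.

    A real deployment would make an RPC call here; this stub just
    records what it was told so the fan-out logic can be exercised.
    """
    def __init__(self, name):
        self.name = name
        self.head = None
        self.blocks = []

    def new_block(self, block):
        self.blocks.append(block)

    def set_head(self, block_hash):
        self.head = block_hash


class BroadcastProxy:
    """Fan one consensus client's calls out to every execution engine."""
    def __init__(self, engines):
        self.engines = list(engines)

    def _broadcast(self, call):
        healthy = []
        for engine in self.engines:
            try:
                call(engine)
                healthy.append(engine)
            except Exception:
                # Drop a failing engine from rotation; it could rejoin
                # once it has resynced itself.
                pass
        self.engines = healthy

    def new_block(self, block):
        self._broadcast(lambda e: e.new_block(block))

    def set_head(self, block_hash):
        self._broadcast(lambda e: e.set_head(block_hash))


engines = [EngineStub("engine-a"), EngineStub("engine-b"), EngineStub("engine-c")]
proxy = BroadcastProxy(engines)
proxy.new_block({"hash": "0x01", "parent": "0x00"})
proxy.set_head("0x01")
```

The point of the sketch is that every state-changing call must reach every engine, which is exactly why an engine that misses a message (and so no longer has the expected parent) needs some recovery path.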
+
+**Paul Hauner**
+* It seems to me that a test case would be: if you had one consensus client, then a proxy, and then three execution clients behind that proxy, the walking-back procedure that Danny was talking about just doesn't make sense if you start bouncing off random execution clients behind the proxy.
+* It makes me think that maybe REST isn't the thing we should be chasing after. I mean, naturally, I'd prefer REST just because I like it, but this feels a little bit more like an RPC to me, like a one-to-one RPC.
+
+**Mikhail**
+* Yeah, I get it. What I don't like about JSON-RPC is that it has custom error codes and custom error messages, but as previously mentioned, we're all familiar with that. One thing to consider here is that all eth2 clients already support JSON-RPC and have a JSON-RPC client to fetch deposits and get the eth1 data for the votes, so we won't have extra overhead and we won't have to implement a new client for this either.
+
+**Paul Hauner**
+* Thank you, I was just about to mention that the new design has these two different mechanisms for consensus and execution.
+* I think separate processes is the direction we're going for now just because it makes sense, but a world where they're wrapped in the same process, not necessarily maintained by the same team, but presented as a single binary, seems appealing to me. Maybe something like JSON-RPC is nice there, because we might start to use something like an IPC socket as a comms transport between them, and if we're doing something like that, instead of having two processes, that works very well for them to talk to each other, whereas having an HTTP client/server between the two inside the same process is a little odd.
+
+**Mikhail**
+* And when you say binary, are you referring to a binary protocol?
+
+**Paul Hauner**
+* I'm talking about binaries in the sense of an executable, like a .exe file on Windows.
+
+**Mikhail**
+* Okay.
+
+**Micah Zoltu**
+* Correct me if I'm wrong, but if the consensus client says, "Hey, assemble me a block with this parent," and the execution client doesn't have that parent, is it correct that the execution client then goes to its own network to get that block rather than responding to the request? So it is still one-way communication, request/response, rather than two-way communication.
+
+**Mikhail**
+* There are two options. One is returning an error, which is a database inconsistency error, since these two parts are actually one client and their data should be consistent. The other option is to try to download and pull this block, not from gossip, but from the eth network protocol. But yeah, what I had in mind is that it just responds with an error: there is no such parent block.
+
+**Rai**
+* Do you still request blocks through the eth protocol, or did we remove that as part of the networking changes to make the execution engine a little more efficient?
+
+**Mikhail**
+No, we didn't take this out, but we did cut out block gossip.
+
+**Danny**
+* It can also be used because of how initial sync, especially state sync, is performed. And there's certainly an interesting design decision here: if the execution engine detects any sort of inconsistency because requests are being made for things it doesn't know about, it can use that endpoint to go and fill in the unknown stuff and use the peer-to-peer network to get back in sync with the consensus node, which is fascinating because it probably works right out of the box, but it's also strange.
+
+**Micah Zoltu**
+* I like it because it makes it so that the consensus engine says, "Hey, do this thing for me," the execution engine says, "I can't," and then it basically fixes itself on its own; it's essentially a self-healing system. And if you had a proxy server between the two, it might notice "oh, we got a consistency error, take that execution engine out of rotation because it's down for a while, and we'll try it again later," and in the meantime fall back to a backup execution engine: it had three in rotation, now it's got two, or whatever. The whole machine ends up being reasonably self-healing if the execution engines can heal themselves when they get a request that means they're out of sync, right?
+
+**Danny**
+* Yeah, I like it as well. I mean, it gets to use exactly what the execution engine does today to heal itself when it learns about things it doesn't know about. But it can still complicate things, especially with one-to-one communication: if you talk to an execution engine locally and it doesn't know something, you just kind of sit back and hope that it knows about it in the future, because it presumably heals itself, and the consensus side can't be as proactive as it would like to be.
+
+**Micah Zoltu**
+* I see, because it'd just have to poll until it returns a success, since it's a one-way communication channel, right?
+
+**Mikhail**
+* Yeah. The case we're talking about right now is: if a consensus client asks for assembling a block on top of a parent that isn't present in the execution chain, it means that something bad happened earlier, when the consensus client imported the parent of this block, because the execution engine's response at that point should have told us whether the parent was valid.
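A minimal sketch of that self-healing loop, under stated assumptions: the `UnknownParent` error and all method names here are hypothetical, chosen only to illustrate the pattern of "reject, backfill out-of-band, succeed on retry" that the call discusses.

```python
class UnknownParent(Exception):
    """Hypothetical error: the engine doesn't know the requested parent."""


class ExecutionEngine:
    def __init__(self):
        self.known = {"0x00"}  # pretend genesis is always known

    def assemble_block(self, parent_hash):
        if parent_hash not in self.known:
            # Signal the inconsistency to the caller, and start healing
            # out-of-band by fetching the missing ancestor from peers.
            self._backfill(parent_hash)
            raise UnknownParent(parent_hash)
        return {"parent": parent_hash, "transactions": []}

    def _backfill(self, block_hash):
        # Stand-in for an eth-protocol request to the p2p network;
        # here it succeeds instantly.
        self.known.add(block_hash)


def request_block(engine, parent_hash, retries=3):
    """Consensus side: poll until the engine has healed, as Micah suggests."""
    for _ in range(retries):
        try:
            return engine.assemble_block(parent_hash)
        except UnknownParent:
            continue
    return None
```

This also shows Danny's caveat: the consensus side can only retry and wait; it has no way to push the missing data proactively over this one-way channel.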
+
+**Micah Zoltu**
+* The reason I keep harping on this is because I believe, pragmatically, what we'll see is a lot of people running validator clients while relying on third-party providers for the execution client, because the execution client is too expensive to run. I run a few, and they're not cheap and they're not easy.
+* You basically have to run an operations center to run an eth1 client, an execution client, right now, and that's not going to change in the immediate future. We're working on it, but that's a long way off, so I think realistically we'll see people going to places like Infura and Alchemy for their execution client while they run their own consensus client.
+* And in that case, we have exactly this situation where you reach some proxy server and the proxy routes you to one of 100 execution clients, so I suspect that's going to be the typical scenario rather than the rare one, which is unfortunate, but I suspect it's the truth.
+
+**Dankrad Feist**
+* I think "operations center" is a little exaggerated, though I agree it's a big problem. But also, as a comment from a research perspective, we're thinking about how to change this, like using proof of custody, where we make it necessary for people to run their own execution client; we want to make outsourcing very hard to do.
+
+**Zahary Karadjov**
+* We have users at Nimbus who run both clients on a Raspberry Pi.
+
+**Micah Zoltu**
+* I've heard of such people, but I'm not sure how they do it. I rent a server, and I'm struggling to keep it running.
+
+**Paul Hauner**
+* Yes, we sometimes fail to hold geth up on a box with eight gigabytes and four cores, but it sometimes works perfectly.
+
+**Micah Zoltu**
+* Just a quick question: if everyone agrees that we're going to brick people who aren't running both execution and consensus clients, then yes, I think we should design for the one-to-one connection and focus on making that good and smooth, if we think that, at least for the time being, that's what we're going to allow for and enable.
+* But if we want to let people do things like use Alchemy and Infura, then I believe we should design for that, because I believe that will be more popular. So maybe the first question is: which one are we actually designing for?
+
+**Danny**
+* Sorry, I'm getting a lot of feedback. For validators there's an explicit desire to put a proof of custody on execution so that it's not outsourceable, but for users in general there are all sorts of design considerations: running a beacon node and getting proofs about execution state, or running a light beacon client and not running execution at all, or several different versions of that. So it's not just the validator that we're designing for here.
+
+**Dankrad Feist**
+* Maybe to add: the one-to-one design might include things like secret shared validators and the like. We should also consider that, because it might make sense, for example, to run secret shared validators where you have four separate beacon nodes but only one execution node; designs like that should be made possible, so I wouldn't necessarily say it's strictly one-to-one.
+* Yes, that's correct; sorry, go ahead.
+
+**Paul Hauner**
+* I was going to say, if we look at one-to-many, don't we have this problem: the idea is that if a consensus client requests a block from the execution client and the execution client doesn't know the parent, it goes and tries to find the block itself, but the execution client can't rely on blocks being valid unless it can verify them with a consensus client, right? If it gets a request for a block, it can presume that it's canonical, but that kind of breaks down when you get to Infura, where you'll have arbitrary people spraying requests at it; the validity weighting there is independent of consensus.
+
+**Danny**
+* So you might tell it to follow an execution chain, and it would be valid in terms of execution parameters, you know, the EVM transformation is correct, but any consensus outer layer on top of that is not going to select that chain, even if it is a valid set of transactions.
+
+**Paul Hauner**
+* Wouldn't it be a trash can by then? Anyone could fill it up.
+
+**Danny**
+* Completely; if you open it up to the point that anybody can trigger anything, I believe that's a lost cause.
+
+**Mikhail**
+* Yeah, that's what I was trying to suggest: it has to be a consensus block first, so it can't just get the execution block hash and heal itself.
+
+**Danny**
+* It's possible if it's a trustworthy relationship. I mean, if Infura is running execution layer clients and not getting any view into the consensus layer, I think they'd have to build their own trust model here on these endpoints. I don't think you can open up any of this stuff to arbitrary requests regardless.
+
+**Mikhail**
+* Returning to JSON-RPC versus the REST API: are there any other arguments for or against JSON-RPC? Does anyone have anything to add?
+
+**Paul Hauner**
+* I have the impression that we aren't quite at that point yet, that we still don't understand the nature of the communications between the two things. I believe that what Micah spoke about, one-to-one or one-to-many, is probably what we need to be thinking about in abstract terms before we start picking protocols, but maybe I'm wrong.
+
+**Danny**
+* Yeah, I'd tend to agree that these one-to-one, one-to-many, and many-to-many questions, as well as the staying-in-sync question, should be poked at for at least a week or two to see whether the current communication protocol is sufficient; whether it is or is not would tell us what we want to do here. I mean, my gut tells me that RESTful HTTP is better, based on what I already know, but I believe there is more unknown.
+
+**Paul Hauner**
+* If I had my way, I'd like to see it be one-to-many RESTful HTTP, because I think that's pretty versatile; that'd be cool to aim for, I think.
+
+**Danny**
+* Proto, you had a small preference for RESTful HTTP because of the authentication model. Is there something you want to share before we move on?
+
+**Protolambda**
+* So I think separation between the two different RPCs is really important for security and just stability. I think in the current design there are a lot of assumptions on the eth1 connection, the existing connection used for deposit data fetching and sync, and in this testnet it's really been a struggle mostly to work around these assumptions to make it stable. I think starting with a new connection that is based on consensus and is separated and protected is simply a better approach.
+
+**Micah Zoltu**
+* If I understand you right, you're saying that by using a different protocol, we can almost guarantee that we won't have clients with bugs that cause bleed between the two.
+
+**Protolambda**
+* Yeah. Inside the JSON-RPC, it's the client that serves it, and the clients that fetch from it, that have these current assumptions around deposit data sync, and at the same time we'd be mixing it up with the previously existing code; I think you just increase the surface for bugs in the consensus API.
+
+**Paul Hauner**
+* So, Proto, are you attempting to make a case for dedicated deposit endpoints on execution clients?
+
+**Protolambda**
+* I think that's a better idea as well. We've seen bugs in the receipt logs and whatnot, and something this important breaking would be bad; I would not mind separate endpoints.
+
+**Mikhail**
+* Is there anything else you'd like to say, Lukasz?
+
+**Lukasz Rozmej**
+* A little bit on the side: I heard some talk about one-to-many client connections, is that right? We were also thinking about making it many-to-many, so an arbitrary number of eth2 clients could talk to one eth1 node and vice versa. This would require some additional work on our side to enable: we would have to differentiate the clients and keep some state for them; some block tree info about the current state and the transaction pool would need to be separate, but the rest could probably be shared. That could be a good way to reduce resource use, because if each eth2 validator node needed its own eth1 node, that could be quite a big burden. If we can share each eth1 node between, say, 10 or 100 eth2 nodes, it will be less of a pain and easier for providers of that infrastructure, for example.
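The partitioning Lukasz describes can be sketched as follows. This is an assumption-laden toy, not Nethermind's design: it supposes that only the head pointer and the transaction pool need to be tracked per consensus client, while the block store is shared across all tenants.

```python
class SharedExecutionNode:
    """One eth1 node serving many consensus clients (the many-to-many idea).

    The block store is shared; each registered consensus client gets
    its own head pointer and its own transaction pool.
    """
    def __init__(self):
        self.blocks = {}   # shared across tenants: hash -> block
        self.tenants = {}  # per-consensus-client state

    def register(self, client_id):
        self.tenants[client_id] = {"head": None, "tx_pool": []}

    def new_block(self, block):
        # Importing a block once benefits every tenant.
        self.blocks[block["hash"]] = block

    def set_head(self, client_id, block_hash):
        # Head choice is per-tenant: two beacon nodes may follow
        # different forks over the same shared block store.
        self.tenants[client_id]["head"] = block_hash


node = SharedExecutionNode()
node.register("beacon-1")
node.register("beacon-2")
node.new_block({"hash": "0x01", "parent": "0x00"})
node.new_block({"hash": "0x02", "parent": "0x00"})
node.set_head("beacon-1", "0x01")
node.set_head("beacon-2", "0x02")
```

Keeping the head per-tenant is what lets one execution node serve beacon nodes that temporarily disagree about the fork choice, which is the crux of the resource-sharing argument.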
+
+**Mikhail**
+* Unless there are any more arguments, I believe we can conclude: we are already using JSON-RPC for the consensus API, and we will continue to do so for the proof-of-concept and development phase. That being said, it is yet to be determined what the requirements for this communication protocol are with regard to the sync process, so we'll collect more inputs on this question and get an answer. Okay, let's move on to the next item, spec discussions and execution ([51:41](https://youtu.be/uzjhLPtvTMQ?t=3101)). First and foremost, we encountered an interesting edge case with Catalyst on the Nocturne devnet. The case is the following: suppose we have a block, and we have two children of this block, and both children have the same state root, which is legal because we no longer have miner rewards, and these two blocks can have empty transaction lists.
+* What Catalyst does is reject the second block with an error, and this behavior is part of a mechanism that defends against state mirroring attacks.
+* I'm guessing no one from go-ethereum is here to address this specific behavior, but I'm sure Proto can add more here.
+
+**Protolambda**
+* So this state mirroring attack really just applies to long-range attacks, beyond 100 or so blocks. On the testnet, when blocks are made without many transactions, it's very common to have the same state roots, since rewards are issued in the consensus protocol and not in the execution protocol, so you end up with the exact same state. Maybe we could redesign this so that we have a unique state root per block, but this would be a change on the eth1 side.
+
+**Danny**
+* Okay. Does anyone know of a similar issue?
+
+**Mikhail**
+* Do any other eth1 clients have a problem with, or any protection against, having two blocks with the same state root?
+
+**Danny**
+* Is this an issue?
While there is an error from geth, does it currently harm the functionality?
+
+**Protolambda**
+* It fails the insert when it adds a side chain and reorganizes the blocks.
+
+**Mikhail**
+* Yes, it just rejects the block.
+
+**Rai**
+* So, at the very least, we need to read Martin's write-up right away to see if it affects us, because we don't know what the behavior is right now. It's on my to-do list.
+
+**Danny**
+* Okay, so if the consensus side tries to insert what the execution side sees as a block it already has, and it just returns and says "okay, I have that already", and then you do a set-head (assuming the method exists), will there be much of an issue here? Essentially, and we talked about this a little bit, two different beacon chain forks might point to the same underlying execution-layer chain, and if you reorg from one to the other you set the head and point to the same place, and the execution layer probably doesn't care. I think there are only some minor things to work through here, but I don't suspect that we will really need to enforce that every beacon-chain execution-layer root is distinct across forks.
+
+**Protolambda**
+* The reason we protect against this attack is to optimize the way we sync these execution payloads in the eth1 client: if you can trust the state root, then you can basically skip ahead. With this kind of long range mirror attack (I don't know the details) you may skip this validation, and even though the state root is the same, the block contents could be different, and then you could get into a dangerous kind of sync scenario.
+
+**Danny**
+* What exactly do you mean when you say that the contents are different?
+
+**Protolambda**
+* If you optimize to trust the state root, you could run into a problem where, when you reorg and accept a block because its state root is the same, your block contents may not be validated correctly.
+
+**Lukasz Rozmej**
+* I have a question, we're talking about mirrored state attacks, right? I believe it's related to pruning, and in the eth1 execution engine we'll only prune once a block has been finalized by the eth2 consensus, since we can't prune before that.
+
+**Mikhail**
+* Yeah, I agree with you on that. Also, I think this is not related to potential state trie pruning implementations in general. I could be wrong here, and it's probably better to ask the go-ethereum team, but from what I understand it's related to how geth does state trie pruning; this kind of attack is specific to geth and to this particular pruning algorithm. When they have a side chain that is not executed until it reaches a greater total difficulty than the canonical chain, they switch to the side chain, and if there is a gap where they can't retrieve the state because it has been pruned, they trust the portion of the chain they can't execute; that's where the state mirroring appears. So, is it a result of reorganizations? Yes, during reorganization and as a result of pruning.
+
+**Lukasz Rozmej**
+* Yeah, so if we don't prune before the block is finalized, we won't have this issue.
+
+**Mikhail**
+* In the context of execution on the beacon chain, I don't think we need to make state roots unique for each block; this is just an edge case that appeared on the Nocturne devnet. It's a signal to consider that the state root is not unique for each block, and to keep that in mind for further design, testing, and so forth.
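The edge case discussed above comes down to block identity: with no miner reward and empty transaction lists, sibling blocks can share a post-state root, so any logic that keys blocks on state root alone conflates them, while keying on the full block hash does not. A toy sketch of that distinction, with hypothetical header fields and SHA-256 standing in for the real header hashing:

```python
import hashlib
import json


def block_hash(header: dict) -> str:
    # The block hash commits to the whole header (parent, state root, tx root,
    # and more), not just the post-state root. SHA-256 over canonical JSON is
    # a stand-in for the real header hashing.
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()


genesis = {"parent": None, "state_root": "s0", "tx_root": "empty", "number": 0}

# Two children of the same parent, both with empty transaction lists and no
# miner reward, end up with the same post-state root, yet they remain distinct
# blocks as long as any other header field differs (here a hypothetical
# `random` value).
child_a = {"parent": block_hash(genesis), "state_root": "s0",
           "tx_root": "empty", "number": 1, "random": "a"}
child_b = {"parent": block_hash(genesis), "state_root": "s0",
           "tx_root": "empty", "number": 1, "random": "b"}

assert child_a["state_root"] == child_b["state_root"]  # same state root: legal
assert block_hash(child_a) != block_hash(child_b)      # still two distinct blocks
```

This is why rejecting the second sibling purely on a repeated state root, as observed on the devnet, is too coarse: uniqueness holds for block hashes, not state roots.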
+
+**Lukasz Rozmej**
+* So, never mind; we support side chains because we want the state to be consistent, and with the low level of traffic there we're fine with that in general.
+
+**Danny**
+* You said there was a write-up on this from Martin, is that right?
+
+**Rai**
+* Yeah, he recently posted it in the private Keybase channel that the eth1 developers have. I still have to read it; I guess it essentially just links to GitHub.
+
+**Mikhail**
+* Well, is there anything else that implementers would like to ask or mention on the execution side?
+
+## Consensus discussions [1:00:26](https://www.youtube.com/watch?v=uzjhLPtvTMQ&t=3626s)
+
+**Mikhail**
+* Okay, so the next item is consensus discussions. I don't think there's anything to discuss here, but just in case, does anyone want to discuss something or ask a question?
+* Okay, cool, so let's go to open discussions. There has been a proposal to move this call to the same day and time slot as the eth2 call, so it would be one hour of the eth2 implementers call and then the merge implementers call. I'm just curious what people think about it, and Paul will probably express his opinion on this.
+
+**Paul Hauner**
+* Yeah, thanks for bringing it up, Mikhail. These days this call is at 11 p.m. for me, and it moves to midnight when daylight savings kicks in, so stacking the calls together is appealing to me. I'm not sure whether anybody has reasons why that's not a good idea, but reducing meeting fragmentation also appeals to me.
+
+**Mikhail**
+* I'm just curious whether we'll be able to sit through two and a half hours, particularly if we have a lot to discuss about the merge and other eth2 things.
+
+**Danny**
+* The other issue I see is call fatigue after an hour or two, but these two calls are normally fairly light.
I think that will change a little bit as we move into the Altair rollout, but those calls are mostly only 30 or 40 minutes long.
+
+**Mikhail**
+* Any objections to trying it out and seeing how it goes? Well, we already have a call next week, so I think we'll try the new time for the call three weeks from today, right?
+
+**Danny**
+* I think that's a good idea; a little extra time now that Rayonism has subsided, and there's a lot of work on Altair and London that will happen, so that's a good break.
+
+**Paul Hauner**
+* Yes, thank you all for your thoughtful consideration; it means more than you would think.
+
+**Danny**
+* I have a guest room available for you to move into in Colorado; the time zones there are fairly consistent.
+
+**Paul Hauner**
+* Sure, I'll check with my government to see if I'm allowed to leave.
+
+**Danny**
+* Yeah, I guess you won't be able to enter this country.
+
+**Micah Zoltu**
+* I've lived in the United States; don't believe anyone who claims their time zones are anything close to reasonable.
+
+**Danny**
+* Yeah, it's wonderful; we're awake; I'm awake; I have calls at six a.m.; it's wonderful.
+
+**Mikhail**
+* Okay, any closing remarks? Okay, thanks everybody; thanks for this fantastic month of work that I've had the pleasure of being a part of, and I'll see you in three weeks.
+
+-------------------------------------------
+## Speaking Attendees
+**Mikhail**
+**Danny**
+**Micah Zoltu**
+**Rai**
+**Paul Hauner**
+**Lukasz Rozmej**
+**Protolambda**
+**Dankrad Feist**
+**Dmitry Shmatko**
+**Tim Beiko**
+
+---------------------------------------
+## Next Meeting - June 03, 2021 at 1300 UTC
+
+---------------------------------------
+## Zoom Chat:
+
+09:00:18 From Mikhail Kalinin to Everyone: starting in 3 minutes
+09:06:20 From Mikhail Kalinin to Everyone: https://github.com/protolambda/nocturne
+09:11:11 From Tim Beiko to Everyone: Yeah, this was great!
+09:11:15 From Micah Zoltu to Everyone: Which client is missing?
+09:11:19 From Tim Beiko to Everyone: OE +09:11:24 From Micah Zoltu to Everyone: 👍 +09:11:36 From Micah Zoltu to Everyone: Do we believe that OE will be able to make The Merge, or is there worry that they may not make it? +09:11:58 From Tim Beiko to Everyone: We can let them answer that. +09:15:41 From Dmitry Shmatko to Everyone: https://hackmd.io/@zilm/withdrawal-spec +09:16:14 From Dmitry Shmatko to Everyone: rewards withdrawals are here https://hackmd.io/@zilm/withdrawal-spec#Partial-withdrawals- +09:17:12 From Micah Zoltu to Everyone: It is already decided that execution engine is "server" and consensus engine is the "client" and there are no requests that flow the other direction? +09:18:15 From danny to Everyone: that is the current design, and a design goal unless we hit an unexpected snag +09:18:21 From Micah Zoltu to Everyone: 👍 +09:20:16 From danny to Everyone: https://github.com/ethereum/eth2.0-specs/issues/1012#issuecomment-489660765 +09:23:24 From Micah Zoltu to Everyone: I'll back down on WS if the throughput is low. +09:27:04 From Micah Zoltu to Everyone: My very weak and meaningless vote is REST over HTTP at this point I think. +09:35:27 From Lukasz Rozmej to Everyone: https://github.com/ethereum/eth2.0-specs/issues/1012#issuecomment-489660765 +Read through it. I agree on everything. The only thing this is relevant to public current JSON RPC API. I don't see anything that would be related to the merge communication. +So if we would be moving to all REST API - then I am all in. Just for Merge - I don't see the point. + +09:36:47 From Lukasz Rozmej to Everyone: unless we want to pick this as a starting point for migration of other API's +09:48:10 From Micah Zoltu to Everyone: I still like REST over HTTP, even after that conversation. 
+09:49:29 From Lukasz Rozmej to Everyone: Micah I like it too, it just doesn't bring any benefit to the merge itself
+09:50:13 From Micah Zoltu to Everyone: If we need them to stay in sync (bi-directional), then I'll probably switch my non-vote to WS with *something* (maybe JSON) for payloads.
+09:50:24 From Micah Zoltu to Everyone: 👍
+09:52:03 From Micah Zoltu to Everyone: I like proto's argument. I'm a fan of making it hard to add a bug to a client on accident.
+09:53:33 From Mikhail Kalinin to Everyone: 👍
+10:04:55 From Tim Beiko to Everyone: If it helps with timezones, let’s do it :-)
+10:05:45 From Micah Zoltu to Everyone: Do what I do, don't get any human socialization other than meetings, then meetings become the highlight of your week.
+10:05:52 From danny to Everyone: lol
+10:06:02 From Mikhail Kalinin to Everyone: ahaha
+10:06:31 From Tim Beiko to Everyone: +1 to 3 weeks
+
+---------------------------------------
However, Tim Beiko is a contra № | Date | Agenda |Notes | Recording | --- | -------------------------------- | -------------- |-------------- | -------------------- | +114 | Friday 28 May at 14:00 UTC | [agenda](https://github.com/ethereum/pm/issues/321) | [notes](All%20Core%20Devs%20Meetings/Meeting%20114.md) | [video](https://www.youtube.com/watch?v=7MSYLbn-Xro&ab_channel=EthereumFoundation) | +113 | Thursday 15 April at 13:00 UTC | [agenda](https://github.com/ethereum/pm/issues/299) | [notes](All%20Core%20Devs%20Meetings/Meeting%20113.md) | [video](https://youtu.be/ODcNpWiLASk) | +112 | Thursday 13 May 2021, 13:00UTC | [agenda](https://github.com/ethereum/pm/issues/316) | [notes](All%20Core%20Devs%20Meetings/Meeting%20112.md) | [video](https://youtu.be/uzjhLPtvTMQ) | 111 | Friday 23 Apr 2021, 14:00UTC | [agenda](https://github.com/ethereum/pm/issues/301) | [notes](All%20Core%20Devs%20Meetings/Meeting%20111.md) | [video](https://youtu.be/C9hzAYkklQM) | +110 | Friday April 16th, 2021, 14:00 UTC | [agenda](https://github.com/ethereum/pm/issues/293) | [notes](All%20Core%20Devs%20Meetings/Meeting%20110.md) | [video](https://www.youtube.com/watch?v=-H8UpqarZ1Y) | 109 | Friday 02 Apr 2021, 14:00UTC | [agenda](https://github.com/ethereum/pm/issues/289) | [notes](All%20Core%20Devs%20Meetings/Meeting%20109.md) | [video](https://youtu.be/V-Qz4UN6Z88) | 108 | Friday 19 Mar 2021, 14:00 UTC| [agenda](https://github.com/ethereum/pm/issues/288) | [notes](All%20Core%20Devs%20Meetings/Meeting%20108.md) | [video](https://youtu.be/AclPXsRlgSc) | 107 | Friday 05 Mar 2021, 14:00 UTC| [agenda](https://github.com/ethereum/pm/issues/287) | [notes](All%20Core%20Devs%20Meetings/Meeting%20107.md) | [video](https://youtu.be/xWfR-WxjmYg) |