Different WASM VM behavior #7540
Comments
Yeah, there was a bug at some point in the host interface that could lead to this. It probably means that you tried to apply a runtime upgrade that used host functions that were not available on the nodes at that point in time. I will need to think about it; I don't have a good idea right now.
Hello @bkchr, any ideas? Maybe some block import hook could be used here?
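Not from the thread, but to make the suggestion concrete: below is a minimal Rust sketch of the general shape such a "block import hook" could take, a wrapper that delegates to an inner importer and intercepts one known-bad height. All types here (Block, ImportError, BlockImporter, HookedImporter) are hypothetical stand-ins for illustration, not the actual Substrate BlockImport trait, whose signature is version-dependent.

```rust
// Hypothetical stand-in types for illustration only; the real Substrate
// `BlockImport` trait has a different, version-dependent signature.
pub struct Block {
    pub number: u64,
    pub data: Vec<u8>,
}

#[derive(Debug)]
pub struct ImportError(pub String);

pub trait BlockImporter {
    fn import_block(&mut self, block: Block) -> Result<(), ImportError>;
}

/// Wraps an inner importer and intercepts a single known-bad height, which is
/// the general shape a "block import hook" workaround could take.
pub struct HookedImporter<I: BlockImporter> {
    pub inner: I,
    /// Height of the problematic runtime upgrade (656514 in this issue).
    pub patched_height: u64,
}

impl<I: BlockImporter> BlockImporter for HookedImporter<I> {
    fn import_block(&mut self, block: Block) -> Result<(), ImportError> {
        if block.number == self.patched_height {
            // Apply whatever special handling the fork needs here, e.g.
            // substituting known-good state instead of re-executing the block.
        }
        self.inner.import_block(block)
    }
}
```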
Might be related: several nodes (about 10 of 90) of ChainX CC1 also encountered this issue. They cannot import block 0x822ad79194b344baaa2f331c97a2e05321e4955785b9f3302beeea672d3333fe; we are using this commit of Substrate: 11ace4e.
@akru I assume this is not relevant anymore? 🙈
Description
While upgrading the Plasm node to a stable Substrate version, we ran into different WASM runtime interpreter behavior.
Node1 (based on Substrate 2.0-alpha.6) is able to import block 656514 without any issues, but node2, based on Substrate 2.0, throws an exception in WASM mode.
As far as I can see, the reason is the Sudo extrinsic 656514-2. This extrinsic upgrades the runtime code. Unfortunately, the runtime WASM blob was too large to be applied: the alpha.6 VM just returns false, but later versions throw an exception.
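For illustration only, here is a minimal pre-flight check one might run before submitting such an upgrade: it compares the size of the new runtime blob against an assumed length limit. The file name and the 5 MiB figure are assumptions; the real limit is chain-specific and set by the runtime configuration.

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Path to the new runtime blob is an assumption for illustration.
    let wasm = fs::read("plasm_runtime.compact.wasm")?;

    // Assumed upper bound on what a single extrinsic can carry; the actual
    // block length limit is chain-specific, so treat this as a placeholder.
    const ASSUMED_MAX_LEN: usize = 5 * 1024 * 1024;

    if wasm.len() > ASSUMED_MAX_LEN {
        eprintln!(
            "runtime blob is {} bytes, over the assumed {} byte limit",
            wasm.len(),
            ASSUMED_MAX_LEN
        );
    } else {
        println!("runtime blob is {} bytes, within the assumed limit", wasm.len());
    }
    Ok(())
}
```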
Request
I would be glad to get some suggestions about a better solution for this. Technically, we have two options:
PS: If fixing the VM behavior is impossible or unacceptable, please suggest a better way to hardfork the chain: source code injections, changes, etc.