lib/runtime: use __heap_base as allocator offset #1406
Conversation
Receiving a timeout error on the wasmer module on this PR as well
@RyRy79261 the CI is not having that issue and I'm not seeing it locally either, how are you running the tests?
Just running
@RyRy79261 ok can you run
Sweet, those two pass, trying out the syncing command:
@RyRy79261 nice! yeah sometimes there is a delay between receiving block responses (they are 128 blocks long, so you might be seeing a delay every 2 responses). I'm going to work on some improvements for syncing still
Ahh rad, well everything's functioning as expected then, will review and approve in the meantime while climbing to 10K
My node won't get past block 1280. It syncs quickly, then pauses (as described, every 128 blocks), but at this point it seems stuck (I waited over 5 minutes). I ctrl-c to stop the node and get the stopping services message, but it stalls on the currently syncing thread and never quits, so I need to kill it. On restart it begins syncing again at block 1025, then gets stuck again. I've stopped and started 5 times and still can't get past this.
@edwardmack can you try this branch? #1407 has all the changes from this branch in it also
I was able to sync past 11158, so this seems to have fixed that.
It became "stuck" at 11388 with this:
INFO[02-23|15:32:55] imported block pkg=sync number=11388 hash=0x69d9036f3fc3f4e05b7d8d613b58b3a8c739dd0804e12e83369c7fbe8715ada9 caller=syncer.go:314
WARN[02-23|15:32:55] failed to handle block data; re-adding to queue pkg=network start=0 end=0 error="could not verify block: failed to verify pre-runtime digest: invalid secondary slot claim" caller=sync.go:475
CRIT[02-23|15:32:55] [ext_logging_log_version_1] pkg=runtime module=go-wasmer target=runtime message="panicked at 'Timestamp must increment by at least <MinimumPeriod> between sequential blocks', /home/volt/.cargo/git/checkouts/substrate-7e08433d4c370a21/d9fca7e/frame/timestamp/src/lib.rs:147:4" caller=imports.go:134
WARN[02-23|15:32:55] failed to handle block data; re-adding to queue pkg=network start=0 end=0 error="failed to execute block 11292: Failed to call the `Core_execute_block` exported function." caller=sync.go:475
I stopped and re-started and it's continuing now.
@edwardmack I noticed that happening too, gonna try to fix that. If it made it past 11158 then this should be good!
noot: lib/runtime: use __heap_base as allocator offset (#1406)
Changes
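The sketch below is a hedged illustration of the idea in the PR title, not the actual gossamer change (the real allocator in lib/runtime is a freeing bump allocator and is more elaborate): read the wasm module's exported __heap_base global, which marks the end of the compiler-laid-out data/stack region, and start the allocator's bump pointer there instead of at a hardcoded offset. It assumes the wasmer-go v1 API for illustration.

```go
// Package allocator: a toy sketch of starting a bump allocator at
// __heap_base. Hypothetical code; gossamer's real implementation differs.
package allocator

import (
	"fmt"

	"github.com/wasmerio/wasmer-go/wasmer"
)

// bumpAllocator hands out linear-memory offsets, starting at __heap_base.
type bumpAllocator struct {
	offset uint32 // next free byte in wasm linear memory
	mem    *wasmer.Memory
}

// allocate reserves size bytes and returns the offset of the reservation.
func (a *bumpAllocator) allocate(size uint32) (uint32, error) {
	if uint(a.offset)+uint(size) > a.mem.DataSize() {
		return 0, fmt.Errorf("out of memory")
	}
	ptr := a.offset
	a.offset += size
	return ptr, nil
}

// newAllocator reads the module's __heap_base export and uses it as the
// initial offset. Allocating below __heap_base would corrupt the data and
// stack regions the compiler already placed there.
func newAllocator(instance *wasmer.Instance) (*bumpAllocator, error) {
	mem, err := instance.Exports.GetMemory("memory")
	if err != nil {
		return nil, err
	}
	heapBase, err := instance.Exports.GetGlobal("__heap_base")
	if err != nil {
		return nil, err
	}
	val, err := heapBase.Get()
	if err != nil {
		return nil, err
	}
	// __heap_base is an i32 global in modules emitted by LLVM-based toolchains.
	return &bumpAllocator{offset: uint32(val.(int32)), mem: mem}, nil
}
```

With a fixed offset, allocations can land inside the module's own data segment; keying off __heap_base lets each runtime blob dictate where its heap safely begins.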
Tests
also, run
./bin/gossamer --chain ksmcc --bootnodes /ip4/3.235.193.225/tcp/30333/p2p/12D3KooWLLvGJZWF7z6KaCw6CPPbPkccrkCHmQBch27vtfngcZki
and keep it running until it syncs block 11158 (you may need to restart the node because it seems the syncing stalls every so often)
Checklist
Issues