Fatal: out of memory issue #15428
Comments
Tested on Windows Server 2012, there is no issue, so I think so far this only exists on Ubuntu 16.04. |
Sorry, I jumped the gun; Windows has the same issue, but it has not crashed yet. The cache setting is 256, but it has taken almost 3 GB so far. |
Got the same error on Windows 7; my memory is 8 GB. |
@johnluan how much memory does the machine have? |
The server's memory is 4 GB.
Now I am using Parity, and there is no issue at all.
…On 10 Nov 2017, "Martin Holst Swende" wrote:
@johnluan how much memory does the machine have?
The cache is a hint to the database on how much to cache; but there are other memory allocations going on besides that, so the total memory requirements will be (a lot) larger than that.
|
@holiman We have some memory leak during sync. It's not a full blown leak, as in it gets cleaned up after sync completes, but there's some dangling reference that prevents some objects from getting cleaned up. This causes memory use to spike when doing a big sync. I've been trying for a long while to catch it, so did @fjl, but we couldn't yet catch where the objects retain their refs. |
I had the same issue. After the fast sync failed, I then just continued in normal mode. I used a cache of 512 and 1024 and each time it ran out of memory. After reading the comments here, I tried it with no cache parameter, and all is good. |
My geth, on a 4-core 8 GB Ubuntu 16.04 server, crashes every day because of OOM. |
Also seeing this issue with an up-to-date Amazon AMI. |
Same issue with latest master. |
geth is chucking RAM like it was Chrome. Same behaviour for me on an 8 GB node running Ubuntu 16.04. I did the install yesterday and started syncing overnight; the systemd unit I created for geth restarted several times, and it takes only a couple of minutes to drain all available memory and make the system swap hard. |
My geth (1.8.7), on a 4-core 8 GB Ubuntu 16.04 server, crashes every day because of OOM. |
@zzd1990421 We're tracking down a possible issue. The current master branch seems to behave a lot nicer in this respect, both from a memory and CPU consumption standpoint. Might want to try that and see how it performs. |
@karalabe now my geth version is: my command: |
The latest release is 1.8.8, but that doesn't yet contain the fix for the memory issue. If you don't want to run master, the next stable release should be out around next week Monday. You can track progress on this memory issue on #16728. |
@karalabe thanks. I'll follow this issue! |
@zzd1990421 Running with |
@zzd1990421 Oh wow, ok, definitely don't specify more than 1/3rd of your memory for caching. Go's GC will permit junk to accumulate to twice the useful memory before cleaning up. So with an 8GB allowance, Go will flat out let the memory go up to 16GB before cleaning up. On an 8GB machine 2GB cache seems a good choice. The OS will use the remainder of the memory for disk caching too, so you're not missing out that much. |
@GoodMirek It's my mistake. The mem is actually 16 GB. |
@zzd1990421 Still, an 8 GB cache is too high, since other parts of geth will use some memory too, so |
@karalabe I've changed this parameter. now my command: And I have to run geth with supervisor ☹ |
@karalabe I've got the same issue, but my memory is 16 GB and OOM killed my Geth process. When I run geth I don't define "cache", and I think the problem is derived from this reason: @zzd1990421 Did you fix your problem? Thank you for your support! |
@karalabe hello, can you help me? |
@karalabe My eth node's memory usage is increasing by about 10 MB per hour; the memory used is still growing slowly. I run the geth full node without specifying the cache, so it is using the default value (1 GB). Is this normal behaviour because of the accumulated junk (since the garbage collector will clean up at 2 GB, right)? Thanks |
I've got the same issue and solved it. My geth is v1.8.27, 16 GB memory, four nodes with a PoA consensus algorithm: one node generates blocks and the other three sync blocks for TPS tests. On the block-generating node, memory usage grows as the txpool keeps receiving lots of txs continually. I used "go tool pprof" to analyze the issue and found the root cause: the AsyncSendTransactions function in peer.go. Fortunately, PR #19702 fixes it, and I cherry-picked it onto my geth v1.8.27. |
We've merged various memory fixes since this issue was opened; notably, the next release (1.9.21) contains a leak fix for fast sync. There's not much information to go on in this issue, so I'll close it and ask you to open a fresh one if something still persists. |
System information
Geth version: 1.7.2
OS & Version: Ubuntu 16.04
Commit hash: (if develop)
Expected behaviour
Start geth, sync blocks
Actual behaviour
After syncing a little while, throw out of memory error
Steps to reproduce the behaviour
geth --cache=256 --rpc --rpcapi admin,eth,net,personal --rpcaddr=0.0.0.0 --etherbase 0x085bba56c11be9f235f460195f5bdd940076b034 --verbosity=2 --datadir "/data/geth"
Backtrace
Nov 6 07:51:25 10-9-102-100 geth[3587]: WARN [11-06|07:51:24] Stalling state sync, dropping peer peer=da3a2b4295214d0a
Nov 6 07:51:25 10-9-102-100 geth[3587]: WARN [11-06|07:51:25] Stalling state sync, dropping peer peer=0c74e66a402212ab
Nov 6 07:51:43 10-9-102-100 geth[3587]: WARN [11-06|07:51:43] Stalling state sync, dropping peer peer=5e22b60c2e148310
Nov 6 07:52:00 10-9-102-100 geth[3587]: WARN [11-06|07:52:00] Stalling state sync, dropping peer peer=cd9e12e7b98e5c51
Nov 6 07:52:14 10-9-102-100 geth[3587]: WARN [11-06|07:52:14] Stalling state sync, dropping peer peer=e34f400b179bbfca
Nov 6 07:54:03 10-9-102-100 geth[3587]: WARN [11-06|07:54:03] Stalling state sync, dropping peer peer=baf24807d46f29c7
Nov 6 07:54:06 10-9-102-100 geth[3587]: WARN [11-06|07:54:06] Stalling state sync, dropping peer peer=7d339a8d86268feb
Nov 6 07:54:45 10-9-102-100 geth[3587]: WARN [11-06|07:54:45] Stalling state sync, dropping peer peer=4b4ba5f8f361797b
Nov 6 07:55:01 10-9-102-100 geth[3587]: WARN [11-06|07:55:01] Stalling state sync, dropping peer peer=488640b18d7675cc
Nov 6 07:55:49 10-9-102-100 geth[3587]: WARN [11-06|07:55:49] Stalling state sync, dropping peer peer=39e5d2229a4334e7
Nov 6 07:55:53 10-9-102-100 geth[3587]: WARN [11-06|07:55:53] Stalling state sync, dropping peer peer=988772acb5ba840d
Nov 6 07:56:20 10-9-102-100 geth[3587]: WARN [11-06|07:56:20] Stalling state sync, dropping peer peer=d8e7276840df25e9
Nov 6 07:56:31 10-9-102-100 geth[3587]: fatal error: runtime: out of memory
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime stack:
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.throw(0xf540f7, 0x16)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/panic.go:605 +0x95
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.sysMap(0xc4f7d20000, 0x8000000, 0x0, 0x1932878)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/mem_linux.go:216 +0x1d0
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.(*mheap).sysAlloc(0x1918fe0, 0x8000000, 0x1)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/malloc.go:470 +0xd7
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.(*mheap).grow(0x1918fe0, 0x4000, 0x0)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/mheap.go:887 +0x60
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.(*mheap).allocSpanLocked(0x1918fe0, 0x4000, 0x1932888, 0x7f717f786210)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/mheap.go:800 +0x334
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.(*mheap).alloc_m(0x1918fe0, 0x4000, 0x7f7199ed0101, 0x7f7199ed4e18)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/mheap.go:666 +0x118
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.(*mheap).alloc.func1()
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/mheap.go:733 +0x4d
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.systemstack(0x7f7199ed4e10)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/asm_amd64.s:360 +0xab
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.(*mheap).alloc(0x1918fe0, 0x4000, 0x7f7199010101, 0x41aad4)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/mheap.go:732 +0xa1
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.largeAlloc(0x8000000, 0x7f71b7140101, 0x45dd5b)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/malloc.go:827 +0x98
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.mallocgc.func1()
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/malloc.go:722 +0x46
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.systemstack(0xc420017300)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/asm_amd64.s:344 +0x79
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.mstart()
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/proc.go:1125
Nov 6 07:56:31 10-9-102-100 geth[3587]: goroutine 34170 [running]:
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.systemstack_switch()
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/asm_amd64.s:298 fp=0xc4293c0be8 sp=0xc4293c0be0 pc=0x4608b0
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.mallocgc(0x8000000, 0xd9ca40, 0xdeadbe01, 0xc4569805d0)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/malloc.go:721 +0x7b8 fp=0xc4293c0c90 sp=0xc4293c0be8 pc=0x417158
Nov 6 07:56:31 10-9-102-100 geth[3587]: runtime.makeslice(0xd9ca40, 0x0, 0x8000000, 0x7f71b70e7000, 0xc4c9a88000, 0xc497594d01)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/.gimme/versions/go1.9.linux.amd64/src/runtime/slice.go:54 +0x77 fp=0xc4293c0cc0 sp=0xc4293c0c90 pc=0x44a087
Nov 6 07:56:31 10-9-102-100 geth[3587]: github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/memdb.New(0x181cc40, 0xc42028fa20, 0x8000000, 0x0)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/gopath/src/github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/memdb/memdb.go:470 +0xfc fp=0xc4293c0d60 sp=0xc4293c0cc0 pc=0x7a106c
Nov 6 07:56:31 10-9-102-100 geth[3587]: github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolGet(0xc420158780, 0x1d481, 0x0)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/gopath/src/github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:90 +0xb5 fp=0xc4293c0da8 sp=0xc4293c0d60 pc=0x7cadd5
Nov 6 07:56:31 10-9-102-100 geth[3587]: github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).newMem(0xc420158780, 0x1d481, 0x0, 0x0, 0x0)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/gopath/src/github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:147 +0x235 fp=0xc4293c0e58 sp=0xc4293c0da8 pc=0x7cb355
Nov 6 07:56:31 10-9-102-100 geth[3587]: github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).rotateMem(0xc420158780, 0x1d481, 0x0, 0xc4293c1030, 0x4645b6, 0x223d)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/gopath/src/github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/db_write.go:45 +0x81 fp=0xc4293c0eb0 sp=0xc4293c0e58 pc=0x7cea81
Nov 6 07:56:31 10-9-102-100 geth[3587]: github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).flush.func1(0xbe78072bcc7a1900)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/gopath/src/github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/db_write.go:101 +0x2db fp=0xc4293c0f50 sp=0xc4293c0eb0 pc=0x7e505b
Nov 6 07:56:31 10-9-102-100 geth[3587]: github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).flush(0xc420158780, 0x1d481, 0xc456638240, 0xc685, 0x0, 0x0)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/gopath/src/github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/db_write.go:113 +0x171 fp=0xc4293c1020 sp=0xc4293c0f50 pc=0x7ced51
Nov 6 07:56:31 10-9-102-100 geth[3587]: github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).writeLocked(0xc420158780, 0xc430be8540, 0x0, 0x1, 0x0, 0x0)
Nov 6 07:56:31 10-9-102-100 geth[3587]: /home/travis/gopath/src/github.com/ethereum/go-ethereum/vendor/github.com/syndtr/goleveldb/leveldb/db_write.go:150 +0x6c fp=0xc4293c11c8 sp=0xc4293c1020 pc=0x7cf03c