Besu service is killed when we deploy a smart contract with the function call below #3102
Comments
We can reproduce the issue with these two contracts (note that the comment in the second contract is intentional).
Send a transaction to OOM, which delegatecalls to Contract; CPU and memory usage rise and the node becomes unresponsive until we restart it, producing the message below:
"class":"PeerDiscoveryAgent","message":"P2P peer discovery agent started and listening on /0.0.0.0:30303","throwable":""}
Hi @eu09, I am working on reproducing this. So far I can't reproduce it locally (the call gets status 0x0, but I don't see the CPU/memory problems you describe) on an ethash network running on my local machine; I've tried the latest main branch and 21.10.2. I've only tried it as a public contract - can you confirm you are not deploying this as a private contract? I haven't had too much to do with Solidity assembly, so I may need to do some more research into that.
I have also tried with the quickstart, using Besu versions 21.7.4 and 21.10.2 with QBFT/IBFT2, and I don't see any issues with memory/CPU when I deploy or call these contracts. Can you describe how you are deploying the contracts and sending the transactions? I'm wondering if there is something different about the way the transactions are being constructed (e.g. calldatasize may be large). Also, what version of the Solidity compiler are you using? Actually, it might help if you send the contract binary code, so we can see if the binaries are different.
We use Truffle to compile and deploy the contracts, then simply send the transaction with web3.js: oomContract.methods.callMe("0x...").send(...). Tool versions: the Besu service runs in a Docker container; each service is limited to min 2 GB / max 4 GB via the JVM options in x-besu-def:
@eu09 this is what I get for the binaries of those contracts - are yours different?
We have custom solc settings in truffle-config.js, so the compiled bytecode is different. Contract OOM:
Using this version of solc: 0.6.6+commit.6c089d02.Emscripten.clang
We weren't able to reproduce this issue after deploying the contract with the binary values we got locally from our machines, nor with the binary values you provided. Can you confirm the memory allocation for the Docker container - is it min 2 / max 4 or min 4 / max 8?
@RP27
Whether the node hangs/crashes depends on the hardware (VM/container) type and resources, and also on the block size (gas limit).

                CPU %     MEM USAGE
validator1_1    6.40%     679.5MiB
validator2_1    2.76%     471.4MiB
validator3_1    5.89%     488.2MiB
validator4_1    6.35%     472.1MiB
rpcnode         207.70%   1.51GiB

During that high usage the logs report the following several times (log cleaned):
{
"timestamp": "2021-12-02T19:47:45,365",
"container": "70a0f942ae46",
"level": "WARN",
"thread": "vertx-blocked-thread-checker",
"class": "BlockedThreadChecker",
"message": "Thread Thread[vert.x-worker-thread-0,5,main]=Thread[vert.x-worker-thread-0,5,main] has been blocked for 124379 ms, time limit is 60000 ms",
"throwable": " io.vertx.core.VertxException: Thread blocked "
" at app//org.hyperledger.besu.evm.frame.MessageFrame.writeMemoryRightAligned(MessageFrame.java:648) "
" at app//org.hyperledger.besu.evm.operation.MStoreOperation.execute(MStoreOperation.java:46) "
" at app//org.hyperledger.besu.evm.EVM.lambda$executeNextOperation$0(EVM.java:86) "
" at app//org.hyperledger.besu.evm.EVM$$Lambda$1530/0x0000000840746440.execute(Unknown Source) "
" at app//org.hyperledger.besu.evm.tracing.EstimateGasOperationTracer.traceExecution(EstimateGasOperationTracer.java:31) "
" at app//org.hyperledger.besu.evm.EVM.executeNextOperation(EVM.java:80) "
" at app//org.hyperledger.besu.evm.EVM.runToHalt(EVM.java:73) "
" at app//org.hyperledger.besu.evm.processor.AbstractMessageProcessor.codeExecute(AbstractMessageProcessor.java:157) "
" at app//org.hyperledger.besu.evm.processor.AbstractMessageProcessor.process(AbstractMessageProcessor.java:169) "
" at app//org.hyperledger.besu.ethereum.mainnet.MainnetTransactionProcessor.process(MainnetTransactionProcessor.java:489) "
" at app//org.hyperledger.besu.ethereum.mainnet.MainnetTransactionProcessor.processTransaction(MainnetTransactionProcessor.java:397) "
" at app//org.hyperledger.besu.ethereum.mainnet.MainnetTransactionProcessor.processTransaction(MainnetTransactionProcessor.java:148) "
" at app//org.hyperledger.besu.ethereum.transaction.TransactionSimulator.process(TransactionSimulator.java:222) "
" at app//org.hyperledger.besu.ethereum.transaction.TransactionSimulator.process(TransactionSimulator.java:107) "
" at app//org.hyperledger.besu.ethereum.api.jsonrpc.internal.methods.EthEstimateGas.response(EthEstimateGas.java:81) "
" at app//org.hyperledger.besu.ethereum.api.jsonrpc.JsonRpcHttpService.process(JsonRpcHttpService.java:725) "
" at app//org.hyperledger.besu.ethereum.api.jsonrpc.JsonRpcHttpService.lambda$handleJsonSingleRequest$13(JsonRpcHttpService.java:579) "
" at app//org.hyperledger.besu.ethereum.api.jsonrpc.JsonRpcHttpService$$Lambda$1494/0x0000000840726040.handle(Unknown Source) "
" at app//io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:313) "
" at app//io.vertx.core.impl.ContextImpl$$Lambda$1006/0x0000000840580c40.run(Unknown Source) "
" at [email protected]/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) "
" at [email protected]/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) "
" at app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) "
" at [email protected]/java.lang.Thread.run(Unknown Source)"
}

This shows the "high" CPU usage duration for 3 calls (example). After a while (in the example, ~2 minutes) the node recovers and returns to normal. Doing the same tests on more powerful machines, the high usage (about 400% CPU, given the available resources) lasts only about 3-5 seconds instead of ~2 minutes, and no warning logs are reported (since the thread block time < time limit). We didn't test with the highest
We have made attempts to reproduce the high CPU usage issue when making that call, but have been unsuccessful so far. We have tried with a Besu node running in a Docker container with min 1 / max 2, min 2 / max 4, and min 4 / max 8 GB of memory. We have also attempted to run the container with just 1 CPU core. Are you able to provide the web3.js code you are using to construct the calls to the contracts? It might also be useful if you can provide a minimal standalone repository that reproduces the issue, which we can check out and work with.
@rinor As mentioned in my previous reply, it would be very useful for us if you could provide a repository that can reproduce this issue, along with your web3.js code.
I was able to help @RP27 replicate this (deploying the contracts twice, Besu starts creating threads until it reaches the limit and starts throwing exceptions).
Looking at the code:
The call data is copied from this entry-point contract into the call to the contract being delegated to. Maybe confirm that the address being passed into callMe isn't accidentally the address of the OOM contract itself.
Apologies for the delay. We've done some more testing, and it seems like Besu's behaviour handling this contract call is in line with the other clients we've tested: in a zero-gas environment there is an issue with the contract that freaks the EVM out. What's the use case for manually assigning an offset with
Something like this works quite well (ignore the comments).
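The snippet being referred to isn't reproduced above, so purely as an illustration of the kind of pattern that avoids the problem (the function name and layout below are hypothetical, not the commenter's actual code), the return data can be copied into a freshly allocated bytes value whose length word is written explicitly before it is used:

// Illustrative sketch only (hypothetical helper): copy the return data into a properly
// allocated `bytes` value so its length word is always valid before it is emitted.
function safeDelegate(address _impl, bytes memory _data) internal returns (bool ok, bytes memory ret) {
    assembly {
        ok := delegatecall(gas(), _impl, add(_data, 0x20), mload(_data), 0, 0)
        let size := returndatasize()
        ret := mload(0x40)                       // allocate fresh memory for the result
        mstore(ret, size)                        // write the length word explicitly
        returndatacopy(add(ret, 0x20), 0, size)  // copy the data after the length word
        mstore(0x40, add(add(ret, 0x20), and(add(size, 0x1f), not(0x1f)))) // bump the free memory pointer
    }
}

The high-level Solidity 0.6 equivalent, (bool ok, bytes memory ret) = _impl.delegatecall(_data);, performs the same allocation automatically.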
See the code in question below. Is the issue that the following code causes ptr to not point to a valid bytes object? When the emit is called, the bytes that ptr points to are copied for the purposes of emitting an event. My guess is that the location ptr points to is interpreted as a bytes object with a very large length (think 2**256). Because the length field of ptr is very large, the EVM spends a lot of time copying bytes (it could keep trying to copy until the machine runs out of memory). In a configuration where gas wasn't zero, the transaction would quickly run out of gas and stop.

offset := add(ptr, 0x120)
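To make that explanation concrete, here is a minimal sketch of the pattern being described (the reporter's full contracts aren't shown above, so the names and layout are hypothetical): a raw free-memory offset is handed back as bytes without a length word, so whatever 32-byte word happens to sit at that offset is read as the length when the event argument is copied.

pragma solidity ^0.6.6;

// Hypothetical reconstruction of the problematic pattern, not the reporter's exact code.
contract Forwarder {
    event CallFailed(bytes returnData);

    function callMe(address _impl) external {
        bytes memory out;
        assembly {
            let ptr := mload(0x40)                   // free memory pointer
            calldatacopy(ptr, 0, calldatasize())     // copy the incoming call data
            let result := delegatecall(gas(), _impl, ptr, calldatasize(), 0, 0)
            returndatacopy(ptr, 0, returndatasize()) // overwrite ptr with the raw return data
            // No length word is written before ptr is reinterpreted as `bytes`,
            // so the first 32 bytes of the return data are treated as its length.
            out := ptr
        }
        // ABI-encoding `out` for the event copies `length` bytes; if that word is huge, the EVM
        // spends a very long time expanding and copying memory, and in a zero-gas setup it is
        // not cut short by running out of gas.
        emit CallFailed(out);
    }
}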
Hi @eu09 - we've reproduced the memory spike on other clients, so we are closing this one. Please reopen if you have any further questions.
Please suggest how we should deal with this issue: our developer wrote a smart contract that includes the function call below, and when it is deployed it kills the Besu service.
result := delegatecall(gas(), _impl, ptr, calldatasize(), 0, 0)  // forward the call data at ptr to _impl
size := returndatasize()                                         // size of the data returned by the delegatecall
returndatacopy(ptr, 0, size)                                     // copy the raw return data back over ptr
.....
emit TopUpPaymentWalletFailed(paymentWalletAddr, gasUsed, currBalance, ptr);  // ptr is emitted as the bytes argument
Versions (Add all that apply)
Besu: 21.10.2
Ubuntu 20.04 LTS x64
Linux ip-172-31-24-141 5.11.0-1021-aws #22~20.04.2-Ubuntu SMP Wed Oct 27 21:27:13 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Version: 20.10.5
Genesis file:
{
"config" : {
"chainId" : 2021,
"constantinoplefixblock" : 0,
"istanbulBlock" : 9702190,
"muirglacierBlock" : 9702195,
"berlinBlock" : 9702200,
"contractSizeLimit": 245760,
"ibft2" : {
"blockperiodseconds" : 2,
"epochlength" : 30000,
"requesttimeoutseconds" : 10,
"blockreward": "5000000000000000",
"miningbeneficiary": "0xe0275e0cf831d5a74089e1a66c810b57891c2d85"
}
},
"nonce" : "0x0",
"timestamp" : "0x58ee40ba",
"gasLimit" : "0x1fffffffffffff",
"difficulty" : "0x1",
"mixHash" : "0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365",
"coinbase" : "0x0000000000000000000000000000000000000000",