Release 23.10.x #6155

Merged: 49 commits, Nov 10, 2023

Commits (49)
93a1391
Add updated storage to evmtool json trace (#5892)
shemnon Sep 19, 2023
fb4f14f
Update holesky with fixed extraData, genesis time, shanghaiTime (#5890)
siladu Sep 20, 2023
afa942a
[CHANGELOG] removed duplicated line (#5904)
macfarla Sep 20, 2023
e93d506
Bump version to 23.7.4-SNAPSHOT (#5913)
siladu Sep 20, 2023
5c37328
Update reference tests to 12.4 (#5899)
shemnon Sep 21, 2023
09eaad6
[MINOR] Block number param additional test (#5918)
macfarla Sep 21, 2023
7c91dd3
update beacon root again [skip ci] (#5903)
macfarla Sep 21, 2023
3cb954e
fix geth rlpx ping command (#5917)
pinges Sep 22, 2023
fdb8351
[4844] Fix some Devnet9 Hive tests (#5929)
Gabriel-Trintinalia Sep 22, 2023
7cade83
Add FlatDbStrategy (#5901)
garyschulte Sep 22, 2023
b8832b6
add get proof for bonsai (#5919)
matkt Sep 24, 2023
8beffc8
add plugin API to enable plugins to validate transaction before they …
pinges Sep 25, 2023
2d441c5
renamed PayloadTuple and made a separate class (#5916)
macfarla Sep 26, 2023
7303ecf
adds Matthew Whitehead as a maintainer (#5876)
jflo Sep 26, 2023
bc66a96
[4844] [Hive] Fix fcuV3 parameter return (#5940)
Gabriel-Trintinalia Sep 26, 2023
1ca5ca0
updated beacon root and modulus to match DRAFT eip (#5941)
macfarla Sep 26, 2023
56bb1e1
Forkchoice v2 hive tests (#5949)
jflo Sep 27, 2023
97d76de
Fixup changelog following 23.7.3 release (#5954)
siladu Sep 27, 2023
9fd8ce0
Transaction pool unit tests refactoring to remove duplications (#5948)
fab-10 Sep 27, 2023
24517c6
Process onBlockAdded event asyncronously (#5909)
fab-10 Sep 27, 2023
542ce98
rlpx - Send empty list instead of Empty Bytes for the Ping and Pong m…
Gabriel-Trintinalia Sep 27, 2023
5c533b7
Improve performance when promoting transaction from next layers (#5920)
fab-10 Sep 27, 2023
d1e29a1
Always enforce promotion filter for transactions in the prioritized l…
fab-10 Sep 27, 2023
cb7accc
BlockTransactionSelector refactoring (#5931)
Gabriel-Trintinalia Sep 28, 2023
267e4d7
Apply fcu even on invalid payload (#5961)
jflo Sep 28, 2023
2321f64
Update execution tests to 0.2.5 (#5952)
macfarla Sep 28, 2023
4fd5f5c
Add Cancun GraphQL fields (#5923)
shemnon Sep 28, 2023
9df51c8
Validate bad block before new head check syncing (#5967)
Gabriel-Trintinalia Sep 28, 2023
fc2415c
Optionally bypass state root verification in reference test worldstat…
garyschulte Sep 28, 2023
741e2da
Added toString implementation for TransactionSimulatorResult (#5957)
Shritesh99 Sep 29, 2023
299c4fb
Fix t8n encoding issue (#5936)
shemnon Sep 30, 2023
7877e4f
Use PendingTransaction in BlockTransactionSelector (#5966)
fab-10 Oct 2, 2023
44ad404
Target to use about 25MB for the new layered txpool by default (#5974)
fab-10 Oct 3, 2023
512b49b
Release 23.10.0-RC burn in (#5989)
fab-10 Oct 5, 2023
8c77068
Set version to 23.10.0-RC (#5991)
fab-10 Oct 5, 2023
421dbaa
Change Array Copying (#5998) (#6002)
fab-10 Oct 9, 2023
a9e44d8
Describe the migration to and how to configure the layered txpool (#6…
fab-10 Oct 10, 2023
09150aa
Set version to 23.10.0 (#6015)
fab-10 Oct 10, 2023
c81e86e
Cherry-pick main into release 22.10.x (#6064)
Gabriel-Trintinalia Oct 20, 2023
17741fb
Bump 23.10.1-RC to 23.10.1 (#6081)
Gabriel-Trintinalia Oct 25, 2023
f4036d8
Unsigned timestamps and blob gas used (#6046)
jflo Oct 18, 2023
c6afed6
[MINOR] ux improvements (#6049)
macfarla Oct 19, 2023
9e5efec
Update changelog release (#6062)
Gabriel-Trintinalia Oct 20, 2023
9da31e5
Dencun corner cases (#6060)
macfarla Oct 20, 2023
50b49a3
add retry logic on sync pipeline for rocksdb issue (#6004)
matkt Oct 24, 2023
ff0d417
Merge branch 'release-23.10.x' into release-23.10.x
jflo Nov 10, 2023
2efb61b
[MINOR] ux improvements (#6049)
macfarla Oct 19, 2023
626e773
update RC version for snapshot
jflo Nov 10, 2023
42b07a1
update submodule
jflo Nov 10, 2023

Files changed

1 change: 1 addition & 0 deletions ethereum/eth/build.gradle
@@ -58,6 +58,7 @@ dependencies {
implementation 'io.tmio:tuweni-bytes'
implementation 'io.tmio:tuweni-units'
implementation 'io.tmio:tuweni-rlp'
implementation 'org.rocksdb:rocksdbjni'

annotationProcessor "org.immutables:value"
implementation "org.immutables:value-annotations"

New file: StorageExceptionManager (org.hyperledger.besu.ethereum.eth.sync)
@@ -0,0 +1,64 @@
/*
* Copyright ConsenSys AG.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*
* SPDX-License-Identifier: Apache-2.0
*/
package org.hyperledger.besu.ethereum.eth.sync;

import org.hyperledger.besu.plugin.services.exception.StorageException;

import java.util.EnumSet;
import java.util.Optional;

import org.rocksdb.RocksDBException;
import org.rocksdb.Status;

public final class StorageExceptionManager {

private static final EnumSet<Status.Code> RETRYABLE_STATUS_CODES =
EnumSet.of(Status.Code.TimedOut, Status.Code.TryAgain, Status.Code.Busy);

private static final long ERROR_THRESHOLD = 1000;

private static long retryableErrorCounter;
/**
* Determines if an operation can be retried based on the error received. This method checks if
* the cause of the StorageException is a RocksDBException. If it is, it retrieves the status code
* of the RocksDBException and checks if it is contained in the list of retryable {@link
* StorageExceptionManager.RETRYABLE_STATUS_CODES} status codes.
*
* @param e the StorageException to check
* @return true if the operation can be retried, false otherwise
*/
public static boolean canRetryOnError(final StorageException e) {
return Optional.of(e.getCause())
.filter(z -> z instanceof RocksDBException)
.map(RocksDBException.class::cast)
.map(RocksDBException::getStatus)
.map(Status::getCode)
.map(RETRYABLE_STATUS_CODES::contains)
.map(
result -> {
retryableErrorCounter++;
return result;
})
.orElse(false);
}

public static long getRetryableErrorCounter() {
return retryableErrorCounter;
}

public static boolean errorCountAtThreshold() {
return retryableErrorCounter % ERROR_THRESHOLD == 1;
}
}
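
The retry pattern that the sync-step changes below apply with this helper looks roughly like the following sketch. It is illustrative only: NodeTask and persistWithRetrySupport are hypothetical placeholders (the actual code works on Task<NodeDataRequest> / Task<SnapDataRequest> inside the concrete step classes), but the canRetryOnError, errorCountAtThreshold and getRetryableErrorCounter calls are the ones introduced by this new class.

import org.hyperledger.besu.ethereum.eth.sync.StorageExceptionManager;
import org.hyperledger.besu.plugin.services.exception.StorageException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RetryOnStorageErrorExample {
  private static final Logger LOG = LoggerFactory.getLogger(RetryOnStorageErrorExample.class);

  // Hypothetical work item; stands in for Task<NodeDataRequest> / Task<SnapDataRequest>.
  interface NodeTask {
    void persist(); // may throw a RocksDB-backed StorageException (unchecked)
    void reset();   // mark the task as failed so the pipeline re-queues it later
  }

  void persistWithRetrySupport(final NodeTask task) {
    try {
      task.persist();
    } catch (final StorageException e) {
      if (StorageExceptionManager.canRetryOnError(e)) {
        // Rate-limited logging: roughly one line per ERROR_THRESHOLD retryable errors.
        if (StorageExceptionManager.errorCountAtThreshold()) {
          LOG.info(
              "Encountered {} retryable RocksDB errors, latest error message {}",
              StorageExceptionManager.getRetryableErrorCounter(),
              e.getMessage());
        }
        task.reset(); // the sync pipeline will pick the task up again later
      } else {
        throw e; // non-retryable storage errors still propagate
      }
    }
  }
}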

File: PersistDataStep (fast sync)
@@ -14,15 +14,26 @@
*/
package org.hyperledger.besu.ethereum.eth.sync.fastsync.worldstate;

import static org.hyperledger.besu.ethereum.eth.sync.StorageExceptionManager.canRetryOnError;
import static org.hyperledger.besu.ethereum.eth.sync.StorageExceptionManager.errorCountAtThreshold;
import static org.hyperledger.besu.ethereum.eth.sync.StorageExceptionManager.getRetryableErrorCounter;

import org.hyperledger.besu.ethereum.core.BlockHeader;
import org.hyperledger.besu.ethereum.eth.sync.worldstate.WorldDownloadState;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage.Updater;
import org.hyperledger.besu.plugin.services.exception.StorageException;
import org.hyperledger.besu.services.tasks.Task;

import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PersistDataStep {

private static final Logger LOG = LoggerFactory.getLogger(PersistDataStep.class);

private final WorldStateStorage worldStateStorage;

public PersistDataStep(final WorldStateStorage worldStateStorage) {
@@ -33,24 +44,40 @@ public List<Task<NodeDataRequest>> persist(
final List<Task<NodeDataRequest>> tasks,
final BlockHeader blockHeader,
final WorldDownloadState<NodeDataRequest> downloadState) {
final Updater updater = worldStateStorage.updater();
tasks.stream()
.map(
task -> {
enqueueChildren(task, downloadState);
return task;
})
.map(Task::getData)
.filter(request -> request.getData() != null)
.forEach(
request -> {
if (isRootState(blockHeader, request)) {
downloadState.setRootNodeData(request.getData());
} else {
request.persist(updater);
}
});
updater.commit();
try {
final Updater updater = worldStateStorage.updater();
tasks.stream()
.map(
task -> {
enqueueChildren(task, downloadState);
return task;
})
.map(Task::getData)
.filter(request -> request.getData() != null)
.forEach(
request -> {
if (isRootState(blockHeader, request)) {
downloadState.setRootNodeData(request.getData());
} else {
request.persist(updater);
}
});
updater.commit();
} catch (StorageException storageException) {
if (canRetryOnError(storageException)) {
// We reset the task by setting it to null. This way, it is considered as failed by the
// pipeline, and it will attempt to execute it again later.
if (errorCountAtThreshold()) {
LOG.info(
"Encountered {} retryable RocksDB errors, latest error message {}",
getRetryableErrorCounter(),
storageException.getMessage());
}
tasks.forEach(nodeDataRequestTask -> nodeDataRequestTask.getData().setData(null));
} else {
throw storageException;
}
}
return tasks;
}

File: LoadLocalDataStep (snap sync)
@@ -14,11 +14,16 @@
*/
package org.hyperledger.besu.ethereum.eth.sync.snapsync;

import static org.hyperledger.besu.ethereum.eth.sync.StorageExceptionManager.canRetryOnError;
import static org.hyperledger.besu.ethereum.eth.sync.StorageExceptionManager.errorCountAtThreshold;
import static org.hyperledger.besu.ethereum.eth.sync.StorageExceptionManager.getRetryableErrorCounter;

import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.SnapDataRequest;
import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.heal.TrieNodeHealingRequest;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.metrics.BesuMetricCategory;
import org.hyperledger.besu.plugin.services.MetricsSystem;
import org.hyperledger.besu.plugin.services.exception.StorageException;
import org.hyperledger.besu.plugin.services.metrics.Counter;
import org.hyperledger.besu.services.pipeline.Pipe;
import org.hyperledger.besu.services.tasks.Task;
@@ -27,9 +32,12 @@
import java.util.stream.Stream;

import org.apache.tuweni.bytes.Bytes;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoadLocalDataStep {

private static final Logger LOG = LoggerFactory.getLogger(LoadLocalDataStep.class);
private final WorldStateStorage worldStateStorage;
private final SnapWorldDownloadState downloadState;
private final SnapSyncProcessState snapSyncState;
@@ -58,19 +66,35 @@ public Stream<Task<SnapDataRequest>> loadLocalDataTrieNode(
final Task<SnapDataRequest> task, final Pipe<Task<SnapDataRequest>> completedTasks) {
final TrieNodeHealingRequest request = (TrieNodeHealingRequest) task.getData();
// check if node is already stored in the worldstate
if (snapSyncState.hasPivotBlockHeader()) {
Optional<Bytes> existingData = request.getExistingData(downloadState, worldStateStorage);
if (existingData.isPresent()) {
existingNodeCounter.inc();
request.setData(existingData.get());
request.setRequiresPersisting(false);
final WorldStateStorage.Updater updater = worldStateStorage.updater();
request.persist(
worldStateStorage, updater, downloadState, snapSyncState, snapSyncConfiguration);
updater.commit();
downloadState.enqueueRequests(request.getRootStorageRequests(worldStateStorage));
completedTasks.put(task);
return Stream.empty();
try {
if (snapSyncState.hasPivotBlockHeader()) {
Optional<Bytes> existingData = request.getExistingData(downloadState, worldStateStorage);
if (existingData.isPresent()) {
existingNodeCounter.inc();
request.setData(existingData.get());
request.setRequiresPersisting(false);
final WorldStateStorage.Updater updater = worldStateStorage.updater();
request.persist(
worldStateStorage, updater, downloadState, snapSyncState, snapSyncConfiguration);
updater.commit();
downloadState.enqueueRequests(request.getRootStorageRequests(worldStateStorage));
completedTasks.put(task);
return Stream.empty();
}
}
} catch (StorageException storageException) {
if (canRetryOnError(storageException)) {
// We reset the task by setting it to null. This way, it is considered as failed by the
// pipeline, and it will attempt to execute it again later.
if (errorCountAtThreshold()) {
LOG.info(
"Encountered {} retryable RocksDB errors, latest error message {}",
getRetryableErrorCounter(),
storageException.getMessage());
}
task.getData().clear();
} else {
throw storageException;
}
}
return Stream.of(task);

File: PersistDataStep (snap sync)
@@ -14,16 +14,25 @@
*/
package org.hyperledger.besu.ethereum.eth.sync.snapsync;

import static org.hyperledger.besu.ethereum.eth.sync.StorageExceptionManager.canRetryOnError;
import static org.hyperledger.besu.ethereum.eth.sync.StorageExceptionManager.errorCountAtThreshold;
import static org.hyperledger.besu.ethereum.eth.sync.StorageExceptionManager.getRetryableErrorCounter;

import org.hyperledger.besu.ethereum.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.SnapDataRequest;
import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.heal.TrieNodeHealingRequest;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.plugin.services.exception.StorageException;
import org.hyperledger.besu.services.tasks.Task;

import java.util.List;
import java.util.stream.Stream;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PersistDataStep {
private static final Logger LOG = LoggerFactory.getLogger(PersistDataStep.class);

private final SnapSyncProcessState snapSyncState;
private final WorldStateStorage worldStateStorage;
@@ -43,41 +52,58 @@ public PersistDataStep(
}

public List<Task<SnapDataRequest>> persist(final List<Task<SnapDataRequest>> tasks) {
final WorldStateStorage.Updater updater = worldStateStorage.updater();
for (Task<SnapDataRequest> task : tasks) {
if (task.getData().isResponseReceived()) {
// enqueue child requests
final Stream<SnapDataRequest> childRequests =
task.getData().getChildRequests(downloadState, worldStateStorage, snapSyncState);
if (!(task.getData() instanceof TrieNodeHealingRequest)) {
enqueueChildren(childRequests);
} else {
if (!task.getData().isExpired(snapSyncState)) {
try {
final WorldStateStorage.Updater updater = worldStateStorage.updater();
for (Task<SnapDataRequest> task : tasks) {
if (task.getData().isResponseReceived()) {
// enqueue child requests
final Stream<SnapDataRequest> childRequests =
task.getData().getChildRequests(downloadState, worldStateStorage, snapSyncState);
if (!(task.getData() instanceof TrieNodeHealingRequest)) {
enqueueChildren(childRequests);
} else {
continue;
if (!task.getData().isExpired(snapSyncState)) {
enqueueChildren(childRequests);
} else {
continue;
}
}
}

// persist nodes
final int persistedNodes =
task.getData()
.persist(
worldStateStorage,
updater,
downloadState,
snapSyncState,
snapSyncConfiguration);
if (persistedNodes > 0) {
if (task.getData() instanceof TrieNodeHealingRequest) {
downloadState.getMetricsManager().notifyTrieNodesHealed(persistedNodes);
} else {
downloadState.getMetricsManager().notifyNodesGenerated(persistedNodes);
// persist nodes
final int persistedNodes =
task.getData()
.persist(
worldStateStorage,
updater,
downloadState,
snapSyncState,
snapSyncConfiguration);
if (persistedNodes > 0) {
if (task.getData() instanceof TrieNodeHealingRequest) {
downloadState.getMetricsManager().notifyTrieNodesHealed(persistedNodes);
} else {
downloadState.getMetricsManager().notifyNodesGenerated(persistedNodes);
}
}
}
}
updater.commit();
} catch (StorageException storageException) {
if (canRetryOnError(storageException)) {
// We reset the task by setting it to null. This way, it is considered as failed by the
// pipeline, and it will attempt to execute it again later. We do not log every retryable
// issue, only when the error count reaches the threshold.
if (errorCountAtThreshold()) {
LOG.info(
"Encountered {} retryable RocksDB errors, latest error message {}",
getRetryableErrorCounter(),
storageException.getMessage());
}
tasks.forEach(task -> task.getData().clear());
} else {
throw storageException;
}
}
updater.commit();
return tasks;
}

File: SnapWorldStateDownloadProcess (snap sync)
@@ -231,6 +231,22 @@ public SnapWorldStateDownloadProcess build() {
"step",
"action");

/*
The logic and intercommunication of different pipelines can be summarized as follows:

1. Account Data Pipeline (fetchAccountDataPipeline): This process starts with downloading the leaves of the account tree in ranges, with multiple ranges being processed simultaneously.
If the downloaded accounts are smart contracts, tasks are created in the storage pipeline to download the storage tree of the smart contract, and in the code download pipeline for the smart contract.

2. Storage Data Pipeline (fetchStorageDataPipeline): Running parallel to the account data pipeline, this pipeline downloads the storage of smart contracts.
If all slots cannot be downloaded at once, tasks are created in the fetchLargeStorageDataPipeline to download the storage by range, allowing parallelization of large account downloads.

3. Code Data Pipeline (fetchCodePipeline): This pipeline, running concurrently with the account and storage data pipelines, is responsible for downloading the code of the smart contracts.

4. Large Storage Data Pipeline (fetchLargeStorageDataPipeline): This pipeline is used when the storage data for a smart contract is too large to be downloaded at once.
It enables the storage data to be downloaded in ranges, similar to the account data.

5. Healing Phase: Initiated after all other pipelines have completed their tasks, this phase ensures the integrity and completeness of the downloaded data.
*/
final Pipeline<Task<SnapDataRequest>> completionPipeline =
PipelineBuilder.<Task<SnapDataRequest>>createPipeline(
"requestDataAvailable", bufferCapacity, outputCounter, true, "node_data_request")
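
To make the pipeline hand-off described in the comment above concrete, here is a minimal sketch. None of these type or method names (AccountLeaf, SnapPipelines, enqueueStorageRange, enqueueCode) exist in Besu; they only mirror the flow the comment describes: the account pipeline spawns storage and code requests for contract accounts, oversized storage moves to the large-storage pipeline, and healing runs after everything else completes.

// Illustrative only: hypothetical types mirroring the pipeline fan-out described above.
record AccountLeaf(byte[] storageRoot, byte[] codeHash, boolean isContract) {}

interface SnapPipelines {
  void enqueueStorageRange(byte[] storageRoot);      // fetchStorageDataPipeline
  void enqueueLargeStorageRange(byte[] storageRoot); // fetchLargeStorageDataPipeline
  void enqueueCode(byte[] codeHash);                 // fetchCodePipeline
}

final class SnapSyncFanOutSketch {
  private final SnapPipelines pipelines;

  SnapSyncFanOutSketch(final SnapPipelines pipelines) {
    this.pipelines = pipelines;
  }

  // 1. Account pipeline: each downloaded contract account spawns storage + code downloads.
  void onAccountDownloaded(final AccountLeaf leaf) {
    if (leaf.isContract()) {
      pipelines.enqueueStorageRange(leaf.storageRoot());
      pipelines.enqueueCode(leaf.codeHash());
    }
  }

  // 2. Storage pipeline: if one response cannot hold all slots, hand the account
  //    over to the large-storage pipeline so its ranges download in parallel.
  void onIncompleteStorageResponse(final AccountLeaf leaf) {
    pipelines.enqueueLargeStorageRange(leaf.storageRoot());
  }

  // 5. The healing phase starts only after all of the above pipelines have drained.
}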