Ms.mempool locking #9050
Conversation
This pull request introduces 3 alerts when merging 01f3688 into 3df0f9e - view on LGTM.com
This pull request introduces 1 alert when merging bf4a33d into 3df0f9e - view on LGTM.com
This pull request introduces 1 alert when merging c44e676 into 3df0f9e - view on LGTM.com
This pull request introduces 1 alert when merging e5ee15a into 3df0f9e - view on LGTM.com
This pull request introduces 1 alert when merging 516b7cf into eb62a28 - view on LGTM.com
* Logging for cache
* Less logging
* Return to original plan
* Clean up
* Remove comment
* Remove log
f = getattr(self.api, message_type, None)
if len(message_types) % 100 == 0:
Are the changes in chia/server/server.py just logging every 100th message per type?
They are logging the current state (which API tasks are still running) for debugging purposes.
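To make that concrete, here is a minimal sketch of that kind of rate-limited debug logging, assuming a message handler that tracks per-type counts; the class and attribute names (MessageDebugLogger, running_tasks, on_message) are illustrative, not the actual chia/server/server.py code.

```python
import logging
from collections import Counter
from typing import Dict

log = logging.getLogger(__name__)


class MessageDebugLogger:
    """Illustrative only: track per-type message counts and log running handlers."""

    def __init__(self) -> None:
        self.message_counts: Counter = Counter()  # messages seen, keyed by type
        self.running_tasks: Dict[str, str] = {}   # task id -> message type still running

    def on_message(self, message_type: str, task_id: str) -> None:
        self.message_counts[message_type] += 1
        self.running_tasks[task_id] = message_type
        # Log the current state only every 100th message to avoid flooding the log.
        if sum(self.message_counts.values()) % 100 == 0:
            log.debug("API tasks still running: %s", list(self.running_tasks.values()))

    def on_task_done(self, task_id: str) -> None:
        self.running_tasks.pop(task_id, None)
```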
@@ -493,12 +495,11 @@ def connection_closed(self, connection: WSChiaConnection, ban_time: int):
        f"Invalid connection type for connection {connection.peer_host},"
        f" while closing. Handshake never finished."
    )
self.cancel_tasks_from_peer(connection.peer_node_id)
Was there some race going on here?
(it got moved before on disconnect)
No, I just did that as a precaution; I was scared the stuff below it might throw an exception.
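A small sketch of the ordering being discussed, assuming a server object with a cancel_tasks_from_peer method as in the PR; the remaining cleanup calls (remove_connection, maybe_ban) are hypothetical stand-ins, not the real method body.

```python
import logging

log = logging.getLogger(__name__)


def close_connection(server, connection, ban_time: int) -> None:
    # Cancel this peer's outstanding API tasks first, as a precaution, so an
    # exception later in the cleanup cannot skip the cancellation.
    server.cancel_tasks_from_peer(connection.peer_node_id)
    try:
        server.remove_connection(connection)              # hypothetical cleanup step
        server.maybe_ban(connection.peer_host, ban_time)  # hypothetical cleanup step
    except Exception:
        log.exception("Error while closing connection to %s", connection.peer_host)
```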
@@ -484,30 +513,52 @@ async def new_peak(self, new_peak: Optional[BlockRecord]) -> List[Tuple[SpendBun
    return []
assert new_peak.timestamp is not None

use_optimization: bool = self.peak is not None and new_peak.prev_transaction_block_hash == self.peak.header_hash
Is this needed? I thought changes are tracked across batch updates.
Well, after sync we call this method, but we don't have all of the coin changes since the start of the sync. We could instead clear the mempool after the sync.
aok
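Roughly what that branch amounts to, as a hedged sketch: only the use_optimization condition comes from the diff; the mempool helper names (remove_spent_coins, revalidate_all) are assumptions for illustration.

```python
def on_new_peak(mempool, current_peak, new_peak, coin_changes) -> None:
    # The fast path is only valid when the new peak builds directly on the
    # current one; then coin_changes covers exactly what changed since the
    # previous call.
    use_optimization = (
        current_peak is not None
        and new_peak.prev_transaction_block_hash == current_peak.header_hash
    )
    if use_optimization:
        # Remove only the mempool items spending coins that were just spent.
        mempool.remove_spent_coins(coin_changes)  # hypothetical helper
    else:
        # Peak jumped (e.g. after a long sync): we no longer have every coin
        # change since the old peak, so recheck (or clear) the whole mempool.
        mempool.revalidate_all(new_peak)          # hypothetical helper
```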
Many improvements with two goals:
Improvements:
* peak_post_processing when tasks are cancelled
* Split peak_post_processing into peak_post_processing and peak_post_processing_2 (see the sketch below). The first one must be called under the blockchain lock, since it modifies full node state. The second one can be called outside the lock, and it deals with sending blocks and messages to other peers and wallets. Before, they were both called inside the lock.
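A minimal sketch of that split, assuming an asyncio lock named blockchain_lock on the full node and a simplified call signature; only the two method names peak_post_processing and peak_post_processing_2 come from the description above.

```python
async def on_block_added(full_node, block) -> None:
    async with full_node.blockchain_lock:
        # Must run under the blockchain lock: updates full node state for the new peak.
        result = await full_node.peak_post_processing(block)
    # Can run outside the lock: sends the new peak to other peers and wallets,
    # so slow network I/O no longer holds the blockchain lock.
    await full_node.peak_post_processing_2(block, result)
```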