Feature: dedicated peer slots for light clients #15624
Comments
This doesn't seem particularly difficult, so I could submit a PR, even if it takes me a while. This block puts a lower limit on "regular" peers, to be 1/2 of … What would be a reasonable approach and option name?
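For context, here is a minimal Go sketch of the kind of clamping being described, assuming the floor is half of the total peer count; the identifiers (`clampLightPeers`, `maxPeers`, `lightPeers`) are hypothetical and not go-ethereum's actual code:

```go
package main

import "fmt"

// clampLightPeers caps the light-client allocation so that at least
// half of the total peer slots remain available to regular peers.
// Hypothetical sketch only, not the referenced go-ethereum block.
func clampLightPeers(maxPeers, lightPeers int) (regular, light int) {
	if lightPeers > maxPeers/2 {
		lightPeers = maxPeers / 2
	}
	return maxPeers - lightPeers, lightPeers
}

func main() {
	// With the flags from this thread: --maxpeers 150 --lightpeers 135.
	regular, light := clampLightPeers(150, 135)
	fmt.Printf("regular=%d light=%d\n", regular, light) // regular=75 light=75
}
```

If a floor like this exists, it would explain why far fewer than the requested 135 light slots are actually granted.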
To clarify: this is 5-15 minutes after node restart. I've noticed that after a few days of continuous operation at the limit of the system's CPU load capacity, the numbers gravitate towards fewer "regular" peers and more light ones. At some point, though, the node experiences some failure and stops syncing (while still having both types of peers). I'll see what running at increased verbosity gets me. EDIT: May be the same as issue #15636.
I'm having a similar issue whereby I am running a light server but I actually get too many light clients (see #15689), which means I end up with only light peers and no full peers to get new blocks from.
Per #15689 (comment): closing this, as it's now a duplicate (sort of).
Geth/v1.7.3-stable-4bb3c89d/linux-amd64/go1.9
I've recently spun up a node with the sole intent of serving light clients. In the systemd service file, I've specified:
```
--maxpeers 150 --lightserv 90 --lightpeers 135
```
Of course, this doesn't do what I'd like. My peer list is 85 "light" clients and 65 "regular" ones; I'd like it to be 135/15, respectively. It "would be nice" to have an option to limit "regular" peers; alternatively, it "would be nice" to be able to reserve peer slots specifically for "light" clients.
EDIT: For reference, this has been implemented in PR #16010.
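For readers following along, here is a rough sketch of the dedicated-slot idea being requested: reserve the lightpeers count exclusively for light clients and offer only the remainder to regular peers. This is not the code from PR #16010; every identifier below is hypothetical.

```go
package main

import "fmt"

// slots models a hypothetical dedicated-slot scheme: lightPeers slots
// are reserved exclusively for light clients, and only the remainder
// is offered to regular peers.
type slots struct {
	maxPeers   int // total connections, as in --maxpeers
	lightPeers int // slots reserved for light clients, as in --lightpeers
}

// regularCap returns how many regular peers may connect once the
// light-client reservation is subtracted from the total.
func (s slots) regularCap() int {
	if s.lightPeers >= s.maxPeers {
		return 0
	}
	return s.maxPeers - s.lightPeers
}

func main() {
	s := slots{maxPeers: 150, lightPeers: 135}
	// Yields the 135/15 split the reporter asked for.
	fmt.Printf("light=%d regular=%d\n", s.lightPeers, s.regularCap())
}
```

Under this scheme the flags above would pin the split at 135 light / 15 regular regardless of connection churn, rather than letting one class crowd out the other.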