Make the debounce adaptive for validation job #1973
Merged
TL;DR
This PR makes the debounce time of performValidation() adaptive. The change won't affect the performance of any single request, such as the time to compute a completion list, but it can significantly boost the overall throughput of the JDT Language Server; the more powerful the machine, the larger the boost.
400ms debounce is too large
The current 400ms debounce time looks too large for the validation job. I added some logs to see how long the performValidation() job takes while randomly writing some code in a Java file with 4000+ lines and 400+ methods:
If we check the time spent in JDTLanguageServer.waitForLifeCycleJobs(), you can see that threads spend a lot of time just waiting for the document lifecycle jobs.
Windows
MacOS
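The timing logs mentioned above are not shown in this description; as a rough sketch of how such a measurement could look (the wrapper below is hypothetical, not the PR's code):

```java
public class JobTiming {
    /** Run a job, log its elapsed wall-clock time, and return it (sketch only). */
    static long timeMs(Runnable job) {
        long start = System.nanoTime();
        job.run();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("job took " + elapsedMs + "ms");
        return elapsedMs;
    }

    public static void main(String[] args) {
        // Simulate a validation job that takes ~50ms.
        long ms = timeMs(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
        });
        System.out.println("measured: " + ms + "ms");
    }
}
```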
Use Adaptive Debounce
I used a moving average window to make the debounce time adaptive, with 400ms as the upper bound.
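A minimal sketch of this idea, assuming the approach described above (track the duration of the last N validation runs, use their average as the next debounce delay, and cap it at 400ms); the class and method names here are hypothetical, not the PR's actual code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class AdaptiveDebounce {
    private static final long MAX_DEBOUNCE_MS = 400; // upper bound from the PR
    private static final int WINDOW_SIZE = 10;       // assumed window size

    private final Deque<Long> window = new ArrayDeque<>();
    private long sum = 0;

    /** Record how long the latest validation job took. */
    public void record(long durationMs) {
        window.addLast(durationMs);
        sum += durationMs;
        if (window.size() > WINDOW_SIZE) {
            sum -= window.removeFirst(); // drop the oldest sample
        }
    }

    /** Debounce delay for the next run: moving average, capped at 400ms. */
    public long nextDebounceMs() {
        if (window.isEmpty()) {
            return MAX_DEBOUNCE_MS; // first round still uses 400ms
        }
        return Math.min(MAX_DEBOUNCE_MS, sum / window.size());
    }

    public static void main(String[] args) {
        AdaptiveDebounce d = new AdaptiveDebounce();
        System.out.println(d.nextDebounceMs()); // 400 before any samples
        d.record(120);
        d.record(80);
        System.out.println(d.nextDebounceMs()); // (120 + 80) / 2 = 100
        d.record(2000); // one slow run cannot push the delay past the cap
        System.out.println(d.nextDebounceMs()); // min(400, 2200 / 3) = 400
    }
}
```

Fast machines finish validation quickly, so their moving average (and hence the debounce) shrinks, which matches the observation that more powerful machines see a larger boost.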
Average Time Cost per LSP Request
To check the impact of this change, let's look at the average time cost to resolve each LSP request. The time for each request can be calculated from the trace:
Windows (Unit: ms)
MacOS (Unit: ms)
Throughput
We can convert the tables above to throughput; the unit is the number of LSP requests handled per second.
Windows
MacOS
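The conversion is just the reciprocal of the average latency; for example, an average cost of 25ms per request corresponds to 40 requests per second:

```java
public class Throughput {
    /** Requests handled per second, given the average cost per request in ms. */
    static double requestsPerSecond(double avgMs) {
        return 1000.0 / avgMs;
    }

    public static void main(String[] args) {
        System.out.println(requestsPerSecond(25.0));  // 40.0
        System.out.println(requestsPerSecond(400.0)); // 2.5
    }
}
```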
Below are two videos illustrating the impact of a higher throughput; note the time it takes to semantically highlight the variable aaa. A higher throughput makes the semantic highlighting faster (and other kinds of requests as well 😃).
400ms debounce
400_debounce_2.mp4
Adaptive debounce
The first round still uses 400ms as the initial debounce time. After several rounds, the moving average becomes smaller and smaller, and you can see the highlighting become faster and faster.
adaptive_debounce_2.mp4
Signed-off-by: Sheng Chen [email protected]