
Separate dispatch queue for server push requests #43

Closed
radeksimko opened this issue Feb 12, 2021 · 3 comments

Labels
enhancement (New feature or request), question (Further information is requested)

Comments

@radeksimko
Contributor

radeksimko commented Feb 12, 2021

While I appreciate concurrency from a performance perspective, we had to reduce it to 1 early on in our language server, because LSP makes certain assumptions about the ordering of requests; this lets us ensure that requests are processed in the same order they arrive.

(Anecdotally) this doesn't seem to affect performance much in standard language server usage, since the user is typically editing just one file at a time anyway, which limits the number of requests queued for dispatch at any given moment.

However, the negative side effect of this decision is that it effectively prevents the server from sending requests back while a handler/task is being dispatched, which is important e.g. for cancellation or progress reporting.

Therefore I was wondering whether you would consider processing push requests/notifications sent from the server in a separate queue, so that the server and client don't block each other when Concurrency is set to 1?
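The separation being proposed can be sketched with plain Go channels (a hypothetical illustration of the idea, not the library's actual API): inbound client requests are handled by a single worker to preserve ordering, while server-to-client pushes drain on their own queue, so a long-running handler never blocks outgoing progress or cancellation traffic.

```go
package main

import (
	"fmt"
	"sync"
)

// dispatch is a minimal sketch of a two-queue design: one worker for
// inbound requests (preserving arrival order), one for outbound pushes.
// It returns the outbound messages in the order they were emitted.
func dispatch(reqs []string) []string {
	inbound := make(chan string)  // client -> server requests
	outbound := make(chan string) // server -> client pushes

	// Single inbound worker: requests are handled strictly in order.
	go func() {
		for req := range inbound {
			// A handler can push progress without waiting for the
			// inbound queue to drain.
			outbound <- "progress for " + req
		}
		close(outbound)
	}()

	// Separate outbound worker: pushes never wait on handlers.
	var out []string
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for msg := range outbound {
			out = append(out, msg)
		}
	}()

	for _, r := range reqs {
		inbound <- r
	}
	close(inbound)
	wg.Wait()
	return out
}

func main() {
	for _, m := range dispatch([]string{"textDocument/didOpen", "textDocument/completion"}) {
		fmt.Println(m)
	}
}
```

Because each queue has exactly one worker, ordering is preserved on both sides while the two directions stay independent.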

@creachadair
Owner

I'm willing to consider this, but it would be a substantial change—and not one that fits well with the existing design. I would have structured things differently if I'd been attempting to implement a true "peer" rather than separate client and server. Ironically, I considered that approach when I was first building it, but wound up not going that direction.

Let me start by asking: Is it still necessary to set concurrency to 1 to achieve what you are describing?

Since #24 et seq., requests should be properly sequenced (not just notifications): The handlers for two requests should not overlap in time unless they are concurrent (as defined in the docs).
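That sequencing guarantee can be illustrated with a small stdlib sketch (hypothetical code, not how jrpc2 is implemented): handlers are started in arrival order, and a semaphore caps how many may overlap; with a limit of 1 they fully serialize.

```go
package main

import (
	"fmt"
	"sync"
)

// runOrdered starts tasks in arrival order and allows at most
// `concurrency` of them to overlap in time. With concurrency 1,
// each task finishes before the next one starts.
func runOrdered(tasks []func(), concurrency int) {
	sem := make(chan struct{}, concurrency)
	var wg sync.WaitGroup
	for _, t := range tasks {
		sem <- struct{}{} // acquire a slot; blocking here preserves start order
		wg.Add(1)
		go func(t func()) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot when done
			t()
		}(t)
	}
	wg.Wait()
}

func main() {
	var order []int
	tasks := make([]func(), 3)
	for i := range tasks {
		i := i
		tasks[i] = func() { order = append(order, i) }
	}
	runOrdered(tasks, 1) // concurrency 1: no two handlers overlap
	fmt.Println(order)   // tasks ran strictly in arrival order
}
```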

@radeksimko
Contributor Author

> Let me start by asking: Is it still necessary to set concurrency to 1 to achieve what you are describing?

I'm tempted to say yes, but I don't have any data to prove it at this point. We can try raising the concurrency and adding some telemetry to see how it behaves in practice, to confirm one way or the other.

Previously we had no way of collecting such data other than by surfacing problems to the user, so I would have been worried about taking such a step in the past, but I think we can take the (calculated) risk now. I will try to find a way to measure this, PR something in, get it released in the coming weeks, and then report back.

Thank you for your patience and the quick response, as usual.

@creachadair added the enhancement (New feature or request) and question (Further information is requested) labels on Mar 4, 2021
@radeksimko
Contributor Author

We have raised the concurrency to 2+ (depending on available CPUs) in a recent release of terraform-ls, and so far we have not received any reports of wrongly ordered requests. I'm going to close this issue for now and will raise a new one, with more concrete data, if/when we do receive any.
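The "2+, depending on available CPUs" scheme could look something like the following (a hypothetical helper for illustration, not terraform-ls source): use the CPU count as the worker pool size, with a floor of 2.

```go
package main

import (
	"fmt"
	"runtime"
)

// defaultConcurrency returns a worker-pool size of at least 2,
// scaled up with the number of CPUs available to the process.
func defaultConcurrency() int {
	if n := runtime.NumCPU(); n > 2 {
		return n
	}
	return 2
}

func main() {
	fmt.Println(defaultConcurrency())
}
```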

Thank you!
