Is it possible to improve slight completion stutter? Performance vs native Neovim LSP. #291
Hello Nate, I wonder if it is worth experimenting to see the cost of completion list processing versus IPC cost. If the slight stutter on slow machines is IPC (talking to the language server), then async/debouncing may be the item of interest. However, if it is list processing/sorting, then Lua'izing that may be worthwhile. I suspect it is more the former, and maybe there isn't much that can be done. I may try and find out. If you can, can you tell me where to comment out the list processing in the LSC completion handling? Again, on a modern fast machine (aka my desktop Linux box) LSC seems nice and smooth. On low-end hardware there is a slight, and I mean slight, stutter compared with the Neovim native + completion-nvim solution.
There are a couple of things that could contribute here. When we request completions we always flush changes: vim-lsc/autoload/lsc/complete.vim Line 92 in baa44d4
We don't have a debounce there; as soon as it wants to trigger a completion it flushes changes, going through the diff algorithm. Once we have the suggestions from the server, there are 2 main pieces of work. Translate the representation to something vim friendly: vim-lsc/autoload/lsc/complete.vim Lines 232 to 248 in baa44d4
Filter and sort the completions: vim-lsc/autoload/lsc/complete.vim Lines 205 to 221 in baa44d4
The "sort" in this case is just collecting them in buckets based on the type of match, but I do think this is the most likely cause of slowness.
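To make the bucketing concrete, here is a minimal Vimscript sketch of that style of "sort" (my own illustration; the function name and the exact match tiers are assumptions, not vim-lsc's actual code):

function! s:FilterSort(items, base) abort
  " Collect matches into buckets by match quality; better buckets first.
  let l:prefix = []
  let l:icase_prefix = []
  let l:substring = []
  for l:item in a:items
    if strpart(l:item.word, 0, len(a:base)) ==# a:base
      call add(l:prefix, l:item)
    elseif strpart(l:item.word, 0, len(a:base)) ==? a:base
      call add(l:icase_prefix, l:item)
    elseif stridx(l:item.word, a:base) >= 0
      call add(l:substring, l:item)
    endif
  endfor
  " No comparison sort at all: concatenating the buckets preserves the
  " server's order within each tier, which is why this step is cheap.
  return l:prefix + l:icase_prefix + l:substring
endfunction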
DO NOT SUBMIT Testing behavior and performance. Investigating for #291
#294 drops … It's possible that implementing parts of this in lua could help. That could also make it feasible to introduce fuzziness in the matching instead of strict prefix or substring matches...
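On the lua idea, a hedged sketch of what offloading the filter to Neovim's LuaJIT might look like from Vimscript (assuming Neovim's vim.tbl_filter and vim.startswith helpers; this is not an actual vim-lsc change):

function! s:FilterWords(words, base) abort
  if has('nvim')
    " luaeval() passes its second argument to Lua as _A:
    " _A[1] is the candidate word list, _A[2] is the typed base.
    return luaeval(
        \ 'vim.tbl_filter(function(w) return vim.startswith(w, _A[2]) end, _A[1])',
        \ [a:words, a:base])
  endif
  " Pure-Vimscript fallback: strict prefix match.
  return filter(copy(a:words), {_, w -> stridx(w, a:base) == 0})
endfunction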
Good details Nate. I see you have a testing branch ready to go.
Hi Nate,
We can eliminate that as a cause.
Hi Nate, I wonder, is the initial rendering of the completion menu a synchronous act? I assume it is just the standard Vim completion popup menu, not a new-style Vim popup window or Neovim floating window? Playing with the Dart language server, the stutter really does feel like it occurs in the window between the completion menu rendering into existence and me still typing characters (at a certain pace). Once the completion popup is on-screen I don't get the stutters no matter the IPC, the filtering, or my typing pace (so it seems). Just a theory: maybe the act of displaying the menu on screen for the first time pauses the editor from asynchronously handling ongoing keyboard input. I switched over to Neovim LSP + completion-nvim and the completion popup never causes keyboard stutters and pauses. Maybe it is a floating window, OR maybe it is asynchronously rendered and updated? A useful way to test that theory would be a test branch where the completion menu is always rendered (even with a fake first entry); that way it is on the screen all the time. That should prove or disprove my current theory.
Hi again, I just tested out vim-lsp + asyncomplete.vim and observed similar stutter behaviour, if anything actually a little worse. The source being a Flutter Dart file with the Dart Language Server run on a very slow Core-M Macbook. And, if I trust my eyes, I observe the same pause point, that being the moment when the completion menu renders into existence. Once the completion popup is on-screen, no pauses or stutters as best as I can tell. I wonder if native LSP in Neovim + completion-nvim is rendering a floating window instead of a completion popup? Or maybe it is asynchronously rendered? In LSC and vim-lsp it really does appear that rendering the completion popup blocks the main thread, pausing keyboard input. Again, I can't confirm the theory yet. More research needed; I could be wrong. If we do figure out and solve this issue, big if, then we should also let the vim-lsp folks know, since their stutter is even more egregious. Yes, I am now greedy; I do want that silky smooth experience I get in Neovim's native LSP client in LSC itself. I will test out the Completor plugin next.
Here are my investigations on completion perf. The issue is that it takes forever to loop over 2,000 items and convert LSP items to vim complete items. prabirshrestha/vim-lsp#823 Vim-lsp calculates the same things for every item, which causes it to be slower than others.
Nice to chat, thanks for dropping by, I appreciate it. Would that processing still happen once the completion menu is on-screen? My understanding is yes. It seems to me that once the completion menu is actually on the screen, I don't get stutters as the Flutter API matches are being filtered against the prefix being entered. With vim-lsp the menu can appear after the first character is typed, or the second, or the third, etc. I varied my typing rate to trigger the completion on the first, then second, then third character, and the pause is always the same: the point when the menu pops into existence; from then on, filtering does not seem to pause. So I am not sure that list processing is the cause of the stutter? Again, I am using a ridiculously slow Macbook with the Dart Language Server, which is also a tad slow. So it is a good worst-case scenario. Even with all that, I can use LSC with Dart on my machine just fine, and have since I started using it. The stutter is about 300ms, if I were to guess. I only raised this issue since the native Neovim LSP completion was buttery smooth; it would be nice if both LSC and vim-lsp were also as smooth. P.S. Neovim's native LSP has other issues that make it not good to use right now. Very green, with weird floating window issues.
Potential fix for #291. A completion list can contain many more items than will be displayed if many items are filtered out due to not matching the already typed base word. Any time spent performing translation from an LSP style completion item to a vim style item is wasted for the words that aren't displayed. Delay the conversion until after an item is known to not be filtered.
- Perform only the dict -> list normalization initially, and pass the list of LSP items to the next step of completion.
- Find the start of the completion using the LSP items.
- Pass the base string to `s:CompletionItems`, and perform the filtering in the same loop as the conversion. Skip any items which don't match the base.
- Split `s:CompletionItem` into `s:CompletionItemWord`, which finds the text to insert (and optionally the snippet user data), and `s:FinishItem`, which does the heavier normalization including documentation details.
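A condensed sketch of the shape of that change (the signatures and the prefix test here are my assumptions; s:CompletionItemWord and s:FinishItem are the helpers named above):

function! s:CompletionItems(base, lsp_items) abort
  let l:items = []
  for l:lsp_item in a:lsp_items
    " Cheap step first: extract only the text that would be inserted.
    let l:word = s:CompletionItemWord(l:lsp_item)
    if stridx(l:word.word, a:base) != 0
      continue  " No match against the typed base: skip the heavy conversion.
    endif
    " Heavy step only for survivors: detail, documentation, user data.
    call add(l:items, s:FinishItem(l:lsp_item, l:word))
  endfor
  return l:items
endfunction

The win is that a single stridx() test per item is far cheaper than building the full vim completion dict for items that would be thrown away anyway.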
Yeah, this is using
The problem is most likely the synchronous vimscript doing all the work it needs before it calls
Not in the case of vim-lsc. The … As of today, once the menu is on screen, the plugin isn't doing any work.
This does highlight something interesting: I'm not sure if vim-lsp does any filtering, but if it does you might be able to do a similar fix. Your comment did make me realize that vim-lsc is wasting a bunch of time in all the cases where filtering is happening. If it filters first it wouldn't need to spend as much time on the items that get thrown away anyway. I've got an attempt at filtering before converting in #298. @bluz71, can you try it out and see if you notice any difference?
@bluz71 a side note specific to Dart,
Hello Nate,
You and Prabir are probably right. It just really seems that once the popup is on-screen it is smooth sailing. I need to test again with the … Crudely, we could also just cut out that pre-processing, aka me commenting out a big block, and see how that goes.
Yes, I will test soon and let you know.
I use my own configuration for Dart, Ruby and JavaScript as follows:
let g:lsc_server_commands = {
\ 'dart': {
\ 'command': 'dart $DART_SDK/bin/snapshots/analysis_server.dart.snapshot --lsp',
\ 'log_level': -1,
\ 'suppress_stderr': v:true,
\ },
\ 'ruby': {
\ 'command': 'solargraph stdio',
\ 'log_level': -1,
\ 'suppress_stderr': v:true,
\ },
\ 'javascript': {
\ 'command': 'typescript-language-server --stdio',
\ 'log_level': -1,
\ 'suppress_stderr': v:true,
\ }
\}
I experimented with machine learning mode, and it was simply too slow and RAM-consuming on my little 1.1GHz Core-M Macbook. Normal LSP mode for the Dart Analysis Server already does a great job. I also doubt anyone is using Dart LSP on a slower machine :) Again, I want to emphasise that LSC is more on the fast side than the slow side; there is no significant issue, more a micro issue. If I hadn't compared to an alternative LSP implementation, I would have just lived with it; the evils of comparison. Also, your recent Lua optimization had me in a performance thinking mode; yes, I experimented with Neovim native due to your Lua work (since I was intrigued). P.S. Your simple configuration for LSC is bang on the money. Neovim's native LSP implementation throws you into the Lua config deep-end; I ended up drowning :)
Ok Nate, a couple of non-scientific data points.
On a fast machine I think you fixed the biggest part of the stutter. The other way to tackle the issue is to make the Dart Analysis Server a bit less chatty?
I did a quick hack of the … Candidate insert text:
Even the … A list of nearly 6,000 items sounds like a non-optimal recipe for pure Vimscript. I don't know if this is useful data; heck, you probably already knew this.
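To put a rough number on that intuition, here is a throwaway Vimscript micro-benchmark one could run (entirely my own sketch; the conversion body is a stand-in for the real per-item work):

function! s:BenchConvert(n) abort
  let l:start = reltime()
  " Synthesize n fake LSP-ish items, then 'convert' each to a vim item.
  let l:items = map(range(a:n), {_, i -> {'label': 'candidate' . i}})
  call map(l:items, {_, item -> {'word': item.label, 'menu': '', 'icase': 1}})
  echomsg a:n . ' items took ' . reltimestr(reltime(l:start)) . ' seconds'
endfunction
call s:BenchConvert(6000)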
Awesome! I'll try to get that PR in soon. I'm not sure if I want to wait for adding tests. I think we're probably at the point of diminishing returns for optimizing further in the client unless/until I have a chance to dig in on fuzzy completion, so I'll close this issue when I submit that PR. The Dart server is pretty aggressive about suggestions. There is some discussion at dart-lang/sdk#40026 but no concrete plans to send fewer items that I'm aware of. One thing that could help, though: the Dart server sends far fewer suggestions following a
Hi Nate,
Best regards.
So in summary, the performance comparison when completing with LSC versus Neovim LSP augmented with completion.nvim basically boils down to the list-processing speed differential between VimScript and LuaJIT. With language servers that send small lists (say, well under 2,000 items) there is no tangible difference. Getting up towards 6,000 items, things start to get dicey with VimScript on slower machines. My comment here is probably also applicable, that being that LuaJIT is really damn fast.
Oh dear, I just upgraded Flutter from
This can't be right? The latest Dart Language Server is now sending 5 times as many items as before, which itself was probably 5 times too many to begin with? In this LSC code:
function! s:CompletionItems(completion_result) abort
let completion_items = []
if type(a:completion_result) == type([])
let completion_items = a:completion_result
elseif type(a:completion_result) == type({})
let completion_items = a:completion_result.items
endif
call map(completion_items, {_, item -> s:CompletionItem(item)})
let completion = {'items' : completion_items}
for item in completion_items
if has_key(item, 'start_col')
let completion.start_col = item.start_col
break
endif
endfor
return completion
endfunction
I assume completion items are candidate text strings that the Language Server thinks may be relevant? So is the expectation here that LSP clients must always do the filtering; more text from the server implies more filtering needed in the client? If so, with the latest Dart language server we are now dealing with 26,000 completion items, which is bringing Vim to its knees even on a quad-core desktop.
I improved perf a bit for vim-lsp. You need prabirshrestha/vim-lsp#835 and the latest asyncomplete-lsp.vim. For vim-lsp I would like to avoid filtering, since others can easily create plugins that filter certain things out: https://github.com/machakann/asyncomplete-ezfilter.vim https://github.com/tsufeki/asyncomplete-fuzzy-match I'm not familiar with dart, but one thing I saw in the typescript lang server was that it will show all possible global vars (such as window/console) as well as completions that don't exist yet but will be auto-imported when you complete them. So one way to reduce might be to disable code actions and see if that helps? One thing I had thought of to improve: if the completion count is greater than X, only convert partial results but also set isIncomplete = true so the next keystroke will request completion results again. What we really need is wasm in vim so we can use threads :) https://twitter.com/PrabirShrestha/status/1239023903235686401 and https://github.com/prabirshrestha/vim/tree/wasm3 It will be interesting to see how lua performs here. But it does require converting quite a large chunk of vimscript to lua, at least in vim-lsp: https://github.com/prabirshrestha/vim-lsp/blob/052d26fa1846300339313114060dcc710c8f4670/autoload/lsp/omni.vim#L258-L332. This would mean double maintenance :(
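A minimal sketch of that capping idea (the threshold and function name are illustrative, not from either plugin):

" Illustrative threshold; not a tuned value.
let s:max_items = 1500

function! s:CapItems(items) abort
  if len(a:items) > s:max_items
    " Truncate and mark the list incomplete so the next typed character
    " triggers a fresh completion request to the server.
    return {'isIncomplete': v:true, 'items': a:items[: s:max_items - 1]}
  endif
  return {'isIncomplete': v:false, 'items': a:items}
endfunction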
I will give your latest versions a test on my slow laptop (which still uses the older, less verbose Dart Language Server) this evening and report the results back in this issue. Stay tuned.
Does that exchange push filtering server side? Say I type … It definitely feels like more than 1,500 candidates start to cause issues.
I don't know if Nate is planning to do fuzzy matching in Lua or VimScript. I assumed Lua, but your post makes me nervous about the doubled-up code path and maintenance burden. Then again, LuaJIT in Neovim is up to 2,000 times faster than VimScript. If LSC stays in pure VimScript then list processing needs to be bounded to minimize latency. It will be an interesting conversation. What I do know is that Dart LSP is pushing pure VimScript LSP clients to the extreme.
I don't have benchmarking tools, so all I can judge on is feel and visuals. LSC, with and without Nate's latest enhancement, feels snappier. No doubt you are doing more work, especially on the diagnostic side; I turn all LSC diagnostics off and use ALE instead. The Dart Language Server is sending a lot of data over the wire. My gut instinct is that it is sending way too much data. I think this suggestion of yours is worth contemplating:
Best regards.
Vim includes a profiler:
:profile start profile.log
:profile func *
:quit
Do you have a sample dart repo I can try? I haven't used dart before.
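An aside on that profiler: the log can also be scoped to one plugin's functions and paused around the interesting window, using standard :profile subcommands (the lsc#* pattern here is my example, matching vim-lsc's autoload functions):

:profile start lsc.log
:profile func lsc#*
" ... reproduce the completion stutter, then:
:profile pause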
Profile 1, FUNCTIONS SORTED ON TOTAL TIME
count total (s) self (s) function
128 3.049541 0.006727 <SNR>162_on_stdout()
128 3.042815 0.310671 <SNR>161_on_stdout()
2 2.703529 0.000184 <SNR>84_handle_completion()
2 2.702487 0.230760 lsp#omni#get_vim_completion_items()
11249 2.471727 0.210586 lsp#omni#get_vim_completion_item()
11249 2.261141 1.841535 lsp#omni#default_get_vim_completion_item()
15 1.272256 0.007046 <SNR>145_recompute_pum()
15 1.258250 1.244765 <SNR>145_default_preprocessor()
94 0.731074 0.000691 <SNR>160_next()
73 0.520560 0.000179 <SNR>160_callback()
11249 0.371560 0.325156 <SNR>163_create_user_data()
21 0.211942 0.000130 lsp#utils#step#start()
13 0.146393 0.001032 <SNR>152_ensure_flush()
13 0.143449 0.001016 <SNR>152_ensure_start()
13 0.136070 0.000433 <SNR>152_ensure_init()
13 0.132453 0.000538 <SNR>152_ensure_conf()
13 0.128766 0.001010 <SNR>152_ensure_open()
13 0.101145 0.015624 <SNR>152_ensure_changed()
2 0.080890 0.000258 AutoSave()
1 0.080481 0.004167 DoSave()
FUNCTIONS SORTED ON SELF TIME
count total (s) self (s) function
11249 2.261141 1.841535 lsp#omni#default_get_vim_completion_item()
15 1.258250 1.244765 <SNR>145_default_preprocessor()
11249 0.371560 0.325156 <SNR>163_create_user_data()
128 3.042815 0.310671 <SNR>161_on_stdout()
2 2.702487 0.230760 lsp#omni#get_vim_completion_items()
11249 2.471727 0.210586 lsp#omni#get_vim_completion_item()
41 0.055542 0.053719 <SNR>158_encode_uri()
1 0.048732 0.048717 <SNR>145_on_insert_leave()
11249 0.046404 <SNR>163_create_user_data_key()
11249 0.046307 lsp#utils#_trim()
15 0.061809 0.024264 <SNR>145_on_change()
6 0.023068 0.022320 gitgutter#async#execute()
13 0.101145 0.015624 <SNR>152_ensure_changed()
175 0.018454 0.014806 MoonflyActiveStatusLine()
175 0.013290 ObsessionStatus()
15 0.013484 0.010986 asyncomplete#preprocess_complete()
39 0.009979 asyncomplete#context()
30 0.008930 <SNR>87_abs_path()
175 0.014743 0.008897 <SNR>166_GetCounts()
15 1.272256 0.007046 <SNR>145_recompute_pum()
Profile 2, FUNCTIONS SORTED ON TOTAL TIME
count total (s) self (s) function
139 2.802449 0.008733 <SNR>162_on_stdout()
139 2.793716 0.302474 <SNR>161_on_stdout()
2 2.457827 0.000170 <SNR>84_handle_completion()
2 2.456899 0.237656 lsp#omni#get_vim_completion_items()
11249 2.219201 1.810996 <SNR>163_get_vim_completion_item()
14 1.172933 0.007402 <SNR>145_recompute_pum()
14 1.158770 1.146942 <SNR>145_default_preprocessor()
94 0.685620 0.000715 <SNR>160_next()
73 0.488437 0.000192 <SNR>160_callback()
11249 0.361058 0.316452 <SNR>163_create_user_data()
21 0.198623 0.000108 lsp#utils#step#start()
13 0.137317 0.000626 <SNR>152_ensure_flush()
13 0.134760 0.000899 <SNR>152_ensure_start()
13 0.126804 0.000613 <SNR>152_ensure_init()
13 0.123144 0.000489 <SNR>152_ensure_conf()
13 0.119776 0.000692 <SNR>152_ensure_open()
13 0.096777 0.014412 <SNR>152_ensure_changed()
2 0.078387 0.000278 AutoSave()
1 0.077961 0.004329 DoSave()
14 0.064039 0.000874 <SNR>146_maybe_notify_on_change()
FUNCTIONS SORTED ON SELF TIME
count total (s) self (s) function
11249 2.219201 1.810996 <SNR>163_get_vim_completion_item()
14 1.158770 1.146942 <SNR>145_default_preprocessor()
11249 0.361058 0.316452 <SNR>163_create_user_data()
139 2.793716 0.302474 <SNR>161_on_stdout()
2 2.456899 0.237656 lsp#omni#get_vim_completion_items()
41 0.054174 0.052785 <SNR>158_encode_uri()
11249 0.045230 lsp#utils#_trim()
11249 0.044605 <SNR>163_create_user_data_key()
1 0.042033 0.042023 <SNR>145_on_insert_leave()
14 0.063165 0.025992 <SNR>145_on_change()
7 0.021637 0.021055 gitgutter#async#execute()
201 0.019751 0.016166 MoonflyActiveStatusLine()
13 0.096777 0.014412 <SNR>152_ensure_changed()
201 0.013861 ObsessionStatus()
35 0.010392 <SNR>87_abs_path()
201 0.016213 0.010135 <SNR>166_GetCounts()
14 0.011828 0.009535 asyncomplete#preprocess_complete()
36 0.009206 asyncomplete#context()
139 2.802449 0.008733 <SNR>162_on_stdout()
14 1.172933 0.007402 <SNR>145_recompute_pum()
I recommend these steps:
au User lsp_setup call lsp#register_server({
\ 'name': 'dartls',
\ 'cmd': {server_info->['dart', '/usr/local/flutter/bin/cache/dart-sdk/bin/snapshots/analysis_server.dart.snapshot', '--lsp']},
\ 'whitelist': ['dart'],
\ })
One quirk is that the newest Dart Analysis Server appears to be a lot more verbose than the old one (aka slower by a lot). I am using the older one on my laptop. Fundamentally, as you said a while ago, this is a list-processing issue AND possibly a too-verbose Dart LSP issue. It is probably why the YCM crowd use Python and the Neovim folks use LuaJIT.
I am back to looking at the Flutter completion slowness that I noted yesterday, Flutter 1.12 vs 1.17. I decided to add some debugging to s:CompletionItems:
function! s:CompletionItems(completion_result) abort
let completion_items = []
if type(a:completion_result) == type([])
let completion_items = a:completion_result
elseif type(a:completion_result) == type({})
let completion_items = a:completion_result.items
endif
call map(completion_items, {_, item -> s:CompletionItem(item)})
redir @a
echo completion_items
redir END
let completion = {'items' : completion_items}
echo len(completion_items)
for item in completion_items
if has_key(item, 'start_col')
let completion.start_col = item.start_col
break
endif
endfor
return completion
endfunction
Here are the … Interesting. Do I interpret this correctly: each word is an element in the …? For comparison, here is the … And lastly, Ruby in a Rails project: … The JavaScript and Ruby logs kind-of make sense to me. But it is worth looking into, since it does appear to be exhibiting poor performance in LSC. Maybe that poor performance is elsewhere, JSON-decoding perhaps? Best regards.
Partial fix for #291. A completion list can contain many more items than will be displayed if many items are filtered out due to not matching the already typed base word. Any time spent performing translation from an LSP style completion item to a vim style item is wasted for the words that aren't displayed. Delay the conversion until after an item is known to not be filtered.
- Perform only the dict -> list normalization initially, and pass the list of LSP items to the next step of completion.
- Find the start of the completion using the LSP items.
- Pass the base string to `s:CompletionItems`, and perform the filtering in the same loop as the conversion. Skip any items which don't match the base.
- Split `s:CompletionItem` into `s:CompletionItemWord`, which finds the text to insert (and optionally the snippet user data), and `s:FinishItem`, which does the heavier normalization including documentation details.
- Add some smoke tests for autocomplete.
Yes, it is indeed very confusing to see so many items coming from the server. It looks like those … The instrumentation log files from the linked issue in the SDK do look like they should have enough info to debug this; let's follow up over on that issue.
Yes, there does appear to be massive truncation going on; it is hiding something. I will use a for loop instead and log all the completions that way; I want to see each completion. My gut instinct: something seems wrong, 27,000 completions feels completely wrong. I wonder if there are many duplicates, or whether LSC is inadvertently double or triple appending? I will keep the Vim and LSC discussion here. If we ascertain that the Analysis Server is amiss, then focus should shift to the linked Dart SDK issue. Noting that I added some logging over there that was requested. Stay tuned, I will update here soon.
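A quick Vimscript check for the duplicates hunch (a hypothetical helper written for this thread; it assumes items are LSP CompletionItem dicts with a 'label' field):

function! s:CountDuplicates(items) abort
  let l:seen = {}
  let l:dupes = 0
  for l:item in a:items
    let l:label = get(l:item, 'label', '')
    if has_key(l:seen, l:label)
      let l:dupes += 1
    else
      let l:seen[l:label] = 1
    endif
  endfor
  return l:dupes
endfunction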
See my latest comment there: dart-lang/sdk#40026 (comment) 14,801 of those completions appear to be duplicates.
Ok, here is the full … I need to study this list. But no doubt it is beyond VimScript's ability to handle efficiently; iterating over 27,000 items will bring auto-completion to its knees.
We are now deep down the rabbit hole; I did not expect to end up here when I opened this issue. But I have a feeling good will come of it. Nate, have you given any thought to @prabirshrestha's suggestion of a hard cap; pulling a number out of thin air, say 1,500? His idea:
We may be able to tame Dart's Language Server (maybe, maybe not); but it is the wild west out there; who knows what some other Language Servers do. What we do know is that too many completion items will cause stutter, and in the worst-case scenario outright pauses. Ideally I want LSC never to pause; heck, I don't even want stutters if possible. Best regards.
@prabirshrestha and @natebosch, do you know if an LSP client can feed prefix suggestions to a language server for it to filter server side? In LSC's case, auto-completion for terms happens after a user has typed a third character, hence it seems baffling that 27,000 completion suggestions are sent over the wire when I only care about choices that start with the 3-character prefix I already typed (e.g. … And these 27,000 suggestions are sent every time I type 3-character terms, which is often, very often. Not having read the Language Server spec, I am in the dark.
Well, it seems the Dart SDK team will indeed be implementing server-side prefix filtering, as noted here. This will be of great benefit. I strongly suspect it will entirely eliminate the stutter I noted in this issue (along with Nate's recent optimisation). It would be nice if LSC and vim-lsp both supported this server-side filtering. Hopefully Dart's implementation adheres to the existing LSP spec, and other language servers already support, or will in future support, that same mode of operation. More detail is needed from the Dart team. Lastly, this will be of no benefit for auto-completions that start after a space. I personally see no value, and plenty of negatives, in completions after a space (aka no prefix). LSC defaults to auto-completing after three characters; now more than ever that is a very sensible default.
My interpretation: he prefers server-side filtering to obtain a manageable list of relevant candidate completions, and then client-side filtering to refine the completions as the user types more characters. Only one data exchange between client and server.
My current feeling is that it's better to have the client start filtering once it has the initial list. This should result in much better filter-as-you-type performance when the latency between client/server is high (which I think is a growing situation with things like Codespaces, GitPod and VS Code Remoting becoming more common). If the payload can't be made small enough to prevent stuttering in some editors, using isIncomplete=true and capping the list might be something to consider (perhaps as an option). I think some of the incoming fixes and the prefix-filtering might be significant enough to be able to avoid that, though.
The full docs should not be transferred in the initial completion list - they should be omitted and then provided by the |
This plugin currently ignores the concept of incomplete suggestion lists and once it suggests completions it won't request any more from the server. This would be interesting to explore if I have a chance to look at supporting incremental completion lists.
Not per the spec - that's why it's up to the server to decide to filter based on the file contents and cursor position. Luckily that's what the Dart server will start doing.
+1
That requires support for resolving completion items 😉 This client doesn't support that, and since it doesn't advertise that support the Dart server correctly sends them with the original request. We don't have a hook for when the user iterates through the various suggestions, so we can't lazily request docs when they are looking at an item.
Nate,
I will create a new issue exploring the idea of hard caps and incremental lists. However, before you do anything I will explore a heap of language servers (Go, Rust, C++, Scala, Python, etc.) and see what median completion list size each provides. We know that Dart (pre fixes and enhancements) was up to 27,000, which is far too much for LSC to handle. Is this a one-off, or not? I suspect it may take me a week to explore this space in languages and frameworks I am not familiar with, but I will pass on my findings in that new issue. I'd say around 1,000 is a decent upper limit; at 5,000 things start going bad, and 20,000 is very pear-shaped.
Your new LSC optimisation and the upcoming Dart Analysis Server enhancement should resolve this particular issue. Hence, I will close this and open a new one about
It will be up to you; but I personally see little need for resolve. Other clients support it, and it seems laggy to me.
Should this be spawned into a new issue? Aka, make LSC advertise that it does not support resolve. Since you don't use resolve, the server sending it to you will only make things more bloated and slow for LSC completions.
Oh, I see. I don't think LSP allows a client to say whether it supports this. I think what you're seeing may be that the items with docs are not those that are added as part of SuggestionSets, but rather the original list (we don't support any sort of lazy resolution for the original set). I suspect if you don't support resolve you'll be missing docs + the auto-import edits for those items right now.
This shouldn't introduce lag; it's used only to provide supplementary info (like docs). In VS Code they do allow a resolve request to continue in the background when you complete (but not hold up the completion), so if it responds with additionalEdits they can still be applied (eg. for auto-imports).
The lag I see is the Neovim LSP client slowly rendering a secondary floating window attached to the completion window. As I quickly navigate the completion choices, the floating window with resolve details struggles to keep up; it looks janky. Personally, I would favour an insert-mode mapping to trigger resolve details being displayed in the Vim preview window, which should auto-close if I scroll to the next completion choice.
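A rough sketch of such an on-demand mapping, using Vim's complete_info() (hypothetical; a real version would send completionItem/resolve and render the reply in the preview window):

inoremap <silent> <C-k> <C-r>=<SID>ShowItemDocs()<CR>
function! s:ShowItemDocs() abort
  let l:info = complete_info(['selected', 'items'])
  if l:info.selected >= 0
    " For the sketch, just surface whatever 'info' text the item carries.
    echo get(l:info.items[l:info.selected], 'info', '')
  endif
  return ''
endfunction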
I had thought this sort of thing wasn't possible in vim, but I clearly overlooked stuff like
A related update: Dart Analysis Server (… See the comparison:
Far fewer matches are now returned. And LSC itself improved via this issue as well, as noted above. All good.
Hello again Nate,
I have been playing with a nightly of Neovim's inbuilt LSP augmented by the completion.nvim plugin. LSP is not in stable Neovim yet, only in the nightlies.
LSC is clearly more mature, easier to configure, and has far fewer floating window issues.
One thing I do notice on my very slow Core-M powered Macbook 12" is that with Flutter / Dart LSP completion typing never stutters and it can with LSC (very slightly).
For example, if I type Mate (which expands to Material in the Flutter API) at a deliberate, not-fast-not-slow pace, after the third character I get a slight pause in LSC (the trigger point to gather candidates), whilst with Neovim's LSP + completion.nvim I do not get any pause (even as it gathers and displays candidates). I know that LSC uses async APIs as best it can, but there is still a tiny pause as I type the fourth character of Mate that I don't get in Neovim's LSP + completion.nvim. I know that you have been dipping your toes into Lua recently, and performance in general.
You may or may not see fit to explore this. I leave that to you. Don't hesitate to close if this is an area not worth exploring.
Here is my Neovim LSP setup if you are interested in exploring:
The faster the machine, the less LSC stutter I see. I pretty much get no pauses on my quad-core i5 Linux desktop.