Integration with Copilot is very slow #620
Is it only the first question, or further questions as well? It also depends on the model; gpt-4o is a lot faster than Claude, for example. On the first question the plugin needs to fetch models and agents and check policies; subsequent questions use a cached copy of this data, so they should be fine.
It's all questions, first and subsequent. The default gpt-4o is selected.
Well, are you opening a big file? The content of the file needs to be sent every time you ask a question (the default behaviour is to send the whole buffer), and the history also has to be sent every time; that's mostly how every LLM works. 10 seconds is still a lot though, so maybe it is also curl related. Can you output what
I merged some optimizations, but I'm still curious about your curl version and the size of the file.
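For anyone following along, both details asked for above can be gathered from a terminal. This is just a sketch; `/tmp/example.lua` is a throwaway placeholder standing in for whatever buffer you actually have open:

```shell
# The plugin shells out to curl for its API requests, so the curl build matters.
curl --version | head -n 1

# Line and character counts of the open file; /tmp/example.lua is a
# placeholder created here purely so the command runs end to end.
printf 'local x = 1\nprint(x)\n' > /tmp/example.lua
wc -l -c /tmp/example.lua
```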
Here is the checkhealth output:

```
CopilotChat.nvim ~

CopilotChat.nvim [core] ~

CopilotChat.nvim [commands] ~

CopilotChat.nvim [dependencies] ~
```
Can you try the latest canary? I added some status reporting for embedding files as well. Also, how big is the file again? A char count or line count will do. You could also try upgrading curl (7.81 is very old) and see if it helps.
Looks like upgrading to the latest curl (built from GitHub) doesn't change much.
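One way to check whether the delay is on the transport side rather than the model side is to time the phases of a request with curl's `--write-out` timers. A rough sketch; `example.com` is only a reachable placeholder, substitute the endpoint you actually want to probe:

```shell
# Break a request's latency into phases: DNS lookup, TCP connect,
# TLS handshake, time to first byte, and total.
curl -s -o /dev/null -m 10 \
  -w 'dns %{time_namelookup}s  connect %{time_connect}s  tls %{time_appconnect}s  ttfb %{time_starttransfer}s  total %{time_total}s\n' \
  https://example.com || echo 'request failed (no network?)'
```

If `tls` or `ttfb` dominates, the bottleneck is the connection rather than anything the plugin does locally.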
Maybe there is a quick fix for it, but on the same machine, when I talk to Copilot (on a company account) via the web interface, it takes at most 2 seconds, even when it has a lot of context and searches company files.
With CopilotChat it takes ~10 seconds even for very simple questions where I don't ask it to analyze any code.
Is there a way to accelerate the process?