Wingman gets slow on big modules #1602

Closed
isovector opened this issue Mar 21, 2021 · 14 comments · Fixed by #1873
Labels
component: wingman · performance (issues about memory consumption, responsiveness, etc.)

Comments

@isovector
Collaborator

Even for things like "refine hole" which should be instant. My guess is that waiting for a fresh type-checking result is to blame. We could probably get away with using stale data instead.
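The stale-data idea could be sketched, purely illustratively, like this (in ghcide terms it is roughly the difference between `use` and `useWithStale`; the `TcResult` type and `chooseTcResult` helper below are hypothetical names, not Wingman's actual API):

```haskell
import Control.Applicative ((<|>))

-- Hypothetical sketch: prefer a fresh typechecking result when one is
-- already available, otherwise fall back to a cached (stale) result
-- instead of blocking the code action on a fresh pass.
data TcResult = TcResult { tcModule :: String, tcIsStale :: Bool }
  deriving (Show, Eq)

chooseTcResult :: Maybe TcResult  -- fresh result, if ready
               -> Maybe TcResult  -- cached (possibly stale) result
               -> Maybe TcResult
chooseTcResult fresh stale = fresh <|> stale
```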

@isovector added the performance and component: wingman labels on Mar 21, 2021
@pepeiborra
Collaborator

pepeiborra commented Mar 21, 2021

GHC is very slow at typechecking holes. Even if Wingman used stale data, the default ghcide code action handler would still be slow and bottleneck the code action response. You may want to play with the settings we use:

```haskell
setUpTypedHoles :: DynFlags -> DynFlags
setUpTypedHoles df
  = flip gopt_unset Opt_AbstractRefHoleFits    -- too spammy
#if MIN_GHC_API_VERSION(8,8,0)
  $ flip gopt_unset Opt_ShowDocsOfHoleFits     -- not used
#endif
  $ flip gopt_unset Opt_ShowMatchesOfHoleFits  -- nice but broken (forgets module qualifiers)
  $ flip gopt_unset Opt_ShowProvOfHoleFits     -- not used
  $ flip gopt_unset Opt_ShowTypeAppOfHoleFits  -- not used
  $ flip gopt_unset Opt_ShowTypeAppVarsOfHoleFits -- not used
  $ flip gopt_unset Opt_ShowTypeOfHoleFits     -- massively simplifies parsing
  $ flip gopt_set   Opt_SortBySubsumHoleFits   -- very nice and fast enough in most cases
  $ flip gopt_unset Opt_SortValidHoleFits
  $ flip gopt_unset Opt_UnclutterValidHoleFits
  $ df
      { refLevelHoleFits = Just 1        -- becomes slow at higher levels
      , maxRefHoleFits   = Just 10       -- quantity does not impact speed
      , maxValidHoleFits = Nothing       -- quantity does not impact speed
      }
```

@pepeiborra
Collaborator

Let me insist on playing with the settings we use: Opt_SortBySubsumHoleFits was only "fast enough" for occasional use, and maybe that's no longer good enough.

@isovector
Collaborator Author

@pepeiborra my understanding is that we currently use the hole fit suggestions to fill in the `replace _ with <whatever>` code actions, and I'm a little scared to break that.

@pepeiborra
Collaborator

Some of those options, like Opt_SortBySubsumHoleFits, may not be worth the computational cost. So by all means feel free to tweak them and test the results! At the very least we could expose a config option to let users decide.

@isovector
Collaborator Author

I've had all hole fits turned off globally since 2018 for exactly this performance reason :)

@isovector
Collaborator Author

Was playing around with this today, and typechecking is significantly (and noticeably) faster having disabled the hole fit suggestions. Seeing as they mostly suggest things like `replace _ with id _`, I think the performance gains here more than make up for the slight loss of functionality.

@anka-213
Contributor

anka-213 commented Jun 8, 2021

Was the performance still too bad after disabling Opt_SortBySubsumHoleFits? I would much prefer disabling that and maybe refinement hole fits over disabling type hole fits entirely.

@isovector
Collaborator Author

@anka-213 I just tried that this morning, and the behavior is still wicked slow. After case-splitting an `HsExpr GhcPs`, having hole-fit suggestions enabled (even without subsumption sorting) takes upwards of a minute. And this gets invalidated on every keystroke. Comparatively, disabling all hole fits means the code actions come up in under a second.

@anka-213
Contributor

anka-213 commented Jun 9, 2021

Ah, that's too bad! Thanks for testing it!

@isovector
Collaborator Author

@wz1000 and I were discussing on IRC why this is likely so slow:

```
<isovector1> wingman can't (today) use whatever's in scope
<isovector1> so it's not a drop-in replacement :)
<isovector1> i'm happy to just implement that if there's an easy way to see what's in scope
<wz1000> isovector1: I suspect "using whatever is in scope" is where most of the slowdown comes from...
<isovector1> yeah, my guess too?
<wz1000> since you have to typecheck *all* the interfaces
<isovector1> is that stuff not cached?
<wz1000> usually interface typechecking is done lazily, so you only pay for the bits you need
<wz1000> i.e. when you actually use a function
```

Sounds like this functionality in general is anathema to any sort of good performance.

As an alternative, I'd be happy to implement the same functionality in Wingman, except rather than suggesting anything in scope, it would suggest only things defined in the current module, plus a whitelist of other functions from anywhere, if they're in scope. What do you think, @anka-213?
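A minimal sketch of that proposal, with the candidate pool restricted up front (the function and parameter names here are hypothetical, not Wingman's actual API):

```haskell
import qualified Data.Set as Set

-- Hypothetical sketch of the proposed restriction: instead of typechecking
-- everything in scope as a candidate hole fit, consider only names defined
-- in the current module plus an explicit whitelist of external functions.
candidateFits :: Set.Set String  -- names defined in the current module
              -> Set.Set String  -- whitelisted external names
              -> [String]        -- everything in scope
              -> [String]
candidateFits localDefs whitelist inScope =
  [ name
  | name <- inScope
  , name `Set.member` localDefs || name `Set.member` whitelist
  ]
```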

@anka-213
Contributor

I think the feature is most useful for things imported from elsewhere that I didn't realize would fit the hole. For example like this: https://twitter.com/idontgetoutmuch/status/1390258191511007233

So I'm not sure how much that would help.

@pepeiborra
Collaborator

How hard would it be to improve the performance of hole fit suggestions in GHC?

@anka-213
Contributor

I wonder if we could do something similar to what Hoogle does and create an index for external libraries?
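The index idea could look roughly like this, purely as a sketch (a real index would key on actual types rather than rendered strings; all names below are hypothetical):

```haskell
import qualified Data.Map.Strict as Map

-- Hypothetical sketch of a precomputed, Hoogle-style index: map a
-- (rendered) result type to the external names that produce it, so a
-- hole lookup can avoid typechecking every interface on demand.
type FitIndex = Map.Map String [String]

-- Build the index from (name, result type) pairs.
-- `flip (++)` preserves first-seen order for each key.
buildIndex :: [(String, String)] -> FitIndex
buildIndex sigs =
  Map.fromListWith (flip (++)) [ (ty, [name]) | (name, ty) <- sigs ]

-- Look up candidate names for a hole of the given (rendered) type.
lookupFits :: String -> FitIndex -> [String]
lookupFits = Map.findWithDefault []
```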

@isovector
Collaborator Author

@pepeiborra It's tracked here: https://gitlab.haskell.org/ghc/ghc/-/issues/16875

mergify bot closed this as completed in #1873 on Jul 23, 2021