feat: multithread linting #129
base: main
Conversation
I don't see any mention of the previous RFCs around parallelisation. Both of these have a lot of context about the difficulties of parallelisation outside of the core rules - e.g. in cases where the parser or the rules store state in any form.

Naively parallelising by just "randomly" distributing files across threads may lead to a SLOWER lint run in cases where people use such stateful plugins, because the cached work may need to be redone once for each thread. I would like to see such use cases addressed as part of this RFC, given that they are very prevalent - with both mentioned plugins in use by the majority of the ecosystem. These problems have been discussed before in the context of language plugins and parser contexts (I can try to find the threads a bit later).
Thanks for the input @bradzacher. How would you go about incorporating context from #42 and #87 into this RFC? I see that #42 suggests introducing a plugin setting. As for #87, it seems to be about an unrelated feature that doesn't even require multithreading. But I get why it would be beneficial to limit the number of instances of the same parser across threads, especially if the parser takes a long time to load its initial state, like typescript-eslint with type-aware parsing. If you have any concrete suggestions on how to do that, I'd love to know.
I imagine the way one would address such use cases is by making no changes, i.e. not enabling multithread linting if the results are not satisfactory. But if something can be done to improve performance for popular plugins, that would be awesome.
To be clear - I'm all for such a system existing. Like caching, it can vastly improve the experience for those that fit within the bounds. The thing I want to make sure of is that the bounds are either intentionally designed to be tight to avoid a complexity explosion, or that we are at least planning a path forward for the mentioned cases. #87 has some discussions around parallel parsing which are relevant to the sorts of ideas we'd need to consider here. Some other relevant discussions can be found in

I'm pretty swamped atm cos holiday season and kids and probably won't be able to get back to this properly until the new year.
Co-authored-by: 唯然 <[email protected]>
Thanks for putting this together. I'm going to need more time to dig into the details, and I really appreciate the amount of thought and explanation you've included in this RFC. I have a few high-level thoughts from reviewing this today:
Yes, it would be interesting to look into other tools to understand how they handle concurrency. This could actually bring in some interesting ideas even if the general approach is different. I was thinking of checking Prettier but haven't managed to do that yet. Jest and Ava are also good candidates.
Thanks for the list. I missed most of those links while skimming through the discussion in eslint#3565. I'll be sure to go through the items and add a prior art mention.
Workers don't need to create a new instance of
One thing I'd like to point out before it's too late and just in case it's relevant: multithreading makes multifile analysis harder. If there ever comes a system where a single rule can look at the contents of multiple files - as implemented in

I imagine this kind of analysis is not really in scope for ESLint at the moment (I haven't seen anything in the RFCs at least), as it would have a high complexity impact on the project (I've written about some tradeoffs in this post). But if this proposal were to be implemented without consideration for multifile analysis - which seems to be the case currently - then the cost of implementing it later would skyrocket, and I imagine that would lead to it never being implemented.

I'm looking forward to seeing how this evolves, as I have unfortunately not figured out multi-threading well enough for this task to even try implementing it for
The only way to parallelise and efficiently maintain cross-file analysis is with shared memory. Unfortunately, in JS as a whole this is nigh-impossible with the current state of the world. Sharing memory via

The shared structs proposal would go a long way in enabling shared memory models and is currently at stage 2 -- so there is some hope for this in the relatively near future! I know the TS team is eagerly looking forward to this proposal landing in Node so they can explore parallelising TS's type analysis.

For now at least, the best way to efficiently do parallelised multi-file analysis is to do some "grouping aware" task splitting. I.e. instead of assigning files to threads randomly, you instead try to keep "related" files in the same thread to minimise duplication of data across threads. But this is what I was alluding to in my above comments [1] [2] -- there needs to be an explicit decision encoded in this RFC:
The former is "the easy route" for obvious reasons -- there's a lot to think about and work through for the latter. As a quick-and-dirty example that we have discussed before (see eslint/eslint#16819):

Just to reiterate my earlier comments -- I'm 100% onboard with going with the former decision and ignoring the cross-file problem. I just want to ensure that this case has been fully considered and intentionally discarded, or that the design has a consideration to eventually grow the parallelism system to support such use cases.
I think what we're going for here is effectively a "stop the bleeding" situation where we can get ESLint's current behavior to go faster, as this becomes an even bigger problem as people start to use ESLint to lint their JSON, CSS, and Markdown files, significantly increasing the number of files an average user will lint. I'm on board with discounting the cross-file problem at this point, as I think it's clear that many companies have created their own concurrent linting solutions built on top of ESLint that also discount this issue. I would like to revisit cross-file linting at some point in the future, but before we can do that, we really need to get the core rewrite in progress.
**[Trunk Code Quality](https://trunk.io/code-quality)**

Trunk manages to parallelize ESLint and other linters by splitting the workload over multiple processes.
Does that mean it's splitting up the file list and spreading across multiple processes? Or just one process for ESLint and other processes for other tools?
The file list is being split in chunks of a predefined size, and I can see that multiple threads are also being spawned in a process, but I'm not sure what each thread is doing. I will look deeper into the details.
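As an illustration only (not Trunk's actual code), splitting a file list into fixed-size chunks, one per worker, could look like this:

```javascript
// Split filePaths into consecutive chunks of at most chunkSize entries.
// The last chunk may be smaller when the list doesn't divide evenly.
function chunkFiles(filePaths, chunkSize) {
    const chunks = [];
    for (let i = 0; i < filePaths.length; i += chunkSize) {
        chunks.push(filePaths.slice(i, i + chunkSize));
    }
    return chunks;
}
```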
Thanks for the feedback @jfmengels. In fact, multifile analysis (or project-aware analysis) is not a concept we have implemented in the ESLint core at this time, which is why it's not covered in this RFC.

Do you think it would be too difficult to have multifile analysis and multithread linting at the same time? Or are you suggesting that implementing multifile analysis before multithread linting would be easier than the other way around? If you could clarify your concern, we could add that to the drawbacks section for further consideration.
This is looking really good and I love the level of detail. Just left a few questions throughout.
When `auto` concurrency is selected, ESLint will use a heuristic to determine the best concurrency setting, which could be any number of threads or also `"off"`.
How this heuristic will work is an open question.
An approach I have tested is using half the number of available CPU cores, which is a reasonable starting point for modern machines with many (4 or more) cores and fast I/O peripherals to access the file system.
I think we can just start by using this heuristic with the docs stating that you may get better performance by manually setting the concurrency level.
Fine, I've updated the RFC. We can always improve the heuristic later.
```js
if (!options[disableSerializabilityCheck]) {
    const unserializableKeys = Object.keys(options).filter(key => !isSerializable(options[key]));
    // ...report the unserializable keys...
}
```
Potentially faster to just run `JSON.stringify()` and catch any errors (faster than `structuredClone()`, which creates objects as it goes). If there are errors, only then do we check which keys are causing the problem. I'm just not sure if we want to pay this cost 100% of the time.
Unfortunately, we can't use `JSON.stringify` because it just skips unserializable properties. For example, `JSON.stringify({ foo: () => {} })` produces the string `"{}"`.

But I think it's still a good idea to try serializing the whole object first and only check the individual properties when needed.

Also, the idea is to run the check only when `concurrency` is specified. I've clarified that.
You can use the replacer option to check for functions:

```js
JSON.stringify(value, (key, value) => {
    if (typeof value === "function") {
        throw new TypeError("Function!");
    }
    return value; // return the value so nested properties are visited too
});
```
```js
const abortController = new AbortController();
const fileIndexCounter = new Int32Array(new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT));
const workerPath = require.resolve("./worker.js");
const workerOptions = {
    workerData: {
        filePaths,
        fileIndexCounter,
        eslintOptions
    },
};
```
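For context, a worker could claim files from the shared `fileIndexCounter` with `Atomics.add`, which atomically returns the previous counter value, so no two workers ever lint the same file. A single-threaded sketch (no actual workers are spawned; the generator stands in for a worker's loop):

```javascript
// Each call to Atomics.add reserves the next index in filePaths for this
// "worker"; when the counter passes the end of the list, the worker stops.
function* claimFiles(filePaths, fileIndexCounter) {
    for (;;) {
        const index = Atomics.add(fileIndexCounter, 0, 1);
        if (index >= filePaths.length) {
            return;
        }
        yield filePaths[index];
    }
}
```

Because the counter lives in a `SharedArrayBuffer`, every thread sees the same sequence of indices without any message passing.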
I'm a bit concerned about passing all file paths to every thread, as this is a lot of duplicated memory. Thinking about a situation where there are 10,000 files to be linted (which we have received reports of), that means we'd have 10,000 * thread_count file paths stored in memory.
I wonder about an alternative approach where each thread is seeded with maybe 5-10 file paths (or maybe just one?) that it's responsible for linting. When they are all linted, it sends a message asking for more. I know this creates more chatter, but I'm wondering if it might end up being more memory-efficient in the long-run?
Any insights into how other tools handle this?
We could certainly compare this approach to other implementations. But really, 10,000 file paths do not take more than a few megabytes in memory. Even with, say, 32 threads, or one per CPU, that's still a totally manageable size. Also, if the shared structs proposal mentioned in the above comments is adopted in Node.js, memory duplication will no longer be a concern. I'm still planning on looking into other tools like Ava or Mocha, so I'll be sure to check how they handle sessions with many files.
Another possible solution is retrieving rules' `meta` objects in each worker thread and returning this information to the main thread.
When `getRulesMetaForResults()` is called in the main thread, the rules' `meta` objects from all threads will be deduped and merged, and the results will be returned synchronously.

This solution removes the need to load config files in the main thread, but it still requires worker threads to do potentially useless work by adding an extra processing step unconditionally.
Another problem is that rules' `meta` objects for custom rules aren't always serializable.
In order for `meta` objects to be passed to the main thread, unserializable properties will need to be stripped off, which is probably undesirable.
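A sketch of the dedupe-and-merge step described above (the data shapes and the function name are assumptions, not the actual implementation): each worker returns a plain `ruleId → meta` object, and the main thread keeps the first entry it sees for each rule, assuming duplicates are identical across workers.

```javascript
// Merge per-worker rule meta maps into a single Map, deduping by rule ID.
function mergeRulesMeta(perWorkerMetas) {
    const merged = new Map();
    for (const workerMeta of perWorkerMetas) {
        for (const [ruleId, meta] of Object.entries(workerMeta)) {
            if (!merged.has(ruleId)) {
                merged.set(ruleId, meta);
            }
        }
    }
    return merged;
}
```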
I like this approach. Do you have an example of when a rule uses unserializable values in `meta`?

Overall, I think it's safe for us to assume that `meta` is always serializable and deal with any outliers through documentation.
I think you're right. I've checked `typescript-eslint`, `eslint-plugin-unicorn`, `eslint-plugin-n`, `eslint-plugin-import` and `eslint-plugin-react`, and none of them has rules with unserializable metadata. We could mention in the docs that unserializable properties in rule metadata will be silently ignored when `getRulesMetaForResults()` is called in multithread linting mode. I'll update the RFC.
Errors created in a worker thread cannot be cloned to the main thread without changes, because they can contain unserializable properties.
Instead, Node.js creates a serializable copy of the error, stripped of unserializable properties, and reports it to the main thread as a parameter of the [`error`](https://nodejs.org/docs/latest-v18.x/api/worker_threads.html#event-error) event.
During this process, `message` and `stack` are preserved because they are strings.
Does this preserve all serializable properties? My main concern is the `messageTemplate` property that we add to produce readable error messages: https://github.com/eslint/eslint/blob/8bcd820f37f2361e4f7261a9876f52d21bd9de8f/bin/eslint.js#L80
Yes, all serializable properties are preserved.
I don't know that it is too hard, but it's mostly that having each thread analyze a portion of the project won't work, at least with the multifile analysis approach I've chosen for

For my use-case, it's more likely that analysis can be split by rule, instead of by file, i.e. thread 1 will run rules 1 and 2, thread 2 will run rules 3 and 4, etc. But this means that, memory-wise, either the project's contents (and derived data) need to be stored on every thread (multiplying the memory), which doesn't sound great for performance. Maybe this would get improved with the shared structs proposal.

Right now, both multifile analysis and multithreading are doable in isolation, but doing both requires a lot more thinking and maybe additional tools. Therefore doing one may exclude the other. But if you ever figure it out, I'll definitely be interested to hear about it!
@jfmengels I've added a "Multifile Analysis" section to the RFC.
It's entirely possible to have both - one just needs to be "smart" about how files are distributed amongst threads. As I've mentioned before -- any "random" distribution approach will cause large duplication of work (and memory usage). If you are "smart" about how files are distributed -- e.g. use the dependency graph to distribute them to minimise the number of edges spanning threads -- then you can minimise the duplication for a net positive benefit. Computing a dependency graph ahead of time is going to take time, so it's not necessarily a good heuristic -- but it is an example. As an example, for TypeScript projects one could very quickly compute the set of files included in all relevant

Another example of ways you can be smart is if ESLint builds in stateful primitives to make it easy for rules and parsers to store and access data across threads. As an example -- most of

It's all possible -- it just requires some thought and careful design to make it work well.
Summary
This document proposes adding a new multithread mode for `ESLint#lintFiles()`. The aim of multithread linting is to speed up ESLint by allowing workload distribution across multiple CPU cores.
Related Issues
eslint/eslint#3565