Running cops in parallel #117
I'm going to run some benchmarks!
Taking a look, I think that the time it takes to run the cops is trivial compared to the I/O time to read each file. The performance was actually worse when trying to parallelize the cops (this may be different with JRuby). However, by parallelizing the file reads, I was able to check the rubocop source 8 seconds faster by switching 2 lines of code (using the Parallel gem):

```ruby
# ...
require 'parallel'
# ...
# target_files(args).each do |file|
Parallel.each(target_files(args)) do |file|
  # ...
```

For some reason it's not incrementing the files checked when it runs in parallel. With a little more tinkering, I may be able to set it up so that the -p flag would use the parallel method. This would allow the brave people to use it while not causing problems for anyone else.
This sounds very promising. I support your suggestion about the -p flag.
@bolandrm If you use Parallel, you should use … What's the speedup if you use processes instead of threads?
@bolandrm See jurriaan/rubocop@de02724b75d1 for how to fix incrementing files and offences.
@jurriaan Good call, thanks. I was thinking that Parallel used processes by default (if possible on the particular system). Their documentation isn't that great. I was going to check out Celluloid.
Celluloid looks interesting, never used it though. Looking forward to your results!
You really do not need Celluloid here... Parallel is more than enough. I would suggest simply marshalling the offences, using Parallel#map, and then displaying them in the main process.
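The suggestion above can be sketched with the standard library alone: fork one worker per chunk of files, marshal each worker's offence list back over a pipe, and display everything in the parent process. This is essentially what the Parallel gem's `Parallel.map(..., in_processes: n)` does for you. `check_file` is a hypothetical stand-in for running the cops, not RuboCop's real API.

```ruby
# Hypothetical stand-in for per-file inspection: pretend each file
# yields exactly one offence.
def check_file(path)
  [{ file: path, message: 'example offence' }]
end

files  = %w[a.rb b.rb c.rb d.rb]
chunks = files.each_slice(2).to_a          # split the work across workers

readers = chunks.map do |chunk|
  reader, writer = IO.pipe
  fork do                                  # child process inspects a chunk
    reader.close
    results = chunk.flat_map { |f| check_file(f) }
    writer.write(Marshal.dump(results))    # marshal offences to the parent
    writer.close
  end
  writer.close
  reader
end

# Parent: collect and merge every worker's offences, then report.
offences = readers.flat_map { |r| Marshal.load(r.read) }
Process.waitall
offences.each { |o| puts "#{o[:file]}: #{o[:message]}" }
puts "#{offences.size} offences in #{files.size} files"
```

Because child processes do not share memory with the parent, counters incremented inside the workers are invisible outside them, which is why the "files checked" count above has to come from the marshalled results rather than a shared variable.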
@bolandrm Are you working on this right now? I could look into this if you want?
@jurriaan If you can cook something up soon, it would be great. I would have done that myself by now, but I'm currently tied up porting cops to Parser.
@jurriaan please feel free to take this over. I was planning on finishing it eventually but I'm really busy at the moment.
I'll be closing this one, since @edzhelyov and I have an idea that would render the need for running cops in parallel obsolete.
Fine, I didn't have the time to implement it, and the code was changing too much for me. What's the new idea? :)
We'll traverse the AST only once in a dispatch-like class that propagates the parser events to the interested cops (all cops that are implemented as parser processors). Since most of the execution time is currently spent traversing the AST over and over in each cop, that approach should speed things up significantly. There are a few cops for which this is not applicable, but overall there should be a huge speed improvement.
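The dispatch idea can be illustrated with a minimal sketch (not RuboCop's actual code): walk the AST once and forward each node to every cop that registered interest in that node type, instead of letting each cop re-traverse the whole tree. The AST here is simplified to nested `[type, *children]` arrays, and the "cops" are plain lambdas.

```ruby
# A single-pass dispatcher: cops register for node types, and one
# depth-first traversal hands each node to the interested cops.
class Commissioner
  def initialize
    @handlers = Hash.new { |h, k| h[k] = [] }  # node type => handlers
  end

  def register(type, cop)
    @handlers[type] << cop
  end

  # Traverse the tree exactly once, dispatching each node as we go.
  def investigate(node)
    type, *children = node
    @handlers[type].each { |cop| cop.call(node) }
    children.each { |c| investigate(c) if c.is_a?(Array) }
  end
end

offences = []
c = Commissioner.new
c.register(:if,   ->(_n) { offences << 'favor modifier if' })
c.register(:send, ->(_n) { offences << 'method call checked' })

# A toy AST for: `if foo then bar end`
ast = [:if, [:send, nil, :foo], [:send, nil, :bar], nil]
c.investigate(ast)
puts offences
```

The key property is that the traversal cost is paid once, regardless of how many cops are registered; adding a cop only adds the per-node handler calls for the node types it cares about.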
Sounds great! 👍
@bbatsov 👍 I was also thinking the behavior “traversing the AST over and over in each cop” wastes too much CPU. Though, in exchange for the waste, we could implement each cop without considering side effects on other cops. Actually I was about to refactor the inspection and parsing logic of
The inspection and parsing logic should definitely be moved out of the
Any chance of reopening this? One of my projects has ~2300 files. It currently takes around 2 minutes to run RuboCop with our configuration on my 2.6 GHz Intel Core i7 MacBook Pro, and only using one out of 8 cores seems suboptimal.
Is there any interest in revisiting this? Our codebase has around 1,600 files to check, and writing a quick script to split the file list into several groups and check each group in a separate process roughly halves the time it takes to check our codebase (1m30s down to 45s).
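The "quick script" approach described above might look something like the sketch below: split the file list into groups and check each group in its own child process, then wait for them all. The spawned command here is a harmless stand-in (`ruby -e 'exit 0'`); in real use it would be `rubocop` followed by the group's file names.

```ruby
require 'rbconfig'

files  = (1..10).map { |i| "file#{i}.rb" }  # stand-in file list
groups = files.each_slice(3).to_a           # 4 groups of up to 3 files

pids = groups.map do |group|
  # Real usage would be: spawn('rubocop', *group)
  spawn(RbConfig.ruby, '-e', 'exit 0', *group)
end

# Wait for every worker and collect its exit status.
statuses = pids.map { |pid| Process.wait2(pid).last }
success  = statuses.all?(&:success?)
puts "#{groups.size} groups, all passed: #{success}"
```

One rough edge of this scheme is output: each process prints its own report, so the results arrive interleaved and unaggregated unless you capture and merge them yourself.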
@jcoglan What about the caching mechanism introduced in v0.34.2? Doesn't that solve your problem?
@jonas054 Yes, I've tried that now and it makes a huge difference. Thanks :)
Any chance of revisiting this decision?
On our codebase it takes only 3.5 minutes to run 8x in parallel, but 12 minutes to do it serially. I'm sure with built-in support for parallelizing it can get even better.
Adding built-in support for parallel execution requires a lot of changes. If this wasn't the case we would have done it by now. :-) Frankly, I doubt we'll ever get there - I'm certainly not eager to work on this myself.
This change tries to solve the tricky parallel execution problem by spawning off a number of processes/threads to do file inspection, sharing the work between them, without collecting any output. When all processes are finished, the original process runs the full inspection again, taking advantage of result caching.
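The cache-warming scheme described above can be sketched roughly as follows (stand-in commands, not the actual change): workers inspect file subsets purely to populate the result cache, their output is discarded, and then one normal full run reports everything from the now-warm cache.

```ruby
require 'rbconfig'

files  = (1..8).map { |i| "file#{i}.rb" }   # stand-in file list
slices = files.each_slice(2).to_a           # one slice per worker

# Phase 1: warm the cache in parallel, discarding worker output.
pids = slices.map do |slice|
  # Real usage would be: spawn('rubocop', *slice, out: File::NULL)
  spawn(RbConfig.ruby, '-e', 'puts "warming"', *slice, out: File::NULL)
end
pids.each { |pid| Process.wait(pid) }

# Phase 2: a single full run; with a real result cache this is now
# mostly cache hits, so it is fast and produces one consolidated report.
full_run_ok = system(RbConfig.ruby, '-e', 'exit 0')
puts "warmed #{slices.size} slices, full run ok: #{full_run_ok}"
```

The appeal of this design is that it sidesteps result aggregation entirely: only the final single-process run ever formats output, so formatters, exit codes, and counters behave exactly as in a serial run.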
As we add more and more checks the performance naturally regresses a bit. Luckily the things we're doing are pretty parallelizable - we can simply split the cops into 3 groups, run them in 3 threads and just merge the results in the end. Even though threading is not exactly Ruby's strong suit, I still think we might be able to get some benefit out of this.
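The three-group idea above can be sketched like this, with a toy `Cop` stand-in rather than RuboCop's real cop API: partition the cops, run each partition in its own thread against the same source, and merge the per-thread offence lists at the end.

```ruby
# Toy cop: flags any source containing "TODO".
Cop = Struct.new(:name) do
  def inspect_source(src)
    src.include?('TODO') ? ["#{name}: found TODO"] : []
  end
end

cops = %w[LineLength TodoComment TrailingWhitespace
          MethodLength Indentation StringLiterals].map { |n| Cop.new(n) }
source = "def foo\n  # TODO: fix\nend\n"

# Split the cops into 3 groups and inspect in 3 threads.
groups  = cops.each_slice((cops.size / 3.0).ceil).to_a
threads = groups.map do |group|
  Thread.new { group.flat_map { |cop| cop.inspect_source(source) } }
end
offences = threads.flat_map(&:value)        # merge the results
puts "#{offences.size} offences from #{threads.size} threads"
```

Note that on MRI the GIL limits this to overlapping I/O rather than true CPU parallelism, which is why the thread-based variant is most interesting on JRuby.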
JRuby is inching closer to supporting Ripper (https://gist.github.com/enebo/4548938), and there the performance boost should be even bigger.
This issue is here mostly for discussion purposes. If anyone has the time to run a few simple experiments with basic parallelization and share the results that'd be great.