Makes patches optional #2543
Could you change this part and the `initialCopy` to be in-memory, then write the resulting buffer to disk for the returned `ZipFS` instance?
I don't think I can, since we need to take snapshots in between each patch. If I were to `closeAndSave` an in-memory `ZipFS`, we'd need to already have another one in memory representing the last snapshot; that would be difficult unless we could clone the in-memory `ZipFS` somehow, which I don't know how to do efficiently.
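To make the snapshot requirement concrete, here's a minimal model of the control flow being discussed (hypothetical sketch, not the actual Yarn implementation; `Files`, `Patch`, and `applyPatches` are illustrative names): each successful patch yields a new snapshot, standing in for the on-disk zip archive, so an optional patch that fails to apply can be skipped by keeping the previous snapshot.

```typescript
// Hypothetical model of per-patch snapshots; not the Yarn codebase.

type Files = Map<string, string>;

interface Patch {
  optional: boolean;
  apply(files: Files): Files; // throws if the patch does not apply
}

function applyPatches(initial: Files, patches: Patch[]): Files {
  // `snapshot` plays the role of the archive saved between patches.
  let snapshot: Files = new Map(initial);
  for (const patch of patches) {
    try {
      // Work on a copy so a partial failure can't corrupt the snapshot.
      snapshot = patch.apply(new Map(snapshot));
    } catch (err) {
      if (!patch.optional) throw err;
      // Optional patch failed: fall back to the last good snapshot.
    }
  }
  return snapshot;
}
```

The point of contention is only *where* `snapshot` lives: keeping it in memory means two full copies of the archive per package, while closing and reopening an on-disk archive keeps just one.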
@arcanis Wouldn't this work? mael/optional-patches...paul/patch-example
Oh, I forgot `getBufferAndClose` 🤔 Still, it would require keeping the whole archive in memory, twice per package. I'm a bit worried it would be heavy (by contrast, closing and reopening only needs the archive listing)... given that the general case is to apply a single patch, I'd tend to first try the fs approach and switch only if we see a noticeable impact.
Can we at least profile both approaches and compare the time spent and the memory usage, to make sure we don't introduce a significant performance regression? Since compression is the slowest part of the fetch step, I'd prefer to avoid having to compress the archive for each patch file. I've also just realized that my changes still aren't optimal, because they don't prevent compression (I have to pass `level: 0` for that to happen).
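The effect of `level: 0` can be illustrated with Node's built-in zlib, which is a stand-in here for the deflate used inside the archives (this is not the Yarn code): at level 0 the data is emitted as uncompressed "stored" blocks, so no compression work is done at all.

```typescript
import { deflateSync, inflateSync } from "node:zlib";

// 1 MiB of highly repetitive data, standing in for a package file.
const data = Buffer.alloc(1 << 20, "a");

// level: 0 emits "stored" blocks: no compression work, and the output
// is even slightly larger than the input because of the block framing.
const stored = deflateSync(data, { level: 0 });

// The default level trades CPU time for a much smaller output.
const deflated = deflateSync(data);

console.log(stored.length >= data.length);      // true: stored, not compressed
console.log(deflated.length < data.length);     // true: compressed
console.log(inflateSync(stored).equals(data));  // true: both round-trip
```

This is why writing patched files with `level: 0` saves time per patch, at the cost of a larger intermediate archive.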
Sure, will check - however, I expect the compression level to affect only new writes, not the existing entries. With that in mind, we can't disable compression, since that would require going back and compressing the affected files after all the patches have been applied - far too complex for a fringe use case (multiple patches on the same package).
For the record: 22s install time for TS code cache, vs 15s before.