jszip runs out of memory when creating a zip file with 20,000 files #446
Comments
I tested with just 200 objects. Creating the workers on demand would be a better idea, even if it still involves resource copying (we need to "freeze" the files). After this quick fix, for 20,000 objects I still see a lot of created objects and I'm not sure we need them all, but that's still way better. I'll prepare a patch.
A pako object contains a 64 KB buffer. We create a `FlateWorker` for each zip entry, so a zip file with many entries takes **a lot** of memory. Lazy-loading the pako object isn't the best solution, but it's the quickest; the best solution is to lazy-load the worker list. This mitigates issue Stuk#446.
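For illustration, a minimal sketch of the lazy-loading idea (names modeled on jszip's `FlateWorker`, but simplified; this is not the exact patch):

```js
var pako = require("pako"); // in the browser, pako is available as a global

// Sketch: defer creating the pako stream (and its 64 KB buffer) until the
// worker receives its first chunk, instead of allocating it in the constructor.
function FlateWorker(action, options) {
  this._pako = null;          // allocated lazily in processChunk
  this._pakoAction = action;  // "Deflate" or "Inflate"
  this._pakoOptions = options || {};
}

FlateWorker.prototype.processChunk = function (chunk) {
  // chunk: a Uint8Array of entry data (simplified from jszip's chunk objects)
  if (this._pako === null) {
    this._createPako();       // first chunk for this entry: allocate now
  }
  this._pako.push(chunk, false);
};

FlateWorker.prototype._createPako = function () {
  // pako.Deflate / pako.Inflate hold the large internal buffers.
  this._pako = new pako[this._pakoAction]({
    raw: true,
    level: this._pakoOptions.level || -1
  });
};
```

With this shape, only entries that are actively streaming data hold a live pako object; the more complete fix mentioned above (lazy-loading the worker list) would avoid creating the workers up front at all.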
The partial fix has been released in JSZip v3.1.4.
The problem can be reproduced on Chrome using the following files. Note: pkzip.js needs to be in the same directory as this file.
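The original attachment isn't reproduced above. A minimal page along the lines described would look roughly like this (the 20,000 count and the PrepareZip/GenerateZip buttons come from the report; the entry contents and the `generateAsync` options are assumptions):

```html
<!-- repro.html: assumes the JSZip build (pkzip.js here) sits next to this file -->
<script src="pkzip.js"></script>
<script>
  var zip = new JSZip();

  // "PrepareZip": add 20,000 small text entries to the archive object.
  function prepareZip() {
    for (var i = 0; i < 20000; i++) {
      zip.file("file" + i + ".txt", "content of file " + i);
    }
  }

  // "GenerateZip": build the archive. With JSZip <= 3.1.3 this step creates
  // a FlateWorker (and its pako buffers) for every entry before compressing.
  function generateZip() {
    zip.generateAsync({ type: "blob", compression: "DEFLATE" })
      .then(function (blob) {
        console.log("zip size:", blob.size);
      });
  }
</script>
<button onclick="prepareZip()">PrepareZip</button>
<button onclick="generateZip()">GenerateZip</button>
```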
When the above HTML file is first opened, memory shown in Task Manager is 15 MB.
After clicking PrepareZip and forcing a garbage collection, memory is 32 MB.
After clicking GenerateZip, memory climbs to more than 3 GB, then "Aw, Snap!" appears in Chrome.
The root cause seems to be generateWorker() in https://github.com/Stuk/jszip/blob/master/lib/generate/index.js
It creates a compression worker for each file to be compressed before any compression starts.
Each worker contains 9 buffers of 64 KB each, so 20,000 entries mean roughly 20,000 × 9 × 64 KB ≈ 11 GB of buffer space allocated up front.
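The shape of the problem, as illustrative pseudocode rather than jszip's actual source (the stand-in worker below just allocates the same amount of buffer space a real one would):

```js
// Stand-in for a compression worker: eagerly allocates the 9 x 64 KB buffers
// observed per worker. Do not actually call this with 20,000 in a browser.
function EagerFlateWorker() {
  this.buffers = [];
  for (var i = 0; i < 9; i++) {
    this.buffers.push(new Uint8Array(64 * 1024));
  }
}

// The problematic pattern: one worker per entry is created before any of
// them starts compressing, so every buffer is alive at the same time.
function generateWorkers(entryCount) {
  var workers = [];
  for (var i = 0; i < entryCount; i++) {
    workers.push(new EagerFlateWorker());
  }
  return workers; // 20,000 entries => roughly 11 GB of buffers
}
```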