
Any idea to speed up the open_mfdataset for reading many many big netCDF files? #1788

Closed · wqshen opened this issue Dec 18, 2017 · 3 comments

wqshen commented Dec 18, 2017

I have several WRFout files from 20-year climate simulations. When I use open_mfdataset to read them, it takes 10-20 minutes to finish on my server.

Is there a way to speed up this process? Multiprocessing?
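
For reference, the call is roughly the following (the file pattern is just a placeholder):

```python
import xarray as xr

# Open all of the WRF output files at once; the glob pattern is a placeholder.
ds = xr.open_mfdataset("wrfout_d01_*.nc")
```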

shoyer commented Dec 18, 2017

Try changing the coords argument to list the explicit coordinates that should be concatenated together. (We should probably change this argument name: it makes more sense on concat than open_mfdataset.)
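
Something like this, for example (assuming XTIME is the only coordinate that actually needs to be concatenated across files; adjust the list for your data):

```python
import xarray as xr

# Listing the coordinates explicitly avoids loading and comparing every
# coordinate from every file, which is typically what makes this slow.
ds = xr.open_mfdataset(
    "wrfout_d01_*.nc",    # placeholder file pattern
    coords=["XTIME"],     # only concatenate this coordinate across files
)
```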

jhamman commented Dec 18, 2017

@wqshen - can you report the versions of xarray/dask that you are using (the original issue template had examples of how to do this)?
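
For example (xr.show_versions is available in recent xarray releases):

```python
import xarray as xr
import dask

print(xr.__version__)    # xarray version
print(dask.__version__)  # dask version

# Fuller environment report, if your xarray version provides it:
xr.show_versions()
```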

jhamman commented May 18, 2018

Closed via #1981

jhamman closed this as completed May 18, 2018