I am on Windows 10/11 and observe a large memory reservation by terra_1.7.83 / R_4.3.3.
Reproducing it is simple: create a new project with an .Rprofile in the root containing the following line:
loadNamespace("terra")
Now, on the R command line, simply make a cluster with cluster <- parallel::makeCluster(20) and observe the memory allocation. As each worker spins up, it executes loadNamespace("terra"), and this call appears to reserve about 10% of the available physical RAM per process.
[Screenshots: system memory usage before ("From:") and after ("To:") creating the cluster]
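The reproduction above can be sketched end to end (assuming terra is installed and the project .Rprofile contains loadNamespace("terra")):

```r
# Minimal reproduction sketch. Each PSOCK worker sources the project
# .Rprofile on startup, which loads terra (and its GDAL DLLs).
cl <- parallel::makeCluster(20)

# Confirm the namespace really is loaded on the workers; watch the
# per-process commit size in Task Manager while the workers start.
parallel::clusterEvalQ(cl, "terra" %in% loadedNamespaces())

parallel::stopCluster(cl)
```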
That is 56 GB (63 - 7) just for loading terra in 20 parallel processes. Is this intended, and if so, how can it be adjusted? If terra were the only thing used in the code I wouldn't mind, but in my case I have lots of other memory-intensive code that also needs memory, and terra seems to be reserving too much. If the system cannot reserve enough memory, you end up with:
Error in inDL(x, as.logical(local), as.logical(now), ...) :
  unable to load shared object '[lib_path]/library/terra/libs/x64/terra.dll':
  LoadLibrary failure: The paging file is too small for this operation to complete.
Calls: loadNamespace -> library.dynam -> dyn.load -> inDL
Execution halted
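If the immediate goal is just to keep every worker from paying this cost at startup, one hedged workaround (untested here) is to start the workers without running the .Rprofile, and load terra on demand only where it is needed; makePSOCKcluster documents an rscript_args argument that can pass --no-init-file to the worker processes:

```r
# Possible workaround: skip the .Rprofile on workers so terra.dll is
# not loaded 20 times up front.
cl <- parallel::makeCluster(20, rscript_args = "--no-init-file")

# Load terra only on workers that actually need it, and cap its
# working-memory fraction via the documented terraOptions() knob.
parallel::clusterEvalQ(cl, {
  loadNamespace("terra")
  terra::terraOptions(memfrac = 0.05)
})

parallel::stopCluster(cl)
```

Note this only avoids the up-front load; whether terraOptions() affects the reservation made while the DLL itself loads is unclear.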
The memory reservation seems to be related to GDAL loading in R/zzz.R. I have tried playing a bit with config settings, but to no avail:
options(terra_default = list(
  memfrac = 0.9,
  memmin = 0.01,
  memmax = 0.01  # GB, to limit terra memory usage
))
Sys.setenv("GDAL_CACHEMAX"="32MB")
Sys.setenv("GDAL_MAX_DATASET_POOL_RAM_USAGE"="64MB")
Sys.setenv("VSI_CACHE"="FALSE")
loadNamespace("terra")
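For completeness, terra's documented runtime knobs are set through terraOptions() after the namespace is loaded; whether they influence the load-time reservation is uncertain, but a minimal sketch (assuming terra is installed) would be:

```r
loadNamespace("terra")

# Documented terra options; these limit terra's working memory at
# runtime, which may or may not affect the reservation seen at load.
terra::terraOptions(memfrac = 0.1)  # fraction of RAM terra may use
terra::terraOptions(memmax = 1)     # hard cap, in GB
terra::terraOptions()               # print the current settings
```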
Any suggestions?