
Increase in native image RSS after upgrade to ce 19.3.0 #1984

Closed
johnaohara opened this issue Dec 12, 2019 · 42 comments

@johnaohara

We have found a significant increase in the memory usage of a simple Quarkus application [1] built as a native image with ce-19.3.0 compared to ce-19.2.1.

The getting-started application, started with a 2MB heap, uses the following memory to bootstrap:
19.2.1 - 13876k
19.3.0 - 63048k

There appears to be a single memory mapping that is now using 49MB of memory, compared to 20k previously.

Attached are two pmap outputs showing the memory usage for a build against each version.

The arguments passed to native-image were:

19.2.1:

-J-Dsun.nio.ch.maxUpdateArraySize=100 -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=1 --initialize-at-build-time= -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy$BySpaceAndTime -jar getting-started-1.0-SNAPSHOT-runner.jar -J-Djava.util.concurrent.ForkJoinPool.common.parallelism=1 -H:FallbackThreshold=0 -H:+ReportExceptionStackTraces -H:+AddAllCharsets -H:EnableURLProtocols=http -H:-JNI --no-server -H:-UseServiceLoaderFeature -H:+StackTrace getting-started-1.0-SNAPSHOT-runner

19.3.0:

-J-Dsun.nio.ch.maxUpdateArraySize=100 -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=1 --initialize-at-build-time= -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy$BySpaceAndTime -jar config-quickstart-1.0-SNAPSHOT-runner.jar -H:FallbackThreshold=0 -H:+ReportExceptionStackTraces -H:+AddAllCharsets -H:EnableURLProtocols=http -H:+JNI --no-server -H:-UseServiceLoaderFeature -H:+StackTrace config-quickstart-1.0-SNAPSHOT-runner

Do you know what might be causing this?

19.2.1_2m.pmap.log
19.3.0_2m.pmap.log

1 - https://github.com/quarkusio/quarkus-quickstarts/tree/master/getting-started

@christianwimmer

The 49172 KByte block could be a part of the normal Java heap, but I have my doubts because the size is unusual (not a multiple of the 1024 KByte aligned chunk size). It could be a single large array allocation though.

One way to find out if it is part of the Java heap is to use the Enterprise Edition, which by default uses a contiguous address space for the Java heap (so there is only one large memory mapping for the entire heap).

If it is not a Java object, then either Netty or some other included framework is doing a large native memory allocation. Netty by default allocates large buffers in native memory. Older versions of Netty were missing a Native Image configuration and therefore sized buffers based on the heap size of the image generator rather than the Java heap size of the image at run time.

@cstancu
Member

cstancu commented Dec 12, 2019

FYI the netty issue has been addressed in netty/netty#9515 which was included in netty 4.1.

@gwenneg
Contributor

gwenneg commented Dec 12, 2019

@christianwimmer: 63048k - 13876k - 20k (the initial memory mapping mentioned by @johnaohara) is a multiple of 1024.

The netty version in Quarkus 1.1.0.CR1 is 4.1.42.Final.

@johnaohara
Author

Thanks for the thoughts. So 63048k - 13876k - 20k is 49152k. The default chunk size netty allocates is 16384k. 49152k is exactly divisible by 16384k. I will look at netty buffer allocation.

netty/netty#9515 was included in 4.1.41.Final; as @gwenneg stated, we are currently using 4.1.42.Final.
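
For context on the 16384k figure above: Netty's pooled allocator sizes its arena chunks as pageSize << maxOrder (the defaults are, I believe, 8 KiB pages and maxOrder 11, i.e. 16 MiB per chunk). A minimal sketch of that arithmetic, using those assumed defaults rather than values read from this image:

```c
#include <stdio.h>

/* Hedged sketch: Netty's pooled allocator computes its arena chunk size as
 * pageSize << maxOrder. With the assumed defaults (8 KiB pages, maxOrder 11)
 * that is 16 MiB per chunk, so three resident chunks would account for the
 * 49152k delta discussed above. These are defaults, not values read from
 * the image under test. */
int main(void) {
    long page_size = 8192;   /* io.netty.allocator.pageSize default (assumed) */
    int  max_order = 11;     /* io.netty.allocator.maxOrder default (assumed) */
    long chunk_size = page_size << max_order;

    printf("chunk size   = %ld KiB\n", chunk_size / 1024);      /* 16384 */
    printf("three chunks = %ld KiB\n", 3 * chunk_size / 1024);  /* 49152 */
    return 0;
}
```

Three such chunks would account exactly for the 49152k delta, which is why netty buffer allocation is the first suspect here.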

@gwenneg
Contributor

gwenneg commented Dec 13, 2019

@johnaohara The following commit should also be checked as it was introduced into Quarkus to fix a netty issue with GraalVM 19.3.0: quarkusio/quarkus@3a0599d

For the record, it was introduced as a workaround to delay the netty update to 4.1.43.Final in Quarkus, which was a requirement for GraalVM 19.3.0 (because netty/netty#9631 was needed). See the discussion about it here: quarkusio/quarkus#4218 (comment)

@johnaohara
Author

@gwenneg I reverted that patch and upgraded netty to 4.1.43.Final, and I still see a regression. I am going to generate stack traces of the allocation sites to see what might be causing the new allocations.

@johnaohara
Author

Quick update on where I am with this issue. I traced native memory allocation and I do not see netty allocating large chunks of memory at startup. I built the application with debug symbols and can see there is a call to memset at startup:

(gdb) bt
#0  0x00007ffff7d98111 in __memset_avx2_erms () from /lib64/libc.so.6
#1  0x0000000000405037 in init ()
#2  0x00000000016bfbad in __libc_csu_init ()
#3  0x00007ffff7c5a12e in __libc_start_main () from /lib64/libc.so.6
#4  0x000000000040510e in _start ()

The instruction being executed is:

(gdb)  x/5i $pc-6
   0x7ffff7d9810b <__memset_avx2_erms+11>:	movzbl %dh,%eax
   0x7ffff7d9810e <__memset_avx2_erms+14>:	mov    %rdi,%rdx
=> 0x7ffff7d98111 <__memset_avx2_erms+17>:	rep stos %al,%es:(%rdi)
   0x7ffff7d98113 <__memset_avx2_erms+19>:	mov    %rdx,%rax
   0x7ffff7d98116 <__memset_avx2_erms+22>:	retq  

with registers:

(gdb) info all-registers ecx al ax eax
ecx            0x1d6a462           30844002
al             0x0                 0
ax             0x0                 0
eax            0x0                 0

So it looks like a chunk of memory (30121KB in this case) is being allocated and zeroed with memset at startup in a native image built with 19.3.0.

@emmanuelbernard

Hey @johnaohara, can you add to your tests (if not already) and report whether setting an aggressive -Xmx alleviates the issue (even partially)?

@johnaohara
Author

Setting -Xmx has no effect; the memory is allocated and written to during the initialization phase of the process, before the main() entry point for the application, com.oracle.svm.core.code.IsolateEnterStub.JavaMainWrapper_run_5087f5482cc9a6abc971913ece43acb471d2631b(int, long), is called.

In the executable created with 19.3.0 there is now a new <init> function that is not produced when compiling and linking with 19.2.1. The instruction highlighted below is the one that allocates and fills the memory region:

Disassembly of section .text:

0000000000405000 <init>:
  405000:       55                      push   %rbp
  405001:       bf 07 00 00 00          mov    $0x7,%edi
  405006:       48 89 e5                mov    %rsp,%rbp
  405009:       53                      push   %rbx
  40500a:       48 8d b5 c0 fe ff ff    lea    -0x140(%rbp),%rsi
  405011:       48 81 ec 38 01 00 00    sub    $0x138,%rsp
  405018:       e8 23 f7 ff ff          callq  404740 <getrlimit@plt>
  40501d:       48 8b bd c8 fe ff ff    mov    -0x138(%rbp),%rdi
  405024:       be 30 00 00 00          mov    $0x30,%esi
  405029:       89 3d f1 f2 15 02       mov    %edi,0x215f2f1(%rip)        # 2564320 <fdCount>
  40502f:       48 63 ff                movslq %edi,%rdi
  405032:       e8 a9 f3 ff ff          callq  4043e0 <calloc@plt>
  405037:       48 85 c0                test   %rax,%rax   <============ Memory allocated here
  40503a:       48 89 05 e7 f2 15 02    mov    %rax,0x215f2e7(%rip)        # 2564328 <fdTable>
  405041:       74 69                   je     4050ac <init+0xac>
  405043:       48 8d 9d 50 ff ff ff    lea    -0xb0(%rbp),%rbx
  40504a:       48 8d 05 2f 63 0d 01    lea    0x10d632f(%rip),%rax        # 14db380 <sig_wakeup>
  405051:       c7 45 d8 00 00 00 00    movl   $0x0,-0x28(%rbp)
  405058:       48 8d 7b 08             lea    0x8(%rbx),%rdi
  40505c:       48 89 85 50 ff ff ff    mov    %rax,-0xb0(%rbp)
  405063:       e8 c8 f3 ff ff          callq  404430 <sigemptyset@plt>

This is the backtrace showing where the allocations are coming from:

(gdb) bt
#0  0x00007ffff7d98111 in __memset_avx2_erms () from /lib64/libc.so.6
#1  0x0000000000405037 in init ()
#2  0x00000000016bfbad in __libc_csu_init ()
#3  0x00007ffff7c5a12e in __libc_start_main () from /lib64/libc.so.6
#4  0x000000000040510e in _start ()

And this is the instruction that fills the memory region:

(gdb) x/10i $pc-18
   0x7ffff7d980ff:	nop
   0x7ffff7d98100 <__memset_avx2_erms>:	endbr64 
   0x7ffff7d98104 <__memset_avx2_erms+4>:	vzeroupper 
   0x7ffff7d98107 <__memset_avx2_erms+7>:	mov    %rdx,%rcx
   0x7ffff7d9810a <__memset_avx2_erms+10>:	movzbl %sil,%eax
   0x7ffff7d9810e <__memset_avx2_erms+14>:	mov    %rdi,%rdx
=> 0x7ffff7d98111 <__memset_avx2_erms+17>:	rep stos %al,%es:(%rdi)
   0x7ffff7d98113 <__memset_avx2_erms+19>:	mov    %rdx,%rax
   0x7ffff7d98116 <__memset_avx2_erms+22>:	retq   
   0x7ffff7d98117:	nopw   0x0(%rax,%rax,1)

I am writing up how to reproduce this now.

@christianwimmer

christianwimmer commented Dec 19, 2019

Interesting that the allocation happens before Java code starts running. So it must be something in the C code of the JDK.

One possible cause: http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/a2154c771de1/src/solaris/native/java/net/linux_close.c#l88 allocates a table based on the number of file descriptors. Can you try running the executable with a low number of file descriptors and see if that changes the memory allocation behavior? That seems like the easiest way to confirm.

The file descriptor table size was fixed in JDK 9: https://bugs.openjdk.java.net/browse/JDK-8150460
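
For reference, a simplified sketch of what the pre-JDK-9 constructor in linux_close.c does (an approximation, not the exact OpenJDK source; the entry size and error handling here are assumptions):

```c
#include <stdlib.h>
#include <sys/resource.h>

/* Simplified sketch of the pre-JDK-9 linux_close.c init() constructor (not
 * the exact OpenJDK source; entry layout, size and error handling are
 * assumptions). The fd table is sized up-front from RLIMIT_NOFILE, so a
 * 655360-descriptor limit at ~48 bytes per entry means roughly 30 MiB is
 * calloc'ed -- and therefore zeroed and made resident -- before main() runs. */

typedef struct {
    int  fd;
    char pad[44];   /* stand-in for the mutex/thread list in the real fdEntry_t */
} fdEntry_t;

static int        fdCount;
static fdEntry_t *fdTable;

static void __attribute__((constructor)) init(void) {
    struct rlimit nbr_files;
    getrlimit(RLIMIT_NOFILE, &nbr_files);    /* RLIM_INFINITY handling omitted */
    fdCount = (int) nbr_files.rlim_max;
    fdTable = (fdEntry_t *) calloc(fdCount, sizeof(fdEntry_t));
    if (fdTable == NULL) {
        abort();
    }
}

int main(void) {
    /* By the time main() runs, the constructor above has already touched
     * fdCount * sizeof(fdEntry_t) bytes of zeroed memory. */
    return fdTable != NULL ? 0 : 1;
}
```

With RLIMIT_NOFILE at 655360 and ~48 bytes per entry, that calloc touches roughly 30 MiB before main() runs, which is in the same ballpark as the memset observed in the backtrace above.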

@gwenneg
Contributor

gwenneg commented Dec 19, 2019

If the RSS issue comes from a bug fixed in JDK 9 then it might be interesting to build the application using JDK 11 and see the impact on the memory used. I know it's a very incomplete test but it could give some useful information at a very low cost.

@johnaohara
Author

@christianwimmer right, this makes sense. So I noticed that there was a class containing a HashMap of file descriptors [1] and a JDK11 substitution that referenced a global singleton PosixJavaNetClose [2] for cleanly closing java.net.SocketCleanable.

As java.net.SocketCleanable was introduced in JDK9, I rebuilt 19.3.0 on JDK8.

Now the backtrace is:

(gdb) bt
#0  0x00007ffff7d98111 in __memset_avx2_erms () from /lib64/libc.so.6
#1  0x0000000000f96271 in init ()
    at /opt/jprt/T/P1/225159.buildslave/s/jdk/src/solaris/native/java/net/linux_close.c:88
#2  0x0000000000fa10ed in __libc_csu_init ()
#3  0x00007ffff7c5a12e in __libc_start_main () from /lib64/libc.so.6
#4  0x000000000040502e in _start ()

That backtrace fits with your theory.

After reducing the maximum number of file descriptors (from 655360 to 2048), the RSS for a simple app dropped from 51.6MB to 21.2MB.

What I don't understand at the moment is:

  1. My platform is not Solaris; I am running on RHEL/Fedora Linux, so I don't know why I am seeing Solaris-specific code being executed.
  2. This behaviour is observable on JDK11 and JDK8 with 19.3.0, but I did not see this issue when building a native image with 19.2.1 on JDK8.
  3. With 19.2.1 on JDK8 a simple native application uses 13.8MB at startup, but with 19.3.0 on JDK8 (albeit probably a different JDK build) it uses 21.2MB, so there still appears to be a delta between the Graal versions.

1 - https://github.com/oracle/graal/blob/vm-19.3.0/substratevm/src/com.oracle.svm.core.posix/src/com/oracle/svm/core/posix/PosixJavaNetClose.java
2 - https://github.com/oracle/graal/blob/vm-19.3.0/substratevm/src/com.oracle.svm.core.posix.jdk11/src/com/oracle/svm/core/posix/jdk11/Target_java_net_SocketCleanable.java

@emmanuelbernard

@johnaohara the 19.2 vs 19.3 behavior difference is explained by the fact that GraalVM native image has rebased its low-level Java support on the JDK's native C libraries (sorry for the layman explanation).

All in all, it's a massive increase for a GraalVM native image, even with a small file descriptor limit, while being a "drop in the bucket" for OpenJDK. It would be good to try and find some alternative strategies for these different environments. CC @dmlloyd for reference.

@adinn
Collaborator

adinn commented Dec 20, 2019

@christianwimmer Yes, that looks very much like it is the culprit.

The code that John found, which runs before JVM startup (__libc_csu_init), is introduced by the linker as part of the premain execution. It runs static initializations for global (static) data. The premain code hands it a table of pointers to snippets of compiled C code to run. The table includes pointers to functions marked with __attribute__((constructor)) -- like the init() function you highlighted that is in linux_close.c. Table entries are collected by the linker from all the various libraries and objects linked into the final program. Since 19.3.0 includes static OpenJDK libraries, one of them (libnet.a I guess?) will include a global init table entry for that init function.

So, if you want to use OpenJDK libs then you either have to build them so as to do this static init differently or else configure the deployment environment to make it less greedy.

Luckily, this appears to be the only native constructor function in the OpenJDK code base (I grepped '__attribute(' in the source tree and that is all I could find).

@johnaohara In jdk8 it was the case that for many of the Java native code subtrees the Solaris code was the prototypical version that was also used for Linux and other Unices. Note that in jdk11 the path to linux_close.c is src/java.base/linux/native/libnet.

@johnaohara
Author

johnaohara commented Dec 20, 2019

@adinn Thanks for the explanation wrt the Solaris code.

Yes, libnet.a is linked when the executable is built.

@dmlloyd
Contributor

dmlloyd commented Dec 20, 2019

I wonder if we could talk to Alan or someone else in the (OpenJDK) net/io area and come up with a better (perhaps pure-Java) alternative to this (very old) code; I know Alan has been majorly reworking blocking I/O in any event. Or, at least, perhaps the code could provisionally be changed to use mmap with MAP_NORESERVE or something similar, so that the memory isn't committed until it's accessed (which would also allow the OS to zero the pages for us lazily instead of zeroing many megabytes on first boot). This might be a doubly useful solution overall because, in general, the OS hands out the lowest-numbered descriptor that is available, so if mmap is used in this way the table will generally only commit a relatively small number of pages when only a small number of FDs are in use.
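
A rough sketch of that MAP_NORESERVE idea (illustrative only, not a proposed JDK patch; the sizes are taken from the numbers discussed above and the entry size is an assumption):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Illustrative sketch of the MAP_NORESERVE idea (not a proposed JDK patch):
 * reserve address space for the worst-case table, but let the kernel commit
 * and zero pages only when an entry is first written. Low-numbered fds touch
 * only the first few pages, so RSS stays proportional to actual fd usage. */
int main(void) {
    size_t entries   = 655360;   /* worst-case RLIMIT_NOFILE seen above */
    size_t entry_sz  = 48;       /* assumed fdEntry_t size              */
    size_t table_len = entries * entry_sz;

    void *table = mmap(NULL, table_len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (table == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Touch a single entry: only the page containing it becomes resident. */
    memset((char *) table + 100 * entry_sz, 0, entry_sz);

    printf("reserved %zu KiB, resident only ~1 page\n", table_len / 1024);
    munmap(table, table_len);
    return 0;
}
```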

@adinn
Collaborator

adinn commented Dec 20, 2019

You might need to square use of mmap with the substrate team. One of the original design goals of substrate was not to futz around with most low-level library calls or system calls. The API footprint on which substrate originally relied was carefully restricted to a tiny set of lib functions so as not to prejudice embedding into an Oracle DB process.

@christianwimmer would modifying the OpenJDK code to use malloc be ok?

Of course any such change would also need backporting to jdk11u.

@christianwimmer

Isn't it enough to just backport https://bugs.openjdk.java.net/browse/JDK-8150460 to JDK 8? That seems like a simple thing that can be done in the next few weeks so that the change is in the next release.

I agree that the whole JDK code could be improved, but that sounds like a longer-term project. We don't really want to have different C code in the JDK for Native Image vs. the regular OpenJDK. So for a major rewrite, the change would first need to be merged into OpenJDK master, then backported, which for sure will take a long time.

@emmanuelbernard

I agree. I would much prefer to go for a quick fix, with the longer-term fix pursued in parallel.
As it stands, I don't think we can move Quarkus with the current memory increase.

@johnaohara
Author

I presume that patch (https://bugs.openjdk.java.net/browse/JDK-8150460) is in JDK11? I see this issue with both JDK8 and JDK11; the problem is that this line http://hg.openjdk.java.net/jdk9/jdk9/jdk/file/ee0a64ae78db/src/java.base/linux/native/libnet/linux_close.c#l125 calls calloc, which allocates and zeroes an array.

If the patch were backported, we would still see RSS growth with a maximum number of FDs below INT_MAX.

@adinn
Collaborator

adinn commented Dec 30, 2019

@christianwimmer yes I think a backport of that patch is the best way forward. I will propose this on the jdk8u list.
@johnaohara I don't understand how you are seeing a comparable amount of extra space being allocated on jdk11. The jdk9 patch Christian referred to is also present in jdk11.

The code at the line you cite is limited to allocating storage for an array of fdEntry_t structures with at most fdTableMaxSize (i.e. 4K) entries - see http://hg.openjdk.java.net/jdk9/jdk9/jdk/file/ee0a64ae78db/src/java.base/linux/native/libnet/linux_close.c#l78. So, at worst that is roughly 50 bytes * 4K entries, i.e. about 200Kb.

With a large or unlimited fd count there will also be a later allocation of an overflow table, which is an array of pointers of type fdEntry_t* -- see http://hg.openjdk.java.net/jdk9/jdk9/jdk/file/ee0a64ae78db/src/java.base/linux/native/libnet/linux_close.c#l139. The amount of pointer storage allocated varies according to the chosen fd count. However, the worst case is when the fd count is RLIM_INFINITY. In that case the overflow table contains (INT_MAX - 0x1000) / fdOverflowTableSlabSize entries. Neglecting the subtraction, that is roughly (32K * 64K) / 64K = 32K pointers, i.e. 32K * 8 bytes = 256Kb. So, including both allocations, the worst RSS growth at startup ought to be less than half a Mb.
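
For reference, a hedged sketch of the two-level layout described above (simplified, with approximated names and an assumed entry size; not the exact OpenJDK source):

```c
#include <stdlib.h>

/* Hedged sketch of the JDK 9+ two-level layout (simplified, approximated
 * names, assumed entry size). A fixed base table covers fds < 4096 and is
 * allocated eagerly (~48 bytes * 4096 ~= 192 KiB); higher fds fall into
 * 64K-entry slabs that are only allocated when such an fd is actually seen. */

typedef struct { char opaque[48]; } fdEntry_t;   /* stand-in for the real entry */

#define FD_TABLE_MAX_SIZE     0x1000             /* 4096 base entries             */
#define FD_OVERFLOW_SLAB_SIZE 0x10000            /* 64K entries per overflow slab */

static fdEntry_t  *fdTable;                      /* base table, fds 0..4095        */
static fdEntry_t **fdOverflowTable;              /* lazily populated slab pointers */
static int         fdOverflowTableLen;

static void init_tables(int fd_limit) {
    fdTable = calloc(FD_TABLE_MAX_SIZE, sizeof(fdEntry_t));
    fdOverflowTableLen = (fd_limit - FD_TABLE_MAX_SIZE + FD_OVERFLOW_SLAB_SIZE - 1)
                         / FD_OVERFLOW_SLAB_SIZE;
    if (fdOverflowTableLen > 0) {
        /* only an array of pointers (~8 bytes each) up front; slabs come later */
        fdOverflowTable = calloc(fdOverflowTableLen, sizeof(fdEntry_t *));
    }
}

static fdEntry_t *get_fd_entry(int fd) {
    if (fd < FD_TABLE_MAX_SIZE)
        return &fdTable[fd];
    int slab = (fd - FD_TABLE_MAX_SIZE) / FD_OVERFLOW_SLAB_SIZE;
    if (fdOverflowTable[slab] == NULL)           /* the real code synchronizes here */
        fdOverflowTable[slab] = calloc(FD_OVERFLOW_SLAB_SIZE, sizeof(fdEntry_t));
    return &fdOverflowTable[slab][(fd - FD_TABLE_MAX_SIZE) % FD_OVERFLOW_SLAB_SIZE];
}

int main(void) {
    init_tables(655360);
    fdEntry_t *low  = get_fd_entry(42);       /* served from the eager base table */
    fdEntry_t *high = get_fd_entry(100000);   /* allocates exactly one 64K slab   */
    return (low != NULL && high != NULL) ? 0 : 1;
}
```

The eager cost is then just the base table plus the pointer array, which matches the sub-half-Mb worst case computed above.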

@johnaohara
Author

@adinn I will double check the JDK11 result

@johnaohara
Author

I have checked the quickstart with ce 19.3.0 on JDK8 and JDK11. With max files 655360, the RSS figures are:

| Version | RSS (kb) |
| --- | --- |
| 19.3.0-JDK8/getting-started-1.0-SNAPSHOT-runner -Xmx2m | 53664 |
| 19.3.0-JDK11/getting-started-1.0-SNAPSHOT-runner -Xmx2m | 25972 |

The patch in JDK11 reduces the memory allocated by the native binary. I see the block of memory allocated at startup drop from 30736Kb to 12Kb.

@adinn
Collaborator

adinn commented Jan 2, 2020

@johnaohara Thanks for confirming. So, the jdk8u patch should be all we need.

@christianwimmer Apologies for misphrasing the question I asked earlier. I meant to say "would modifying the OpenJDK code to use mmap be ok?"

The reason I am pursuing the point is that as far as the OpenJDK lib maintainers are concerned the strategy David proposed of calling mmap with MAP_NORESERVE would have been a perfectly viable alternative to the fix I just pushed. However, if reliance on functions like mmap is an issue for use of these libs in Graal's native images/libs then this raises a flag since there is no OpenJDK policy to ensure that native libs limit their use of system APIs beyond the need to ensure they don't interfere with the JVM library. Was that possibility considered in the switch to use the OpenJDK libs?

@christianwimmer

@adinn Substrate VM itself uses mmap internally to allocate the Java heap, so usage of mmap in the JDK native code would not be a problem.

@tstuefe

tstuefe commented Jan 4, 2020

One problem with the proposed mmap idea would be that you have to pre-reserve the whole expected range. But its theoretical max size can be very large or is typically even unknown, if RLIM_NO_FILE=0|1. Oversizing it increases virtual process size unnecessarily, which may be less important than RSS but you still do not want it to increase by several GB for such a minor issue.

I still think for this case - where in 99% of cases you only have to store a few dozen or a few hundred fds - a sparse array is the tighter solution.

@dmlloyd
Contributor

dmlloyd commented Jan 4, 2020

Uncommitted memory is, in essence, a sparse array managed by the OS. But it's only one possible option, of course. A sparse array is probably not as good an idea as it seems at first, though: the operating system will typically hand out the lowest-numbered file descriptor that is free. So a growable array is probably most likely to maximize memory and computational efficiency; in the vast majority of cases, the highest file descriptor that Java is using is going to be very close to the total number of file descriptors that Java is handling, so very little space is likely to be wasted by such a scheme.

@tstuefe

tstuefe commented Jan 5, 2020

A simple growable array is not a good idea IMHO:

  1. even though numerical values for fds usually grow from the bottom, there is no guarantee you won't encounter high numbered fds at some point, with plenty of "holes" in the middle. The highest theoretical fd is determined by its limit; if that is infinite (which it often is) it is INT_MAX. With a growable (non-sparse) array you would be forced to allocate the whole range from 0 to your highest numbered fd.

For instance, one could have concurrent native non-VM code opening files like crazy and driving the fd counter up, and even though the VM does not care a bit, it would be forced to keep space for the associated fdEntry structures for those fds.

  2. Since a growable array can move, we would have to synchronize access to it in some way. The current solution uses a statically allocated base table for the lower 4096 fds, which covers about 99% of all cases. That base table can be accessed without worrying about the table being reallocated. Only if an fd happens to be >4096 do we bother with the sparse overflow table.

@christianwimmer

The best solution would probably be to move the whole logic from C to Java. But I don't see that as necessary; the current JDK 9 code solves the problem.

@adinn
Collaborator

adinn commented Jan 6, 2020

@christianwimmer thanks for clarifying the more relaxed constraint. I agree that the jdk9 patch solves the problem perfectly well and, as @tstuefe says, avoids the increment in VMEM size that lazy mmap would introduce. The jdk8u backport has been reviewed. I am waiting for confirmation that I may push.

@christianwimmer

The backport of the JDK issue should be in the latest GraalVM 19.3.1 release.

@johnaohara Can you please verify that?

@dmlloyd
Contributor

dmlloyd commented Jan 20, 2020

@johnaohara could we please get a table that lists 19.2.1 (JDK 8) vs 19.3.1 (JDK 8) vs 19.3.1 (JDK 11)? I think only then can we see if there is still a problem (I think there might be).

@johnaohara
Author

I just updated quarkusio/quarkus#6136

From the tests that I ran, I saw the following:

| Build Version | RSS (KB) |
| --- | --- |
| RSS_REGRESSION-graalvm-ce-19.2.1 | 17216 |
| RSS_REGRESSION-graalvm-ce-java8-19.3.0.2 | 94716 |
| RSS_REGRESSION-graalvm-ce-java11-19.3.0.2 | 86480 |
| RSS_REGRESSION-graalvm-ce-java11-19.3.1 | 17140 |
| RSS_REGRESSION-graalvm-ce-java8-19.3.1 | 17232 |

The 19.3.1 tests were run against a build from quarkusio/quarkus#6574.

@dmlloyd
Contributor

dmlloyd commented Jan 20, 2020

Thanks John!

@n1hility
Contributor

yay!

@dmlloyd
Contributor

dmlloyd commented Jan 20, 2020

@johnaohara one more question - is that Linux? Thanks.

@johnaohara
Author

@dmlloyd Yes, I should have specified. Those results were on RHEL7.7 (3.10.0-1062.1.1.el7.x86_64).

@johnaohara
Author

@christianwimmer is there a PR that contains this backport? I have been looking for it but cannot find it. Thanks

@christianwimmer

@johnaohara We apply the patch only when building the static libraries, until proper backports appear in all the JDKs that we are based on. That allowed us to fix the issue without any further delay.

@johnaohara
Author

@christianwimmer Thanks for the info. Is patching the JDK part of the release process for Graal CE? Is there a public build somewhere?

@dougxc
Member

dougxc commented Jan 22, 2020

The patched static libs are included in the binaries at https://github.com/graalvm/openjdk8-jvmci-builder/releases/tag/jvmci-19.3-b07

@johnaohara
Author

@dougxc thank you
