100% CPU in epoll_wait #4241
Comments
rvansa added a commit to rvansa/quarkus that referenced this issue on Sep 27, 2019
Thanks @rvansa! Pinging @stuartwdouglas, as I believe he was also looking into moving such initializations to runtime.
Could someone on the quarkus-dev mailing list also ping this user, who seems to be suffering from the same problem?
rvansa added a commit to rvansa/quarkus that referenced this issue on Sep 30, 2019
rvansa added a commit to rvansa/quarkus that referenced this issue on Sep 30, 2019
gsmet pushed a commit that referenced this issue on Oct 2, 2019
Describe the bug

In some environments, when using a native image, all threads reading I/O can spend 100% CPU time polling `epoll_wait(..., timeout=0)`. It turns out this is an effect of the static final `io.netty.util.concurrent.ScheduledFutureTask.START_TIME` being set to a system-dependent `System.nanoTime()` value during compilation, causing the `NioEventLoop` to behave in an unexpected way. Note that this behaviour cannot be reproduced when compiling and running on the same machine, due to the nature of the `System.nanoTime()` base value.