Performance degradation 0.22.0->1.1.0.Final #6539

Closed
FreifeldRoyi opened this issue Jan 13, 2020 · 8 comments
Labels: kind/bug (Something isn't working), triage/out-of-date (This issue/PR is no longer valid or relevant)

Comments

FreifeldRoyi commented Jan 13, 2020

While upgrading a service from Quarkus 0.22.0 to 1.1.0.Final, I noticed an increase in latency at the same rate of HTTP requests/sec.

Things that were changed:

  1. Upgraded Quarkus 0.22.0 to 1.1.0.Final
  2. @Context HttpServletRequest changed to @Context HttpRequest
  3. JSON logging integration was added instead of the solution from "Add JsonFormatter support for the log" #1847
  4. @Stream changed to @Channel (changes 2 and 4 are sketched in the example below)

I noticed an increase in latency from <~1ms to <~2ms and an increase in Docker memory consumption from ~250MB to ~750MB.

The service runs in an Alpine Docker image with OpenJDK 8.
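
For illustration only, here is a minimal sketch of what changes 2 and 4 look like in the code. The resource and channel names are placeholders, not taken from the real service, and the exact package of @Channel/Emitter may differ between SmallRye Reactive Messaging versions:

```java
// Hypothetical resource illustrating changes 2 and 4; GreetingResource and the
// "prices" channel are placeholder names.
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;

import org.jboss.resteasy.spi.HttpRequest;

import io.smallrye.reactive.messaging.annotations.Channel;
import io.smallrye.reactive.messaging.annotations.Emitter;

@Path("/greeting")
public class GreetingResource {

    // Change 2: @Context HttpServletRequest was replaced by RESTEasy's HttpRequest
    @Context
    HttpRequest request;

    // Change 4: @Stream was replaced by @Channel on the injected Emitter
    @Inject
    @Channel("prices")
    Emitter<String> emitter;

    @GET
    public String hello() {
        // Send the request path to the channel, just to exercise both injections
        emitter.send(request.getUri().getPath());
        return "hello";
    }
}
```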

FreifeldRoyi added the kind/bug label on Jan 13, 2020
FreifeldRoyi (Author) commented Jan 13, 2020

Upgrading to 1.1.1.Final (with more fields on the entities and more validations, which are not exercised since only a small part of each entity is sent as JSON) lowered memory consumption to ~300MB.
Latency is still <~2ms.
I don't know whether the 1ms overhead is constant or relative to the amount of data being sent.

geoand (Contributor) commented Jan 20, 2020

cc @johnaohara

johnaohara (Member) commented

@FreifeldRoyi do you have an application that reproduces the issue that we could test? Has the payload being sent/received changed between tests on 0.22.0 and 1.1.1.Final? Is the application now generating more logging?

FreifeldRoyi (Author) commented Jan 21, 2020

  • I had a hunch that it was due to a field change, so to pinpoint the issue I reverted to the known-good 0.22.0 commit and deployed each subsequent commit individually. The very first commit was the 0.22.0-to-1.1.0.Final bump with no new fields, only the changes I mentioned in the first message. Later I reached the 1.1.1.Final commit (with the new fields) and got a better memory footprint but the same <2ms latency.
  • There are no new logs. As a design decision, we don't write logs for each message (for obvious performance reasons), not even at DEBUG level.
  • I can try to write a small service that reproduces the issue.

johnaohara (Member) commented

@FreifeldRoyi when you refer to latency:

  1. Are you referring to mean latency, max latency, or some percentile of the latency distribution?
  2. Is the latency you are measuring the response time of requests on the client side?
  3. Has CPU utilization of the application increased with 1.1.1.Final?
  4. Are you running in JVM mode or native mode?

If you are seeing a difference in response times in JVM mode, it should be possible to profile both versions to see where the difference is coming from.

FreifeldRoyi (Author) commented Jan 21, 2020

  1. All
  2. I created a custom ContainerRequestFilter & ContainerResponseFilter pair to send timing metrics to StatsD for every HTTP request (roughly sketched below)
  3. On 0.22.0, CPU peaked at ~9%; now it is at ~10-12%
  4. JVM mode, with OpenJDK 8, as mentioned in the first message
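
For context, this is roughly the shape of that filter pair. It is a minimal sketch; the StatsD call is a placeholder hook rather than the actual client code we use:

```java
// Hypothetical JAX-RS filter measuring per-request latency; the StatsD part
// is only a placeholder method, not a specific client library API.
import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

@Provider
public class TimingFilter implements ContainerRequestFilter, ContainerResponseFilter {

    private static final String START_TIME = "timing.start";

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // Stash the start time on the request so the response filter can read it
        requestContext.setProperty(START_TIME, System.nanoTime());
    }

    @Override
    public void filter(ContainerRequestContext requestContext,
                       ContainerResponseContext responseContext) throws IOException {
        Object start = requestContext.getProperty(START_TIME);
        if (start != null) {
            long elapsedMicros = (System.nanoTime() - (Long) start) / 1_000;
            // Forward the measurement, tagged by request path
            sendToStatsD(requestContext.getUriInfo().getPath(), elapsedMicros);
        }
    }

    private void sendToStatsD(String path, long elapsedMicros) {
        // Hypothetical hook; a real implementation would call a StatsD client here
    }
}
```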

gsmet (Member) commented Jan 31, 2020

@FreifeldRoyi I think we will need some profiling information. @johnaohara should be able to help you with this.

A list of the extensions you are using could be useful too.

geoand (Contributor) commented Feb 26, 2021

I'll close this as we have now had more than 10 releases since the ticket was opened.

If there is still a problem, please open a new issue.

geoand closed this as completed on Feb 26, 2021
geoand added the triage/out-of-date label on Feb 26, 2021