
No options to scale gRPC server #10057

Closed
yatindra opened this issue Jun 16, 2020 · 2 comments · Fixed by #10269
Labels: area/grpc gRPC, kind/bug Something isn't working

@yatindra

Describe the bug
io.vertx.grpc.VertxServer does not auto-scale across Vert.x event loops. In Vert.x, scaling is determined by the number of deployed instances of a verticle. I am not entirely familiar with how Quarkus initializes the GrpcServerRecorder, but it seems that only one instance of VertxServer is ever started.
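To make the scaling model concrete without pulling in Vert.x itself, here is a minimal plain-Java analogy (a sketch, not Vert.x API): a single-threaded executor stands in for one server instance pinned to one event loop, and a fixed pool stands in for deploying several instances, one per loop.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ScalingDemo {
    // Run `requests` tasks on the executor and record which threads served them.
    static Set<String> servingThreads(ExecutorService loop, int requests) throws InterruptedException {
        Set<String> names = ConcurrentHashMap.newKeySet();
        CountDownLatch done = new CountDownLatch(requests);
        for (int i = 0; i < requests; i++) {
            loop.execute(() -> {
                names.add(Thread.currentThread().getName());
                done.countDown();
            });
        }
        done.await();
        loop.shutdown();
        return names;
    }

    public static void main(String[] args) throws InterruptedException {
        // One server instance pinned to a single event loop: every request
        // is served by the same thread, no matter the load.
        int single = servingThreads(Executors.newSingleThreadExecutor(), 100).size();

        // Deploying N instances (one per event loop) lets requests spread out.
        int scaled = servingThreads(Executors.newFixedThreadPool(4), 100).size();

        System.out.println(single + " thread vs " + scaled + " threads");
    }
}
```

This mirrors what the issue reports: a single VertxServer instance is effectively pinned to a small, fixed set of event-loop threads regardless of how many RPCs arrive.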

Expected behavior
Singletons implementing a gRPC service should be allowed to scale across Vert.x event loops. Additionally, provide a way to configure the desired scaling.
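As a sketch of what such a knob could look like in application.properties (the property name quarkus.grpc.server.instances is an assumption here, matching what later Quarkus releases adopted; it did not exist at the time of this report):

```properties
# Hypothetical: number of gRPC server instances (verticle deployments).
# More instances means requests are spread over more event-loop threads.
quarkus.grpc.server.instances=4
```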

Actual behavior
At most 2 Vert.x event-loop threads process requests for VertxServer.

To Reproduce
Steps to reproduce the behavior:

  1. Implement a Quarkus gRPC server with one service annotated with @Singleton and extending MutinyApiConfigServiceGrpc.[GrpcServiceName]ImplBase
  2. In the service, write a log line to print the Thread Name
  3. Build and run the server
  4. Make RPC calls; ghz.sh was used to simulate load.
  5. Notice in the logs that only 2 unique thread names are printed. Threads will have the prefix vert.x-eventloop-thread-.
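Step 5 can be checked mechanically by counting the distinct event-loop thread names in the server output. A small sketch, with sample log lines standing in for the real server log (the log format is hypothetical; only the vert.x-eventloop-thread- prefix is taken from the report):

```shell
# Count distinct event-loop thread names appearing in the (sample) log.
printf '%s\n' \
  'INFO [svc] (vert.x-eventloop-thread-0) handled call' \
  'INFO [svc] (vert.x-eventloop-thread-1) handled call' \
  'INFO [svc] (vert.x-eventloop-thread-0) handled call' |
  grep -o 'vert\.x-eventloop-thread-[0-9]*' | sort -u | wc -l
```

With the issue present, this count stays at 2 no matter how much load ghz generates.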

Configuration
No additional configuration was provided; the gRPC extension was using the default values.

Environment (please complete the following information):

  • CPU count: 8
  • Vert.x Event Loop thread count: 16 (the default 2*CPU)
  • Output of uname -a or ver:
Darwin REMMACJ5Y8GTFM 19.5.0 Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64 x86_64
  • Output of java -version:
openjdk version "11.0.2" 2019-01-15
OpenJDK Runtime Environment 18.9 (build 11.0.2+9)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)
  • Quarkus version or git rev: 1.5.1.Final
  • Build tool (i.e. output of mvnw --version or gradlew --version):
------------------------------------------------------------
Gradle 6.5
------------------------------------------------------------

Build time:   2020-06-02 20:46:21 UTC
Revision:     a27f41e4ae5e8a41ab9b19f8dd6d86d7b384dad4

Kotlin:       1.3.72
Groovy:       2.5.11
Ant:          Apache Ant(TM) version 1.10.7 compiled on September 1 2019
JVM:          11.0.2 (Oracle Corporation 11.0.2+9)
OS:           Mac OS X 10.15.5 x86_64

Additional context
io.quarkus.grpc.runtime.GrpcServerRecorder has the following comment on line 62

// TODO Support scalability model (using a verticle and instance number)
@yatindra yatindra added the kind/bug Something isn't working label Jun 16, 2020
@quarkusbot

/cc @michalszynkiewicz, @cescoffier

@cescoffier
Member

I believe there is a TODO in the code explaining that the server should be started in a verticle and the number of verticles should be configurable.

michalszynkiewicz added commits to michalszynkiewicz/quarkus that referenced this issue on Jun 24 and Jun 25, 2020
@gsmet gsmet added this to the 1.7.0 - master milestone Jul 2, 2020