Add benchmark that lists artifacts under each resource type. #1068
Conversation
```diff
@@ -41,7 +41,7 @@ func getApi(b *testing.B, ctx context.Context, client connection.RegistryClient,

 func listApis(b *testing.B, ctx context.Context, client connection.RegistryClient) error {
 	b.Helper()
-	it := client.ListApis(ctx, &rpc.ListApisRequest{Parent: root()})
+	it := client.ListApis(ctx, &rpc.ListApisRequest{Parent: root().String() + "/locations/global"})
```
Apropos of nothing, every time I see this omnipresent "/locations/global" constant in our code it sends me down a rabbit hole of wondering how we could eliminate it globally.
I think we should do something in the names package. #470?
Here's a pretty interesting result from benchmarking a networked test server running with Postgres (N = 100):
The server has a couple of big projects, total database usage is summarized here:
I think the difference in runtimes between listing artifacts under specs and under deployments is probably because there are a lot of specs (>13000) but no deployments apart from the ones the benchmark creates. Here are the results of the same benchmarks with N = 1000:
1c3128d to f1d89d9
Codecov Report

```
@@           Coverage Diff           @@
##             main    #1068   +/-   ##
=======================================
  Coverage   68.01%   68.01%
=======================================
  Files         147      147
  Lines       11982    11982
=======================================
  Hits         8149     8149
  Misses       3133     3133
  Partials      700      700
```
This adds a benchmark test that lists artifacts under different resource types.
I've casually observed that listing artifacts under specs takes longer than listing artifacts under APIs, and guessed that this might be related to revision support. The test here creates N=100 APIs, each with one version, spec, and deployment, and with three artifacts ("a", "b", and "c") under each resource. Three revisions of each spec and deployment are created, with artifacts associated only with the most recently created revisions. Then we list all of the "b" artifacts under each resource type.
Running locally with Postgres on my Chromebook, I don't see much difference in listing times:
If I increase the number of APIs to N=1000, a distinction is more clear:
The difference doesn't seem to be as severe as it subjectively felt, but this run is local, and my earlier observations were against a remote database.
Setup time dominates the runtime of these tests, so this PR keeps N=100.