Add rate limiting to the entire service policy. #648
Conversation
@y-tabata thanks for the contribution. However, it would make more sense to use the lua-resty-limit-traffic module to implement the rate limiting here. It will allow you to create true periods using the count module, and the limits can be defined using the traffic module. You can combine the traffic module with the request module to use the "leaky bucket" method to rate limit requests. The overall idea should be to wrap the Redis instance so that it is a shared, opt-in cache, with the local shared dictionary as the default. The final policy should be easily extensible for future contributions. cc @mikz if you want to add anything to my comments here.
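For illustration, a minimal sketch of combining the count and req limiters through the traffic module; the shared dictionary name, keys, and numbers are assumptions, not part of the PR:

local limit_count = require('resty.limit.count')
local limit_req = require('resty.limit.req')
local limit_traffic = require('resty.limit.traffic')

-- assumes a `lua_shared_dict rate_limit_store 10m;` entry in nginx.conf
-- fixed window: 100 requests per 60 seconds
local counter = limit_count.new('rate_limit_store', 100, 60)
-- leaky bucket: 10 req/s with a burst of 5
local bucket = limit_req.new('rate_limit_store', 10, 5)

local limiters = { counter, bucket }
-- prefix the keys per limiter so both can share one dictionary
local keys = { 'fixed_window:' .. ngx.var.remote_addr, 'leaky_bucket:' .. ngx.var.remote_addr }

local delay, err = limit_traffic.combine(limiters, keys)
if not delay then
  if err == 'rejected' then
    return ngx.exit(429)
  end
  return ngx.exit(500)
end
if delay > 0 then ngx.sleep(delay) end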
@kevprice83 Thanks for your comment. I'm improving my code now and I have a concern.
@kevprice83 @mikz I combined the lua-resty-limit-traffic module to implement the rate limiting.
This is great progress 👍
But it will need some cleanup to minimize the amount of work done in the access phase and move as much as possible to initialization.
Also we don't want to depend on redis and default to shared memory.
And we need to verify how this would show in the UI. Suggestions welcome @ddcesare.
local failed_to_instantiate = false
try(
  function()
    lim, limerr = limit.new(shdict_key, unpack(limitter.values))
I think this belongs in `new`. There is no need to initialize the limiter on each access phase, right? They can be initialized when initializing the policy.
@mikz If we use shared memory and I move this to `new`, a "shared dict not found" error occurs.
Is my understanding correct?
local class_not_found = false
try(
  function()
    limit = require (limitter.limitter)
I think this should be using a whitelist in the form of a table, like:

local traffic_limiters = {
  fixed_window = require('resty.limit.count'),
  leaky_bucket = require('resty.limit.req'),
}

function init_limiter(name, ...)
  local limiter = traffic_limiters[name]
  if not limiter then return nil, 'unknown limiter' end
  return limiter.new(...)
end
And `require` has to be done just once during initialization; it can't really happen during the access phase.
And possibly even write functions that take the configuration and pass it properly (#648 (comment)) to the initializers, like:

local resty_limit_count = require('resty.limit.count')
local resty_limit_req = require('resty.limit.req')
local resty_limit_conn = require('resty.limit.conn')

local traffic_limiters = {
  fixed_window = function(config)
    return resty_limit_count.new(shdict, config.rate, config.burst)
  end,
  leaky_bucket = function(config)
    return resty_limit_req.new(shdict, config.rate, config.burst)
  end,
  connections = function(config)
    return resty_limit_conn.new(shdict, config.conn, config.burst, config.delay)
  end,
}
limitters[#limitters + 1] = lim
keys[#keys + 1] = limitter.key

if limitter.limitter == "resty.limit.conn" then
I think this could be abstracted by checking whether the limiter has an `is_committed` key.
for i, lim in ipairs(limitters) do
  local rediserr
  lim.dict, rediserr = redis_shdict(redis_info.host, redis_info.port, redis_info.db)
This shouldn't be needed, as the limiters are already initialized and have the correct redis object in the `.dict` property, right?
local limitters_limit_conn = {}
local keys_limit_conn = {}

local limitters_limit_conn_committed = {}
We should try to minimize the number of tables allocated per request; this would cause GC pressure.
My preferred way would be to not do this per request at all and just initialize the limiters once when initializing the policy.
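An illustrative sketch of that idea; the shared dictionary name and config fields are assumptions, not the PR's actual code:

local limit_count = require('resty.limit.count')
local limit_req = require('resty.limit.req')
local limit_traffic = require('resty.limit.traffic')

local _M = {}
local mt = { __index = _M }

-- 'limiter' is an assumed lua_shared_dict name
local traffic_limiters = {
  fixed_window = function(c) return limit_count.new('limiter', c.count, c.window) end,
  leaky_bucket = function(c) return limit_req.new('limiter', c.rate, c.burst) end,
}

function _M.new(config)
  -- build the limiter objects once, when the policy is instantiated
  local self = setmetatable({ limiters = {}, keys = {} }, mt)
  for _, limiter in ipairs(config.limiters or {}) do
    local initializer = traffic_limiters[limiter.name]
    if not initializer then return nil, 'unknown limiter: ' .. tostring(limiter.name) end
    local lim, err = initializer(limiter)
    if not lim then return nil, err end
    table.insert(self.limiters, lim)
    table.insert(self.keys, limiter.key)
  end
  return self
end

function _M:access()
  -- no per-request table allocations: everything was prepared in new()
  local delay, err = limit_traffic.combine(self.limiters, self.keys)
  if not delay then return nil, err end
  if delay > 0 then ngx.sleep(delay) end
  return true
end

return _M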
}
}
},
"redis_info": {
I think it might be better to use just one property: redis_url.
That can embed authentication, database, port, ...
Wdyt @davidor ?
Yes, I think that a single property makes things simpler. I see this has already been changed.
if not self.redis_info then
  ngx.log(ngx.ERR, "No Redis information.")
  return ngx.exit(500)
I think we want this policy to work without redis. It should just work with shared memory. Redis should be opt-in.
"description": "Values to be applied", | ||
"type": "array", | ||
"items": { | ||
"type": "integer" |
This is pretty undescriptive and won't produce a really useful UI.
IMO we could use `anyOf` and write a description for each limiter, describing each field (`burst`, `rate`, `delay`, ...).
@mikz Thank you for reviewing. I pushed the fixed code. Regarding the limiter initialization: when using shared memory, if I move the initialization to `new`, a "shared dict not found" error occurs.
I meant to move the limiter initialization to `_M.new` so they are created just once per policy.
The only issue there is the connection limiter, which is mutated and changes `self.committed`, so it can't be shared between requests. It might be better to just leave that one out for now and keep just the req/count rate limiters.
end

for i, lim in ipairs(limiters) do
  if lim.iscommitted and lim:is_committed() then
This looks wrong.
if not delay then
  if comerr == "rejected" then
    ngx.log(ngx.ERR, "Requests over the limit.")
    return ngx.exit(429)
This status will have to be configurable.
  return ngx.exit(429)
end
ngx.log(ngx.ERR, "failed to limit traffic: ", comerr)
return ngx.exit(500)
I think this might be another configuration option. I'm not sure that everyone wants to respond with an error when there is some issue with rate limiting. Some people might want to let the request go through.
local keys = ctx.keys

for i, lim in ipairs(limiters) do
  local latency = tonumber(time)
This can be just converted once before the loop.
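A trivial sketch of the suggested change, reusing the names from the quoted snippet:

-- convert once, outside the loop
local latency = tonumber(time)
for i, lim in ipairs(limiters) do
  -- use the precomputed latency here instead of calling tonumber() per iteration
end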
local limiters = {}
local keys = {}

if not self.redis_url then
I don't get this. You could use several limiters with one shared dictionary, right?
local delay, comerr = limit_traffic.combine(limiters, keys, states)
if not delay then
  if comerr == "rejected" then
    ngx.log(ngx.ERR, "Requests over the limit.")
I don't think this is necessarily an "error".
local policy_chain = require('apicast.policy_chain').default()

local rate_limit_policy = require('apicast.policy.rate_limiting_to_service').new({
I think this policy can have a way simpler name than `rate_limiting_to_service`. Just `rate_limit` should be enough.
for i, lim in ipairs(limiters) do
  if redis_url then
    local rediserr
    lim.dict, rediserr = redis_shdict(redis_url)
This is needed because the connection has already been closed.
This initializes a Redis client instance on every request. It might be better to have a connection pool and fetch and return connections as needed, but maybe we can leave this for a future PR.
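For reference, lua-resty-redis can already reuse connections through the cosocket keepalive pool; a rough sketch (host, timeouts, and pool size are arbitrary assumptions):

local redis = require('resty.redis')

-- hypothetical helper: fetch a connection, run a function, put the connection back
local function with_redis(host, port, fn)
  local red = redis:new()
  red:set_timeout(1000)  -- 1s connect/read timeout

  local ok, err = red:connect(host, port)
  if not ok then return nil, err end

  local res, call_err = fn(red)

  -- return the connection to the keepalive pool instead of closing it
  local keep_ok, keep_err = red:set_keepalive(10000, 100)  -- 10s idle timeout, pool of 100
  if not keep_ok then ngx.log(ngx.WARN, 'failed to set keepalive: ', keep_err) end

  return res, call_err
end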
@davidor I'd like to leave this for a future PR because it's a little bit difficult.
else
  for _, limiter in ipairs(self.limiters) do
    local lim, initerr = init_limiter(limiter)
if not self.redis_url then
  -- Only one (the first) limiter is enable.
  -- Key will be shdict_key ('limiter').
I don't get this. You could use several limiters with one shared dictionary, right?
@mikz No. Although it is a tentative implementation, we can use only one limiter with one shared dictionary. When a user defines 3 limiters ("limiter1", "limiter2", "limiter3"), only "limiter1" is enabled.
Sorry, I don't understand this either.
Why is it a problem to allow several limiters?
@davidor If we allow several limiters to be used with the shdict, we have to define the size of each shdict by using lua_shared_dict. If a user uses 3 limiters, we have to define 3 lua_shared_dict entries, and if a user uses 10 limiters, we have to define 10 lua_shared_dict entries. I don't want to limit the number of limiters, but there is no good way to automatically define as many lua_shared_dict entries as a user defines. So I defined only one lua_shared_dict tentatively.
"$schema": "http://apicast.io/policy-v1/schema#manifest#", | ||
"name": "rate limit policy", | ||
"description": ["This policy adds rate limit.", | ||
"" |
Why is the ""
needed ?
  }
},
{
  name = "apicast.policy.rate_limit",
This policy is duplicated.
@davidor I define this policy twice here, because that way a single "GET /" opens 2 connections.
t/apicast-policy-rate-limit.t
  }
},
{
  name = "apicast.policy.rate_limit",
This policy is duplicated.
Please review this test file. I think that policies are duplicated in several tests.
@davidor I define this policy twice here, because that way a single "GET /" opens 2 connections.
    return nil, e
  end
)
if not lim then
Can't we return lim, limerr and delete this if?
local rate_limit_policy = RateLimitPolicy.new(config)
rate_limit_policy:access()
rate_limit_policy:access()
assert.spy(ngx_sleep_spy).was_called_more_than(0.001)
This is not correct. was_called_more_than refers to the number of times the spy was called. However, what you need is to check that it was called with a value that is > 0.001.
The same applies to the test below this one.
if not lim then
  ngx.log(ngx.ERR, "unknown limiter: ", initerr)
  error(self.logging_only, 500)
  goto done
You could call return and avoid the goto, but this is a matter of preference.
--- pipelined_requests eval
["GET /flush_redis","GET /"]
--- error_code eval
[200, 429]
This doesn't look right. The configuration has count = 1, so I believe the first GET / would return 200 and the second one 429.
@davidor I define the policy twice here, so a single "GET /" is counted twice, and the first "GET /" returns 429.
--- pipelined_requests eval
["GET /flush_redis","GET /"]
--- error_code eval
[200, 429]
I'm not sure about this one. conn is set to 1 in the config. This means that it will only allow 1 concurrent connection.
So if I understood this correctly, the first GET / should be a 200 for sure. Even if we added more, they could be 200s as well, as long as there isn't more than 1 concurrent connection.
@davidor I define the policy twice here, so a single "GET /" opens 2 connections, and the first "GET /" is a 429.
t/apicast-policy-rate-limit.t
--- pipelined_requests eval
["GET /flush_redis","GET /"]
--- error_code eval
[200, 503]
I don't think this is correct. rate is set to 1 in the config, so the first GET / should return 200 for sure. The second one would return 503 only if it was done in the same second, which is very likely in this test but might be a bit difficult to guarantee.
@davidor I define the policy twice here, so a single "GET /" goes through the limiter twice, and the first "GET /" is a 503.
I see now why you duplicated the policy in the config in several of these tests.
I find this way of testing the policy a bit counter-intuitive. To use this policy, users should include it just once and add all the limiters they need there. The problem I see with these tests is that the policy appears duplicated for testing convenience, but the configuration does not reflect a realistic scenario, because users will include the policy just once.
I think we should try to include the policy just once and change the tests so they work with that configuration. For example, in this particular test, we can make 3 requests: the first one to /flush_redis (to make sure we start from a clean state), the second one to / (which should return 200), and the third one to / (which should return 503 because we defined a limit of 1 in the configuration).
What do you think?
@davidor Your method works well for "leaky_bucket" and "fixed_window", but I think it doesn't work well for "connections". Regarding "connections", the second request returns 200 and then the number of connections is reset to zero, so the third request also returns 200.
However, I strongly agree that the configuration does not reflect a realistic scenario in these tests, so I will change the tests for "leaky_bucket" and "fixed_window".
This policy has to be reusable by multiple services at the same time. It will be loaded and active more than once at the same time.
We can't be defining shdicts per policy, so they all have to reuse one shdict. If some parts do not support that, we can't support them. I think they should be supported, because we control the rate limiting keys and can, for example, prefix them with the rate limiter name so they are unique per rate limiter.
This is a duplicate of the discussion in #648 (comment).
@mikz Let me confirm your comment. Do you mean I should change the integration test as follows?
before:
policy_chain = {{name = "apicast.policy.rate_limit", ...}, {name = "apicast.policy.rate_limit", ...}}
after:
policy_chain = {{name = "apicast.policy.rate_limit",
  configuration = {limiters = {{name = "connections", ...}, {name = "connections", ...}}}}}
I'm not really saying that.
I'm saying there can be several instances of connection/leaky bucket/fixed window rate limiting. APIcast can run several services - in the case of our Cloud Hosted deployment it runs several thousand services and configurations - each of them could use one or more rate limiting policies - and they still have to use just one or two shared dictionaries.
No matter how many rate limiting policies you have, they have to be able to run from, ideally, one shared dictionary.
Because each policy can have a unique rate limiting "key" ( https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/req.md#incoming , passed to .combine as a table of keys https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/traffic.md#combine ), they could operate on the same shared dictionary, just using different keys.
Now - defining those keys should be left to the user so they can implement, for example, cross-service rate limiting. But if the same key being used between the fixed window / leaky bucket / connection limiters causes trouble, we can prefix those keys with the limiter name. The user can then still do cross-service rate limiting, but only with the same limiter (a user setting key "foo" would use it as "fixed_window-foo" or "leaky_bucket-foo" or "connections-foo"). My understanding is that those keys can't be shared between limiters anyway.
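A tiny sketch of the prefixing idea; the helper and naming scheme are hypothetical, not part of the PR:

-- namespace a user-supplied key by limiter type so different limiters can
-- share one lua_shared_dict without colliding
local function namespaced_key(limiter_name, user_key)
  return limiter_name .. '-' .. user_key
end

-- namespaced_key('fixed_window', 'foo')  --> 'fixed_window-foo'
-- namespaced_key('leaky_bucket', 'foo')  --> 'leaky_bucket-foo'
-- namespaced_key('connections', 'foo')   --> 'connections-foo'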
@mikz Now I have defined only one shdict called 'limiter' in nginx.conf.liquid. But I have to let the user use multiple limiters with one shdict, right? The way I came up with is using FFI cdata as a value (for example here). The FFI cdata holds the actual key and the limiter's value. Does my idea match yours?
@mikz I misunderstood how shdict works. Could you please review the latest commit?
local connections_committed = {}
local keys_committed = {}

local delay, comerr = limit_traffic.combine(limiters, keys, states)
When using Redis and several limiters, the call to combine() will run several Redis commands.
For efficiency, we might want to find a way to send all those commands in a single network trip using pipelines: https://github.com/openresty/lua-resty-redis#init_pipeline
This is an optimization we can leave for a future PR.
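A rough sketch of what such batching looks like with lua-resty-redis pipelines; the address, keys, and command choice are only illustrative:

local redis = require('resty.redis')

local red = redis:new()
local ok, err = red:connect('127.0.0.1', 6379)  -- assumed address
if not ok then return nil, err end

-- queue several commands locally and send them in one network round trip
red:init_pipeline()
red:incrby('fixed_window-foo', 1)
red:expire('fixed_window-foo', 60)
red:get('leaky_bucket-foo')
local results, pipeline_err = red:commit_pipeline()
if not results then return nil, pipeline_err end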
@davidor I'd like to leave this for a future PR because it's a little bit difficult.
Great progress @y-tabata 👍 I think that the only remaining issue is that some integration tests are failing.
@y-tabata, the tests that I pointed out in my previous comment are failing for two reasons:
Add `$ENV{TEST_NGINX_RESOLVER} ||= \`grep nameserver /etc/resolv.conf | awk '{print \$2}' | head -1 | tr '\n' ' '\`;` at the beginning of the file, just below the other 2 envs you already have.
I think this should make all the tests pass. Let me know if you need any help. We are close to being able to merge this! :)
@davidor Thank you for your advice! I will modify the tests.
The reason the JSON schema validation failed is that the "connections" limiter has 5 items (name, key, conn, burst, delay), while the other limiters have 4 items. I confirmed that the validation succeeded when I deleted one of the items ("delay") in the apicast-policy JSON. Do you have any idea?
The alternatives I considered are the following:
or
@y-tabata in tests 5 and 6 you specified:
"delay": {
  "type": "number",
  "description": "the default processing latency of a typical connection (or request)"
}
@davidor Thanks! I changed the schema.
Excellent job @y-tabata 🏅 !
Thanks for your contribution 👍
I think we can merge it and then work on the comments I left. No need to fix those now.
That is, if you are ok with changing the configuration format a bit in the next few weeks, @y-tabata.
"name": { | ||
"type": "string", | ||
"enum": ["connections"], | ||
"description": "limiting request concurrency (or concurrent connections)" |
Would be good to start with a capital letter. This applies to other comments too.
"description": "limiting request concurrency (or concurrent connections)" | ||
}, | ||
"key": { | ||
"description": "Key of limiter", |
This is not a very useful description.
"enum": ["fixed_window"], | ||
"description": "limiting request counts" | ||
}, | ||
"key": { |
When we allow this to be a "template" in the future we will somehow have to distinguish between plain strings and templated strings.
We probably should change this to be more extensible in the future before the final release.
Also right now those keys have global scopes. That does not really work in our Cloud Hosted environment. We probably should offer a way to have "scope" of the key that could be either "global" or "service" and the key would be automatically namespaced.
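A tiny sketch of how such namespacing could look; the helper name and format are hypothetical:

-- 'service' scope prefixes the key with the service id so it is isolated per
-- service; 'global' scope leaves the key untouched so it can be shared
local function scoped_key(scope, service_id, key)
  if scope == 'service' then
    return 'service:' .. tostring(service_id) .. ':' .. key
  end
  return key
end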
"type": "integer", | ||
"description": "the status code when requests over the limit, default 429" | ||
}, | ||
"logging_only": { |
I think this should not be a boolean, but rather an enum like `error_handling` with the options `log` and `exit`.
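For example, the policy configuration could then look something like this (sketch only; the field names are assumptions, not the PR's current schema):

configuration = {
  limiters = {
    { name = "fixed_window", key = "foo", count = 100, window = 60 }
  },
  -- "exit" rejects requests over the limit, "log" only records them
  error_handling = "exit"
}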
local function init_limiter(config)
  local lim, limerr
  try(
This is not really idiomatic Lua, and we really should not be expecting exceptions.
All resty code returns an error as a second return value, so we can just use that.
In any case, our code should not crash, because that would trash the performance of the whole gateway.
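A small sketch of the pattern being suggested (the dictionary name and numbers are arbitrary):

local limit_req = require('resty.limit.req')

-- resty modules report failures as (nil, err); no pcall/try wrapper is needed
local lim, err = limit_req.new('limiter', 10, 5)
if not lim then
  ngx.log(ngx.ERR, 'failed to instantiate limiter: ', err)
  return ngx.exit(500)
end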
end

return {
  incr = function(_, key, value, init)
These functions can be defined just once in the top closure and then get the `redis` reference from `self`.
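A sketch of that refactor; the method set and names are simplified assumptions, not the actual wrapper:

-- the wrapper methods are created once at module load; each call reads the
-- Redis connection from self instead of building new closures per request
local redis_shdict_methods = {
  incr = function(self, key, value)
    return self.redis:incrby(key, value)
  end,
  get = function(self, key)
    return self.redis:get(key)
  end,
  set = function(self, key, value)
    return self.redis:set(key, value)
  end,
}
local redis_shdict_mt = { __index = redis_shdict_methods }

local function redis_shdict(redis_connection)
  return setmetatable({ redis = redis_connection }, redis_shdict_mt)
end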
if not delay then
  if comerr == "rejected" then
    ngx.log(ngx.WARN, "Requests over the limit.")
    error(self.logging_only, self.status_code_rejected)
Not sure if we want to let the requests through when "logging only". That would effectively disable the policy.
@mikz Do you mean a `disable` option should be added to `error_handling`?
Not really. I mean the "logging only" does not terminate the request. So it does not limit requests over limit. This policy does not rate limit requests when set to "logging only".
I think there is a difference between throwing errors when initializing misconfigured policy or redis is inavailable - but when the call is actually rejected - then the policy should reject the request - or have different config option for it.
for i, lim in ipairs(limiters) do
  if lim.is_committed and lim:is_committed() then
    connections_committed[#connections_committed + 1] = lim
This gets the number of elements in the table on each iteration; it is better to use `table.insert`.
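For example, with the names from the quoted snippet:

-- table.insert avoids recomputing the table length on every iteration
table.insert(connections_committed, lim)
table.insert(keys_committed, keys[i])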
  ctx.keys = keys_committed
end

if delay >= 0.001 then
`if delay > 0` would be more precise.
  local rate_limit_policy = RateLimitPolicy.new(config)
  rate_limit_policy:access()
end)
it('invalid redis url', function()
Would be nice to have some whitespace between the test blocks.
Hi team,
I created a new policy that adds rate limiting to the entire Service.
The RFE about this has been opened [1].
[1] https://access.redhat.com/solutions/3308541