
Add rate limiting to the entire service policy. #648

Merged: 28 commits, Apr 9, 2018

Conversation

Contributor

@y-tabata y-tabata commented Mar 8, 2018

Hi team,

I created a new policy that adds rate limiting to the entire Service.
The RFE about this has been opened [1].

[1] https://access.redhat.com/solutions/3308541

Member

kevprice83 commented Mar 8, 2018

@y-tabata thanks for the contribution. However, it would make more sense to use the lua-resty-limit-traffic module to implement the rate limiting here. This will allow you to create true periods using the count module. The limits can be defined using the traffic module.

You can combine the traffic module with the request module to utilise the "leaky bucket" method to rate limit requests.

The overall idea should be to wrap the Redis instance so that it acts as a shared cache and is opt-in; the local dict should be used by default, not Redis. You can see a proof of concept here. The shared Redis memory would be written into the API shown here. In short, the default should be the shdict, with Redis used only if configured.

The final policy should be easily extensible for future contributions.

cc @mikz if you want to add anything to my comments here.
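
For context, here is a minimal sketch of what combining those modules looks like with the upstream lua-resty-limit-traffic API (the shared dict names, numbers, and key choice are placeholders, not this PR's configuration):

  local limit_req = require('resty.limit.req')
  local limit_count = require('resty.limit.count')
  local limit_traffic = require('resty.limit.traffic')

  -- leaky bucket: 10 req/s with a burst of 5, backed by a shared dict
  local req_lim = limit_req.new('limit_req_store', 10, 5)
  -- fixed window: at most 100 requests per 60-second window
  local count_lim = limit_count.new('limit_count_store', 100, 60)

  local limiters = { req_lim, count_lim }
  local keys = { ngx.var.remote_addr, ngx.var.remote_addr }
  local states = {}

  local delay, err = limit_traffic.combine(limiters, keys, states)
  if not delay then
    if err == 'rejected' then
      return ngx.exit(429)
    end
    return ngx.exit(500)
  end
  if delay > 0 then
    ngx.sleep(delay)
  end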


ytabata commented Mar 16, 2018

@kevprice83 Thanks for your comment. I'm improving my code now and I have a concern.
As you said, the default should be the shdict, with Redis used if configured.
To use a shdict, I have to define its size with lua_shared_dict.
The existing lua_shared_dict entries are defined in nginx.conf.liquid and shdict.conf, but if I define lua_shared_dict in those files, the number of limiters is limited by the number of dicts declared.
Since this is rate limiting for the whole service, I don't want to limit the number of limiters.
Do you have any ideas?

@y-tabata
Contributor Author

@kevprice83 @mikz I combined the lua-resty-limit-traffic module to implement the rate limiting.

Contributor

@mikz mikz left a comment

This is great progress 👍

But it will need some cleanup to minimize the amount of work done in the access phase and move as much as possible to the initialization.

Also, we don't want to depend on Redis; we should default to shared memory.

And we need to verify how this would show in the UI. Suggestions welcome @ddcesare.

local failed_to_instantiate = false
try(
function()
lim, limerr = limit.new(shdict_key, unpack(limitter.values))
Contributor

I think this belongs in new. There is no need to initialize the limiter in each access phase, right?
They can be initialized when initializing the policy.

Contributor Author

@y-tabata y-tabata Mar 28, 2018

@mikz If we use shared memory and I move this to new, a "shared dict not found" error occurs.
Is my understanding correct?

local class_not_found = false
try(
function()
limit = require (limitter.limitter)
Contributor

I think this should be using some whitelist in the form of a table, like:

local traffic_limiters = {
  fixed_window = require('resty.limit.count'),
  leaky_bucket = require('resty.limit.req'),
}

function init_limiter(name, ...)
  local limiter = traffic_limiters[name]
  if not limiter then return nil, 'unknown limiter' end

  return limiter.new(...)
end

And require has to be done just once during initialization; it can't really happen during the access phase.

Contributor

And possibly even write functions that take the configuration and pass it properly (#648 (comment)) to the initializers, like:

local resty_limit_count = require('resty.limit.count')
local resty_limit_req = require('resty.limit.req')
local resty_limit_conn = require('resty.limit.conn')
local traffic_limiters = {
  fixed_window = function(config)
    return resty_limit_count.new(shdict, config.count, config.window)
  end,
  leaky_bucket = function(config)
    return resty_limit_req.new(shdict, config.rate, config.burst)
  end,
  connections = function(config)
    return resty_limit_conn.new(shdict, config.conn, config.burst, config.delay)
  end,
}
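
For illustration, the dispatch could then be keyed by the limiter's name field from the policy configuration; init_limiter here is only a sketch of such a helper, not the PR's exact code:

  local function init_limiter(config)
    local build = traffic_limiters[config.name]
    if not build then
      return nil, 'unknown limiter: ' .. tostring(config.name)
    end
    -- the resty constructors return the limiter instance, or nil plus an error
    return build(config)
  end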

limitters[#limitters + 1] = lim
keys[#keys + 1] = limitter.key

if limitter.limitter == "resty.limit.conn" then
Contributor

I think this could be abstracted by checking whether the limiter has an is_committed key.
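
A rough sketch of that abstraction, mirroring the check the policy ends up using later in this PR:

  for i, lim in ipairs(limiters) do
    -- only resty.limit.conn instances expose is_committed()
    if lim.is_committed and lim:is_committed() then
      table.insert(connections_committed, lim)
      table.insert(keys_committed, keys[i])
    end
  end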


for i, lim in ipairs(limitters) do
local rediserr
lim.dict, rediserr = redis_shdict(redis_info.host, redis_info.port, redis_info.db)
Contributor

This shouldn't be needed, as the limiters are already initialized and have the correct Redis object in the .dict property, right?

local limitters_limit_conn = {}
local keys_limit_conn = {}

local limitters_limit_conn_committed = {}
Contributor

We should try to minimize the number of tables allocated per request; this causes GC pressure.
My preferred way would be to not do this per request at all and just initialize the limiters once, when initializing the policy.

}
}
},
"redis_info": {
Contributor

I think it might be better to use just one property: redis_url.
That can embed authentication, database, port, ...
Wdyt @davidor ?

Contributor

Yes, I think that a single property makes things simpler. I see this has already been changed.


if not self.redis_info then
ngx.log(ngx.ERR, "No Redis information.")
return ngx.exit(500)
Contributor

I think we want this policy to work without redis. It should just work with shared memory. Redis should be opt-in.
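
A minimal sketch of the opt-in wiring, reusing the redis_shdict wrapper from this PR (illustrative only):

  -- default: each limiter keeps the shared dict it was created with
  -- opt-in: when a Redis URL is configured, swap the backing store
  if self.redis_url then
    for _, lim in ipairs(limiters) do
      local dict, err = redis_shdict(self.redis_url)
      if not dict then
        ngx.log(ngx.ERR, 'failed to connect to Redis: ', err)
        return ngx.exit(500)
      end
      lim.dict = dict
    end
  end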

"description": "Values to be applied",
"type": "array",
"items": {
"type": "integer"
Contributor

This is pretty non-descriptive and won't produce a very useful UI.

IMO we could use anyOf, with a description for each limiter and for each field (burst, rate, delay, ...).

@y-tabata
Contributor Author

@mikz Thank you for reviewing. I pushed the fixed code.

Regarding initializing the limiter: when using shared memory, if I move the initialization to new, a "shared dict not found" error occurs.
Is my understanding correct?

Contributor

@mikz mikz left a comment

I meant to move the limiter initialization to _M.new so they are created just once per policy.

The only issue there is the connection limiter, which is mutated and changes self.committed, so it can't be shared between requests. It might be better to just leave that one out for now and keep just the req/count rate limiters.

end

for i, lim in ipairs(limiters) do
if lim.iscommitted and lim:is_committed() then
Contributor

This looks wrong.

if not delay then
if comerr == "rejected" then
ngx.log(ngx.ERR, "Requests over the limit.")
return ngx.exit(429)
Contributor

This status will have to be configurable.

return ngx.exit(429)
end
ngx.log(ngx.ERR, "failed to limit traffic: ", comerr)
return ngx.exit(500)
Contributor

I think this might be another configuration option. I'm not sure that everyone wants to respond with an error when there is some issue with rate limiting. Some people might want to let the request go through.
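
A sketch of what that option could look like; the error_handling name and the log/exit values follow the idea discussed further down in this review and are only illustrative here:

  if not delay then
    if comerr == 'rejected' then
      return ngx.exit(self.status_code_rejected or 429)
    end
    -- a failure of the rate-limiting backend, not an over-limit request
    if self.error_handling == 'log' then
      ngx.log(ngx.WARN, 'failed to limit traffic: ', comerr)
      return  -- let the request through
    end
    return ngx.exit(500)
  end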

local keys = ctx.keys

for i, lim in ipairs(limiters) do
local latency = tonumber(time)
Contributor

This can be converted just once, before the loop.
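
For instance, a small sketch, assuming limiters and keys here are the committed connection limiters saved in ctx during the access phase (leaving() is resty.limit.conn's log-phase bookkeeping):

  local latency = tonumber(time)  -- convert once, before the loop
  for i, lim in ipairs(limiters) do
    lim:leaving(keys[i], latency)
  end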

local limiters = {}
local keys = {}

if not self.redis_url then
Contributor

I don't get this. You could use several limiters with one shared dictionary, right?

local delay, comerr = limit_traffic.combine(limiters, keys, states)
if not delay then
if comerr == "rejected" then
ngx.log(ngx.ERR, "Requests over the limit.")
Contributor

I don't think this is necessarily an "error".

@@ -0,0 +1,31 @@
local policy_chain = require('apicast.policy_chain').default()

local rate_limit_policy = require('apicast.policy.rate_limiting_to_service').new({
Contributor

I think this policy can have way simpler name than rate_limiting_to_service. Just rate_limit should be enough.

for i, lim in ipairs(limiters) do
if redis_url then
local rediserr
lim.dict, rediserr = redis_shdict(redis_url)
Contributor Author

This is needed because the connection has already been closed.

Contributor

This initializes a Redis client instance on every request. It might be better to have a connection pool and fetch and return connections as needed, but maybe we can leave this for a future PR.
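
For reference, a rough sketch of the pooling idea with lua-resty-redis cosocket keepalive (timeouts and pool size are placeholders):

  local resty_redis = require('resty.redis')

  local red = resty_redis:new()
  red:set_timeout(1000)                     -- 1s connect/send/read timeout
  local ok, err = red:connect(host, port)   -- host/port parsed from the configured redis_url
  if not ok then
    return nil, err
  end

  -- ... run the limiter's Redis commands here ...

  -- instead of closing, return the connection to the cosocket pool
  local ok2, err2 = red:set_keepalive(10000, 100)  -- 10s max idle time, pool size 100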

Contributor Author

@davidor I'd like to leave this for a future PR because it's a little bit difficult.


else
for _, limiter in ipairs(self.limiters) do
local lim, initerr = init_limiter(limiter)
Contributor Author

> I meant to move the limiter initialization to _M.new so they are created just once per policy.

@mikz If I move the limiter initialization to _M.new, a "shared dict not found" error occurs. Maybe you already know about this behavior, and that is why the limiter initialization is in _M.access here, right?


if not self.redis_url then
-- Only one (the first) limiter is enable.
-- Key will be shdict_key ('limiter').
Contributor Author

> I don't get this. You could use several limiters with one shared dictionary, right?

@mikz No. Although it is a tentative implementation, we can use only one limiter with one shared dictionary. When a user defines 3 limiters ("limiter1", "limiter2", "limiter3"), only "limiter1" is enabled.

Contributor

Sorry, I don't understand this either.
Why is it a problem to allow several limiters?

Contributor Author

@davidor If we allow several limiters with the shdict, we have to define the size of each shdict using lua_shared_dict. If a user uses 3 limiters, we have to define 3 lua_shared_dict entries, and if a user uses 10 limiters, we have to define 10. I don't want to limit the number of limiters, but there is no good way to automatically define as many lua_shared_dict entries as the user needs. So I defined only one lua_shared_dict tentatively.

"$schema": "http://apicast.io/policy-v1/schema#manifest#",
"name": "rate limit policy",
"description": ["This policy adds rate limit.",
""
Contributor

Why is the "" needed ?

}
},
{
name = "apicast.policy.rate_limit",
Contributor

This policy is duplicated.

Contributor Author

@davidor I define this policy twice here because that way a single "GET /" opens 2 connections.

}
},
{
name = "apicast.policy.rate_limit",
Contributor

This policy is duplicated.

Contributor

Please review this test file. I think that policies are duplicated in several tests.

Contributor Author

@davidor I define this policy twice here because that way a single "GET /" opens 2 connections.

return nil, e
end
)
if not lim then
Contributor

@davidor davidor Apr 3, 2018

Can't we return lim, limerr and delete this if?

local rate_limit_policy = RateLimitPolicy.new(config)
rate_limit_policy:access()
rate_limit_policy:access()
assert.spy(ngx_sleep_spy).was_called_more_than(0.001)
Contributor

This is not correct. was_called_more_than refers to the number of times the spy was called. However, what you need is to check that it was called with a value that is > 0.001.

Same in the test below this one.

if not lim then
ngx.log(ngx.ERR, "unknown limiter: ", initerr)
error(self.logging_only, 500)
goto done
Contributor

You could call return and avoid the goto, but this is a matter of preference.

--- pipelined_requests eval
["GET /flush_redis","GET /"]
--- error_code eval
[200, 429]
Contributor

This doesn't look right. The configuration has count = 1 so I believe the first GET / would return 200 and the second one 429.

Contributor Author

@davidor I define the policy twice here, so a single "GET /" counts as 2, and the first "GET /" returns 429.

--- pipelined_requests eval
["GET /flush_redis","GET /"]
--- error_code eval
[200, 429]
Contributor

I'm not sure about this one. conn is set to 1 in the config. This means that it will only allow 1 concurrent connection.
So if I understood this correctly, the first GET / should be a 200 for sure. Even if we added more, they could be 200s as well as long as there isn't more than 1 concurrent connection.

Contributor Author

@davidor I define the policy twice here, so a single "GET /" opens 2 connections, and the first "GET /" is a 429.

--- pipelined_requests eval
["GET /flush_redis","GET /"]
--- error_code eval
[200, 503]
Contributor

I don't think this is correct. rate is set to 1 in the config, so the first GET / should return 200 for sure. The second one would return 503 only if it was done in the same second, which is very likely in this test but might be a bit difficult to ensure that it's done that way.

Contributor Author

@davidor I define the policy twice here, so a single "GET /" opens 2 connections, and the first "GET /" is a 503.

Contributor

I see now why you duplicated the policy in the config in several of these tests.

I find this way of testing the policy a bit counter-intuitive. To use this policy, users should include it just once and add all the limiters they need there. The problem I see with these tests is that the policy appears duplicated for testing convenience, but the configuration does not reflect a realistic scenario, because users will include the policy just once.

I think we should try to include the policy just once and change the tests so they work with that configuration. For example, in this particular test, we can make 3 requests: the first one to /flush_redis (to make sure we start from a clean state), the second one to / (which should return 200), and the third one to / (which should return 503 because we defined a limit of 1 in the configuration).

What do you think?

Contributor Author

@davidor Your method works well for "leaky_bucket" and "fixed_window", but I think it doesn't work well for "connections". With "connections", the second request returns 200 and then the number of connections is reset to zero, so the third request also returns 200.
However, I strongly agree that the configuration does not reflect a realistic scenario in these tests, so I will change the tests for "leaky_bucket" and "fixed_window".

Contributor

This policy has to be reusable by multiple services at the same time. It will be loaded and active more than once at the same time.

We can't be defining shdicts per policy, so they all have to reuse one shdict. If some parts do not support that, we can't support them. I think they should be supported, because we control the rate limiting keys and can, for example, prefix them with the rate limiter name so they are unique per rate limiter.

This duplicates the discussion in #648 (comment).

Contributor Author

@mikz Let me confirm your comment. Do you mean I should change the integration test as follows?

before:
policy_chain = {{name = "apicast.policy.rate_limit", ...}, {name = "apicast.policy.rate_limit", ...}}

after:
policy_chain = {{name = "apicast.policy.rate_limit",
  configuration = {limiters = {{name = "connections", ...}, {name = "connections", ...}}}}}

Contributor

I'm not really saying that.

I'm saying there can be several instances of connection/leaky bucket/fixed window rate limiting. APIcast can run several services, and in the case of our Cloud Hosted deployment it runs several thousand services and configurations. Each of them could use one or more rate limiting policies, and they still have to use just one or two shared dictionaries.

No matter how many rate limiting policies you have, they have to be able to run from, ideally, one shared dictionary.

Because each policy can have a unique rate limiting "key" ( https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/req.md#incoming , passed to .combine as a table of keys https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/traffic.md#combine ), they could all operate on the same shared dictionary, just using different keys.

Now, defining those keys should be left to the user so they can implement, for example, cross-service rate limiting. But if the same key being used across the fixed window / leaky bucket / connection limiters causes trouble, we can prefix those keys with the limiter name. The user could still do cross-service rate limiting, but only with the same limiter (a user setting key "foo" would use it as "fixed_window-foo", "leaky_bucket-foo", or "connections-foo"). My understanding is those keys can't be shared between limiters anyway.
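
A tiny sketch of that prefixing idea (the key format is only illustrative):

  -- one shared dict for everything; user keys namespaced per limiter type
  local function prefixed_key(limiter_name, user_key)
    return limiter_name .. '-' .. user_key   -- e.g. "fixed_window-foo"
  end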

Contributor Author

@mikz Right now I define only one shdict called 'limiter' in nginx.conf.liquid. But I have to let the user use multiple limiters with one shdict, right? The way I came up with is using FFI cdata as a value (for example here); the FFI cdata holds the actual key and the limiter's value. Does that match what you have in mind?

Contributor Author

@mikz I misunderstood how shdict works. Could you please review the latest commit?

local connections_committed = {}
local keys_committed = {}

local delay, comerr = limit_traffic.combine(limiters, keys, states)
Contributor

When using Redis and several limiters, the call to combine() will run several Redis commands.

For efficiency, we might want to find a way to send all those commands in a single network trip using pipelines: https://github.com/openresty/lua-resty-redis#init_pipeline

This is an optimization we can leave for a future PR.
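
For reference, a rough sketch of the pipelining idea with lua-resty-redis (the actual commands depend on what the policy's Redis wrapper issues; incrby/expire here are just examples):

  red:init_pipeline()
  red:incrby('fixed_window-key1', 1)
  red:expire('fixed_window-key1', 60)
  red:incrby('leaky_bucket-key2', 1)
  local results, err = red:commit_pipeline()  -- all commands go out in one network round trip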

Contributor Author

@davidor I'd like to leave this for a future PR because it's a little bit difficult.

Contributor

davidor commented Apr 5, 2018

Great progress @y-tabata 👍

I think that the only remaining issue is that some integration tests are failing. Take a look at the s2i-builder task; tests 6, 7, 8, and 9 are failing.

The reason is that in location /flush_redis, TEST_NGINX_REDIS_HOST is not defined and because of that redis:connect() fails.

Contributor

davidor commented Apr 5, 2018

@y-tabata , the tests that I pointed out in my previous comment are failing for two reasons:

  1. The TEST_NGINX_REDIS_HOST and TEST_NGINX_REDIS_PORT envs are nil in location /flush_redis. You need to fetch them like you did in the policy config. So for example, the flush_redis location for test 6 should look like this:
  location /flush_redis {
    content_by_lua_block {
      local redis_host = "$TEST_NGINX_REDIS_HOST" or '127.0.0.1'
      local redis_port = "$TEST_NGINX_REDIS_PORT" or 6379
      local redis = require('resty.redis'):new()
      redis:connect(redis_host, redis_port)
      redis:select(1)
      redis:del("test6_1", "test6_2", "test6_3")
    }
  }
  2. TEST_NGINX_REDIS_HOST is set to "redis" in the test environment. We need to set a resolver to get the IP, otherwise the call to redis:connect will fail. We can do this the same way we do in other tests.
    You just need to define
$ENV{TEST_NGINX_RESOLVER} ||= `grep nameserver /etc/resolv.conf | awk '{print \$2}' | head -1 | tr '\n' ' '`;

at the beginning of the file, just below the other 2 envs you already have.
And finally, we need to tell nginx to use the resolver with the resolver directive. So the complete config for test 6 should be:

--- config
  include $TEST_NGINX_APICAST_CONFIG;
  resolver $TEST_NGINX_RESOLVER;

  location /flush_redis {
    content_by_lua_block {
      local redis_host = "$TEST_NGINX_REDIS_HOST" or '127.0.0.1'
      local redis_port = "$TEST_NGINX_REDIS_PORT" or 6379
      local redis = require('resty.redis'):new()
      redis:connect(redis_host, redis_port)
      redis:select(1)
      redis:del("test6_1", "test6_2", "test6_3")
    }
  }

I think this should make all the tests pass. Let me know if you need any help. We are close to being able to merge this! :)

Contributor Author

y-tabata commented Apr 5, 2018

@davidor Thank you for your advice! I will modify the tests.

Contributor Author

y-tabata commented Apr 6, 2018

The reason JSON schema validation failed is that the "connections" limiter has 5 items (name, key, conn, burst, delay), while the other limiters have 4 items. I confirmed that the validation succeeded when I deleted one of the items ("delay") in the apicast-policy JSON. Do you have any ideas?

Contributor Author

y-tabata commented Apr 6, 2018

The alternatives I considered are the following:

"limiters": {
  "description": "List of limiters to be applied",
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "name": {},
      "key": {},
      "conn": {},
      "burst": {},
      "delay": {},
      "rate": {},
      "count": {},
      "window": {}
    }
  }
}

or

"connections": {
  "type": "array",
  "items": {}
},
"leaky_bucket": {
  "type": "array",
  "items": {}
},
"fixed_window": {
  "type": "array",
  "items": {}
}

Contributor

davidor commented Apr 6, 2018

@y-tabata in tests 5 and 6 you specified delay = 0.5. The problem is that the schema says that delay should be an integer. Changing that to number fixes the issue:

 "delay": {
   "type": "number",
    "description": "the default processing latency of a typical connection (or request)"
}

Contributor Author

y-tabata commented Apr 6, 2018

> @y-tabata in tests 5 and 6 you specified delay = 0.5. The problem is that the schema says that delay should be an integer. Changing that to number fixes the issue.

@davidor Thanks! I changed the schema.

Contributor

@davidor davidor left a comment

Excellent job @y-tabata 🏅 !
Thanks for your contribution 👍

Contributor

@mikz mikz left a comment

I think we can merge it and then work on the comments I left. No need to fix those now.

That is, if you are OK with changing the configuration format a bit in the next few weeks, @y-tabata.

"name": {
"type": "string",
"enum": ["connections"],
"description": "limiting request concurrency (or concurrent connections)"
Contributor

It would be good to start with a capital letter. This applies to the other descriptions too.

"description": "limiting request concurrency (or concurrent connections)"
},
"key": {
"description": "Key of limiter",
Contributor

This is not a very useful description.

"enum": ["fixed_window"],
"description": "limiting request counts"
},
"key": {
Contributor

When we allow this to be a "template" in the future we will somehow have to distinguish between plain strings and templated strings.

We probably should change this to be more extensible in the future before the final release.

Also, right now those keys have global scope. That does not really work in our Cloud Hosted environment. We should probably offer a "scope" for the key that could be either "global" or "service", with the key automatically namespaced.
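
A small sketch of that namespacing idea; the scope values come from the comment above, while the key format is hypothetical:

  local function scoped_key(scope, service_id, user_key)
    if scope == 'service' then
      return 'service_' .. service_id .. '_' .. user_key  -- namespaced per service
    end
    return user_key                                        -- 'global': shared across services
  end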

"type": "integer",
"description": "the status code when requests over the limit, default 429"
},
"logging_only": {
Contributor

I think this should not be a boolean but rather an enum, like error_handling with options log and exit.


local function init_limiter(config)
local lim, limerr
try(
Contributor

This is not really idiomatic Lua, and we really should not be expecting exceptions.

All resty code returns an error as a second return value, so we can just use that.
Beyond that, our code should not crash, because that would trash the performance of the whole gateway.

end

return {
incr = function(_, key, value, init)
Contributor

These functions can be defined just once in the top closure and then get the redis reference from self.

if not delay then
if comerr == "rejected" then
ngx.log(ngx.WARN, "Requests over the limit.")
error(self.logging_only, self.status_code_rejected)
Contributor

Not sure if we want to let the requests through when "logging only". That would effectively disable the policy.

Contributor Author

@mikz Do you mean a disable option should be added to error_handling?

Contributor

Not really. I mean that "logging only" does not terminate the request, so it does not limit requests that are over the limit. This policy does not rate limit requests when set to "logging only".

I think there is a difference between throwing errors when initializing a misconfigured policy or when Redis is unavailable, and the case where the call is actually rejected: then the policy should reject the request, or have a different config option for it.


for i, lim in ipairs(limiters) do
if lim.is_committed and lim:is_committed() then
connections_committed[#connections_committed + 1] = lim
Contributor

This gets the number of elements in the table on each iteration; better to use table.insert.

ctx.keys = keys_committed
end

if delay >= 0.001 then
Contributor

if delay > 0 would be more precise.

local rate_limit_policy = RateLimitPolicy.new(config)
rate_limit_policy:access()
end)
it('invalid redis url', function()
Contributor

Would be nice to have some whitespace between the test blocks.

Contributor Author

y-tabata commented Apr 6, 2018

@mikz @davidor Thank you for reviewing! So after merging, I will work on the comments.

@mikz mikz merged commit 6cc7cf6 into 3scale:master Apr 9, 2018