---
title: FAQ
slug: faq
sidebar_label: FAQ
sidebar_position: 9
description: Frequently asked questions about Aperture.
image: /assets/img/aperture_logo.png
keywords:
---
### How much latency does Aperture add?

While Aperture does add some latency, it is minimal. Because Aperture Agents are colocated with services, the flow control check is a single RPC call within a single node.
### What are the benefits of using Aperture over circuit breakers and rate limiting in EnvoyProxy? {#envoy-rate-limit}
While Envoy does have some local and non-local rate-limiting capabilities, there are still benefits of using Aperture:
- The Aperture Rate Limiter allows dynamically configuring its parameters through signals from the Policy.
- Global rate limiting can be configured without any external components – the mesh of Agents provides distributed counters.
- Rate-limiting decisions can be made locally on the Agent if lazy sync is enabled.
- In addition to the Rate Limiter, Aperture also offers the Load Scheduler, which Envoy has no equivalent of.
  - The Rate Limiter always accepts or rejects a request immediately.
  - The Load Scheduler can hold a request for some time period (derived from the gRPC request timeout).
  - The Load Scheduler can also be configured in a way that effectively disables the queuing and scheduling logic. With such a configuration, it will either accept or reject the request immediately, based on workload priorities and other factors.
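To make the distinction above concrete, here is a minimal, hypothetical sketch contrasting a rate limiter that decides immediately with a scheduler that can queue requests by priority. This is an illustration only, not Aperture's actual implementation; the class and method names are invented for the example.

```python
import heapq

# Illustrative sketch only - not Aperture's actual implementation.

class RateLimiter:
    """Accepts or rejects immediately based on a simple token budget."""

    def __init__(self, tokens):
        self.tokens = tokens

    def check(self):
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False  # rejected right away, no queuing

class LoadScheduler:
    """Holds requests in a priority queue; admits them as capacity frees up."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = []  # entries: (negated priority, request id)

    def submit(self, request_id, priority):
        if self.capacity > 0:
            self.capacity -= 1
            return "accepted"
        # No free capacity: queue instead of rejecting outright.
        heapq.heappush(self.queue, (-priority, request_id))
        return "queued"

    def release(self):
        """A slot freed up: admit the highest-priority queued request."""
        if self.queue:
            _, request_id = heapq.heappop(self.queue)
            return request_id
        self.capacity += 1
        return None

limiter = RateLimiter(tokens=1)
print(limiter.check(), limiter.check())  # True False

scheduler = LoadScheduler(capacity=1)
print(scheduler.submit("a", priority=1))  # accepted
print(scheduler.submit("b", priority=5))  # queued
print(scheduler.release())                # b (highest priority admitted first)
```

In a real deployment, a queued request that is not admitted before its timeout would be rejected; the sketch leaves timeouts out for brevity.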
### If Aperture is rejecting or queuing requests, how will it impact the user experience? {#reject-impact}
Queuing requests should not affect user experience (apart from increased latency). When it comes to rejecting requests, clients (whether front-end code or some other service) should be prepared to receive a `429 Too Many Requests` or `503 Service Unavailable` response and react accordingly.
Remember that while some users receiving a 503 might seem like something to avoid, if such a case occurs, an overload is already happening and Aperture is protecting your service from entering an unhealthy state.
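One common way for a client to react to such responses is to retry with exponential backoff. The sketch below is a hypothetical illustration: `fake_request` stands in for a real HTTP call, and the function names are invented for the example.

```python
import time

# Hypothetical sketch of client-side handling of 429/503 responses.
# fake_request stands in for a real HTTP call.

RETRYABLE = {429, 503}

def fake_request(responses):
    """Pop the next simulated status code (stand-in for an HTTP call)."""
    return responses.pop(0)

def call_with_backoff(responses, max_retries=4, base_delay=0.01):
    """Retry on 429/503 with exponential backoff; give up after max_retries."""
    for attempt in range(max_retries + 1):
        status = fake_request(responses)
        if status not in RETRYABLE:
            return status
        # Back off before retrying; a real client should also honor
        # the Retry-After header if the server provides one.
        time.sleep(base_delay * (2 ** attempt))
    return status

# The service sheds load twice, then recovers.
print(call_with_backoff([429, 503, 200]))  # 200
```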
### What request metadata is available as Flow Labels?

- In proxy- or web-framework-based Control Point insertion, most request metadata is already available as Flow Labels, for example `http.request.header.foo`.
- Already existing baggage is also available as Flow Labels.
- With SDKs, it's possible to explicitly pass Flow Labels to the Check call.
- Proxy-based integrations can use a Classifier to define new Flow Labels.

See the Flow Label page for more details.
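For the SDK case, passing Flow Labels amounts to attaching key/value strings to the Check call. The sketch below uses a toy stand-in client, not the real SDK API; `ApertureClient` and `start_flow` are illustrative names only, so consult the SDK documentation for the actual interface.

```python
# Toy stand-in for an Aperture SDK client - names are illustrative only.

class Flow:
    def __init__(self, accepted):
        self.accepted = accepted

class ApertureClient:
    """Accepts every flow and records the labels it was given."""

    def __init__(self):
        self.last_labels = None

    def start_flow(self, control_point, labels):
        # Flow Labels are plain key/value strings attached to the request.
        self.last_labels = dict(labels)
        return Flow(accepted=True)

client = ApertureClient()
flow = client.start_flow(
    "checkout-service",
    labels={"user_tier": "premium", "http.request.header.foo": "bar"},
)
print(flow.accepted)  # True
```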
### Do I still need Aperture if I have auto-scaling?

As Aperture observes the system's health, it can detect early signs of overload and take the necessary actions to prevent the system from becoming unhealthy. While auto-scaling could run in parallel to add capacity, new instances usually take some time to become healthy, as they have to establish database connections, perform service discovery, and so on. Therefore, Aperture is still needed to protect the system from overload by queuing or dropping excessive load while additional capacity is being added.
### Can the Aperture Controller run outside of Kubernetes?

No, for now the Aperture Controller only runs on a Kubernetes cluster. Remember that it's also possible to use the Aperture Cloud Controller instead of deploying your own.
### Can the Aperture Agent be deployed in a non-containerized environment?

Yes, the Aperture Agent can be deployed in a non-containerized environment. The Aperture Agent is a binary that can run on the supported Linux platforms. The installation steps are available here.
### What is the performance impact of the Aperture Agent?

The Aperture Agent is designed to be lightweight and performant.
With the following setup:
- 1 node Kubernetes cluster
- 1 Aperture Agent installed as a DaemonSet
- 1 policy with a rate limiter, a load scheduler and a flux meter
- 3 services in the `demoapp` namespace instrumented using the Istio Integration
- 5000 RPS at a constant arrival rate over 30 minutes
The following results were observed:
|                | CPU (vCPU core)      | Memory (MB)         |
| -------------- | -------------------- | ------------------- |
| Aperture Agent | 0.783 mean, 1.02 max | 13.7 mean, 22.0 max |
| Istio Proxy    | 1.81 mean, 2.11 max  | 12.5 mean, 20.8 max |