APIcast standalone configuration #795
Comments
@mikz we should add our initial PoC here for an open discussion! :) My take on APIcast standalone: https://gist.github.com/jmprusi/d9ba5003e0dc81246e7ba9c7e4f497df
We settled on the following example, merging both approaches: https://gist.github.com/mikz/871abfbd9d87e51df6d253dbd722356e. We don't want to go multi-server right now, so we went with a spec that defines just one server block for now, but it can be extended in the future with more server blocks and TCP streams.
@jmprusi That's the microservices scenario, added now. Let myself or @hguerrero know if you have any questions.
Can you define "In a microservices architecture we need APIcast to have the ability to auto-discover new services deployed within the same namespace"? What does auto-discover mean — what do you want to auto-discover, and how? :) Can you describe a developer/ops workflow for a microservices architecture? "I deploy my service A into a namespace, it gets auto-discovered by using some label, and then this discovered service is pushed... to?"

Update: As we discussed, scenario 12 is covered in the Ostia project. We plan on keeping APIcast "dumb" in that matter and letting an external controller take care of it.
Right now the APIcast configuration is tightly coupled to the 3scale data model with Service and Proxy objects.
To be a useful standalone gateway, APIcast should be usable as a general-purpose gateway without those concepts. As mentioned in #173 (comment), we might need to re-envision what the APIcast configuration/data model would look like and then map 3scale concepts onto it. With features like TLS termination or TCP proxying we need to introduce the concepts of a Port/Server and get rid of routing only by hostname. Also, the internal context object passed to policies will change, as there might not be a "service" or it might have a different structure.
Proposals should be posted as comments.
Please note that this is not about the serialization format or how to deliver the configuration (JSON, TOML, gRPC, file), but about its data model and core concepts.
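To ground the discussion, here is one possible shape such a data model could take, written as YAML purely for readability (the serialization format is out of scope). All field names (`servers`, `routes`, `policy_chains`, and so on) are illustrative assumptions for discussion, not the spec from the gists above:

```yaml
# Illustrative sketch only — not a proposed final schema.
# Servers own listen ports and TLS settings; routes match on
# host/path/header and point at upstreams; policy chains are
# defined once and referenced by name.
servers:
  - listen:
      - port: 8080
        protocol: http
    hostname: api.example.com
    routes:
      - match:
          path: /flights
        upstream: http://flights.internal
        policy_chain: default

policy_chains:
  - name: default
    policies:
      - policy: cors
      - policy: url_rewriting
```

The key departure from the current model is that the top-level object is a server (with explicit ports and protocols) rather than a 3scale Service, and routing is a property of routes rather than implied by hostname alone.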
Requirements
Scenarios
Scenario 1
Current APIcast configuration. Echo API, Fake Backend, APIcast, Management API, Metrics API.
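For reference, the current 3scale-coupled configuration is roughly shaped as below (abridged sketch; consult the actual APIcast configuration schema for the full set of fields). This is the model the scenarios should decouple from:

```json
{
  "services": [
    {
      "id": 42,
      "proxy": {
        "hosts": ["api.example.com"],
        "api_backend": "https://echo-api.internal:8443",
        "policy_chain": [
          { "name": "apicast.policy.apicast" }
        ],
        "proxy_rules": [
          { "http_method": "GET", "pattern": "/", "metric_system_name": "hits", "delta": 1 }
        ]
      }
    }
  ]
}
```

Note how ports, TLS, and routing are all implicit here: everything hangs off a Service and its Proxy, and requests are matched by hostname only.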
Scenario 2
Path based routing (#173 (comment)). API composition.
Internal services Flights and Tickets are exposed as `/flights` and `/tickets` on the public API `api.example.com`. But they are also reachable internally on the hostnames `flights.internal` and `tickets.internal`.

Scenario 3
HTTP request header based routing. Requests having the cookie `env=staging` should be routed to the staging service, and everyone else to production.

Scenario 4
Sharing a policy chain between staging and production. A public API is exposed as `api.example.com` and points to upstream `api.prod.internal`. It has a policy chain defined that does CORS and URL rewriting. Its staging deployment is exposed as `api.staging.example.com`, pointing to upstream `api.stag.internal`. If the production configuration is changed, it should be reflected in the staging configuration too.

Nice to have: explore both options — deploying staging and production together in one config, and defining production and staging in different configs.
Scenario 5
API available on multiple hostnames. One API is available both as `flights.api.example.com` and `flights.internal`, pointing to the same upstream (`flights.internal`). It should use the same policy chain.

Scenario 6
2 independent public APIs, each pointing to a unique upstream. CORS needs to be configured the same way on both of them.
Nice to have: each of those APIs also has its own unique URL rewriting.
Scenario 7
An API is running on a wildcard domain `*.example.com` using upstream `api.internal`. There is also another API running on `*-admin.example.com` pointing to the `admin.internal` upstream. Plus there is an exception for `michal-admin.example.com` pointing to `admin.staging.internal`.

Scenario 8
APIcast should start listening on 53/tcp, 53/udp, and 853/tcp+tls and use "dns" policy.
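This scenario is what forces explicit listener definitions into the data model. A hypothetical sketch of how the listeners could be declared (field names illustrative):

```yaml
# Illustrative: explicit listeners with port, L4 protocol, and
# optional TLS — something hostname-only routing cannot express.
servers:
  - listen:
      - { port: 53,  protocol: tcp }
      - { port: 53,  protocol: udp }
      - { port: 853, protocol: tcp,
          tls: { certificate: /certs/dns.pem, certificate_key: /certs/dns.key } }
    policy_chain:
      - policy: dns
```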
Scenario 9
Two public APIs are authenticated by mutual TLS using different CA certificates.
Each API needs to provide its own way of verifying the client certificate and serve its own certificate to the client. Routing should be done using SNI, as they don't have unique IP addresses.
Nice to have: One API is using TLS 1.2 only and the other TLS 1.3 only.
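In a server-based model this could look like two server blocks sharing one port, selected by SNI, each carrying its own certificate, client CA, and allowed TLS versions. A hypothetical sketch (field names illustrative):

```yaml
# Illustrative: both servers share port 443; the TLS handshake's
# SNI hostname selects the server block, and each block verifies
# clients against its own CA.
servers:
  - listen: [{ port: 443, protocol: tls }]
    hostname: one.example.com
    tls:
      certificate: /certs/one.pem
      certificate_key: /certs/one.key
      client_ca: /certs/ca-one.pem   # mutual TLS: verify clients against this CA
      protocols: [TLSv1.2]
  - listen: [{ port: 443, protocol: tls }]
    hostname: two.example.com
    tls:
      certificate: /certs/two.pem
      certificate_key: /certs/two.key
      client_ca: /certs/ca-two.pem
      protocols: [TLSv1.3]
```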
Scenario 10
APIcast can be started in different environments (staging, production, development, testing) by supplying an environment Lua file that can define things like policy chains, the ports APIcast starts on, and caching TTLs.
This technique can be used to parameterize the main configuration file for each environment without needing to change the main configuration (for development for example).
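APIcast already has a notion of environment files (Lua files returning a table of settings). An environment file for this scenario might look roughly like the following; the exact keys shown are illustrative, not a fixed schema:

```lua
-- Illustrative environment file, e.g. staging.lua.
-- Returns a table of environment-specific overrides so the main
-- configuration can stay unchanged across environments.
return {
  port = 8080,                -- port APIcast listens on in this environment
  configuration_cache = 60,   -- caching TTL in seconds
  policy_chain = {
    'apicast.policy.logging',
    'apicast.policy.apicast',
  },
}
```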
Scenario 11
Minimal viable configuration that proxies all requests to an upstream.
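A minimal viable configuration should need nothing beyond a listener and an upstream. A hypothetical sketch in the same illustrative YAML as above:

```yaml
# Illustrative: no routes, no policies — every request on the
# listener is proxied to the single upstream.
servers:
  - listen:
      - port: 8080
    upstream: http://api.internal
```

Everything else (hostnames, routes, policy chains, TLS) would be optional with sensible defaults.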
Scenario 12
In a microservices architecture we need APIcast to have the ability to auto-discover new APIs (upstreams) deployed within the same namespace. This could be done in a number of ways.

APIcast should then generate a proxy for the "new" API service. The entrypoint for each namespace is always the gateway. Policies should be configured in exactly the same manner as in any other architecture.

Scenario 5 would be a very typical use case in a microservices architecture where there is communication between microservices. Should those requests be rate limited? We need to think about how to define rate limits based on internal or external requests.

Scenario 6 would apply in this context for both external and internal requests: each namespace could be exposed on different subdomains, and therefore traffic between namespaces would invoke CORS.

Scenario 10 in a microservices architecture would work slightly differently. The configuration should be defined by the environment, and how APIcast gets that configuration should be the same in every environment. The configuration available in each environment should be scoped by environment.