
APIcast standalone configuration #795

Closed
mikz opened this issue Jun 28, 2018 · 5 comments

mikz (Contributor) commented Jun 28, 2018

Right now the APIcast configuration is tightly coupled to the 3scale data model with Service and Proxy objects.

To be useful as a standalone gateway, APIcast should work as a general-purpose gateway without those concepts. As mentioned in #173 (comment), we might need to re-envision what the APIcast configuration/data model should look like and then map 3scale concepts onto it. With features like TLS termination or TCP proxying we need to introduce the concepts of a Port/Server and get rid of routing only by hostname. The internal context object passed to policies will also change, as there might not be a "service", or it might have a different structure.

Proposals should be posted as comments.
Please note that this is not about the serialization format or how the configuration is delivered (JSON, TOML, gRPC, file), but about its data model and core concepts.

Requirements

  • Start listening on more ports, for example: HTTP + HTTPS, HTTP + proxy-protocol, TCP, UDP
  • Ability to route incoming traffic by its metadata:
    • HTTP request by HTTP request data (path, headers, host)
    • TCP/UDP request by port/remote address/other data available from the request.
    • TLS request by TLS metadata (like SNI).
  • Ability to redirect a request to another service/policy chain (a hypothetical data-model sketch follows this list)
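
One hypothetical shape for such a data model, written as a Lua table. Every field name here is an assumption for illustration, not a settled spec:

```lua
-- Hypothetical data-model sketch: servers own ports/protocols, routes match
-- request metadata, and services carry upstreams and policy chains.
return {
  servers = {
    { name = 'public',     protocol = 'http',  port = 8080 },
    { name = 'public_tls', protocol = 'https', port = 8443 },
    { name = 'dns',        protocol = 'udp',   port = 53 },
  },
  routes = {
    -- match on any request metadata: host, path, headers, SNI, port, remote address
    { server = 'public', match = { host = 'api.example.com', path = '/flights' },
      destination = 'flights' },
  },
  services = {
    { name = 'flights', upstream = 'http://flights.internal',
      policy_chain = { { policy = 'cors' } } },
  },
}
```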

Scenarios

Scenario 1

The current APIcast configuration: Echo API, Fake Backend, APIcast, Management API, Metrics API.

Scenario 2

Path-based routing (#173 (comment)) and API composition.
The internal services Flights and Tickets are exposed as /flights and /tickets on the public API api.example.com, but they are also reachable internally on the hostnames flights.internal and tickets.internal.
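
A sketch of how that composition might be routed, reusing the hypothetical model above:

```lua
-- Hypothetical routes for scenario 2: path-based composition on the public
-- host, plus the same services reachable on their internal hostnames.
routes = {
  { match = { host = 'api.example.com', path = '/flights' }, destination = 'flights' },
  { match = { host = 'api.example.com', path = '/tickets' }, destination = 'tickets' },
  { match = { host = 'flights.internal' },                   destination = 'flights' },
  { match = { host = 'tickets.internal' },                   destination = 'tickets' },
}
```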

Scenario 3

HTTP request header based routing. Requests carrying the cookie env=staging should be routed to the staging service, and everything else to production.
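
In the same hypothetical model, this could be a header match with a catch-all fallback (the exact matching semantics are an open question):

```lua
-- Hypothetical routes for scenario 3: more specific matches are tried first.
routes = {
  { match = { host = 'api.example.com', headers = { Cookie = 'env=staging' } },
    destination = 'staging' },
  { match = { host = 'api.example.com' }, destination = 'production' },
}
```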

Scenario 4

Sharing a policy chain between staging and production. A public API is exposed as api.example.com and points to the upstream api.prod.internal. It has a policy chain defined that does CORS and URL rewriting. Its staging deployment is exposed as api.staging.example.com, pointing to the upstream api.stag.internal. If the production configuration is changed, it should be reflected in the staging configuration too.

Nice to have: explore both options, deploying staging and production together in one config, and defining production and staging in different configs.
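
One way to express the shared chain; named chains are an assumption of this sketch:

```lua
-- Hypothetical sketch for scenario 4: both deployments reference the same
-- named policy chain, so changing it affects production and staging alike.
policy_chains = {
  common = {
    { policy = 'cors' },
    { policy = 'url_rewriting' },
  },
}

services = {
  { host = 'api.example.com',         upstream = 'http://api.prod.internal', policy_chain = 'common' },
  { host = 'api.staging.example.com', upstream = 'http://api.stag.internal', policy_chain = 'common' },
}
```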

Scenario 5

API available on multiple hostnames. One API is available both as flights.api.example.com and flights.internal, pointing to the same upstream (flights.internal). It should use the same policy chain.
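
In the hypothetical model this is just one service with several hosts:

```lua
-- Hypothetical sketch for scenario 5: one service, multiple hostnames,
-- a single policy chain by construction rather than by duplication.
services = {
  { hosts = { 'flights.api.example.com', 'flights.internal' },
    upstream = 'http://flights.internal',
    policy_chain = { { policy = 'cors' } } },
}
```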

Scenario 6

Two independent public APIs each point to a unique upstream. CORS needs to be configured the same way on both of them.

Nice to have: each of those APIs also has its own unique URL rewriting.
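
This differs from scenario 4 in that only a single policy definition is shared, while the rest of each chain stays per-service (again a sketch, with assumed field names):

```lua
-- Hypothetical sketch for scenario 6: one shared CORS definition referenced
-- from two chains, each chain adding its own URL-rewriting policy.
policies = {
  shared_cors = { policy = 'cors' },
}

services = {
  { host = 'one.example.com', upstream = 'http://one.internal',
    policy_chain = { 'shared_cors', { policy = 'url_rewriting' } } },
  { host = 'two.example.com', upstream = 'http://two.internal',
    policy_chain = { 'shared_cors', { policy = 'url_rewriting' } } },
}
```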

Scenario 7

An API is running on a wildcard domain *.example.com using the upstream api.internal. There is also another API running on *-admin.example.com pointing to the admin.internal upstream. Plus there is an exception: michal-admin.example.com points to admin.staging.internal.
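
This implies some precedence rule, for example exact host before wildcard, and longer wildcard before shorter (sketch only):

```lua
-- Hypothetical routes for scenario 7, ordered from most to least specific.
routes = {
  { match = { host = 'michal-admin.example.com' }, upstream = 'http://admin.staging.internal' },
  { match = { host = '*-admin.example.com' },      upstream = 'http://admin.internal' },
  { match = { host = '*.example.com' },            upstream = 'http://api.internal' },
}
```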

Scenario 8

APIcast should start listening on 53/tcp, 53/udp, and 853/tcp+tls and use the "dns" policy.
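
This is where the Port/Server concept from the requirements shows up (sketch):

```lua
-- Hypothetical servers for scenario 8: non-HTTP listeners bound to a policy chain.
servers = {
  { protocol = 'tcp', port = 53,              policy_chain = { { policy = 'dns' } } },
  { protocol = 'udp', port = 53,              policy_chain = { { policy = 'dns' } } },
  { protocol = 'tcp', port = 853, tls = true, policy_chain = { { policy = 'dns' } } },
}
```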

Scenario 9

Two public APIs authenticate clients by mutual TLS using different CA certificates.
Each API needs to provide its own way of verifying the client certificate and serve its own certificate to the client. Routing should be done using SNI, as they don't have unique IP addresses.
Nice to have: one API uses TLS 1.2 only and the other TLS 1.3 only.
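
A sketch of per-service TLS selected by SNI (all field names assumed):

```lua
-- Hypothetical sketch for scenario 9: SNI picks the service, and each service
-- carries its own certificate, client CA, and allowed TLS versions.
services = {
  { match = { sni = 'one.example.com' },
    tls = { certificate = 'one.pem', certificate_key = 'one.key',
            client_ca = 'ca-one.pem', protocols = { 'TLSv1.2' } },
    upstream = 'http://one.internal' },
  { match = { sni = 'two.example.com' },
    tls = { certificate = 'two.pem', certificate_key = 'two.key',
            client_ca = 'ca-two.pem', protocols = { 'TLSv1.3' } },
    upstream = 'http://two.internal' },
}
```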

Scenario 10

APIcast can be started in different environments (staging, production, development, testing) by supplying an environment Lua file that can define things like policy chains, the ports APIcast listens on, and caching TTLs.
This technique can be used to parameterize the main configuration file for each environment without changing the main configuration itself (for development, for example).
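
A minimal sketch of what such an environment file might contain. The keys are illustrative; being Lua, the file can compute values instead of hard-coding them:

```lua
-- Hypothetical environment file for scenario 10: returns a table of overrides.
return {
  ports = { public = 8080, management = 8090 },
  cache_ttl = tonumber(os.getenv('APICAST_CACHE_TTL')) or 60,
  policy_chain = { { policy = 'echo' } },
}
```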

Scenario 11

Minimal viable configuration that proxies all requests to an upstream.
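
Something along these lines, as small as the model allows (sketch):

```lua
-- Hypothetical minimal configuration for scenario 11: a single catch-all service.
return {
  services = {
    { upstream = 'http://upstream.internal' },  -- no match clause, so everything proxies here
  },
}
```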

Scenario 12

In a microservices architecture we need APIcast to have the ability to auto-discover new APIs (upstreams) deployed within the same namespace. This could be done in a number of ways:

  • The API services are pushed into the local registry, and then either:
    • APIcast polls that registry to discover new API services, or
    • the new API services are pushed to APIcast, or a notification triggers a call from APIcast to pull them.
  • The last option would be to use APIs.json to expose each of the new APIs. This is less preferable, as we already have the registry in Kubernetes here.

APIcast should then generate a proxy for the "new" API service. The entrypoint for each namespace is always the gateway. Policies should be configured in exactly the same manner as in any other architecture. Scenario 5 would be a very typical use case in a microservices architecture where there is communication between microservices; should those requests be rate limited? We need to think about how to define rate limits based on internal versus external requests. Scenario 6 would apply in this context for both external and internal requests: each namespace could be exposed on different subdomains, and therefore traffic between namespaces would invoke CORS. Scenario 10 in a microservices architecture would work slightly differently: the configuration should be defined by the environment, how APIcast gets that configuration should be the same in every environment, and the configuration available in each environment should be scoped by environment.
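
A sketch of what the polling option might look like inside APIcast. The registry URL, payload shape, and configure_service helper are all assumptions; lua-resty-http and ngx.timer.every are standard OpenResty tools:

```lua
-- Hypothetical registry polling for scenario 12 (run from init_worker).
local http = require('resty.http')
local cjson = require('cjson.safe')

local function poll_registry()
  local client = http.new()
  -- registry.local and the JSON shape below are assumed for illustration
  local res, err = client:request_uri('http://registry.local/services')
  if not res then
    ngx.log(ngx.ERR, 'registry poll failed: ', err)
    return
  end
  local services = cjson.decode(res.body) or {}
  for _, service in ipairs(services) do
    -- hypothetical helper that (re)generates a proxy for a discovered upstream
    configure_service(service.name, service.upstream)
  end
end

ngx.timer.every(30, poll_registry)  -- re-discover every 30 seconds
```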

jmprusi (Contributor) commented Jul 5, 2018

@mikz we should add our initial PoC here for an open discussion! :)

My take on APIcast standalone: https://gist.github.com/jmprusi/d9ba5003e0dc81246e7ba9c7e4f497df

mikz (Contributor, Author) commented Jul 10, 2018

mikz (Contributor, Author) commented Jul 11, 2018

We settled on the following example, merging both approaches: https://gist.github.com/mikz/871abfbd9d87e51df6d253dbd722356e

We don't want to go multi-server right now, so we went with a spec that defines just one server block for now but can be extended in the future with more server blocks and TCP streams.
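
For readers not following the gist link, the idea is roughly this shape (illustrative only, not the gist itself):

```lua
-- Hypothetical single-server spec for now, extensible to more servers/streams later.
server = {
  listen = { { port = 8080, protocol = 'http' } },
  routes = {
    { match = { host = 'api.example.com' },
      upstream = 'http://api.internal',
      policy_chain = { { policy = 'cors' } } },
  },
}
```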

kevprice83 (Member) commented

@jmprusi That's the microservices scenario added now. Let myself or @hguerrero know if you have any questions.

jmprusi (Contributor) commented Aug 22, 2018

@kevprice83

Can you define “In a microservices architecture we need APIcast to have the ability to auto-discover new services deployed within the same namespace.”? What does auto-discover mean, what do you want to auto-discover, and how? :)

Can you describe a developer/ops workflow for a microservice architecture?

"I deploy my service A into a namespace, that gets auto discovered by using some label ? and then this discovered services is pushed.. to? ..."

---- Update

As we discussed, scenario 12 is covered by the Ostia project; we plan on keeping APIcast "dumb" in that matter and letting an external controller take care of it.
