Upstream extension mapping cluster metadata to HTTP headers #15472
Hi @lambdai
After discussing with @PiotrSikora and @htuch, I'm planning to start adding code for this, as experimental for now since an API isn't finalized.
Just noticed the extension policy... @htuch or @alyssawilk, would you be interested in sponsoring this?
At first pass, is this something needed only temporarily while we don't have upstream filters? It seems like something that could be done fairly easily with an upstream filter, and if so I'm mildly inclined to push for the long-term solution over the short-term one. cc @mattklein123 @snowp
+1, I would rather just see upstream filters finished. @snowp can you potentially hand the work off to someone that has more cycles?
Yeah, I think upstream filters would address this, and I see why we'd want to favor the long-term solution. Looking at #10455 though, it seems like part of the reason there hasn't been a design or road map after a year is that it's a difficult and nuanced problem that requires expertise and time. This would be significantly easier and lighter weight (the implementation, except for an extensible API, is basically done in lambdai#7), and there is immediate need for it to help make use of some of the changes @lambdai has made recently. Because this would be an extension separate from the rest of Envoy code, it seems like it would be relatively easy to migrate to upstream filters if/when those are implemented?
I think a significant amount of work has already been done on upstream filters, and I think we should just finish them. If you want to solve this short term, do we really need any core changes? Could you just have a custom filter that looks at cluster metadata for the current route and adds a header?
@mattklein123 I'm happy to hand it over if someone is willing to pick it up! The main things that remain are updating the router to make use of the filter manager (FM) and making it possible to configure upstream filters via the API. #13095 is a WIP branch that makes use of the FM in the router. Happy to sync with whoever is interested in picking this up.
@mattklein123 @alyssawilk It has been a while since I added the test case in #13915 to wait for the HTTP upstream filter. My perception is that the engineering cost is somewhat high compared to the requirement. Also, the upstream extension proposed here is not a core change: it contains its own config and can be injected using an existing factory. @tbarrella wants to do more than #13915 to resolve some known use cases while we are waiting for the HTTP upstream filter.
What is the actual core change you are asking for here? What you all do in the Istio filters is up to you, obviously.
Do we have a different view of what counts as core? There is no core change requested. We propose a new extension that can fulfill some of the known feature requests regarding encoding per-host info into the upstream request. It is a strict subset of the HTTP upstream filter, so we can migrate in the future.
Right, so can you add the extension in Istio and we can close this issue?
@tbarrella I can't commit to sponsoring any new extensions at this point. There are other EPT members who might be able to; let's chat internally on how to approach this best. @mattklein123 @alyssawilk if we had a Google maintainer sponsor for the extension and an agreement to move to upstream HTTP filters when available, would that work in principle? Based on some offline chats, I think the cost of the upstream HTTP filter work is high here. Looking through the criteria at https://github.com/envoyproxy/envoy/blob/main/EXTENSION_POLICY.md#adding-new-extensions, I'm wondering what the proposal in this issue is missing beyond maintainer sponsorship. Is there some implicit principle that would steer this extension towards the Istio repository rather than the Envoy one, and that we should add there?
The general filter I apply is: will multiple people care about this extension? Every extension adds maintenance debt for the project. Given that we know the right solution here (upstream filters), why would we want to take on a temporary solution that we have to maintain? If Istio can host a private extension in the interim, that seems optimal, unless there is desire from other customers.
Also, could #13897 work as an interim solution? That one seems pretty generic and broadly useful?
@mattklein123 re: concerns about maintenance debt, would #14078 change the balance, i.e. the
My earliest attempt can be tracked to #12271. The feedback was that it adds a new header-rewrite phase, which requires broader router API redefinition. The upstream extension gives us a chance to add the dedicated phase while avoiding coupling with the HTTP route API. I thought that if the code were here, we could solve #13897 and potentially other cases before the arrival of the HTTP upstream filter.
Marginally, I guess? The point I'm getting at is that Istio already ships their own proxy out of https://github.com/istio/proxy and maintains their own extensions. Unless we have demand for this outside of Istio, why is this extension different from the other extensions, especially when the way we would like to do this properly is already known and being worked towards?
I think this is only true if we want to use endpoint metadata. The existing request header finalization can use cluster metadata without any router changes. The example at the top of this ticket only needs cluster metadata. Is that good enough for now? Do you need endpoint metadata to affect headers right now?
I looked at the example PR and it's non-trivial. I think if we were to merge it, it also needs extra documentation, better configuration (it's really a wrapper around other connection pools), etc. It just doesn't seem like it's worth it to me when a) we can already easily add upstream cluster metadata, and b) we have a path forward for upstream filters.
Thank you @mattklein123
Incorporating endpoint metadata into the upstream stream's headers is the core motivation of this extension.
I want to apologize for misassessing #10455 - I didn't realize it was almost done, and knowing that is helpful.
@htuch Thank you!
@mattklein123 To me, if it's known that finishing upstream filters will be prioritized (and that this solution is really only temporary) I'd be less concerned about having this extension within Istio code. In discussion with @alyssawilk, it sounded like the remaining work is still complicated and better done by more experienced Envoy developers. So I'm not sure what I can do to help this along myself.
From discussion with @PiotrSikora, my understanding is there's a long-term concern similar to here: since those extensions were added, it's been decided that we want to move toward using vanilla Envoy with all Istio extensions as Wasm only. So we want to avoid adding new C++ extensions to Istio proxy.
This totally makes sense to me; I think this is at the core of the discussion and I think more communication would help here. So far it seems the discussion has been limited to specific implementation details needed for Istio. I'm considering writing a doc to share about the motivation behind these changes, i.e. why it's valuable to be able to use Envoy in this way, for an audience that isn't Istio developers. Do you think this would be helpful?
Thanks @tbarrella, that would be great. From this issue it's really hard for me to tell how general this solution is, and as I mentioned above, if we decide to move forward with this it would need end-user understandable docs, examples, etc. As I said, given that Istio still does maintain the private repo, and we know that we want this to be a temporary solution, it still feels better suited for that repo while we sort out upstream filters, but I'm happy to discuss further in a short doc. FWIW @lambdai it's not clear to me that we shouldn't pursue #12236 in the interim, as I think that would cover a lot of (all) use cases and feels like a pretty natural solution to me for many upstream filter use cases?
@mattklein123 @alyssawilk @htuch I created the doc; please feel free to comment. Do you have any thoughts after reading it?
@tbarrella thanks for doing this. I've left a few comments, mostly I think we need a diagram or two! :)
+1 thanks @tbarrella, the doc is a great start. I added a few more comments to further our discussion.
Thank you everyone for commenting on the doc. At this point, I'm not sure if we've made much progress:
I left a comment in the doc but I will leave it here also: My suggestion is the following:
How does this sound? It seems like a reasonable compromise to me. |
Definitely something we can do, but I think we're multiple quarters out from locking down launch-blocking feature work, and it's only a nice-to-have for us. Has this been down-prioritized by Lyft and @snowp such that it's unlikely to get done this year if Google doesn't pick it up?
Thank you, that sounds reasonable to me too. One note is that to make TCP requests work, it looks like upstream filters would need to be configurable for TCP proxy requests over H2 CONNECT as well... |
SG, I think that'd have some pretty substantial value add, as it'd allow folks to have HTTP filters for CONNECT requests, which would be a big win.
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted" or "no stalebot". Thank you for your contributions.
Title: Upstream extension mapping cluster metadata to HTTP headers
Description:
Something more general than #13897 but lighter than #10455 would be an upstream_config extension that allows injecting HTTP headers based on cluster metadata.
For an example usage other than #13897, consider multiple clusters for the same address but different ports, where the upstream Envoy is listening on one port and an application is listening on multiple other ports. The endpoint addresses for each cluster could use the Envoy port, whereas the metadata for each cluster could contain the corresponding application port. This extension could be configured to use that metadata to add an HTTP header with the application port. Route config in the upstream Envoy would then use the header value to direct the request to the right port.
Cluster config could look like
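A hypothetical sketch of what that could look like, assuming an illustrative extension name (`envoy.upstreams.http.header_from_metadata`), metadata namespace, and field names that are not part of any finalized API:

```yaml
clusters:
- name: app_port_8080
  connect_timeout: 1s
  type: STATIC
  load_assignment:
    cluster_name: app_port_8080
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            # Address and port of the upstream Envoy, not the application.
            socket_address: { address: 10.0.0.1, port_value: 15006 }
  # Cluster metadata carrying the application port behind the upstream Envoy.
  metadata:
    filter_metadata:
      header_from_metadata:                             # illustrative namespace
        app_port: "8080"
  # Hypothetical upstream_config extension mapping metadata to a header.
  upstream_config:
    name: envoy.upstreams.http.header_from_metadata     # illustrative name
    typed_config:
      "@type": type.googleapis.com/udpa.type.v1.TypedStruct
      value:
        request_headers_to_add:
        - header_name: x-app-port
          metadata_namespace: header_from_metadata
          metadata_key: app_port
```

Route config in the upstream Envoy could then match on `x-app-port` to forward the request to the application listening on port 8080.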
Using a port here is one example, but this could be used for any situation where the upstream server wants information that depends on the cluster used to send the request. A prototype PR for this is lambdai#7. The API could potentially be extended to allow using metadata from the endpoint as well.
Would others find this to be useful? Would it make sense for this repo to contain an extension like this? (Or is there already a way to do this?)