Add KEP for Request ID #1348
Conversation
Welcome @hase1128!
Hi @hase1128. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. Once the patch is verified, the new status will be reflected by the label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: hase1128. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing an approval comment.
/assign @enj
cc @dashpole
Hey @hase1128, it looks like we have some similar goals. #650 is very similar in that we want to propagate an ID (the trace ID in that case) to all of the controllers so they can emit traces associated with the initial request. While #650 is about tracing, I agree that logging would benefit from being able to associate with a request as well. Out of curiosity, what was your plan to send the "Request ID" to the kubelet?
@dashpole
My current plan is to match the k8s logs related to a single API request by referring to some ID information (e.g. the audit-id in the audit log, the UUID of the resource, the user-id, etc.) which is written into each k8s log automatically when an API request is processed.
@dashpole I think it is better to start with a small function and a simple implementation. For example, we may be able to match the k8s logs related to a single API request from a user with only existing k8s functionality.
So we may be able to match logs by using the UUID information (but we can only match logs which include the UUID). I would like to collaborate with you on moving forward towards our goals.
It sounds like your interest is primarily in audit logs, which don't require any sort of context propagation. I didn't work on #650 for most of this year, and picked it up again in the last few months. I don't think matching by uuid is what I'm looking for, but it sounds like it meets your needs. I will be at KubeCon as well, and will be presenting on #650 in the sig-instrumentation deep dive.
I agree with the general goal of these, but think similarly to @dashpole that this is best solved as part of the existing KEP that is already being worked on by people. |
@brancz
I'm going to attend your deep dive session. I would like to cooperate with you on #650.
/sig instrumentation
@brancz
/retest
@hase1128: The following test failed, say `/retest` to rerun it.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
#### Case 1

In case of insecure or unauthorized operation happens, it is necessary to
This seems more like it is in scope of Audit Logging: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
Please be precise about the use cases that audit logs cannot solve.
I added additional description.
### Goals

- Adding a Request-ID into each K8s component log.
- The Request-ID is unique to an operation.
What is an operation here? A kubectl request, an Operator request, etc.?
Problem: most requests in the cluster are not made by the user, but by some controller that works on the user's behalf.
Example: a user sends 1 request to create a Deployment. This leads to tens of requests done to reconcile the Deployment state. The Deployment creates a ReplicaSet, the ReplicaSet creates Pods, the Pods are scheduled, etc. Everything is a request here.
Operation means a kubectl command. We would like to associate logs from klog calls which are made by functions and APIs invoked via a kubectl command. I added more explanation.
This KEP proposes a new unique logging meta-data into all Kubernetes logs. It makes us
more easy to identify specific logs related to a single user operation (such as
`kubectl apply -f <my-pod.yaml>`). This feature is similar to
[Request-ID](https://docs.openstack.org/api-guide/compute/faults.html) for
OpenStack defines two types of Request-ID, a local one and a global one.
Based on the further text I guess that you mean the Global Request ID. Please make it more explicit in the KEP.
I would propose to rename all references to "Request-ID" to "Trace-ID" to reduce naming confusion.
Yes, it means the global request-id. I changed it to "Global request id" in the above sentence.
I would propose to rename all references to "Request-ID" to "Trace-ID" to reduce naming confusion.
OK, I will consider this and may update the KEP later.
- We just use Tracing codes provided by `Distributed Tracing` feature.
- We use Trace-ID as Request-ID (This does not interfere Tracing codes).

The target functions that we add Request-id codes are the following tables. The target functions include the klog call which called via important `kubectl` operations. These operations are listed in [Migration / Graduation Criteria](#migration--graduation-criteria) section. As a result, we can propagate Request-ID regarding important `kubectl` operations.
What were the criteria for picking those and not others?
What marks an important kubectl operation?
I added criteria and explanation in the KEP.
Request-ID with structured format
```go
klog.InfoS("Request-ID", trace-id, "Pod status updated", "pod", pod, "status", "ready")
```
"Request-ID should be a normal field, meaning it should be put after message "Pod status updated"
OK, I put request-id after the message.
By the way, what is a normal field? (Sorry, I'm not familiar with the structure of klog fields in k8s...)
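To make the reviewer's suggestion concrete, here is a minimal sketch of the call with request-id placed after the message as an ordinary key-value pair. The variable and field values are illustrative only, not the KEP's final wording:

```go
package main

import "k8s.io/klog/v2"

func main() {
	traceID := "example-trace-id" // illustrative value; in practice this would be propagated with the request

	// The message comes first; "request-id" is just another structured key-value pair after it.
	klog.InfoS("Pod status updated", "pod", "default/my-pod", "status", "ready", "request-id", traceID)
	klog.Flush()
}
```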
It is difficult that support team in k8s Service Provider resolve end user's problem quickly in the above. Therefore, we'd like to add a new identifier which is unique to each user operation. This feature is useful for the following use cases:

#### Case 1
It would help readers to give these cases summaries. This one would probably be something like:
Given an API request, find the resulting component logs.
The one below would be something like:
Given a component log, find the API request.
I added case summaries.
- Adding a Request-ID into each K8s component log.
- The Request-ID is unique to a kubectl operation.
- (One kubectl operation by user causes multiple API requests and klog calls. Request-ID has same value in these klog calls.)
- Control enabled/disabled Request-ID feature(Request-ID feature is disabled on default to avoid an impact for existing user).
Usually the process of moving from:
disabled by default -> enabled by default -> always enabled
maps to
alpha -> beta -> GA
Unless you are proposing something outside of this, it probably doesn't need to be mentioned in goals.
I think you are pointing at the feature gate.
I considered whether a feature gate can satisfy our objective or not, but I'd like to allow the user to adjust the trade-off with log size (this is similar to the concept of log level), so I'd like to implement a feature which can control the request-id feature.
- To centrally manage the logs of each Kubernetes component with Request-ID (This can
be realized with existing OSS such as Kibana, so no need to implement into Kubernetes
components).
- We don't associate Request-ID to all of operations(Our target is important operations such as `kubectl create/delete/etc.`).
Maybe be more precise here. You want to add it to write operations, as those are the ones that cause changes in the system.
You are right. I added more explanation.
## Summary

This KEP proposes a new unique logging meta-data into all Kubernetes logs. It makes us
It looks like you have a well-defined list of places you would like to add this log below. Maybe say here that you are adding metadata to "key" kubernetes logs, rather than "all" logs. "all" logs would be an enormous undertaking and may not be realistic.
Yes, I modified the word "all".
| ------ | ------ |
| --request-id=0 | Request-ID feature is disabled (Default) |
| --request-id=1 | Request-ID feature is enabled, and Request-ID is added to klogs related to the `Alpha` target operations |
| --request-id=2 | Request-ID feature is enabled, and Request-ID is added to klogs related to the `Alpha and Beta` target operations |
Interesting. This is the equivalent of verbosity for adding this info to logs. How do you plan to propagate the request id value here to controllers? Will it also have to be stored in an annotation and propagated the same way the SpanContext is for tracing?
I am considering the method, but I don't have the best idea yet.
Currently, I think we won't use the same mechanism as the SpanContext. One idea is to add a parameter to component-base; this is the same idea as structured logging (--logging-format). However, I think this requires a restart of each K8s component when we change the value of the flag. Currently, I am investigating a method which applies to all components and does not need a restart.
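For reference, a minimal sketch of the annotation-based propagation the reviewer asks about above (the SpanContext-style approach). The annotation key is hypothetical, and as the reply notes, this may not be the mechanism the KEP ends up using:

```go
package main

import "fmt"

// requestIDAnnotation is a hypothetical annotation key; the KEP does not define one.
const requestIDAnnotation = "example.k8s.io/request-id"

// withRequestID stores the ID in an object's annotations so that controllers
// reconciling the object later could attach the same ID to their log lines.
func withRequestID(annotations map[string]string, id string) map[string]string {
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations[requestIDAnnotation] = id
	return annotations
}

// requestIDFrom reads the propagated ID back; it returns "" when absent.
func requestIDFrom(annotations map[string]string) string {
	return annotations[requestIDAnnotation]
}

func main() {
	ann := withRequestID(nil, "example-request-id")
	fmt.Println(requestIDFrom(ann)) // prints: example-request-id
}
```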
@serathius
In your structured logging KEP, you are going to implement the following structure:
"LoggingConfig structure should be implemented as part of k8s.io/component-base options and include a common set of flags for logger initialization."
Can we add a --request-id parameter to this structure? Since the --request-id parameter is also related to logging, I think it would be a simpler implementation to include it in this structure.
@serathius
I have additional questions.
A set of klog flags is included in the klog package. Is the LoggingConfig structure related to these klog flags?
If there is a component which does not use logger initialization via component-base, do we need to implement this separately for such a component?
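To make the question concrete, a minimal sketch of what a shared logging options struct carrying a --request-id flag next to the format flag could look like. The struct, field, and flag defaults here are assumptions for illustration, not the actual k8s.io/component-base API:

```go
package main

import (
	"flag"
	"fmt"
)

// LoggingOptions is a hypothetical stand-in for a component-base style logging
// configuration; the field and flag names are illustrative only.
type LoggingOptions struct {
	Format    string
	RequestID int // 0 = disabled, 1 = alpha targets, 2 = alpha+beta targets
}

// AddFlags registers the logging flags on the given FlagSet, mirroring the idea
// of keeping --request-id next to the other logging flags.
func (o *LoggingOptions) AddFlags(fs *flag.FlagSet) {
	fs.StringVar(&o.Format, "logging-format", "text", "log output format")
	fs.IntVar(&o.RequestID, "request-id", 0, "request-id level (0=off, 1=alpha targets, 2=alpha+beta targets)")
}

func main() {
	opts := &LoggingOptions{}
	fs := flag.NewFlagSet("component", flag.ExitOnError)
	opts.AddFlags(fs)
	_ = fs.Parse([]string{"--request-id=1"})
	fmt.Printf("format=%s request-id=%d\n", opts.Format, opts.RequestID)
}
```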
Ok, I think I understand the proposal now. I'll defer to @serathius, but I don't think we want to introduce a new verbosity level for each piece of metadata we are considering adding to logs.
If we are very concerned about this adding to log size, we might instead prioritize adding this field to lower (>4) verbosity logs. That way, we don't log this field very often by default, but turning up the verbosity adds many logs that contain the field.
Thanks for the comment.
I will wait for a reply from @serathius. I understand your opinion, because the --request-id parameter's effect is similar to the existing verbosity level. (So this may make the use of parameters difficult and complicated for users.)
If we use the existing verbosity flag (--v), I want to consider at which level we should add request-id.
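A minimal sketch of the verbosity-based alternative suggested above, assuming the request-id has already been propagated to the call site; with klog's existing --v flag the field is only emitted at verbosity 4 and above:

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil) // register klog's flags, including -v
	flag.Set("v", "4")
	flag.Parse()

	requestID := "example-request-id" // assumed to already be propagated to this point

	// At default verbosity this line is dropped; at --v=4 and above it is
	// emitted together with the request-id field.
	klog.V(4).InfoS("Pod status updated", "pod", "default/my-pod", "status", "ready", "request-id", requestID)

	klog.Flush()
}
```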
- Adding a Request-ID into each K8s component log.
- The Request-ID is unique to a kubectl operation.
- (One kubectl operation by user causes multiple API requests and klog calls. Request-ID has same value in these klog calls.)
- Control enabled/disabled Request-ID feature(Request-ID feature is disabled on default to avoid an impact for existing user).
Can you document what the "impact" you are referring to here is?
I added the explanation about "impact".
except for the following points.

- Audit only collects information about http request sending and receiving in kube-apiserver, so it can't track internal work of each component.
- Audit logs can't be associated to logs related to user operation (kubectl operation), because auditID is different for each http request.
Please also note that context-based logging doesn't have nearly the same degree of security that audit logs have. The context propagation described in the tracing proposal doesn't guarantee that all actions taken by controllers will be tied back to the same trace id. In particular, if the trace context is overwritten, new spans/logs from components will be associated with the new trace context. We can't provide guarantees around the completeness of logs for a given trace id. Audit logging is also purposefully designed to be secure. Component logs do not provide the same protections.
In particular, if the trace context is overwritten, new spans/logs from components will be associated with the new trace context.
Does this mean the following? (I referred to your KEP)
High-level processes, such as starting a pod or restarting a failed container, could be interrupted before completion by an update to the desired state. While this leaves a "partial" trace for the first process, it is the most accurate representation of the work and timing of reconciling desired and actual state.
Even if the logs before overwriting the trace-context have the same trace-ID and the associated logs after overwriting have another trace-ID, I think both sets of logs are valuable from a troubleshooting standpoint.
This KEP proposes a new unique logging meta-data into key Kubernetes logs(Details are described in [Design of Propagate Request-ID section](#design-of-propagate-request-id)). It makes us
more easy to identify specific logs related to a single user operation (such as
`kubectl apply -f <my-pod.yaml>`). This feature is similar to
[Global request ID](https://docs.openstack.org/api-guide/compute/faults.html) for
Can you add a Glossary section to the KEP and explain names like "Global request ID" in the context of this KEP?
It would really help if you used the same naming convention as the tracing KEP: https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/0034-distributed-tracing-kep.md#definitions
It's hard to map the different terms used in both KEPs.
This KEP proposes a new unique logging meta-data into key Kubernetes logs(Details are described in [Design of Propagate Request-ID section](#design-of-propagate-request-id)). It makes us
more easy to identify specific logs related to a single user operation (such as
`kubectl apply -f <my-pod.yaml>`). This feature is similar to
[Global request ID](https://docs.openstack.org/api-guide/compute/faults.html) for
Recent changes to the tracing KEP (https://github.com/kubernetes/enhancements/pull/1458/files) removed the idea of propagating traces through K8s controllers. This means that you should rethink the idea of a Global request ID, as now traces will be very short (they will cover only client -> apiserver -> etcd).
Support team in k8s Service Provider

### Target User's objective
With object tracing removed from tracing KEP, this is no longer possible.
If the container is terminated by OOM killer, there is a case to break down the
issue into parts(Pod or Kubernetes) from the messages related OOM killer on host logs
and the API processing just before OOM killer. If the cause is that some unknown
pod creations, it is helpful to detect the root API request and who called this
What is a root API request?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I am closing this KEP, because it has been replaced by the following KEP.
This PR is a KEP request.
I would like to discuss this KEP with SIG-Auth members.