apiserver not starting with audit logging (STDOUT) #4202
Comments
What is the output of kube-apiserver.log?
There is no output in the kube-apiserver.log.
+1 to this, we have the same problem.
@sethpollack do you have any insight? I think you dropped in the options for API audit logging.
Nope, never used it in the end.
I can write a simple patch for kops that deactivates the logging of STDERR to /var/log/kube-apiserver.log when audit logging is activated, but that seems like the wrong way to solve the issue.
What is the right way? We also have trouble getting the API server logs to show up in kubectl logs.
In my opinion it is not necessary to log to /var/log/kube-apiserver.log; please correct me if I am wrong. STDOUT is the right target.
With the default cluster setup using kops, the logs of the Kubernetes components are duplicated to /var/log/ and to the STDOUT of the corresponding component's container.
@hatemosphere Why?
Just got this issue also. I updated my kops config with the audit details and did a rolling update of the master, and now my API server will not start! I then restarted my master and it was completely broken afterwards. I checked the API server logs on the master node:
@caraboides how did you resolve this? Is this a bug? I had no idea what was wrong, so I reverted the state in S3 to the previous version, terminated the master, and rebooted. Is there an issue with audit?
My fault! I was running kops version 1.7 on a VM and messed up the state files in the S3 bucket! ;-)
@shavo007 I deleted the audit options from my cluster config.
@KashifSaadat does your PR update this as well?
@chrislovecnm sorry, which PR? The following PRs slightly change the command exec and logging behaviour:
We're getting the same error. Found this error in the systemd journal for the kubelet:
Looking at kube-apiserver.manifest, the auditlogpathdir volume and its volumeMount look incorrect. kubelet runs in /, which could explain why the .:/. bind mount is used incorrectly: presumably kops derives the volume path from the parent directory of --audit-log-path, and for a value of - that parent directory is just ., a relative path. The auditlogpathdir volume mount seems to be coming from here: https://github.com/kubernetes/kops/blob/master/nodeup/pkg/model/kube_apiserver.go#L310
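For illustration, a hypothetical reconstruction of the generated fragment; only the auditlogpathdir name is taken from the error above, and the surrounding structure is standard static-pod manifest layout rather than the actual kops output:

```yaml
# Hypothetical reconstruction, not the actual generated manifest.
spec:
  containers:
  - name: kube-apiserver
    volumeMounts:
    - name: auditlogpathdir
      mountPath: .     # relative mount path, derived from dirname("-") == "."
  volumes:
  - name: auditlogpathdir
    hostPath:
      path: .          # relative hostPath: invalid, resolves against kubelet's cwd (/)
```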
What should it be?
There shouldn't be a volume mount for auditlogpathdir at all, as the audit log is being sent to STDOUT.
https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#configuration says that - means standard out.
kops chokes on audit-log-path being set to '-': it generates an invalid kube-apiserver.manifest that produces the error I posted above, and the node fails to come up.
I have the same issue. This is inconvenient because I use fluentd to ship the STDOUT/STDERR logs of the kube-apiserver to some faraway place. |
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Are there any activities ongoing at the moment to solve this?
I have the same question. It's still relevant for us. |
I've opened a pull request with a potential fix for this issue, as far as I can reproduce it. When testing this issue, please keep in mind that this happens in
What kops version are you running? Version 1.8.0 (git-5099bc5)
What cloud provider are you using? aws
What commands did you run? kops edit, then kops update
What happened after the commands executed? The kube-apiserver pod is not starting; I see this in syslog on the master:
What did you expect to happen? A running apiserver and audit logs in STDOUT.
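For context, a hedged sketch of the cluster-spec change (applied via kops edit cluster) that triggers this; the field names follow the kops cluster spec, and the policy-file path is the one used later in this report rather than a confirmed value:

```yaml
# Sketch of the kubeAPIServer section; values are assumptions from the thread.
spec:
  kubeAPIServer:
    auditLogPath: "-"                                   # "-" means standard out
    auditPolicyFile: /srv/kubernetes/audit-policy.yaml
```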
I have a dirty workaround: I deleted the audit options from the cluster config and activated audit logging by editing the kube-apiserver.manifest directly, adding

--audit-policy-file=/srv/kubernetes/audit-policy.yaml --audit-log-path=-

and removing the 2>&1 redirection. The working manifest is then roughly as follows.
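A minimal sketch of the edited static-pod manifest, assuming the kops-generated command originally appended a shell redirection to /var/log/kube-apiserver.log; the command wrapper and flag set are illustrative, not the author's exact manifest:

```yaml
# Sketch only: with --audit-log-path=- and no redirection appended to the
# command, the audit events written to standard out stay on the pod's STDOUT.
spec:
  containers:
  - name: kube-apiserver
    command:
    - /bin/sh
    - -c
    - >-
      /usr/local/bin/kube-apiserver
      --audit-policy-file=/srv/kubernetes/audit-policy.yaml
      --audit-log-path=-
```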
Now I have audit logs in the STDOUT of the pod and fluentd is able to ship them.

PS: I see the same error when logging to a file like /tmp/foo with 2>&1 present. If I delete 2>&1, then I see the audit logs in /tmp/foo in the container.