[stable/mongodb-replicaset] unauthorized logs #12631
Not sure; we're creating the metrics user with the recommended roles. You'd have to either add a higher-level role or create a custom role that has the endSessions permission on admin, but I'm not sure it's worth the effort. My guess is the exporter is manually closing the connection instead of letting it time out on its own. Either way, it's going to close.
Yeah, thanks. Looks like it isn't even the exporter; I deployed with
@sei-jmattson So I have deployed mongodb-replicaset 4.0 with auth and metrics enabled. All 3 pods are running with 2/2 status for the replicaset and the metrics. When I look at the replicaset logs for the first pod with `kubectl logs pod/snug-sheet-mongodb-replicaset-0 -c mongodb-replicaset`, I see the following:

```
2019-04-04T12:11:53.845+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:43396 #447 (8 connections now open)
2019-04-04T12:11:53.855+0000 I ACCESS   [conn447] Successfully authenticated as principal metrics on admin
```

This is all fine and good, and I am able to bash into the pod and bring up the mongo shell. For pod1 the logs read:

```
time="2019-04-04T11:51:49Z" level=error msg="Cant create mongo session to mongodb://:@localhost:27017" source="mongodb_collector.go:200"
time="2019-04-04T11:51:49Z" level=info msg="Starting HTTP server for http://:9216/metrics ..." source="server.go:121"
```

For pod2 the logs read:

I am confused as to what this means. Is the connection between mongodb and prometheus not getting made, so that it can't scrape the mongodb metrics? I also tried seeing if I could view the current values by querying the pod with `curl http://$POD_IP:9216/metrics`.
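For reference, the empty credentials in `mongodb://:@localhost:27017` suggest the exporter sidecar never received a metrics username/password. A minimal sketch of the values that would normally wire that up, assuming key names along the lines of the chart's values.yaml (the names here are assumptions; verify against your chart version):

```yaml
# Assumed key names for stable/mongodb-replicaset; illustrative only,
# check your chart version's values.yaml before relying on them.
auth:
  enabled: true
  adminUser: admin
  adminPassword: "<admin password>"
  metricsUser: metrics              # credentials the exporter sidecar should use
  metricsPassword: "<metrics password>"
  key: "<keyfile contents for intra-replica-set auth>"
metrics:
  enabled: true
```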
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
It is the readinessProbe: if the deployment enforces authentication/authorization, you must be authenticated to run the endSessions command.
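For context, the probe in question looks roughly like the following (a paraphrase, not the chart's actual template). Because the shell connects without credentials, on MongoDB 3.6+ it opens an implicit session and tries to end it on exit, and that endSessions attempt is what mongod rejects and logs:

```yaml
# Rough shape of the chart's readiness probe (paraphrased, not verbatim):
# the unauthenticated shell's implicit session cannot be ended cleanly
# when authorization is enforced, producing the Unauthorized log line.
readinessProbe:
  exec:
    command:
      - mongo
      - --eval
      - "db.adminCommand('ping')"
```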
I am also seeing the same logs -
My configuration is MongoDB 3.6 with both Auth and Metrics enabled. @someonegg It's not because of the
@kwill4026
/remove-lifecycle stale
I agree with @someonegg that it is the readinessProbe.

The Unauthorized doesn't seem to be related to the ping, but to the attempt to end the session. And while I understand the general concept that authorization is required, I don't understand what one needs to do to get the readiness probe to authorize itself. Without that, we're creating a lot of worthless log entries that will chew up disk space and analysis time for our monitors.

Without disabling the requirement for authorization, how can we get rid of these spurious log entries?
Has anyone found a solution for how to authorize the readinessProbe?

Update: change the mongo deployment's readinessProbe so the shell authenticates explicitly:

```yaml
readinessProbe:
  exec:
    command: [
      "mongo",
      "-u", "healthcheck",
      "-p", "Kubernetes",
      "--authenticationDatabase", "test",
      "--eval", "db.adminCommand('ping')"
    ]
```
Nice to have a fix like the one @vladi-dev posted... but this isn't a solution. This has to be handled in the chart.
For MongoDB 3.6+: `mongo --disableImplicitSessions --eval "db.adminCommand('ping')"`
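A sketch of how that flag could be folded back into the probe, assuming mongo 3.6+ as the comment above notes (the field layout is illustrative, not the chart's actual template):

```yaml
# Illustrative only: --disableImplicitSessions stops the shell from opening a
# logical session, so no endSessions command is attempted on exit and the
# Unauthorized log line goes away.
readinessProbe:
  exec:
    command:
      - mongo
      - --disableImplicitSessions
      - --eval
      - "db.adminCommand('ping')"
```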
I have deployed mongodb-replicaset-3.9.2 with auth and metrics enabled. Everything seems fine, but I get a lot of log messages like the following on each pod (every 7 seconds or so):
Any guidance on preventing those logs? I assume it's the mongodb-exporter container connecting to mongo, but it isn't allowed to endSessions?
Thanks.