This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[stable/mongodb-replicaset] unauthorized logs #12631

Closed
sei-jmattson opened this issue Mar 28, 2019 · 12 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@sei-jmattson
Contributor

sei-jmattson commented Mar 28, 2019

I have deployed mongodb-replicaset-3.9.2 with auth and metrics enabled. Everything seems fine, but I get a lot of log messages like the following on each pod (every 7 seconds or so):

2019-03-28T14:09:09.150+0000 I NETWORK  [conn23432] received client metadata from 127.0.0.1:42140 conn23432: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 9 (stretch)"", architecture: "x86_64", version: "Kernel 4.15.0-1021-azure" } }
2019-03-28T14:09:09.154+0000 I ACCESS   [conn23432] Unauthorized: not authorized on admin to execute command { endSessions: [ { id: UUID("91f0c7a5-b400-4dea-8062-58f2e347723e") } ], $db: "admin" }
2019-03-28T14:09:09.154+0000 I NETWORK  [conn23432] end connection 127.0.0.1:42140 (13 connections now open)

Any guidance on preventing those logs? I assume it's the mongodb-exporter container connecting to mongo, but it isn't authorized to run endSessions?
Thanks.

@steven-sheehy
Collaborator

Not sure; we create the metrics user with the recommended roles. You'd have to either add a higher-level role or create a custom role that has the endSessions permission on admin, but I'm not sure it's worth the effort. My guess is that the exporter is manually closing the connection instead of letting it time out on its own. Either way, it's going to close.

@sei-jmattson
Contributor Author

Yeah, thanks. Looks like it isn't even the exporter; I deployed with metrics.enabled = false and still get those messages. Guess I'll join all those that are trying to quiet mongodb.

@kwill4026

@sei-jmattson So I have deployed mongodb-replicaset 4.0 with auth and metrics enabled. All 3 pods are running with 2/2 status for the replicaset and the metrics. When I look at the replicaset logs for the first pod with kubectl logs pod/snug-sheep-mongodb-replicaset-0 -c mongodb-replicaset, I see the following:

2019-04-04T12:11:53.845+0000 I NETWORK [listener] connection accepted from 127.0.0.1:43396 #447 (8 connections now open)

2019-04-04T12:11:53.855+0000 I ACCESS [conn447] Successfully authenticated as principal metrics on admin
2019-04-04T12:11:53.856+0000 I NETWORK [listener] connection accepted from 127.0.0.1:43438 #448 (8 connections now open)
2019-04-04T12:12:01.282+0000 I NETWORK [conn448] received client metadata from 127.0.0.1:43438 conn448: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.1" }, os: { type: "Linux", name: "PRETTY_NAME="Ubuntu"", architecture: "x86_64", version: "16.0.4" } }

This is all fine, and I am able to bash into the pod and bring up the mongo shell.
However, on the metrics side of things, the logs show an error.
For pod 0 I run kubectl logs pod/snug-sheep-mongodb-replicaset-0 -c metrics and see the following:
time="2019-04-04T11:50:49Z" level=info msg="Starting HTTP server for http://:9216/metrics ..." source="server.go:121"

For pod1 the logs read:
time="2019-04-04T11:51:49Z" level=error msg="Cannot connect to server using url mongodb://*:@localhost:27017: no reachable servers" source="connection.go:84"

time="2019-04-04T11:51:49Z" level=error msg="Cant create mongo session to mongodb://:@localhost:27017" source="mongodb_collector.go:200"

time="2019-04-04T11:51:49Z" level=info msg="Starting HTTP server for http://:9216/metrics ..." source="server.go:121"

For pod2 the logs read:
time="2019-04-04T11:51:49Z" level=info msg="Starting HTTP server for http://:9216/metrics ..." source="server.go:121"

I am confused as to what this means. Is the connection between the exporter and mongodb not being made, preventing Prometheus from scraping the mongodb metrics? I also tried querying the pod directly with curl http://$POD_IP:9216/metrics, and all I got was "connection refused" and "no route to host" errors. My question to you is: are you getting the same messages in your logs, and how are you viewing the Prometheus metrics (via Grafana, etc.)?
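A port-forward can help rule out pod-network reachability as the cause of the "no route to host" errors. This is a sketch: the pod name is the one from the logs above, and it assumes the exporter really listens on 9216.

```shell
# Forward the metrics container's port to localhost, then scrape it.
# If this works while curl to $POD_IP fails, the problem is network
# reachability rather than the exporter itself.
kubectl port-forward pod/snug-sheep-mongodb-replicaset-0 9216:9216 &
sleep 2
curl -s http://localhost:9216/metrics | head
```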

@stale

stale bot commented May 4, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 4, 2019
@stale

stale bot commented May 18, 2019

This issue is being automatically closed due to inactivity.

@stale stale bot closed this as completed May 18, 2019
@someonegg
Contributor

someonegg commented Jul 11, 2019

It is the readinessProbe:
mongo --eval "db.adminCommand('ping')"

The ping itself does not require authentication, but since 3.6 the mongo shell opens an implicit session and sends an endSessions command on the admin database when it exits. If the deployment enforces authentication/authorization, you must be authenticated to run the endSessions command, so that cleanup step is rejected and logged as Unauthorized.
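A quick way to observe this from inside a pod. Both commands assume a 3.6+ mongo shell against a server enforcing auth; they need a live mongod, so this is a sketch rather than something runnable standalone:

```shell
# Opens an implicit session; on exit the shell sends endSessions to admin,
# which is rejected and logged as Unauthorized when auth is enforced.
mongo --eval "db.adminCommand('ping')"

# With implicit sessions disabled, no endSessions command is ever sent,
# so the ping leaves no Unauthorized entry in the mongod log.
mongo --disableImplicitSessions --eval "db.adminCommand('ping')"
```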

@prasadus92

prasadus92 commented Sep 9, 2019

I am also seeing the same logs:

2019-09-08T23:58:30.196+0000 I ACCESS   [conn92148] Unauthorized: not authorized on admin to execute command { endSessions: [ { id: UUID("de16f4f7-7b48-4d61-ab85-62ac571e5e4c") } ], $db: "admin" }
2019-09-08T23:58:30.196+0000 I NETWORK  [conn92148] end connection 127.0.0.1:56376 (39 connections now open)
2019-09-08T23:58:33.903+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:58696 #92149 (40 connections now open)
2019-09-08T23:58:33.903+0000 I NETWORK  [conn92149] received client metadata from 127.0.0.1:58696 conn92149: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.14" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }

My configuration is MongoDB 3.6 with both Auth and Metrics enabled.

@someonegg It's not because of the readinessProbe. Here is the output for ping in my case:

mongodb@infra-mongodb-replicaset-0:/$ mongo --eval "db.adminCommand('ping')"
MongoDB shell version v3.6.14
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("09eaf88a-f382-4e86-852a-ced26e90f44a") }
MongoDB server version: 3.6.14
{
        "ok" : 1,
        "operationTime" : Timestamp(1567988017, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1567988017, 1),
                "signature" : {
                        "hash" : BinData(0,"3m5rFa3zlcMQ5hDsuy2mlKK3gU0="),
                        "keyId" : NumberLong("6711544440529354754")
                }
        }
}

@kwill4026 /metrics is just fine with my deployment.

@prasadus92

/remove-lifecycle stale

@sdwire

sdwire commented Sep 28, 2019

I agree with @someonegg that it is the readinessProbe. When I exercise the same db.adminCommand('ping') from a bash session multiple times in a second, I get a valid response like @prasadus92 does, but I also create multiple "Unauthorized..." entries in the log in that same second.

The Unauthorized doesn't seem to be related to the ping, but to the attempt to end the session.

And while I understand the general concept that authorization is required, I don't understand what one needs to do to get the readiness probe to authorize itself. Without that, we're creating a lot of worthless log entries that will chew up disk space and analysis time for our monitors.

Without disabling the requirement for authorization, how can we get rid of these spurious log entries?

@vladi-dev

vladi-dev commented Feb 6, 2020

Has anyone found a solution for authorizing the readinessProbe?

Update:
Found a solution by creating a user:

// in the mongo console (mongo shell comments use //, not #)
use test  // switch to the test database
db.createUser({ user: 'healthcheck', pwd: 'Kubernetes', roles: [] });

Update mongo deployment readinessProbe:

          readinessProbe:
            exec:
              command: [
                "mongo",
                "-u", "healthcheck",
                "-p", "Kubernetes",
                "--authenticationDatabase", "test",
                "--eval", "db.adminCommand('ping')"
              ]

@erSitzt

erSitzt commented Aug 11, 2020

Nice to have a fix like @vladi-dev posted... but this isn't a solution.

This has to be handled in the chart.

@someonegg
Contributor

For MongoDB 3.6+:

mongo --disableImplicitSessions --eval "db.adminCommand('ping')"
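Applied to the probe, that flag would look roughly like the following. Treat this as a sketch, not the chart's actual template; the probe shape is an assumption:

```yaml
# Hypothetical readinessProbe: --disableImplicitSessions stops the shell
# from opening (and later trying to end) an implicit session, so the
# unauthenticated ping no longer produces Unauthorized log entries.
readinessProbe:
  exec:
    command:
      - mongo
      - --disableImplicitSessions
      - --eval
      - "db.adminCommand('ping')"
```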
