Authentication/Root user fails #346

Closed

asc-adean opened this issue Apr 18, 2019 · 12 comments
Labels: question (Usability question, not directly related to an error with the image)

Comments

@asc-adean

I'm running into a strange issue. I'm running in AKS and attempting to create a StatefulSet of 3 mongo nodes. I can get this working without authentication enabled, but the problem arises when I try to enable authentication using the environment variables MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD.
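
For context, the documented behavior I'm relying on is that, on a first start with an empty data directory, the entrypoint creates that root user in the admin database. A minimal sketch of that flow outside Kubernetes (the container name is made up; the credentials are the same ones shown below):

docker run -d --name mongo-auth-test \
  -e MONGO_INITDB_ROOT_USERNAME=mongodb_cluster_admin \
  -e MONGO_INITDB_ROOT_PASSWORD=rootpassword \
  mongo:4.0
docker exec -it mongo-auth-test mongo -u mongodb_cluster_admin -p rootpassword --authenticationDatabase admin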

Here's an example of me trying to login:

root@test-mongo-0:/# env | grep INITDB
MONGO_INITDB_DATABASE=admin
MONGO_INITDB_ROOT_PASSWORD=rootpassword
MONGO_INITDB_ROOT_USERNAME=mongodb_cluster_admin
root@test-mongo-0:/# mongo -u mongodb_cluster_admin -p rootpassword --authenticationDatabase admin
MongoDB shell version v4.0.9
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&gssapiServiceName=mongodb
2019-04-18T15:45:09.702+0000 E QUERY    [js] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:343:13
@(connect):2:6
exception: connect failed

Here's the StatefulSet config:

# MONGO_INITDB_ROOT_USERNAME: mongodb_cluster_admin
# MONGO_INITDB_ROOT_PASSWORD: rootpassword
---
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: bW9uZ29kYl9jbHVzdGVyX2FkbWluCg==
  MONGO_INITDB_ROOT_PASSWORD: cm9vdHBhc3N3b3JkCg==
  MONGO_AUTH_KEY: <redacted>
kind: Secret
metadata:
  name: mongodb-secrets
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: test-mongo
  labels:
    name: test-mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: test-mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: test-mongo
spec:
  serviceName: "test-mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: test-mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
      - name: mongodb-init
        secret:
          defaultMode: 0600
          secretName: mongodb-secrets
      initContainers:
      - name: remove-transparent-hugepage
        image: busybox:1.28
        command: ['echo', 'never', '>', '/sys/kernel/mm/transparent_hugepage/enabled']
        imagePullPolicy: IfNotPresent
      containers:
        - name: mongo
          image: mongo:4.0
          imagePullPolicy: IfNotPresent
          command:
            - mongod
            - "--replSet"
            - testrs
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
            - "--auth"
            - "--keyFile"
            - "/tmp/mongodb-init/MONGO_AUTH_KEY"
          ports:
            - containerPort: 27017
          resources:
            requests:
              memory: 250Mi
              cpu: 200m
            limits:
              memory: 500Mi
          volumeMounts:
            - name: mongo-managed-disk
              mountPath: /data/db
            - name: mongodb-init
              mountPath: /tmp/mongodb-init
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "mongodb_cluster_admin"
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secrets
                  key: MONGO_INITDB_ROOT_PASSWORD
  volumeClaimTemplates:
  - metadata:
      name: mongo-managed-disk
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
@wglambert added the question label Apr 18, 2019
@wglambert

When you populated your secrets file, echo appended newlines; use the -n option.

Note the lack of a newline on the last output:

rei@Ayanami-clone:~$ MONGO_INITDB_ROOT_USERNAME=$(echo 'mongodb_cluster_admin' | base64)
rei@Ayanami-clone:~$ MONGO_INITDB_ROOT_USERNAME_no_newline=$(echo -n 'mongodb_cluster_admin' | base64)
rei@Ayanami-clone:~$ echo "$MONGO_INITDB_ROOT_USERNAME" | base64 --decode
mongodb_cluster_admin
rei@Ayanami-clone:~$ echo "$MONGO_INITDB_ROOT_USERNAME_no_newline" | base64 --decode
mongodb_cluster_adminrei@Ayanami-clone:~$ 
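
A quick way to double-check what the cluster actually stored (just a sketch, using the secret name from the yaml above; a stray newline will show up as a trailing 0a byte in the hex dump):

$ kubectl get secret mongodb-secrets -o jsonpath='{.data.MONGO_INITDB_ROOT_USERNAME}' | base64 --decode | xxd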

@asc-adean (Author) commented Apr 18, 2019

Thanks for the reply. I tested with a non-secret username:

            - name: MONGO_INITDB_ROOT_USERNAME
              value: "mongodb_cluster_admin"

and a new password (encoded using echo -n), and I am getting the same result:

root@test-mongo-0:/# env | grep INITDB
MONGO_INITDB_ROOT_PASSWORD=Lk1xV3JQ1i7rU1N0I0WZGUuq
MONGO_INITDB_ROOT_USERNAME=mongodb_cluster_admin
root@test-mongo-0:/# mongo -u mongodb_cluster_admin -p Lk1xV3JQ1i7rU1N0I0WZGUuq --authenticationDatabase admin
MongoDB shell version v4.0.9
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&gssapiServiceName=mongodb
2019-04-18T17:21:54.924+0000 E QUERY    [js] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:343:13
@(connect):2:6
exception: connect failed
root@test-mongo-0:/# echo "TGsxeFYzSlExaTdyVTFOMEkwV1pHVXVx" | base64 -d
Lk1xV3JQ1i7rU1N0I0WZGUuqroot@test-mongo-0:/#

I tested this previously with the password being a plaintext config value and had the same results.

Here's what I found in the logs, which indicates that the user was never created:

[conn4] Supported SASL mechanisms requested for unknown user 'mongodb_cluster_admin@admin'
2019-04-18T17:21:54.922+0000 I ACCESS   [conn4] SASL SCRAM-SHA-1 authentication failed for mongodb_cluster_admin on admin from client 127.0.0.1:43066 ; UserNotFound: Could not find user mongodb_cluster_admin@admin

@wglambert

Using just Docker I don't run into any issues, so it's probably either something with Kubernetes or the configuration:

$ docker run -dit --name mongo -e MONGO_INITDB_ROOT_USERNAME=mongodb_cluster_admin -e MONGO_INITDB_ROOT_PASSWORD=Lk1xV3JQ1i7rU1N0I0WZGUuq -e MONGO_INITDB_DATABASE=admin mongo:4.0
f814b96b0c0f61da0b3b500bae6c274390e52cb16633185aa228b34578aedace

$ docker exec -it mongo bash

root@f814b96b0c0f:/# mongo -u mongodb_cluster_admin -p Lk1xV3JQ1i7rU1N0I0WZGUuq --authenticationDatabase admin
MongoDB shell version v4.0.9
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("3d6b103e-a041-442a-b570-fc32073806c5") }
MongoDB server version: 4.0.9
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
Server has startup warnings: 
2019-04-18T17:44:41.276+0000 I STORAGE  [initandlisten] 
2019-04-18T17:44:41.276+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-04-18T17:44:41.276+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

>

Searching for that specific error, "Supported SASL mechanisms requested for unknown user", yields a lot of Kubernetes results, such as https://stackoverflow.com/questions/52766666/mongodb-via-k8s-helm-deploy-authentication-fails-or-worse

@asc-adean (Author)

I tested this locally using the official mongo image as well as the bitnami image. In both cases, the logs claim the user has been created:

mongodb INFO  ==> Creating mongo_admin user...
mongodb INFO
mongodb INFO  ########################################################################
mongodb INFO   Installation parameters for mongodb:
mongodb INFO     Root Password: **********
mongodb INFO     Username: mongo_admin
mongodb INFO     Password: **********
mongodb INFO     Database: mongo
mongodb INFO   (Passwords are not shown for security reasons)
mongodb INFO  ########################################################################
mongodb INFO
nami    INFO  mongodb successfully initialized

Authentication fails:

MongoDB shell version v4.0.9
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&gssapiServiceName=mongodb
2019-04-19T17:47:36.460+0000 E QUERY    [js] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:343:13
@(connect):2:6
exception: connect failed

Logs say the user doesn't exist:

2019-04-19T17:47:36.454+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:58118 #3 (1 connection now open)
2019-04-19T17:47:36.459+0000 I NETWORK  [conn3] received client metadata from 127.0.0.1:58118 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.9" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 9 (stretch)"", architecture: "x86_64", version: "Kernel 4.15.0-1040-azure" } }
2019-04-19T17:47:36.459+0000 I ACCESS   [conn3] Supported SASL mechanisms requested for unknown user 'mongo_admin@admin'
2019-04-19T17:47:36.460+0000 I ACCESS   [conn3] SASL SCRAM-SHA-1 authentication failed for mongo_admin on admin from client 127.0.0.1:58118 ; UserNotFound: Could not find user mongo_admin@admin
2019-04-19T17:47:36.460+0000 I NETWORK  [conn3] end connection 127.0.0.1:58118 (0 connections now open)

Also tried other databases, no dice.

@asc-adean (Author) commented Apr 19, 2019

Also verified by running a second Mongo instance in the same container on port 27018, pointing to a different database path (so it's an entirely different DB instance).

  1. Disabled auth in the new config file
  2. Started the Mongo instance up (/opt/bitnami/mongodb/bin/mongod --config /opt/bitnami/mongodb/conf/mongodb-noauth.conf &)
  3. Manually created users using db.createUser
  4. Exited, stopped Mongo process on 27018
  5. Re-enabled auth in the new config file
  6. Started it again like in Step 2
  7. Logged in without auth (mongo --port 27018) and verified auth was enabled
  8. Logged in with auth (mongo -u mongo_admin -p)
  9. Verified I could now run commands like show dbs and db.mongo.findOne()

Suffice it to say, the process for creating the users via environment vars with auth enabled is not working as documented.

I know I was using the bitnami image for this example, but I had the same results with the official mongo image. This thread has been relevant: #174
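
For reference, the manual flow in those steps comes out to roughly the following (just a sketch using the bitnami paths from step 2; the password and the role here are placeholders):

# steps 1-2: edit the config to disable auth, then start a second mongod on 27018
/opt/bitnami/mongodb/bin/mongod --config /opt/bitnami/mongodb/conf/mongodb-noauth.conf &

# step 3: create the admin user by hand while auth is off
mongo --port 27018 admin --eval 'db.createUser({user: "mongo_admin", pwd: "PLACEHOLDER", roles: [{role: "root", db: "admin"}]})'

# step 4: stop that instance
mongo --port 27018 admin --eval 'db.shutdownServer()'

# steps 5-8: re-enable auth in the same config file, start it again, and log back in
/opt/bitnami/mongodb/bin/mongod --config /opt/bitnami/mongodb/conf/mongodb-noauth.conf &
mongo --port 27018 -u mongo_admin -p PLACEHOLDER --authenticationDatabase admin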

@yosifkit (Member)

Unable to reproduce with plain docker using mongo:4.0.

test:
$ ls -l mongokeyfile
-r-------- 1 999 999 1024 Apr 19 14:20 mongokeyfile
$ docker run -it --rm -v "$PWD/mongokeyfile":/tmp/mongodb-init/MONGO_AUTH_KEY -e MONGO_INITDB_ROOT_USERNAME=mongodb_cluster_admin -e MONGO_INITDB_ROOT_PASSWORD=rootpassword --name mongo mongo:4.0 --replSet testrs --bind_ip 0.0.0.0 --smallfiles --noprealloc --auth --keyFile /tmp/mongodb-init/MONGO_AUTH_KEY
2019-04-19T21:38:24.127+0000 I CONTROL  [main] note: noprealloc may hurt performance in many applications
about to fork child process, waiting until server is ready for connections.
forked process: 27
2019-04-19T21:38:24.127+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
2019-04-19T21:38:24.129+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten] MongoDB starting : pid=27 port=27017 dbpath=/data/db 64-bit host=52eef31c217d
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten] db version v4.0.9
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten] git version: fc525e2d9b0e4bceff5c2201457e564362909765
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten] modules: none
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten] build environment:
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-04-19T21:38:24.139+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "127.0.0.1", port: 27017, ssl: { mode: "disabled" } }, processManagement: { fork: true, pidFilePath: "/tmp/docker-entrypoint-temp-mongod.pid" }, security: { keyFile: "/tmp/mongodb-init/MONGO_AUTH_KEY" }, storage: { mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { destination: "file", logAppend: true, path: "/proc/1/fd/1" } }
2019-04-19T21:38:24.139+0000 I STORAGE  [initandlisten] 
2019-04-19T21:38:24.139+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-04-19T21:38:24.139+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-04-19T21:38:24.139+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=15290M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-04-19T21:38:24.520+0000 I STORAGE  [initandlisten] WiredTiger message [1555709904:520930][27:0x7fe6602b6a80], txn-recover: Set global recovery timestamp: 0
2019-04-19T21:38:24.525+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-04-19T21:38:24.533+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2019-04-19T21:38:24.533+0000 I STORAGE  [initandlisten] createCollection: admin.system.version with provided UUID: fd9e6d7b-56f5-49a3-85aa-238be6739061
2019-04-19T21:38:24.539+0000 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 4.0
2019-04-19T21:38:24.541+0000 I STORAGE  [initandlisten] createCollection: local.startup_log with generated UUID: d8bc95aa-068f-4c0b-9f51-6450fb5ec0b3
2019-04-19T21:38:24.546+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-04-19T21:38:24.547+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
2019-04-19T21:38:24.547+0000 I STORAGE  [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: c29682be-cb7f-4ffd-a11f-6af3aeba1847
child process started successfully, parent exiting
2019-04-19T21:38:24.555+0000 I INDEX    [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
2019-04-19T21:38:24.555+0000 I INDEX    [LogicalSessionCacheRefresh] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2019-04-19T21:38:24.556+0000 I INDEX    [LogicalSessionCacheRefresh] build index done.  scanned 0 total records. 0 secs
2019-04-19T21:38:24.583+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:47096 #1 (1 connection now open)
2019-04-19T21:38:24.583+0000 I ACCESS   [conn1] note: no users configured in admin.system.users, allowing localhost access
2019-04-19T21:38:24.583+0000 I NETWORK  [conn1] received client metadata from 127.0.0.1:47096 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.9" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
2019-04-19T21:38:24.585+0000 I NETWORK  [conn1] end connection 127.0.0.1:47096 (0 connections now open)
2019-04-19T21:38:24.623+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:47098 #2 (1 connection now open)
2019-04-19T21:38:24.623+0000 I NETWORK  [conn2] received client metadata from 127.0.0.1:47098 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.9" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
2019-04-19T21:38:24.648+0000 I STORAGE  [conn2] createCollection: admin.system.users with generated UUID: ba207c09-9cd3-4ad2-a7da-e3a2b0ba52f2
Successfully added user: {
	"user" : "mongodb_cluster_admin",
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	]
}
2019-04-19T21:38:24.657+0000 E -        [main] Error saving history file: FileOpenFailed: Unable to open() file /home/mongodb/.dbshell: Unknown error
2019-04-19T21:38:24.657+0000 I NETWORK  [conn2] end connection 127.0.0.1:47098 (0 connections now open)

/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

2019-04-19T21:38:24.668+0000 I CONTROL  [main] note: noprealloc may hurt performance in many applications
2019-04-19T21:38:24.668+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
2019-04-19T21:38:24.670+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
killing process with pid: 27
2019-04-19T21:38:24.671+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2019-04-19T21:38:24.671+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2019-04-19T21:38:24.671+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2019-04-19T21:38:24.672+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
2019-04-19T21:38:24.672+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2019-04-19T21:38:24.672+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2019-04-19T21:38:24.673+0000 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
2019-04-19T21:38:24.673+0000 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
2019-04-19T21:38:24.756+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
2019-04-19T21:38:24.756+0000 I CONTROL  [signalProcessingThread] now exiting
2019-04-19T21:38:24.756+0000 I CONTROL  [signalProcessingThread] shutting down with code:0

MongoDB init process complete; ready for start up.

2019-04-19T21:38:25.755+0000 I CONTROL  [main] note: noprealloc may hurt performance in many applications
2019-04-19T21:38:25.758+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-04-19T21:38:25.770+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=52eef31c217d
2019-04-19T21:38:25.770+0000 I CONTROL  [initandlisten] db version v4.0.9
2019-04-19T21:38:25.770+0000 I CONTROL  [initandlisten] git version: fc525e2d9b0e4bceff5c2201457e564362909765
2019-04-19T21:38:25.770+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2019-04-19T21:38:25.770+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-04-19T21:38:25.771+0000 I CONTROL  [initandlisten] modules: none
2019-04-19T21:38:25.771+0000 I CONTROL  [initandlisten] build environment:
2019-04-19T21:38:25.771+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2019-04-19T21:38:25.771+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-04-19T21:38:25.771+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-04-19T21:38:25.771+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "0.0.0.0" }, replication: { replSet: "testrs" }, security: { authorization: "enabled", keyFile: "/tmp/mongodb-init/MONGO_AUTH_KEY" }, storage: { mmapv1: { preallocDataFiles: false, smallFiles: true } } }
2019-04-19T21:38:25.771+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-04-19T21:38:25.771+0000 I STORAGE  [initandlisten] 
2019-04-19T21:38:25.771+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-04-19T21:38:25.771+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-04-19T21:38:25.771+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=15290M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-04-19T21:38:26.266+0000 I STORAGE  [initandlisten] WiredTiger message [1555709906:266145][1:0x7fee52366a80], txn-recover: Main recovery loop: starting at 1/27392 to 2/256
2019-04-19T21:38:26.344+0000 I STORAGE  [initandlisten] WiredTiger message [1555709906:344123][1:0x7fee52366a80], txn-recover: Recovering log 1 through 2
2019-04-19T21:38:26.391+0000 I STORAGE  [initandlisten] WiredTiger message [1555709906:391217][1:0x7fee52366a80], txn-recover: Recovering log 2 through 2
2019-04-19T21:38:26.426+0000 I STORAGE  [initandlisten] WiredTiger message [1555709906:426884][1:0x7fee52366a80], txn-recover: Set global recovery timestamp: 0
2019-04-19T21:38:26.436+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-04-19T21:38:26.448+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2019-04-19T21:38:26.484+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-04-19T21:38:26.486+0000 I STORAGE  [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 3e240bc3-5e3a-4180-922c-2e675c74aedd
2019-04-19T21:38:26.495+0000 I STORAGE  [initandlisten] createCollection: local.replset.minvalid with generated UUID: 69381420-c5a1-4408-8e30-998d83ea3ff6
2019-04-19T21:38:26.503+0000 I REPL     [initandlisten] Did not find local voted for document at startup.
2019-04-19T21:38:26.503+0000 I REPL     [initandlisten] Did not find local Rollback ID document at startup. Creating one.
2019-04-19T21:38:26.503+0000 I STORAGE  [initandlisten] createCollection: local.system.rollback.id with generated UUID: 5a565d7d-8122-463d-a3ba-e4c3c79e0232
2019-04-19T21:38:26.514+0000 I REPL     [initandlisten] Initialized the rollback ID to 1
2019-04-19T21:38:26.514+0000 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2019-04-19T21:38:26.515+0000 I CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2019-04-19T21:38:26.516+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
2019-04-19T21:38:29.503+0000 I NETWORK  [listener] connection accepted from 172.17.0.5:34388 #1 (1 connection now open)
2019-04-19T21:38:29.504+0000 I NETWORK  [conn1] received client metadata from 172.17.0.5:34388 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.9" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
2019-04-19T21:38:29.521+0000 I ACCESS   [conn1] Successfully authenticated as principal mongodb_cluster_admin on admin
$ # in another terminal
$ docker run -it --rm --link mongo mongo:4.0 mongo mongo/test -u mongodb_cluster_admin -p rootpassword --authenticationDatabase admin
MongoDB shell version v4.0.9
connecting to: mongodb://mongo:27017/test?authSource=admin&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("3e27ecee-5832-4ebd-9397-02e5f9382b30") }
MongoDB server version: 4.0.9
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
	http://docs.mongodb.org/
Questions? Try the support group
	http://groups.google.com/group/mongodb-user
2019-04-19T21:38:29.522+0000 I STORAGE  [main] In File::open(), ::open for '/home/mongodb/.mongorc.js' failed with Unknown error
Server has startup warnings: 
2019-04-19T21:38:25.771+0000 I STORAGE  [initandlisten] 
2019-04-19T21:38:25.771+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-04-19T21:38:25.771+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

> db.test.insert({})
WriteCommandError({
	"ok" : 0,
	"errmsg" : "not master",
	"code" : 10107,
	"codeName" : "NotMaster"
})
> rs.status()
{
	"ok" : 0,
	"errmsg" : "no replset config has been received",
	"code" : 94,
	"codeName" : "NotYetInitialized"
}
> 
$ # bad user or pass fail appropriately
$ docker run -it --rm --link mongo mongo:4.0 mongo mongo/test -u mongodb_cluster_admin -p BADPASS --authenticationDatabase admin
MongoDB shell version v4.0.9
connecting to: mongodb://mongo:27017/test?authSource=admin&gssapiServiceName=mongodb
2019-04-19T21:42:20.270+0000 E QUERY    [js] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:343:13
@(connect):2:6
exception: connect failed
$ docker run -it --rm --link mongo mongo:4.0 mongo mongo/test -u NOT_cluster_admin -p rootpassword --authenticationDatabase admin
MongoDB shell version v4.0.9
connecting to: mongodb://mongo:27017/test?authSource=admin&gssapiServiceName=mongodb
2019-04-19T21:42:34.263+0000 E QUERY    [js] Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:343:13
@(connect):2:6
exception: connect failed

@wglambert

I stripped down your yaml to remove most of the extraneous factors apart from the Kubernetes secrets, and it works fine:

mongo.yaml
apiVersion: v1
data:
# $ echo -n "mongodb_cluster_admin" | base64
  MONGO_INITDB_ROOT_USERNAME: bW9uZ29kYl9jbHVzdGVyX2FkbWlu
# $ echo -n "rootpassword" | base64
  MONGO_INITDB_ROOT_PASSWORD: cm9vdHBhc3N3b3Jk
kind: Secret
metadata:
  name: mongodb-secrets
type: Opaque
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: test-mongo
spec:
  serviceName: "test-mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: test-mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:4.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
          resources:
            requests:
              memory: 250Mi
              cpu: 200m
            limits:
              memory: 500Mi
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
               secretKeyRef:
                 name: mongodb-secrets
                 key: MONGO_INITDB_ROOT_USERNAME
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
               secretKeyRef:
                 name: mongodb-secrets
                 key: MONGO_INITDB_ROOT_PASSWORD
$ kubectl apply -f mongo.yaml 
secret/mongodb-secrets created
statefulset.apps/test-mongo created

$ kubectl get pods
NAME           READY   STATUS              RESTARTS   AGE
test-mongo-0   1/1     Running             0          7s
test-mongo-1   0/1     ContainerCreating   0          3s

$ kubectl exec -it test-mongo-0 bash

root@test-mongo-0:/# mongo -u mongodb_cluster_admin -p rootpassword
MongoDB shell version v4.0.9
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("845ebdab-5498-4120-814c-b3cbdb10c337") }
MongoDB server version: 4.0.9
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
Server has startup warnings: 
2019-04-19T22:31:10.998+0000 I STORAGE  [initandlisten] 
2019-04-19T22:31:10.998+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-04-19T22:31:10.998+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

>

@asc-adean (Author)

None of this ever worked out for me, so what I had to do was run a .js file in /docker-entrypoint-initdb.d to create the users.

@tianon (Member) commented Apr 24, 2019

Well, I'm glad you've found a workaround, because as noted above, none of us has been able to reproduce the issue. 😕

For further help debugging what's happening, I'd suggest trying the Docker Community Forums, the Docker Community Slack, or Stack Overflow.

@tianon closed this as completed Apr 24, 2019
@kwill4026

None of this ever worked out for me, so what I had to do was run a .js file in /docker-entrypoint-initdb.d to create the users.

Can I see that .js file, @asc-adean?

@asc-adean (Author)

None of this ever worked out for me, so what I had to do was run a .js file in /docker-entrypoint-initdb.d to create the users.

Can I see that .js file, @asc-adean?

Sure. I actually went with a bash script instead of the .js file, since I didn't want it to run every time:

#!/bin/bash

# Check if I am the master mongo server, if not, delete myself and do not run
if [[ "${HOSTNAME}" != "mongodb-0" ]]; then
    rm -rf /docker-entrypoint-initdb.d/init.sh
    exit 0
fi

# Check if file exists, if so, do not run this script, just delete it
if [[ -f /data/configdb/preconfigured ]]; then
    rm -rf /docker-entrypoint-initdb.d/init.sh
    exit 0
fi

# Configure replicaSet, create admin user and app user

mongo -- <<EOF
var cfg = { _id: "replicaSetName", version: 1, protocolVersion: 1, members: [{ _id: 0, host: "localhost:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 10 }]}
rs.initiate(cfg);
EOF

sleep 20

mongo -- <<EOF
use admin
db.createUser({user: "mongodb_cluster_admin",pwd: "${MONGODB_CLUSTER_ADMIN_PASSWORD}",roles: [{ role: "userAdminAnyDatabase", db: "admin"},{ role: "dbAdminAnyDatabase", db: "admin"},{ role: "readWriteAnyDatabase", db: "admin"},{ role: "root", db: "admin"}]});
EOF

sleep 5

mongo -u mongodb_cluster_admin -p ${MONGODB_CLUSTER_ADMIN_PASSWORD} <<EOF
use admin
db.createUser({user: "app_admin", pwd: "${APP_ADMIN_PASSWORD}", roles: [{ role: "readWrite", db: "dbname1" },{ role: "dbAdmin", db: "dbname1" },{ role: "readWrite", db: "dbname2" },{ role: "dbAdmin", db: "dbname2" }]});

db.createUser({user: "app_analytics", pwd: "${APP_ANALYTICS_PASSWORD}", roles: [{ role: "read", db: "dbname1" },{ role: "read", db: "dbname2" }]});
EOF

# Create file in /data/configdb when this is complete to ensure this script is not re-run if container is deleted
if [[ $? -eq 0 ]]; then
    echo "Creating lock file..."
    touch /data/configdb/preconfigured
    echo "Done"
fi

# Delete myself
if [[ $? -eq 0 ]]; then
    echo "Deleting script..."
    rm -rf /docker-entrypoint-initdb.d/init.sh
    echo "Done"
fi
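
One note if you copy this approach: the script deletes itself when it finishes, and a ConfigMap volume is mounted read-only, so the file needs to be copied into a writable location first (for example, an initContainer copying it into an emptyDir mounted at /docker-entrypoint-initdb.d) rather than being mounted there directly. Rough sketch with a made-up ConfigMap name:

$ kubectl create configmap mongo-init-script --from-file=init.sh
# ...then have an initContainer copy the mounted file into the emptyDir before the mongo container starts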

@kwill4026

Thanks! I'll check it out.
