
Rocket.Chat Version: 1.0.1 --> Fail install #14303

Closed · fabien4455 opened this issue Apr 29, 2019 · 23 comments · Fixed by #14320

fabien4455 commented Apr 29, 2019

Description:

I was on 0.73.2. I tried to update to 1.0.1 and got SERVER ERROR.

I restored my snapshot and tried updating to 0.74.3 instead, which succeeded. I think the problem is that the new version of Rocket.Chat does not recognize my MongoDB version.

Steps to reproduce:

  1. Be on a version < 1.0.0 (0.74.3 is the latest).
  2. Modify the docker-compose file to install 1.0.1 (see the sketch after this list).
  3. Run it.
  4. See the logs.
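
For step 2, a minimal sketch of the change, assuming the service is named rocketchat and uses the official rocket.chat image (the exact layout of the attached compose file may differ):

    # docker-compose.yml excerpt (illustrative):
    #   rocketchat:
    #     image: rocket.chat:1.0.1   # previously rocket.chat:0.74.3
    docker-compose pull rocketchat && docker-compose up -d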

Expected behavior:

Rocket.Chat installs successfully.

Actual behavior:

rocketchat    | ➔ +---------------------------------------------------------------------------+
rocketchat    | ➔ |                                SERVER ERROR                               |
rocketchat    | ➔ +---------------------------------------------------------------------------+
rocketchat    | ➔ |                                                                           |
rocketchat    | ➔ |  Rocket.Chat Version: 1.0.1                                               |
rocketchat    | ➔ |       NodeJS Version: 8.11.4 - x64                                        |
rocketchat    | ➔ |      MongoDB Version: Error getting version                               |
rocketchat    | ➔ |       MongoDB Engine: undefined                                           |
rocketchat    | ➔ |             Platform: linux                                               |
rocketchat    | ➔ |         Process Port: 3000                                                |
rocketchat    | ➔ |             Site URL: http://192.168.10.69:3000                           |
rocketchat    | ➔ |     ReplicaSet OpLog: Disabled                                            |
rocketchat    | ➔ |          Commit Hash: 4a3e6315c7                                          |
rocketchat    | ➔ |        Commit Branch: HEAD                                                |
rocketchat    | ➔ |                                                                           |
rocketchat    | ➔ |  OPLOG / REPLICASET IS REQUIRED TO RUN ROCKET.CHAT, MORE INFORMATION AT:  |
rocketchat    | ➔ |  https://go.rocket.chat/i/oplog-required                                  |
rocketchat    | ➔ |                                                                           |
rocketchat    | ➔ +---------------------------------------------------------------------------+

Server Setup Information:

  • Version of Rocket.Chat Server: 0.74.3 --> 1.0.1 and tried 0.73.2 --> 1.0.1
  • Operating System: Ubuntu 18.04
  • Deployment Method: docker-compose
  • Number of Running Instances: 1
  • DB Replicaset Oplog: Disabled
  • NodeJS Version: 8.11.4 - x64
  • MongoDB Version: 4.0.8

Relevant logs:

Please see my files. The zip is how I deploy Rocket.Chat (I removed the passwords, so don't try to hack me; plus it's internal, so... ;) ).
The log shows how Rocket.Chat reacts...

docker-compose up.log

rocketchat.zip

Other information:

I was on MongoDB 3.2 before; I upgraded to 3.4, then 3.6, then 4.0.8! But the upgrade from 0.73.2 to 0.74.3 was a success, so I don't think the problem comes from the MongoDB upgrade... I think Rocket.Chat 1.0 doesn't support MongoDB 4.0.8?
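
One thing worth ruling out after a stepwise 3.2 -> 4.0.8 upgrade (a guess, not a confirmed diagnosis for this issue) is whether featureCompatibilityVersion was bumped at each step. A quick check from the host, using the mongoadmin user seen in the logs below (the password placeholder is illustrative):

    docker exec rocketchat_mongo mongo -u mongoadmin -p '<password>' \
      --authenticationDatabase admin --eval \
      'db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })'
    # If this still reports an older version, bump it:
    #   db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })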

reetp commented Apr 29, 2019

Read your log error.

https://go.rocket.chat/i/oplog-required

Which actually goes here:
https://rocket.chat/docs/installation/manual-installation/mongo-replicas/

A decision was made to enforce oplogs, but unless you read the release notes (you really should) you won't notice.

#14227
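
For anyone landing here, a minimal sketch of what enabling the oplog looks like for a single-container MongoDB like this one (the container and user names come from the logs in this issue; the passwords, database name, and exact compose layout are illustrative):

    # 1. Run mongod as a one-member replica set (command of the mongo
    #    service in docker-compose.yml):
    #      mongod --oplogSize 128 --replSet rs0
    # 2. Initiate the replica set once, inside the running container:
    docker exec -it rocketchat_mongo mongo -u mongoadmin -p '<password>' \
      --authenticationDatabase admin --eval 'rs.initiate()'
    # 3. Point Rocket.Chat at both the database and the oplog
    #    (environment of the rocketchat service):
    #      MONGO_URL=mongodb://rocket:<password>@mongo:27017/rocketchat?authSource=admin&replicaSet=rs0
    #      MONGO_OPLOG_URL=mongodb://rocket:<password>@mongo:27017/local?authSource=admin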

fabien4455 commented Apr 29, 2019

That's so smart: when you want to downgrade to avoid 1.0.1, everything breaks. Rocket.Chat is broken... Actually, I'm glad I'm doing this in a test environment... Rocket.Chat is getting less and less good with these stupid modifications... Why can't we downgrade to 0.X.X if it doesn't work?

Downgrading used to work before... This 1.0.X is so bad...

wreiske (Contributor) commented Apr 29, 2019

Before upgrading you should always do a full database backup, and make a copy of the old Rocket.Chat server folder. That's all you need to revert back to any previous version.
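
A minimal sketch of such a backup with the containers from this issue (the container and user names come from the logs; the password and archive path are illustrative):

    # Dump the whole database to an archive file on the host:
    docker exec rocketchat_mongo mongodump -u mongoadmin -p '<password>' \
      --authenticationDatabase admin --archive > rocketchat-backup.archive

    # Restore that dump if the upgrade has to be rolled back:
    docker exec -i rocketchat_mongo mongorestore -u mongoadmin -p '<password>' \
      --authenticationDatabase admin --drop --archive < rocketchat-backup.archive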

fabien4455 commented Apr 29, 2019

I did it ;) My Rocket.Chat is safe. Thanks for the advice, BTW.

fabien4455 commented Apr 29, 2019

OK, I added the oplog... but it's not working ;)

root@fabacula0:~/rocketchat# docker-compose up
rocketchat_mongo is up-to-date
Recreating rocketchat
Attaching to rocketchat_mongo, rocketchat
mongo_1       | 2019-04-29T09:09:29.996+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=0ff97b4a79e0
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten] db version v4.0.8
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten] git version: 9b00696ed75f65e1ebc8d635593bed79b290cfbb
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten] modules: none
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten] build environment:
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongo_1       | 2019-04-29T09:09:30.002+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, replication: { oplogSizeMB: 128, replSet: "rs0" }, security: { authorization: "enabled" }, storage: { mmapv1: { smallFiles: true } } }
mongo_1       | 2019-04-29T09:09:30.002+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongo_1       | 2019-04-29T09:09:30.002+0000 I STORAGE  [initandlisten]
mongo_1       | 2019-04-29T09:09:30.002+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongo_1       | 2019-04-29T09:09:30.002+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongo_1       | 2019-04-29T09:09:30.002+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=3476M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongo_1       | 2019-04-29T09:09:30.780+0000 I STORAGE  [initandlisten] WiredTiger message [1556528970:780019][1:0x7fed49756a40], txn-recover: Main recovery loop: starting at 107/1195008 to 108/256
mongo_1       | 2019-04-29T09:09:30.850+0000 I STORAGE  [initandlisten] WiredTiger message [1556528970:850125][1:0x7fed49756a40], txn-recover: Recovering log 107 through 108
mongo_1       | 2019-04-29T09:09:30.893+0000 I STORAGE  [initandlisten] WiredTiger message [1556528970:893646][1:0x7fed49756a40], txn-recover: Recovering log 108 through 108
mongo_1       | 2019-04-29T09:09:30.930+0000 I STORAGE  [initandlisten] WiredTiger message [1556528970:930669][1:0x7fed49756a40], txn-recover: Set global recovery timestamp: 0
mongo_1       | 2019-04-29T09:09:30.954+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongo_1       | 2019-04-29T09:09:31.162+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
mongo_1       | 2019-04-29T09:09:32.697+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongo_1       | 2019-04-29T09:09:32.699+0000 I STORAGE  [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 095bd01f-f3bf-4da1-a9f1-a70a11212a14
mongo_1       | 2019-04-29T09:09:32.715+0000 I STORAGE  [initandlisten] createCollection: local.replset.minvalid with generated UUID: e5f2676c-dfff-4112-9a0a-a44bd61e963e
mongo_1       | 2019-04-29T09:09:32.755+0000 I REPL     [initandlisten] Did not find local voted for document at startup.
mongo_1       | 2019-04-29T09:09:32.756+0000 I REPL     [initandlisten] Did not find local Rollback ID document at startup. Creating one.
mongo_1       | 2019-04-29T09:09:32.756+0000 I STORAGE  [initandlisten] createCollection: local.system.rollback.id with generated UUID: f85b983c-5079-4314-8475-60f1b4f76732
mongo_1       | 2019-04-29T09:09:32.787+0000 I REPL     [initandlisten] Initialized the rollback ID to 1
mongo_1       | 2019-04-29T09:09:32.787+0000 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
mongo_1       | 2019-04-29T09:09:32.788+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongo_1       | 2019-04-29T09:09:32.788+0000 I CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
mongo_1       | 2019-04-29T09:09:33.487+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51632 #1 (1 connection now open)
mongo_1       | 2019-04-29T09:09:33.493+0000 I NETWORK  [conn1] received client metadata from 172.20.0.1:51632 conn1: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:09:33.513+0000 I ACCESS   [conn1] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:09:38.595+0000 I NETWORK  [conn1] end connection 172.20.0.1:51632 (0 connections now open)
mongo_1       | 2019-04-29T09:09:40.405+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51636 #2 (1 connection now open)
mongo_1       | 2019-04-29T09:09:40.412+0000 I NETWORK  [conn2] received client metadata from 172.20.0.1:51636 conn2: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:09:40.431+0000 I ACCESS   [conn2] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:09:40.483+0000 I NETWORK  [conn2] end connection 172.20.0.1:51636 (0 connections now open)
mongo_1       | 2019-04-29T09:09:42.462+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51640 #3 (1 connection now open)
mongo_1       | 2019-04-29T09:09:42.469+0000 I NETWORK  [conn3] received client metadata from 172.20.0.1:51640 conn3: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:09:42.488+0000 I ACCESS   [conn3] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:09:42.539+0000 I NETWORK  [conn3] end connection 172.20.0.1:51640 (0 connections now open)
mongo_1       | 2019-04-29T09:09:45.134+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51644 #4 (1 connection now open)
mongo_1       | 2019-04-29T09:09:45.140+0000 I NETWORK  [conn4] received client metadata from 172.20.0.1:51644 conn4: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:09:45.160+0000 I ACCESS   [conn4] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:09:45.215+0000 I NETWORK  [conn4] end connection 172.20.0.1:51644 (0 connections now open)
mongo_1       | 2019-04-29T09:09:48.418+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51648 #5 (1 connection now open)
mongo_1       | 2019-04-29T09:09:48.425+0000 I NETWORK  [conn5] received client metadata from 172.20.0.1:51648 conn5: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:09:48.444+0000 I ACCESS   [conn5] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:09:48.490+0000 I NETWORK  [conn5] end connection 172.20.0.1:51648 (0 connections now open)
mongo_1       | 2019-04-29T09:09:53.445+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51652 #6 (1 connection now open)
mongo_1       | 2019-04-29T09:09:53.452+0000 I NETWORK  [conn6] received client metadata from 172.20.0.1:51652 conn6: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:09:53.471+0000 I ACCESS   [conn6] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:09:53.530+0000 I NETWORK  [conn6] end connection 172.20.0.1:51652 (0 connections now open)
mongo_1       | 2019-04-29T09:10:01.572+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51656 #7 (1 connection now open)
mongo_1       | 2019-04-29T09:10:01.578+0000 I NETWORK  [conn7] received client metadata from 172.20.0.1:51656 conn7: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:10:01.597+0000 I ACCESS   [conn7] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:10:01.707+0000 I NETWORK  [conn7] end connection 172.20.0.1:51656 (0 connections now open)
mongo_1       | 2019-04-29T09:10:16.233+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51660 #8 (1 connection now open)
mongo_1       | 2019-04-29T09:10:16.239+0000 I NETWORK  [conn8] received client metadata from 172.20.0.1:51660 conn8: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:10:16.260+0000 I ACCESS   [conn8] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:10:16.314+0000 I NETWORK  [conn8] end connection 172.20.0.1:51660 (0 connections now open)
mongo_1       | 2019-04-29T09:10:20.945+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
mongo_1       | 2019-04-29T09:10:20.945+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
mongo_1       | 2019-04-29T09:10:20.945+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
mongo_1       | 2019-04-29T09:10:20.947+0000 I REPL     [signalProcessingThread] shutting down replication subsystems
mongo_1       | 2019-04-29T09:10:20.947+0000 I ASIO     [Replication] Killing all outstanding egress activity.
mongo_1       | 2019-04-29T09:10:20.947+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongo_1       | 2019-04-29T09:10:20.947+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
mongo_1       | 2019-04-29T09:10:20.951+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
mongo_1       | 2019-04-29T09:10:20.952+0000 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
mongo_1       | 2019-04-29T09:10:20.952+0000 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
mongo_1       | 2019-04-29T09:10:21.043+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
mongo_1       | 2019-04-29T09:10:21.043+0000 I CONTROL  [signalProcessingThread] now exiting
mongo_1       | 2019-04-29T09:10:21.043+0000 I CONTROL  [signalProcessingThread] shutting down with code:0
mongo_1       | 2019-04-29T09:10:43.039+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=0ff97b4a79e0
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten] db version v4.0.8
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten] git version: 9b00696ed75f65e1ebc8d635593bed79b290cfbb
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten] modules: none
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten] build environment:
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongo_1       | 2019-04-29T09:10:43.042+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, replication: { oplogSizeMB: 128, replSet: "rs0" }, security: { authorization: "enabled" }, storage: { mmapv1: { smallFiles: true } } }
mongo_1       | 2019-04-29T09:10:43.043+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongo_1       | 2019-04-29T09:10:43.043+0000 I STORAGE  [initandlisten]
mongo_1       | 2019-04-29T09:10:43.043+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongo_1       | 2019-04-29T09:10:43.043+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongo_1       | 2019-04-29T09:10:43.043+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=3476M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongo_1       | 2019-04-29T09:10:43.734+0000 I STORAGE  [initandlisten] WiredTiger message [1556529043:734775][1:0x7fdb5b294a40], txn-recover: Main recovery loop: starting at 108/329344 to 109/256
mongo_1       | 2019-04-29T09:10:43.803+0000 I STORAGE  [initandlisten] WiredTiger message [1556529043:803564][1:0x7fdb5b294a40], txn-recover: Recovering log 108 through 109
mongo_1       | 2019-04-29T09:10:43.847+0000 I STORAGE  [initandlisten] WiredTiger message [1556529043:847074][1:0x7fdb5b294a40], txn-recover: Recovering log 109 through 109
mongo_1       | 2019-04-29T09:10:43.881+0000 I STORAGE  [initandlisten] WiredTiger message [1556529043:881296][1:0x7fdb5b294a40], txn-recover: Set global recovery timestamp: 0
mongo_1       | 2019-04-29T09:10:43.896+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongo_1       | 2019-04-29T09:10:44.029+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
mongo_1       | 2019-04-29T09:10:45.111+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongo_1       | 2019-04-29T09:10:45.112+0000 I REPL     [initandlisten] Did not find local voted for document at startup.
mongo_1       | 2019-04-29T09:10:45.113+0000 I REPL     [initandlisten] Rollback ID is 1
mongo_1       | 2019-04-29T09:10:45.113+0000 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
mongo_1       | 2019-04-29T09:10:45.113+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongo_1       | 2019-04-29T09:10:45.114+0000 I CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
mongo_1       | 2019-04-29T09:10:59.379+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:38776 #1 (1 connection now open)
mongo_1       | 2019-04-29T09:10:59.379+0000 I NETWORK  [conn1] received client metadata from 127.0.0.1:38776 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.8" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo_1       | 2019-04-29T09:11:26.287+0000 I ACCESS   [conn1] Supported SASL mechanisms requested for unknown user 'admin@admin'
mongo_1       | 2019-04-29T09:11:26.288+0000 I ACCESS   [conn1] SASL SCRAM-SHA-1 authentication failed for admin on admin from client 127.0.0.1:38776 ; UserNotFound: Could not find user admin@admin
mongo_1       | 2019-04-29T09:11:44.725+0000 I ACCESS   [conn1] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:11:51.390+0000 I COMMAND  [conn1] initiate : no configuration specified. Using a default configuration for the set
mongo_1       | 2019-04-29T09:11:51.390+0000 I COMMAND  [conn1] created this configuration for initiation : { _id: "rs0", version: 1, members: [ { _id: 0, host: "0ff97b4a79e0:27017" } ] }
mongo_1       | 2019-04-29T09:11:51.390+0000 I REPL     [conn1] replSetInitiate admin command received from client
mongo_1       | 2019-04-29T09:11:51.392+0000 I REPL     [conn1] replSetInitiate config object with 1 members parses ok
mongo_1       | 2019-04-29T09:11:51.393+0000 I REPL     [conn1] ******
mongo_1       | 2019-04-29T09:11:51.393+0000 I REPL     [conn1] creating replication oplog of size: 128MB...
mongo_1       | 2019-04-29T09:11:51.393+0000 I STORAGE  [conn1] createCollection: local.oplog.rs with generated UUID: 155d183e-07fb-46e2-88a4-eb07ec0dbe79
mongo_1       | 2019-04-29T09:11:51.432+0000 I STORAGE  [conn1] Starting OplogTruncaterThread local.oplog.rs
mongo_1       | 2019-04-29T09:11:51.432+0000 I STORAGE  [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
mongo_1       | 2019-04-29T09:11:51.432+0000 I STORAGE  [conn1] Scanning the oplog to determine where to place markers for truncation
mongo_1       | 2019-04-29T09:11:51.450+0000 I REPL     [conn1] ******
mongo_1       | 2019-04-29T09:11:51.450+0000 I STORAGE  [conn1] createCollection: local.system.replset with generated UUID: b1533016-185f-408b-8000-bd579891d2a2
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [conn1] New replica set config in use: { _id: "rs0", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "0ff97b4a79e0:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5cc6bfd73151a521425a33ac') } }
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [conn1] This node is 0ff97b4a79e0:27017 in the config
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [conn1] transition to STARTUP2 from STARTUP
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [conn1] Starting replication storage threads
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [conn1] transition to RECOVERING from STARTUP2
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [conn1] Starting replication fetcher thread
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [conn1] Starting replication applier thread
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [conn1] Starting replication reporter thread
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [rsSync-0] Starting oplog application
mongo_1       | 2019-04-29T09:11:51.492+0000 I COMMAND  [conn1] command local.system.replset appName: "MongoDB Shell" command: replSetInitiate { replSetInitiate: undefined, lsid: { id: UUID("e23ecb99-b4fa-4e7c-9517-f8a8217eed8f") }, $db: "admin" } numYields:0 reslen:146 locks:{ Global: { acquireCount: { r: 12, w: 4, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 55 } }, Database: { acquireCount: { r: 2, w: 3, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, oplog: { acquireCount: { r: 1, w: 2 } } } protocol:op_msg 102ms
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [rsSync-0] transition to SECONDARY from RECOVERING
mongo_1       | 2019-04-29T09:11:51.492+0000 I REPL     [rsSync-0] conducting a dry run election to see if we could be elected. current term: 0
mongo_1       | 2019-04-29T09:11:51.493+0000 I REPL     [replexec-0] dry election run succeeded, running for election in term 1
mongo_1       | 2019-04-29T09:11:51.493+0000 I STORAGE  [replexec-1] createCollection: local.replset.election with generated UUID: 29faa618-8db6-466f-9ac0-e52dbc02a240
mongo_1       | 2019-04-29T09:11:51.532+0000 I REPL     [replexec-1] election succeeded, assuming primary role in term 1
mongo_1       | 2019-04-29T09:11:51.532+0000 I REPL     [replexec-1] transition to PRIMARY from SECONDARY
mongo_1       | 2019-04-29T09:11:51.532+0000 I REPL     [replexec-1] Resetting sync source to empty, which was :27017
mongo_1       | 2019-04-29T09:11:51.532+0000 I REPL     [replexec-1] Entering primary catch-up mode.
mongo_1       | 2019-04-29T09:11:51.532+0000 I REPL     [replexec-1] Exited primary catch-up mode.
mongo_1       | 2019-04-29T09:11:51.532+0000 I REPL     [replexec-1] Stopping replication producer
mongo_1       | 2019-04-29T09:11:53.497+0000 I STORAGE  [rsSync-0] createCollection: config.transactions with generated UUID: 66906b80-2616-470c-ba21-cad61aba01b6
mongo_1       | 2019-04-29T09:11:53.542+0000 I REPL     [rsSync-0] transition to primary complete; database writes are now permitted
mongo_1       | 2019-04-29T09:11:53.543+0000 I STORAGE  [monitoring keys for HMAC] createCollection: admin.system.keys with generated UUID: e2ccb9a3-b00c-44de-aa9a-11ddfe41bb40
mongo_1       | 2019-04-29T09:11:53.575+0000 I STORAGE  [WTJournalFlusher] Triggering the first stable checkpoint. Initial Data: Timestamp(1556529111, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1556529113, 2)
mongo_1       | 2019-04-29T09:12:14.574+0000 I NETWORK  [conn1] end connection 127.0.0.1:38776 (0 connections now open)
mongo_1       | 2019-04-29T09:12:54.206+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51666 #2 (1 connection now open)
mongo_1       | 2019-04-29T09:12:54.212+0000 I NETWORK  [conn2] received client metadata from 172.20.0.1:51666 conn2: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:12:54.235+0000 I ACCESS   [conn2] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:12:54.300+0000 I NETWORK  [conn2] end connection 172.20.0.1:51666 (0 connections now open)
mongo_1       | 2019-04-29T09:12:56.126+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51670 #3 (1 connection now open)
mongo_1       | 2019-04-29T09:12:56.133+0000 I NETWORK  [conn3] received client metadata from 172.20.0.1:51670 conn3: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:12:56.153+0000 I ACCESS   [conn3] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:12:56.203+0000 I NETWORK  [conn3] end connection 172.20.0.1:51670 (0 connections now open)
mongo_1       | 2019-04-29T09:12:58.021+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51674 #4 (1 connection now open)
mongo_1       | 2019-04-29T09:12:58.028+0000 I NETWORK  [conn4] received client metadata from 172.20.0.1:51674 conn4: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:12:58.049+0000 I ACCESS   [conn4] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:12:58.098+0000 I NETWORK  [conn4] end connection 172.20.0.1:51674 (0 connections now open)
mongo_1       | 2019-04-29T09:13:00.084+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51678 #5 (1 connection now open)
mongo_1       | 2019-04-29T09:13:00.091+0000 I NETWORK  [conn5] received client metadata from 172.20.0.1:51678 conn5: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:13:00.112+0000 I ACCESS   [conn5] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:13:00.158+0000 I NETWORK  [conn5] end connection 172.20.0.1:51678 (0 connections now open)
mongo_1       | 2019-04-29T09:13:02.587+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51682 #6 (1 connection now open)
mongo_1       | 2019-04-29T09:13:02.595+0000 I NETWORK  [conn6] received client metadata from 172.20.0.1:51682 conn6: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:13:02.615+0000 I ACCESS   [conn6] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:13:02.679+0000 I NETWORK  [conn6] end connection 172.20.0.1:51682 (0 connections now open)
mongo_1       | 2019-04-29T09:13:05.915+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51686 #7 (1 connection now open)
mongo_1       | 2019-04-29T09:13:05.922+0000 I NETWORK  [conn7] received client metadata from 172.20.0.1:51686 conn7: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:13:05.946+0000 I ACCESS   [conn7] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:13:05.996+0000 I NETWORK  [conn7] end connection 172.20.0.1:51686 (0 connections now open)
mongo_1       | 2019-04-29T09:13:10.852+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51690 #8 (1 connection now open)
mongo_1       | 2019-04-29T09:13:10.859+0000 I NETWORK  [conn8] received client metadata from 172.20.0.1:51690 conn8: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:13:10.879+0000 I ACCESS   [conn8] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:13:10.939+0000 I NETWORK  [conn8] end connection 172.20.0.1:51690 (0 connections now open)
mongo_1       | 2019-04-29T09:13:12.806+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
mongo_1       | 2019-04-29T09:13:22.811+0000 I STORAGE  [signalProcessingThread] Failed to stepDown in non-command initiated shutdown path ExceededTimeLimit: No electable secondaries caught up as of 2019-04-29T09:13:22.806+0000Please use the replSetStepDown command with the argument {force: true} to force node to step down.
mongo_1       | 2019-04-29T09:13:22.811+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
mongo_1       | 2019-04-29T09:13:22.811+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
mongo_1       | 2019-04-29T09:13:22.812+0000 I REPL     [signalProcessingThread] shutting down replication subsystems
mongo_1       | 2019-04-29T09:13:22.812+0000 I REPL     [signalProcessingThread] Stopping replication reporter thread
mongo_1       | 2019-04-29T09:13:22.814+0000 I REPL     [signalProcessingThread] Stopping replication fetcher thread
mongo_1       | 2019-04-29T09:13:22.815+0000 I REPL     [signalProcessingThread] Stopping replication applier thread
mongo_1       | 2019-04-29T09:13:22.815+0000 I REPL     [rsSync-0] Finished oplog application
mongo_1       | 2019-04-29T09:13:30.374+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongo_1       | 2019-04-29T09:13:30.376+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=0ff97b4a79e0
mongo_1       | 2019-04-29T09:13:30.376+0000 I CONTROL  [initandlisten] db version v4.0.8
mongo_1       | 2019-04-29T09:13:30.376+0000 I CONTROL  [initandlisten] git version: 9b00696ed75f65e1ebc8d635593bed79b290cfbb
mongo_1       | 2019-04-29T09:13:30.376+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongo_1       | 2019-04-29T09:13:30.377+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongo_1       | 2019-04-29T09:13:30.377+0000 I CONTROL  [initandlisten] modules: none
mongo_1       | 2019-04-29T09:13:30.377+0000 I CONTROL  [initandlisten] build environment:
mongo_1       | 2019-04-29T09:13:30.377+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongo_1       | 2019-04-29T09:13:30.377+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongo_1       | 2019-04-29T09:13:30.377+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongo_1       | 2019-04-29T09:13:30.377+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, replication: { oplogSizeMB: 128, replSet: "rs0" }, security: { authorization: "enabled" }, storage: { mmapv1: { smallFiles: true } } }
mongo_1       | 2019-04-29T09:13:30.377+0000 W STORAGE  [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
mongo_1       | 2019-04-29T09:13:30.377+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongo_1       | 2019-04-29T09:13:30.377+0000 W STORAGE  [initandlisten] Recovering data from the last clean checkpoint.
mongo_1       | 2019-04-29T09:13:30.377+0000 I STORAGE  [initandlisten]
mongo_1       | 2019-04-29T09:13:30.377+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongo_1       | 2019-04-29T09:13:30.377+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongo_1       | 2019-04-29T09:13:30.377+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=3476M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongo_1       | 2019-04-29T09:13:31.080+0000 I STORAGE  [initandlisten] WiredTiger message [1556529211:80485][1:0x7f2ccc0e1a40], txn-recover: Main recovery loop: starting at 109/25856 to 110/256
mongo_1       | 2019-04-29T09:13:31.080+0000 I STORAGE  [initandlisten] WiredTiger message [1556529211:80836][1:0x7f2ccc0e1a40], txn-recover: Recovering log 109 through 110
mongo_1       | 2019-04-29T09:13:31.124+0000 I STORAGE  [initandlisten] WiredTiger message [1556529211:124275][1:0x7f2ccc0e1a40], file:collection-0--2869062926131813221.wt, txn-recover: Recovering log 110 through 110
mongo_1       | 2019-04-29T09:13:31.158+0000 I STORAGE  [initandlisten] WiredTiger message [1556529211:158825][1:0x7f2ccc0e1a40], file:collection-0--2869062926131813221.wt, txn-recover: Set global recovery timestamp: 5cc6c01500000001
mongo_1       | 2019-04-29T09:13:31.202+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(1556529173, 1)
mongo_1       | 2019-04-29T09:13:31.203+0000 I STORAGE  [initandlisten] Triggering the first stable checkpoint. Initial Data: Timestamp(1556529173, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1556529173, 1)
mongo_1       | 2019-04-29T09:13:31.887+0000 I STORAGE  [initandlisten] Starting OplogTruncaterThread local.oplog.rs
mongo_1       | 2019-04-29T09:13:31.887+0000 I STORAGE  [initandlisten] The size storer reports that the oplog contains 10 records totaling to 1524 bytes
mongo_1       | 2019-04-29T09:13:31.887+0000 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
mongo_1       | 2019-04-29T09:13:32.016+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
mongo_1       | 2019-04-29T09:13:33.134+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongo_1       | 2019-04-29T09:13:33.136+0000 I REPL     [initandlisten] Rollback ID is 1
mongo_1       | 2019-04-29T09:13:33.137+0000 I REPL     [initandlisten] Recovering from stable timestamp: Timestamp(1556529173, 1) (top of oplog: { ts: Timestamp(1556529183, 1), t: 1 }, appliedThrough: { ts: Timestamp(0, 0), t: -1 }, TruncateAfter: Timestamp(0, 0))
mongo_1       | 2019-04-29T09:13:33.137+0000 I REPL     [initandlisten] Starting recovery oplog application at the stable timestamp: Timestamp(1556529173, 1)
mongo_1       | 2019-04-29T09:13:33.137+0000 I REPL     [initandlisten] Replaying stored operations from { : Timestamp(1556529173, 1) } (exclusive) to { : Timestamp(1556529183, 1) } (inclusive).
mongo_1       | 2019-04-29T09:13:33.138+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongo_1       | 2019-04-29T09:13:33.138+0000 I REPL     [replexec-0] New replica set config in use: { _id: "rs0", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "0ff97b4a79e0:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5cc6bfd73151a521425a33ac') } }
mongo_1       | 2019-04-29T09:13:33.139+0000 I REPL     [replexec-0] This node is 0ff97b4a79e0:27017 in the config
mongo_1       | 2019-04-29T09:13:33.139+0000 I REPL     [replexec-0] transition to STARTUP2 from STARTUP
mongo_1       | 2019-04-29T09:13:33.139+0000 I REPL     [replexec-0] Starting replication storage threads
mongo_1       | 2019-04-29T09:13:33.139+0000 I NETWORK  [LogicalSessionCacheRefresh] Starting new replica set monitor for rs0/0ff97b4a79e0:27017
mongo_1       | 2019-04-29T09:13:33.140+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:56174 #2 (1 connection now open)
mongo_1       | 2019-04-29T09:13:33.140+0000 I NETWORK  [conn2] received client metadata from 172.20.0.2:56174 conn2: { driver: { name: "MongoDB Internal Client", version: "4.0.8" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo_1       | 2019-04-29T09:13:33.140+0000 I REPL     [replexec-0] transition to RECOVERING from STARTUP2
mongo_1       | 2019-04-29T09:13:33.140+0000 I REPL     [replexec-0] Starting replication fetcher thread
mongo_1       | 2019-04-29T09:13:33.140+0000 I REPL     [replexec-0] Starting replication applier thread
mongo_1       | 2019-04-29T09:13:33.140+0000 I REPL     [replexec-0] Starting replication reporter thread
mongo_1       | 2019-04-29T09:13:33.140+0000 I REPL     [rsSync-0] Starting oplog application
mongo_1       | 2019-04-29T09:13:33.141+0000 I NETWORK  [LogicalSessionCacheRefresh] Successfully connected to 0ff97b4a79e0:27017 (1 connections now open to 0ff97b4a79e0:27017 with a 5 second timeout)
mongo_1       | 2019-04-29T09:13:33.141+0000 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set rs0
mongo_1       | 2019-04-29T09:13:33.141+0000 I REPL     [rsSync-0] transition to SECONDARY from RECOVERING
mongo_1       | 2019-04-29T09:13:33.141+0000 I REPL     [rsSync-0] conducting a dry run election to see if we could be elected. current term: 1
mongo_1       | 2019-04-29T09:13:33.141+0000 I REPL     [replexec-0] dry election run succeeded, running for election in term 2
mongo_1       | 2019-04-29T09:13:33.144+0000 I REPL     [replexec-0] election succeeded, assuming primary role in term 2
mongo_1       | 2019-04-29T09:13:33.144+0000 I REPL     [replexec-0] transition to PRIMARY from SECONDARY
mongo_1       | 2019-04-29T09:13:33.144+0000 I REPL     [replexec-0] Resetting sync source to empty, which was :27017
mongo_1       | 2019-04-29T09:13:33.144+0000 I REPL     [replexec-0] Entering primary catch-up mode.
mongo_1       | 2019-04-29T09:13:33.144+0000 I REPL     [replexec-0] Exited primary catch-up mode.
mongo_1       | 2019-04-29T09:13:33.144+0000 I REPL     [replexec-0] Stopping replication producer
mongo_1       | 2019-04-29T09:13:33.641+0000 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set rs0
mongo_1       | 2019-04-29T09:13:34.017+0000 I FTDC     [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
mongo_1       | 2019-04-29T09:13:34.175+0000 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set rs0
mongo_1       | 2019-04-29T09:13:34.676+0000 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set rs0
mongo_1       | 2019-04-29T09:13:35.181+0000 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set rs0
mongo_1       | 2019-04-29T09:13:35.182+0000 I REPL     [rsSync-0] transition to primary complete; database writes are now permitted
mongo_1       | 2019-04-29T09:13:35.684+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:56176 #4 (2 connections now open)
mongo_1       | 2019-04-29T09:13:35.684+0000 I NETWORK  [conn4] received client metadata from 172.20.0.2:56176 conn4: { driver: { name: "MongoDB Internal Client", version: "4.0.8" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo_1       | 2019-04-29T09:13:35.685+0000 I NETWORK  [LogicalSessionCacheRefresh] Successfully connected to 0ff97b4a79e0:27017 (1 connections now open to 0ff97b4a79e0:27017 with a 0 second timeout)
mongo_1       | 2019-04-29T09:14:08.663+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57205 #5 (3 connections now open)
mongo_1       | 2019-04-29T09:14:08.668+0000 I NETWORK  [conn5] received client metadata from 192.168.10.122:57205 conn5: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Windows_NT", name: "win32", architecture: "x64", version: "10.0.17763" }, platform: "Node.js v8.9.3, LE, mongodb-core: 3.1.5", application: { name: "MongoDB Compass Community" } }
mongo_1       | 2019-04-29T09:14:08.689+0000 I ACCESS   [conn5] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:14:08.755+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57206 #6 (4 connections now open)
mongo_1       | 2019-04-29T09:14:08.759+0000 I ACCESS   [conn6] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:14:08.761+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57207 #7 (5 connections now open)
mongo_1       | 2019-04-29T09:14:08.763+0000 I ACCESS   [conn7] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:14:08.794+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57208 #8 (6 connections now open)
mongo_1       | 2019-04-29T09:14:08.801+0000 I ACCESS   [conn8] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:14:11.431+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57209 #9 (7 connections now open)
mongo_1       | 2019-04-29T09:14:11.442+0000 I ACCESS   [conn9] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:00.060+0000 I NETWORK  [conn5] end connection 192.168.10.122:57205 (6 connections now open)
mongo_1       | 2019-04-29T09:15:00.060+0000 I NETWORK  [conn7] end connection 192.168.10.122:57207 (5 connections now open)
mongo_1       | 2019-04-29T09:15:00.060+0000 I NETWORK  [conn6] end connection 192.168.10.122:57206 (4 connections now open)
mongo_1       | 2019-04-29T09:15:00.061+0000 I NETWORK  [conn8] end connection 192.168.10.122:57208 (3 connections now open)
mongo_1       | 2019-04-29T09:15:00.061+0000 I NETWORK  [conn9] end connection 192.168.10.122:57209 (2 connections now open)
mongo_1       | 2019-04-29T09:15:06.884+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57210 #10 (3 connections now open)
mongo_1       | 2019-04-29T09:15:06.885+0000 I NETWORK  [conn10] received client metadata from 192.168.10.122:57210 conn10: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Windows_NT", name: "win32", architecture: "x64", version: "10.0.17763" }, platform: "Node.js v8.9.3, LE, mongodb-core: 3.1.5", application: { name: "MongoDB Compass Community" } }
mongo_1       | 2019-04-29T09:15:06.911+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57211 #11 (4 connections now open)
mongo_1       | 2019-04-29T09:15:06.913+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57212 #12 (5 connections now open)
mongo_1       | 2019-04-29T09:15:11.529+0000 I NETWORK  [conn11] end connection 192.168.10.122:57211 (4 connections now open)
mongo_1       | 2019-04-29T09:15:11.529+0000 I NETWORK  [conn12] end connection 192.168.10.122:57212 (3 connections now open)
mongo_1       | 2019-04-29T09:15:11.529+0000 I NETWORK  [conn10] end connection 192.168.10.122:57210 (2 connections now open)
mongo_1       | 2019-04-29T09:15:17.060+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57213 #13 (3 connections now open)
mongo_1       | 2019-04-29T09:15:17.060+0000 I NETWORK  [conn13] received client metadata from 192.168.10.122:57213 conn13: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Windows_NT", name: "win32", architecture: "x64", version: "10.0.17763" }, platform: "Node.js v8.9.3, LE, mongodb-core: 3.1.5", application: { name: "MongoDB Compass Community" } }
mongo_1       | 2019-04-29T09:15:17.067+0000 I ACCESS   [conn13] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:17.085+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57214 #14 (4 connections now open)
mongo_1       | 2019-04-29T09:15:17.092+0000 I ACCESS   [conn14] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:17.093+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57215 #15 (5 connections now open)
mongo_1       | 2019-04-29T09:15:17.098+0000 I ACCESS   [conn15] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:17.119+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57216 #16 (6 connections now open)
mongo_1       | 2019-04-29T09:15:17.121+0000 I ACCESS   [conn16] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:19.162+0000 I NETWORK  [conn13] end connection 192.168.10.122:57213 (5 connections now open)
mongo_1       | 2019-04-29T09:15:19.162+0000 I NETWORK  [conn15] end connection 192.168.10.122:57215 (4 connections now open)
mongo_1       | 2019-04-29T09:15:19.162+0000 I NETWORK  [conn14] end connection 192.168.10.122:57214 (3 connections now open)
mongo_1       | 2019-04-29T09:15:19.163+0000 I NETWORK  [conn16] end connection 192.168.10.122:57216 (2 connections now open)
mongo_1       | 2019-04-29T09:15:27.988+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57217 #17 (3 connections now open)
mongo_1       | 2019-04-29T09:15:27.988+0000 I NETWORK  [conn17] received client metadata from 192.168.10.122:57217 conn17: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Windows_NT", name: "win32", architecture: "x64", version: "10.0.17763" }, platform: "Node.js v8.9.3, LE, mongodb-core: 3.1.5", application: { name: "MongoDB Compass Community" } }
mongo_1       | 2019-04-29T09:15:27.997+0000 I ACCESS   [conn17] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:28.015+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57218 #18 (4 connections now open)
mongo_1       | 2019-04-29T09:15:28.027+0000 I ACCESS   [conn18] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:28.028+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57219 #19 (5 connections now open)
mongo_1       | 2019-04-29T09:15:28.029+0000 I ACCESS   [conn19] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:28.041+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57220 #20 (6 connections now open)
mongo_1       | 2019-04-29T09:15:28.045+0000 I ACCESS   [conn20] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:29.748+0000 I NETWORK  [conn19] end connection 192.168.10.122:57219 (5 connections now open)
mongo_1       | 2019-04-29T09:15:29.748+0000 I NETWORK  [conn18] end connection 192.168.10.122:57218 (4 connections now open)
mongo_1       | 2019-04-29T09:15:29.748+0000 I NETWORK  [conn17] end connection 192.168.10.122:57217 (2 connections now open)
mongo_1       | 2019-04-29T09:15:29.748+0000 I NETWORK  [conn20] end connection 192.168.10.122:57220 (3 connections now open)
mongo_1       | 2019-04-29T09:15:42.419+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57221 #21 (3 connections now open)
mongo_1       | 2019-04-29T09:15:42.419+0000 I NETWORK  [conn21] received client metadata from 192.168.10.122:57221 conn21: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Windows_NT", name: "win32", architecture: "x64", version: "10.0.17763" }, platform: "Node.js v8.9.3, LE, mongodb-core: 3.1.5", application: { name: "MongoDB Compass Community" } }
mongo_1       | 2019-04-29T09:15:42.427+0000 I ACCESS   [conn21] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:42.446+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57222 #22 (4 connections now open)
mongo_1       | 2019-04-29T09:15:42.452+0000 I ACCESS   [conn22] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:42.453+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57223 #23 (5 connections now open)
mongo_1       | 2019-04-29T09:15:42.456+0000 I ACCESS   [conn23] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:42.477+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57224 #24 (6 connections now open)
mongo_1       | 2019-04-29T09:15:42.478+0000 I ACCESS   [conn24] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:47.458+0000 I NETWORK  [conn23] end connection 192.168.10.122:57223 (4 connections now open)
mongo_1       | 2019-04-29T09:15:47.458+0000 I NETWORK  [conn22] end connection 192.168.10.122:57222 (5 connections now open)
mongo_1       | 2019-04-29T09:15:47.458+0000 I NETWORK  [conn24] end connection 192.168.10.122:57224 (3 connections now open)
mongo_1       | 2019-04-29T09:15:47.458+0000 I NETWORK  [conn21] end connection 192.168.10.122:57221 (2 connections now open)
mongo_1       | 2019-04-29T09:15:50.067+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57225 #25 (3 connections now open)
mongo_1       | 2019-04-29T09:15:50.068+0000 I NETWORK  [conn25] received client metadata from 192.168.10.122:57225 conn25: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Windows_NT", name: "win32", architecture: "x64", version: "10.0.17763" }, platform: "Node.js v8.9.3, LE, mongodb-core: 3.1.5", application: { name: "MongoDB Compass Community" } }
mongo_1       | 2019-04-29T09:15:50.074+0000 I ACCESS   [conn25] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:50.092+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57226 #26 (4 connections now open)
mongo_1       | 2019-04-29T09:15:50.100+0000 I ACCESS   [conn26] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:50.101+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57227 #27 (5 connections now open)
mongo_1       | 2019-04-29T09:15:50.103+0000 I ACCESS   [conn27] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:50.115+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57228 #28 (6 connections now open)
mongo_1       | 2019-04-29T09:15:50.116+0000 I ACCESS   [conn28] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:52.182+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57229 #29 (7 connections now open)
mongo_1       | 2019-04-29T09:15:52.187+0000 I ACCESS   [conn29] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:15:56.489+0000 I NETWORK  [conn27] end connection 192.168.10.122:57227 (5 connections now open)
mongo_1       | 2019-04-29T09:15:56.489+0000 I NETWORK  [conn26] end connection 192.168.10.122:57226 (6 connections now open)
mongo_1       | 2019-04-29T09:15:56.489+0000 I NETWORK  [conn29] end connection 192.168.10.122:57229 (2 connections now open)
mongo_1       | 2019-04-29T09:15:56.489+0000 I NETWORK  [conn28] end connection 192.168.10.122:57228 (4 connections now open)
mongo_1       | 2019-04-29T09:15:56.489+0000 I NETWORK  [conn25] end connection 192.168.10.122:57225 (3 connections now open)
mongo_1       | 2019-04-29T09:16:17.872+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57233 #30 (3 connections now open)
mongo_1       | 2019-04-29T09:16:17.876+0000 I NETWORK  [conn30] received client metadata from 192.168.10.122:57233 conn30: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Windows_NT", name: "win32", architecture: "x64", version: "10.0.17763" }, platform: "Node.js v8.9.3, LE, mongodb-core: 3.1.5", application: { name: "MongoDB Compass Community" } }
mongo_1       | 2019-04-29T09:16:17.896+0000 I ACCESS   [conn30] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:16:17.962+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57234 #31 (4 connections now open)
mongo_1       | 2019-04-29T09:16:17.966+0000 I ACCESS   [conn31] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:16:17.967+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57235 #32 (5 connections now open)
mongo_1       | 2019-04-29T09:16:17.969+0000 I ACCESS   [conn32] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:16:17.985+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57236 #33 (6 connections now open)
mongo_1       | 2019-04-29T09:16:17.987+0000 I ACCESS   [conn33] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:16:19.778+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57237 #34 (7 connections now open)
mongo_1       | 2019-04-29T09:16:19.792+0000 I ACCESS   [conn34] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:16:46.811+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:38810 #35 (8 connections now open)
mongo_1       | 2019-04-29T09:16:46.811+0000 I NETWORK  [conn35] received client metadata from 127.0.0.1:38810 conn35: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.8" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo_1       | 2019-04-29T09:17:13.650+0000 I ACCESS   [conn35] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:17:35.542+0000 I COMMAND  [conn35] initiate : no configuration specified. Using a default configuration for the set
mongo_1       | 2019-04-29T09:17:35.542+0000 I COMMAND  [conn35] created this configuration for initiation : { _id: "rs0", version: 1, members: [ { _id: 0, host: "0ff97b4a79e0:27017" } ] }
mongo_1       | 2019-04-29T09:17:35.542+0000 I REPL     [conn35] replSetInitiate admin command received from client
mongo_1       | 2019-04-29T09:17:50.385+0000 I NETWORK  [conn35] end connection 127.0.0.1:38810 (7 connections now open)
mongo_1       | 2019-04-29T09:18:01.714+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51700 #36 (8 connections now open)
mongo_1       | 2019-04-29T09:18:01.721+0000 I NETWORK  [conn36] received client metadata from 172.20.0.1:51700 conn36: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:18:01.742+0000 I ACCESS   [conn36] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:18:01.863+0000 I NETWORK  [conn36] end connection 172.20.0.1:51700 (7 connections now open)
mongo_1       | 2019-04-29T09:18:03.670+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51704 #37 (8 connections now open)
mongo_1       | 2019-04-29T09:18:03.676+0000 I NETWORK  [conn37] received client metadata from 172.20.0.1:51704 conn37: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:18:03.698+0000 I ACCESS   [conn37] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:18:03.754+0000 I NETWORK  [conn37] end connection 172.20.0.1:51704 (7 connections now open)
mongo_1       | 2019-04-29T09:18:05.553+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51708 #38 (8 connections now open)
mongo_1       | 2019-04-29T09:18:05.569+0000 I NETWORK  [conn38] received client metadata from 172.20.0.1:51708 conn38: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:18:05.590+0000 I ACCESS   [conn38] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:18:05.650+0000 I NETWORK  [conn38] end connection 172.20.0.1:51708 (7 connections now open)
mongo_1       | 2019-04-29T09:18:07.807+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51712 #39 (8 connections now open)
mongo_1       | 2019-04-29T09:18:07.814+0000 I NETWORK  [conn39] received client metadata from 172.20.0.1:51712 conn39: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:18:07.837+0000 I ACCESS   [conn39] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:18:07.890+0000 I NETWORK  [conn39] end connection 172.20.0.1:51712 (7 connections now open)
mongo_1       | 2019-04-29T09:18:10.323+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51716 #40 (8 connections now open)
mongo_1       | 2019-04-29T09:18:10.329+0000 I NETWORK  [conn40] received client metadata from 172.20.0.1:51716 conn40: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:18:10.353+0000 I ACCESS   [conn40] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:18:10.419+0000 I NETWORK  [conn40] end connection 172.20.0.1:51716 (7 connections now open)
mongo_1       | 2019-04-29T09:18:13.725+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51720 #41 (8 connections now open)
mongo_1       | 2019-04-29T09:18:13.732+0000 I NETWORK  [conn41] received client metadata from 172.20.0.1:51720 conn41: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:18:13.755+0000 I ACCESS   [conn41] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:18:13.816+0000 I NETWORK  [conn41] end connection 172.20.0.1:51720 (7 connections now open)
mongo_1       | 2019-04-29T09:18:18.659+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51724 #42 (8 connections now open)
mongo_1       | 2019-04-29T09:18:18.665+0000 I NETWORK  [conn42] received client metadata from 172.20.0.1:51724 conn42: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:18:18.686+0000 I ACCESS   [conn42] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:18:18.827+0000 I NETWORK  [conn42] end connection 172.20.0.1:51724 (7 connections now open)
mongo_1       | 2019-04-29T09:18:26.792+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51728 #43 (8 connections now open)
mongo_1       | 2019-04-29T09:18:26.800+0000 I NETWORK  [conn43] received client metadata from 172.20.0.1:51728 conn43: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:18:26.823+0000 I ACCESS   [conn43] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:18:26.891+0000 I NETWORK  [conn43] end connection 172.20.0.1:51728 (7 connections now open)
mongo_1       | 2019-04-29T09:18:29.957+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
mongo_1       | 2019-04-29T09:18:33.138+0000 I NETWORK  [LogicalSessionCacheRefresh] Starting new replica set monitor for rs0/0ff97b4a79e0:27017
mongo_1       | 2019-04-29T09:18:33.139+0000 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set rs0
mongo_1       | 2019-04-29T09:18:33.139+0000 I NETWORK  [LogicalSessionCacheRefresh] Cannot reach any nodes for set rs0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.
mongo_1       | 2019-04-29T09:18:33.139+0000 I CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Server is shutting down
mongo_1       | 2019-04-29T09:18:47.437+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=0ff97b4a79e0
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten] db version v4.0.8
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten] git version: 9b00696ed75f65e1ebc8d635593bed79b290cfbb
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten] modules: none
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten] build environment:
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongo_1       | 2019-04-29T09:18:47.440+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, replication: { oplogSizeMB: 128, replSet: "rs0" }, security: { authorization: "enabled" }, storage: { mmapv1: { smallFiles: true } } }
mongo_1       | 2019-04-29T09:18:47.440+0000 W STORAGE  [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
mongo_1       | 2019-04-29T09:18:47.440+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongo_1       | 2019-04-29T09:18:47.440+0000 W STORAGE  [initandlisten] Recovering data from the last clean checkpoint.
mongo_1       | 2019-04-29T09:18:47.440+0000 I STORAGE  [initandlisten]
mongo_1       | 2019-04-29T09:18:47.441+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongo_1       | 2019-04-29T09:18:47.441+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongo_1       | 2019-04-29T09:18:47.441+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=3476M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongo_1       | 2019-04-29T09:18:48.128+0000 I STORAGE  [initandlisten] WiredTiger message [1556529528:128618][1:0x7f1f3c5e0a40], txn-recover: Main recovery loop: starting at 110/20736 to 111/256
mongo_1       | 2019-04-29T09:18:48.128+0000 I STORAGE  [initandlisten] WiredTiger message [1556529528:128972][1:0x7f1f3c5e0a40], txn-recover: Recovering log 110 through 111
mongo_1       | 2019-04-29T09:18:48.173+0000 I STORAGE  [initandlisten] WiredTiger message [1556529528:173107][1:0x7f1f3c5e0a40], file:sizeStorer.wt, txn-recover: Recovering log 111 through 111
mongo_1       | 2019-04-29T09:18:48.207+0000 I STORAGE  [initandlisten] WiredTiger message [1556529528:207736][1:0x7f1f3c5e0a40], file:sizeStorer.wt, txn-recover: Set global recovery timestamp: 5cc6c16100000001
mongo_1       | 2019-04-29T09:18:48.248+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(1556529505, 1)
mongo_1       | 2019-04-29T09:18:48.248+0000 I STORAGE  [initandlisten] Triggering the first stable checkpoint. Initial Data: Timestamp(1556529505, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1556529505, 1)
mongo_1       | 2019-04-29T09:18:48.935+0000 I STORAGE  [initandlisten] Starting OplogTruncaterThread local.oplog.rs
mongo_1       | 2019-04-29T09:18:48.935+0000 I STORAGE  [initandlisten] The size storer reports that the oplog contains 42 records totaling to 5560 bytes
mongo_1       | 2019-04-29T09:18:48.935+0000 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
mongo_1       | 2019-04-29T09:18:49.064+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
mongo_1       | 2019-04-29T09:18:50.180+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongo_1       | 2019-04-29T09:18:50.182+0000 I REPL     [initandlisten] Rollback ID is 1
mongo_1       | 2019-04-29T09:18:50.183+0000 I REPL     [initandlisten] Recovering from stable timestamp: Timestamp(1556529505, 1) (top of oplog: { ts: Timestamp(1556529505, 1), t: 2 }, appliedThrough: { ts: Timestamp(0, 0), t: -1 }, TruncateAfter: Timestamp(0, 0))
mongo_1       | 2019-04-29T09:18:50.183+0000 I REPL     [initandlisten] Starting recovery oplog application at the stable timestamp: Timestamp(1556529505, 1)
mongo_1       | 2019-04-29T09:18:50.183+0000 I REPL     [initandlisten] No oplog entries to apply for recovery. Start point is at the top of the oplog.
mongo_1       | 2019-04-29T09:18:50.183+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongo_1       | 2019-04-29T09:18:50.183+0000 I REPL     [replexec-0] New replica set config in use: { _id: "rs0", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "0ff97b4a79e0:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5cc6bfd73151a521425a33ac') } }
mongo_1       | 2019-04-29T09:18:50.183+0000 I REPL     [replexec-0] This node is 0ff97b4a79e0:27017 in the config
mongo_1       | 2019-04-29T09:18:50.183+0000 I REPL     [replexec-0] transition to STARTUP2 from STARTUP
mongo_1       | 2019-04-29T09:18:50.183+0000 I REPL     [replexec-0] Starting replication storage threads
mongo_1       | 2019-04-29T09:18:50.185+0000 I NETWORK  [LogicalSessionCacheRefresh] Starting new replica set monitor for rs0/0ff97b4a79e0:27017
mongo_1       | 2019-04-29T09:18:50.185+0000 I REPL     [replexec-0] transition to RECOVERING from STARTUP2
mongo_1       | 2019-04-29T09:18:50.185+0000 I REPL     [replexec-0] Starting replication fetcher thread
mongo_1       | 2019-04-29T09:18:50.185+0000 I REPL     [replexec-0] Starting replication applier thread
mongo_1       | 2019-04-29T09:18:50.185+0000 I REPL     [replexec-0] Starting replication reporter thread
mongo_1       | 2019-04-29T09:18:50.185+0000 I REPL     [rsSync-0] Starting oplog application
mongo_1       | 2019-04-29T09:18:50.186+0000 I REPL     [rsSync-0] transition to SECONDARY from RECOVERING
mongo_1       | 2019-04-29T09:18:50.186+0000 I REPL     [rsSync-0] conducting a dry run election to see if we could be elected. current term: 2
mongo_1       | 2019-04-29T09:18:50.186+0000 I REPL     [replexec-0] dry election run succeeded, running for election in term 3
mongo_1       | 2019-04-29T09:18:50.186+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:56212 #2 (1 connection now open)
mongo_1       | 2019-04-29T09:18:50.186+0000 I NETWORK  [conn2] received client metadata from 172.20.0.2:56212 conn2: { driver: { name: "MongoDB Internal Client", version: "4.0.8" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo_1       | 2019-04-29T09:18:50.186+0000 I NETWORK  [LogicalSessionCacheRefresh] Successfully connected to 0ff97b4a79e0:27017 (1 connections now open to 0ff97b4a79e0:27017 with a 5 second timeout)
mongo_1       | 2019-04-29T09:18:50.187+0000 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set rs0
mongo_1       | 2019-04-29T09:18:50.188+0000 I REPL     [replexec-0] election succeeded, assuming primary role in term 3
mongo_1       | 2019-04-29T09:18:50.188+0000 I REPL     [replexec-0] transition to PRIMARY from SECONDARY
mongo_1       | 2019-04-29T09:18:50.188+0000 I REPL     [replexec-0] Resetting sync source to empty, which was :27017
mongo_1       | 2019-04-29T09:18:50.188+0000 I REPL     [replexec-0] Entering primary catch-up mode.
mongo_1       | 2019-04-29T09:18:50.189+0000 I REPL     [replexec-0] Exited primary catch-up mode.
mongo_1       | 2019-04-29T09:18:50.189+0000 I REPL     [replexec-0] Stopping replication producer
mongo_1       | 2019-04-29T09:18:50.688+0000 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set rs0
mongo_1       | 2019-04-29T09:18:51.047+0000 I FTDC     [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
mongo_1       | 2019-04-29T09:18:51.197+0000 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set rs0
mongo_1       | 2019-04-29T09:18:51.699+0000 W NETWORK  [LogicalSessionCacheRefresh] Unable to reach primary for set rs0
mongo_1       | 2019-04-29T09:18:52.202+0000 I REPL     [rsSync-0] transition to primary complete; database writes are now permitted
mongo_1       | 2019-04-29T09:18:52.203+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:56214 #4 (2 connections now open)
mongo_1       | 2019-04-29T09:18:52.203+0000 I NETWORK  [conn4] received client metadata from 172.20.0.2:56214 conn4: { driver: { name: "MongoDB Internal Client", version: "4.0.8" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo_1       | 2019-04-29T09:18:52.204+0000 I NETWORK  [LogicalSessionCacheRefresh] Successfully connected to 0ff97b4a79e0:27017 (1 connections now open to 0ff97b4a79e0:27017 with a 0 second timeout)
mongo_1       | 2019-04-29T09:18:52.865+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57255 #5 (3 connections now open)
mongo_1       | 2019-04-29T09:18:52.886+0000 I ACCESS   [conn5] Successfully authenticated as principal mongoadmin on admin
mongo_1       | 2019-04-29T09:19:01.474+0000 I NETWORK  [conn5] end connection 192.168.10.122:57255 (2 connections now open)
mongo_1       | 2019-04-29T09:19:07.215+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57258 #6 (3 connections now open)
mongo_1       | 2019-04-29T09:19:07.215+0000 I NETWORK  [conn6] received client metadata from 192.168.10.122:57258 conn6: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Windows_NT", name: "win32", architecture: "x64", version: "10.0.17763" }, platform: "Node.js v8.9.3, LE, mongodb-core: 3.1.5", application: { name: "MongoDB Compass Community" } }
mongo_1       | 2019-04-29T09:19:07.240+0000 I ACCESS   [conn6] Successfully authenticated as principal oploguser on admin
mongo_1       | 2019-04-29T09:19:07.261+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57261 #7 (4 connections now open)
mongo_1       | 2019-04-29T09:19:07.269+0000 I ACCESS   [conn7] Successfully authenticated as principal oploguser on admin
mongo_1       | 2019-04-29T09:19:07.270+0000 I ACCESS   [conn7] Unauthorized: not authorized on admin to execute command { hostInfo: 1, $clusterTime: { clusterTime: Timestamp(1556529542, 1), signature: { hash: BinData(0, 00000DC953B257E8FA34BEDF6F2548C6EDEA5B9C), keyId: 6685241635606888449 } }, lsid: { id: UUID("ad06f54b-a524-4f77-acc9-e01ef7cff9db") }, $db: "admin" }
mongo_1       | 2019-04-29T09:19:07.271+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57262 #8 (5 connections now open)
mongo_1       | 2019-04-29T09:19:07.274+0000 I ACCESS   [conn8] Successfully authenticated as principal oploguser on admin
mongo_1       | 2019-04-29T09:19:07.306+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57263 #9 (6 connections now open)
mongo_1       | 2019-04-29T09:19:07.312+0000 I ACCESS   [conn9] Successfully authenticated as principal oploguser on admin
mongo_1       | 2019-04-29T09:19:13.891+0000 I ACCESS   [conn7] Unauthorized: not authorized on local to execute command { aggregate: "oplog.rs", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("ad06f54b-a524-4f77-acc9-e01ef7cff9db") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:13.891+0000 I NETWORK  [listener] connection accepted from 192.168.10.122:57265 #10 (7 connections now open)
mongo_1       | 2019-04-29T09:19:13.901+0000 I ACCESS   [conn10] Successfully authenticated as principal oploguser on admin
mongo_1       | 2019-04-29T09:19:13.902+0000 I ACCESS   [conn6] Unauthorized: not authorized on local to execute command { aggregate: "oplog.rs", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("c8122e17-1749-44f8-ad53-cd2701003aa7") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:14.996+0000 I ACCESS   [conn6] Unauthorized: not authorized on local to execute command { aggregate: "replset.election", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("bbc734d6-4df5-4b67-aaac-ae626c0cdd0f") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:15.004+0000 I ACCESS   [conn9] Unauthorized: not authorized on local to execute command { aggregate: "replset.election", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("91300004-25ba-4fa8-85bf-30acbbb754ef") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:15.892+0000 I ACCESS   [conn8] Unauthorized: not authorized on local to execute command { aggregate: "replset.minvalid", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("91300004-25ba-4fa8-85bf-30acbbb754ef") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:15.892+0000 I ACCESS   [conn6] Unauthorized: not authorized on local to execute command { aggregate: "replset.minvalid", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("6b7eead1-cc52-4a69-83b7-a69ab88ceb38") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:16.967+0000 I ACCESS   [conn7] Unauthorized: not authorized on local to execute command { aggregate: "replset.oplogTruncateAfterPoint", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("e2ebcea3-757f-4019-aafc-dd0cdad3c913") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:16.969+0000 I ACCESS   [conn10] Unauthorized: not authorized on local to execute command { aggregate: "replset.oplogTruncateAfterPoint", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("6b7eead1-cc52-4a69-83b7-a69ab88ceb38") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:17.857+0000 I ACCESS   [conn9] Unauthorized: not authorized on local to execute command { aggregate: "startup_log", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("e2ebcea3-757f-4019-aafc-dd0cdad3c913") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:17.858+0000 I ACCESS   [conn9] Unauthorized: not authorized on local to execute command { aggregate: "startup_log", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("bbc734d6-4df5-4b67-aaac-ae626c0cdd0f") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:20.525+0000 I ACCESS   [conn7] Unauthorized: not authorized on local to execute command { aggregate: "oplog.rs", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("c8122e17-1749-44f8-ad53-cd2701003aa7") }, $db: "local" }
mongo_1       | 2019-04-29T09:19:20.526+0000 I ACCESS   [conn6] Unauthorized: not authorized on local to execute command { aggregate: "oplog.rs", pipeline: [ { $indexStats: {} }, { $project: { name: 1, usageHost: "$host", usageCount: "$accesses.ops", usageSince: "$accesses.since" } } ], cursor: {}, $clusterTime: { clusterTime: Timestamp(1556529552, 1), signature: { hash: BinData(0, 1ACF603C72448BD8F0EE2A1CDA59EF5AB872D6B5), keyId: 6685241635606888449 } }, lsid: { id: UUID("aff972f0-a5b6-4798-8b02-c670d909b275") }, $db: "local" }
mongo_1       | 2019-04-29T09:22:54.196+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51736 #11 (8 connections now open)
mongo_1       | 2019-04-29T09:22:54.205+0000 I NETWORK  [conn11] received client metadata from 172.20.0.1:51736 conn11: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:22:54.227+0000 I ACCESS   [conn11] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:22:54.237+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51740 #12 (9 connections now open)
mongo_1       | 2019-04-29T09:22:54.238+0000 I NETWORK  [conn12] received client metadata from 172.20.0.1:51740 conn12: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:22:54.240+0000 I NETWORK  [conn12] end connection 172.20.0.1:51740 (8 connections now open)
mongo_1       | 2019-04-29T09:22:54.272+0000 I NETWORK  [listener] connection accepted from 172.20.0.3:36834 #13 (9 connections now open)
mongo_1       | 2019-04-29T09:22:54.272+0000 I NETWORK  [conn13] received client metadata from 172.20.0.3:36834 conn13: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:22:54.290+0000 I ACCESS   [conn13] Successfully authenticated as principal oploguser on admin
mongo_1       | 2019-04-29T09:22:54.294+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51746 #14 (10 connections now open)
mongo_1       | 2019-04-29T09:22:54.295+0000 I NETWORK  [conn14] received client metadata from 172.20.0.1:51746 conn14: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:22:54.296+0000 I NETWORK  [conn14] end connection 172.20.0.1:51746 (9 connections now open)
mongo_1       | 2019-04-29T09:22:54.299+0000 I NETWORK  [listener] connection accepted from 172.20.0.3:36840 #15 (10 connections now open)
mongo_1       | 2019-04-29T09:22:54.300+0000 I NETWORK  [conn15] received client metadata from 172.20.0.3:36840 conn15: { driver: { name: "nodejs", version: "3.1.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.15.0-47-generic" }, platform: "Node.js v8.11.4, LE, mongodb-core: 3.1.5" }
mongo_1       | 2019-04-29T09:22:54.305+0000 I ACCESS   [conn15] Successfully authenticated as principal oploguser on admin
mongo_1       | 2019-04-29T09:22:55.005+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51752 #16 (11 connections now open)
mongo_1       | 2019-04-29T09:22:55.009+0000 I ACCESS   [conn16] Successfully authenticated as principal rocket on admin
rocketchat    | LocalStore: store created at
rocketchat    | LocalStore: store created at
rocketchat    | LocalStore: store created at
rocketchat    | Setting default file store to GridFS
rocketchat    | Warning: connect.session() MemoryStore is not
rocketchat    | designed for a production environment, as it will leak
rocketchat    | memory, and will not scale past a single process.
mongo_1       | 2019-04-29T09:23:05.500+0000 I STORAGE  [conn11] createCollection: rocketchat.rocketchat_raw_imports with generated UUID: edc20608-422e-42de-a635-8685fd99fe3b
mongo_1       | 2019-04-29T09:23:05.517+0000 I INDEX    [conn11] build index on: rocketchat.rocketchat_raw_imports properties: { v: 2, key: { _updatedAt: 1 }, name: "_updatedAt_1", ns: "rocketchat.rocketchat_raw_imports" }
mongo_1       | 2019-04-29T09:23:05.517+0000 I INDEX    [conn11]         building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongo_1       | 2019-04-29T09:23:05.524+0000 I INDEX    [conn11] build index done.  scanned 0 total records. 0 secs
mongo_1       | 2019-04-29T09:23:06.954+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51756 #17 (12 connections now open)
mongo_1       | 2019-04-29T09:23:06.958+0000 I ACCESS   [conn17] Successfully authenticated as principal rocket on admin
rocketchat    | {"line":"121","file":"migrations.js","message":"Migrations: Not migrating, already at version 143","time":{"$date":1556529788779},"level":"info"}
rocketchat    | ufs: temp directory created at "/tmp/ufs"
rocketchat    | Using GridFS for custom sounds storage
rocketchat    | Using GridFS for custom emoji storage
mongo_1       | 2019-04-29T09:23:09.493+0000 I COMMAND  [conn17] CMD: drop rocketchat.rocketchat_raw_imports
mongo_1       | 2019-04-29T09:23:09.495+0000 I STORAGE  [conn17] dropCollection: rocketchat.rocketchat_raw_imports (edc20608-422e-42de-a635-8685fd99fe3b) - renaming to drop-pending collection: rocketchat.system.drop.1556529789i94t3.rocketchat_raw_imports with drop optime { ts: Timestamp(1556529789, 94), t: 3 }
mongo_1       | 2019-04-29T09:23:09.495+0000 I STORAGE  [conn17] renameCollection: renaming collection edc20608-422e-42de-a635-8685fd99fe3b from rocketchat.rocketchat_raw_imports to rocketchat.system.drop.1556529789i94t3.rocketchat_raw_imports
mongo_1       | 2019-04-29T09:23:09.501+0000 I REPL     [replication-0] Completing collection drop for rocketchat.system.drop.1556529789i94t3.rocketchat_raw_imports with drop optime { ts: Timestamp(1556529789, 94), t: 3 } (notification optime: { ts: Timestamp(1556529789, 94), t: 3 })
mongo_1       | 2019-04-29T09:23:09.504+0000 I STORAGE  [replication-0] Finishing collection drop for rocketchat.system.drop.1556529789i94t3.rocketchat_raw_imports (edc20608-422e-42de-a635-8685fd99fe3b).
mongo_1       | 2019-04-29T09:23:09.504+0000 I REPL     [replication-1] Completing collection drop for rocketchat.system.drop.1556529789i94t3.rocketchat_raw_imports with drop optime { ts: Timestamp(1556529789, 94), t: 3 } (notification optime: { ts: Timestamp(1556529789, 95), t: 3 })
rocketchat    | Updating process.env.MAIL_URL
mongo_1       | 2019-04-29T09:23:09.818+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51760 #18 (13 connections now open)
mongo_1       | 2019-04-29T09:23:09.860+0000 I ACCESS   [conn18] Successfully authenticated as principal rocket on admin
mongo_1       | 2019-04-29T09:23:10.018+0000 I NETWORK  [listener] connection accepted from 172.20.0.1:51764 #19 (14 connections now open)
mongo_1       | 2019-04-29T09:23:10.021+0000 I ACCESS   [conn19] Successfully authenticated as principal rocket on admin
rocketchat    | Users with admin role already exist; Ignoring environment variables ADMIN_PASS
mongo_1       | 2019-04-29T09:23:11.156+0000 I ACCESS   [conn18] Unauthorized: not authorized on rocketchat to execute command { serverStatus: 1, $clusterTime: { clusterTime: Timestamp(1556529791, 29), signature: { hash: BinData(0, DCCAF715D8BF2A95ECDCBDF3D24766F95B40CFA1), keyId: 6685241635606888449 } }, lsid: { id: UUID("244e66fa-2798-4955-9285-cfce5fe5a558") }, $db: "rocketchat" }
rocketchat    | Error getting MongoDB version
rocketchat    | Exception in setTimeout callback: TypeError: Invalid Version: Error getting version
rocketchat    |     at new SemVer (/app/bundle/programs/server/npm/node_modules/semver/semver.js:312:11)
rocketchat    |     at Range.test (/app/bundle/programs/server/npm/node_modules/semver/semver.js:1137:15)
rocketchat    |     at Function.satisfies (/app/bundle/programs/server/npm/node_modules/semver/semver.js:1189:16)
rocketchat    |     at server/startup/serverRunning.js:65:15
rocketchat    |     at Meteor.EnvironmentVariable.EVp.withValue (packages/meteor.js:1304:12)
rocketchat    |     at packages/meteor.js:620:25
rocketchat    |     at runWithEnvironment (packages/meteor.js:1356:24)
mongo_1       | 2019-04-29T09:23:11.418+0000 I ACCESS   [conn18] Unauthorized: not authorized on rocketchat to execute command { serverStatus: 1, $clusterTime: { clusterTime: Timestamp(1556529791, 29), signature: { hash: BinData(0, DCCAF715D8BF2A95ECDCBDF3D24766F95B40CFA1), keyId: 6685241635606888449 } }, lsid: { id: UUID("90d5bbec-c2e9-45db-b62d-eeb134ea948e") }, $db: "rocketchat" }
rocketchat    | Error getting MongoDB info

@fabien4455
Author

The problem is not the oplog anymore, but that Rocket.Chat can't see the MongoDB version. Why?
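
For anyone debugging the same thing, the version itself is easy to check by hand from the mongo shell. A minimal sanity check, assuming you can authenticate with any user (buildInfo normally needs no special role):

db.version()
db.runCommand({ buildInfo: 1 }).version

If those return "4.0.8" while Rocket.Chat still prints "Error getting version", the problem is likely the permissions on the commands Rocket.Chat runs, not the server itself.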

@fabien4455
Author

I tried switching to the mongoadmin account at the line:

So now it's:

It works, but that's really insecure... Why isn't it working with the rocket user anymore?

@reetp

reetp commented Apr 29, 2019

Can you paste long logs like that on a gist or something? They clutter the issue and make it hard to read.

Just paste relevant snippets - probably this:

Error getting MongoDB info

If you then search the issues, you will probably find this:

#14298

@fabien4455
Author

I'll upload the logs, please wait.

@fabien4455
Author

fabien4455 commented Apr 29, 2019

OK, even when I use the rocket user, it works. The only problems are:

Rocket.Chat doesn't report that it is up on IP:3000 (in the terminal),
and I don't have any MongoDB information in the administration panel. But it works.

@fabien4455
Author

Running docker-compose with the admin user // in the Rocket.Chat administration panel:

[screenshot]

Running docker-compose with the rocket user // in the Rocket.Chat administration panel:

[screenshot]

Here are the files:

docker-compose up-rocket_user.log
docker-compose up-admin_user.log
docker-compose.yml.log

@fabien4455
Author

fabien4455 commented Apr 29, 2019

I think the problem is: adding the MongoDB version check to the administration panel causes trouble for docker-compose setups.

If Rocket.Chat can't access the admin user, it can't know the MongoDB version,

so it doesn't report that Rocket.Chat is working.
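
The log above points at the exact call: the serverStatus command is what gets denied. A minimal reproduction from the mongo shell, assuming you connect as the same rocket user from the compose file:

db.getSiblingDB("rocketchat").runCommand({ serverStatus: 1 })
// => { ok: 0, errmsg: "not authorized on rocketchat to execute command { serverStatus: 1, ... }", code: 13 }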

@rodrigok
Member

@fabien4455 you can add the clusterMonitor role to your user and it will have access to that info.

More information in this thread DataDog/dd-agent#318 (comment)
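
For reference, the grant is a one-liner in the mongo shell; a sketch assuming the rocket user was created on the admin database, as in the compose file:

db.getSiblingDB("admin").grantRolesToUser("rocket", [{ role: "clusterMonitor", db: "admin" }])

clusterMonitor is read-only monitoring access, so it is a much smaller step than connecting as mongoadmin.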

@rodrigok rodrigok added this to the 1.0.2 milestone Apr 29, 2019
@dimm0

dimm0 commented Apr 29, 2019

Before upgrading you should always do a full database backup and make a copy of the old Rocket.Chat server folder. That's all you need to revert to any previous version.

Note to self: never run Rocket.Chat in Kubernetes with the :latest tag. A simple restart might kill everything.

@fabien4455
Author

OK, it works! I ran:

db.runCommand({grantRolesToUser:"rocket",roles:["clusterMonitor"]})

And now it's working ;)

Thanks!
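
To double-check that the grant took effect, you can list the user's roles (again assuming the user lives on the admin database):

db.getSiblingDB("admin").getUser("rocket").roles
// should now include { "role" : "clusterMonitor", "db" : "admin" }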

@rodrigok
Member

@fabien4455 great, we will add that to the error logs and our documentation.

@fabien4455
Author

Perfect, you can close the issue :)

@rodrigok
Member

@fabien4455 I'll keep it open until we add the info to the error log and docs, then close.

@10RUPTiV

Just a stupid question... for people using Rocket.Chat on a single VM, how do we deal with the new "replica" requirement?

@10RUPTiV

@fabien4455 Did you "migrate" from a Rocket.Chat setup using a single VM, or did you already have a Mongo setup using multiple instances?

@rodrigok
Member

@PointPubMedia you do not need multiple MongoDB instances; you just need to enable replica set mode on your single instance to enable the oplog stream where Rocket.Chat listens for data changes.
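
Concretely, that means starting mongod with a replica set name and initiating a one-member set. A minimal sketch, assuming a plain single-node install and the arbitrary set name rs0:

// 1. Start mongod with --replSet rs0 (or replication.replSetName: rs0 in mongod.conf)
// 2. Then, once, from the mongo shell:
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "localhost:27017" }] })
rs.status().myState   // 1 means PRIMARY: the single-node set is up

After that, Rocket.Chat is pointed at the oplog via the MONGO_OPLOG_URL environment variable (a URL for the local database, e.g. mongodb://localhost:27017/local).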

@10RUPTiV

@rodrigok Would really need some help with our setup on Debian :(

@rodrigok
Member

@PointPubMedia contact me at https://open.rocket.chat/direct/rodrigo.nascimento please
