Welcome to the elasticsearch-river-mongodb wiki!
This river uses MongoDB as its datasource. It supports attachments (GridFS).
The main branch supports ElasticSearch 0.20.5 and MongoDB 2.2.3.
The current implementation monitors the oplog collection of the local database. Make sure to enable a replica set [0]; master/slave replication is not supported. Monitoring is done using a tailable cursor [1]. All operations are supported: insert, update, delete.
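Enabling a minimal single-node replica set can be sketched as follows (the data path and the replica-set name rs0 are arbitrary assumptions; see the replica set tutorial [0] for details):

```shell
# Start mongod with a replica set name (rs0 is an arbitrary choice)
mongod --replSet rs0 --dbpath /data/db --port 27017

# Then, once, from a mongo shell connected to that instance:
#   > rs.initiate()
# The oplog (local.oplog.rs) becomes available after initiation.
```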
The MongoDB document is stored in ElasticSearch without transformation. The document stored in ElasticSearch uses the same id as in MongoDB.
A GridFS Mongo document (with large binary content) requires transformation before being stored in ElasticSearch, and requires the mapper-attachments plugin [2] to be installed in ElasticSearch. A specific mapping is needed to support GridFS Mongo documents in ElasticSearch:
- Mapping for attachment:
{
  "testindex": {
    "files": {
      "properties": {
        "content": {
          "path": "full",
          "type": "attachment",
          "fields": {
            "content": { "type": "string" },
            "author": { "type": "string" },
            "title": { "type": "string" },
            "keywords": { "type": "string" },
            "date": { "format": "dateOptionalTime", "type": "date" },
            "content_type": { "type": "string" }
          }
        },
        "chunkSize": { "type": "long" },
        "md5": { "type": "string" },
        "length": { "type": "long" },
        "filename": { "type": "string" },
        "contentType": { "type": "string" },
        "uploadDate": { "format": "dateOptionalTime", "type": "date" },
        "metadata": { "type": "object" }
      }
    }
  }
}
- content.content is the base64-encoded binary content.
- content.title is the GridFS filename property.
- content.content_type is the GridFS content type property.
- chunkSize, md5, length, filename, contentType and uploadDate map the corresponding properties from the GridFS MongoDB document.
- metadata (optional) maps the metadata properties from the GridFS MongoDB document.
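Using the index and type names from the mapping above (testindex / files), the mapping can be registered before creating the river with something like this (the body below is abbreviated; the full mapping above should be used):

```shell
# Create the index (default settings), then put the "files" type mapping
curl -XPUT "localhost:9200/testindex"
curl -XPUT "localhost:9200/testindex/files/_mapping" -d '{
  "files": {
    "properties": {
      "content": { "path": "full", "type": "attachment" }
    }
  }
}'
```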
The plugin depends on elasticsearch-mapper-attachments.
- In order to install this plugin, simply run:
%ES_HOME%\bin\plugin.bat -install elasticsearch/elasticsearch-mapper-attachments/1.6.0
- To install the river plugin, run:
%ES_HOME%\bin\plugin.bat -install richardwilly98/elasticsearch-river-mongodb/1.6.4
- Since ES 0.20.2, plugins cannot be installed with the command line above. As a temporary workaround, please execute:
%ES_HOME%\bin\plugin.bat -url https://github.com/downloads/richardwilly98/elasticsearch-river-mongodb/elasticsearch-river-mongodb-1.6.4.zip -install river-mongodb
- Restart ElasticSearch.
- Create the directory ES_HOME\plugins\river-mongodb.
- Copy mongo-java-driver-2.10.1.jar and elasticsearch-river-mongodb-1.6.4.jar into ES_HOME\plugins\river-mongodb.
- Restart ElasticSearch.
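After the restart, the server log should mention the loaded plugins; on Windows this can be checked with something like the following (the log file location is an assumption and depends on your configuration):

```shell
findstr /C:"loaded" %ES_HOME%\logs\elasticsearch.log
```

Look for a "loaded [...]" line that lists river-mongodb.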
Create a new river for each MongoDB collection that should be indexed by ElasticSearch.
- Replace ${es.river.name}, ${mongo.db.name}, ${mongo.collection.name}, ${mongo.is.gridfs.collection}, ${es.index.name} and ${es.type.name} with the correct values. The servers, options and credentials parameters are optional.
- ${es.river.name} is the ElasticSearch river name.
- servers is an array of MongoDB instances. If not specified, ${mongo.instance1.host} defaults to localhost and ${mongo.instance1.port} defaults to 27017.
- options defines the MongoDB driver settings (only secondary_read_preference is currently implemented).
- credentials is an array of the credentials required by the databases. db can be 'admin' or 'local'. The credentials are used to connect to local and ${mongo.db.name}. See example [4].
- In index, a throttle size can be set with ${es.throttle.size} (default value: 500).
- In mongo, a custom filter can be added with ${mongo.filter}. For more details see [5].
- From shell execute the command:
$ curl -XPUT "localhost:9200/_river/${es.river.name}/_meta" -d '
{
  "type": "mongodb",
  "mongodb": {
    "servers": [
      { "host": ${mongo.instance1.host}, "port": ${mongo.instance1.port} },
      { "host": ${mongo.instance2.host}, "port": ${mongo.instance2.port} }
    ],
    "options": { "secondary_read_preference": true },
    "credentials": [
      { "db": "local", "user": ${mongo.local.user}, "password": ${mongo.local.password} },
      { "db": "admin", "user": ${mongo.db.user}, "password": ${mongo.db.password} }
    ],
    "db": ${mongo.db.name},
    "collection": ${mongo.collection.name},
    "gridfs": ${mongo.is.gridfs.collection},
    "filter": ${mongo.filter}
  },
  "index": {
    "name": ${es.index.name},
    "throttle_size": ${es.throttle.size},
    "type": ${es.type.name}
  }
}'
- Example:
$ curl -XPUT "localhost:9200/_river/mongogridfs/_meta" -d '
{
  "type": "mongodb",
  "mongodb": {
    "db": "testmongo",
    "collection": "files",
    "gridfs": true
  },
  "index": {
    "name": "testmongo",
    "type": "files"
  }
}'
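Once created, the river framework writes a status document in the _river index; it can be inspected with something like the following (the exact content of the response varies):

```shell
curl -XGET "localhost:9200/_river/mongogridfs/_status?pretty=true"
```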
- Import a PDF file using mongofiles utility.
%MONGO_HOME%\bin\mongofiles.exe --host localhost:27017 --db testmongo --collection files put test-large-document.pdf
connected to: localhost:27017
added file: { _id: ObjectId('4f244b4528a039f8f1178fdd'), filename: "test-large-document.pdf", chunkSize: 262144, uploadDate: new Date(1327778630447), md5: "8ae3c6998db4ebbaf69421464e0c3ff9", length: 50255626 }
done!
- Retrieve the indexed document by the id:
$ curl -XGET "localhost:9200/testmongo/files/4f244b4528a039f8f1178fdd?pretty=true"
- The imported PDF contains the word 'Reference'. Search for it:
$ curl -XGET "localhost:9200/testmongo/files/_search?q=Reference&pretty=true"
The plugin should point to one or more mongos instances. It automatically discovers the available shards (by looking at the config.shards collection) and creates one thread monitoring oplog.rs for each shard.
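For example, a river pointing at two mongos routers might look like this (the host names and the db/collection values here are hypothetical):

```shell
$ curl -XPUT "localhost:9200/_river/mongosharded/_meta" -d '
{
  "type": "mongodb",
  "mongodb": {
    "servers": [
      { "host": "mongos1.example.com", "port": 27017 },
      { "host": "mongos2.example.com", "port": 27017 }
    ],
    "db": "testmongo",
    "collection": "documents"
  },
  "index": {
    "name": "testmongo",
    "type": "documents"
  }
}'
```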
This feature requires the "lang-javascript" plugin.
- In order to install this plugin, simply run:
%ES_HOME%\bin\plugin.bat -install elasticsearch/elasticsearch-lang-javascript/1.2.0
A new attribute "script" should be added to the "mongodb" attribute.
- Example:
Assuming the document in MongoDB has an attribute "title_from_mongo", and this attribute should be mapped to the attribute "title":
$ curl -XPUT "localhost:9200/_river/mongoscriptfilter/_meta" -d '
{
  "type": "mongodb",
  "mongodb": {
    "db": "testmongo",
    "collection": "documents",
    "script": "ctx.document.title = ctx.document.title_from_mongo; delete ctx.document.title_from_mongo;"
  },
  "index": {
    "name": "testmongo",
    "type": "documents"
  }
}'
Assuming the document in MongoDB has an attribute "state": if its value is 'CLOSED', the document should be deleted from the ES index.
$ curl -XPUT "localhost:9200/_river/mongoscriptfilter/_meta" -d '
{
  "type": "mongodb",
  "mongodb": {
    "db": "testmongo",
    "collection": "documents",
    "script": "if( ctx.document.state == \"CLOSED\" ) { ctx.deleted = true; }"
  },
  "index": {
    "name": "testmongo",
    "type": "documents"
  }
}'
Assuming the document in MongoDB has an attribute "score", and only documents with score > 100 should be indexed:
$ curl -XPUT "localhost:9200/_river/mongoscriptfilter/_meta" -d '
{
  "type": "mongodb",
  "mongodb": {
    "db": "testmongo",
    "collection": "documents",
    "script": "if( ctx.document.score <= 100 ) { ctx.ignore = true; }"
  },
  "index": {
    "name": "testmongo",
    "type": "documents"
  }
}'
[0] http://www.mongodb.org/display/DOCS/Replica+Set+Tutorial
[1] http://www.mongodb.org/display/DOCS/Tailable+Cursors
[2] http://www.elasticsearch.org/guide/reference/mapping/attachment-type.html
[3] Ubuntu install script for MongoDB and ElasticSearch
[4] Example river settings with replica set and database authentication
- More details about the oplog collection: http://www.snailinaturtleneck.com/blog/2010/10/12/replication-internals/