Update to MongoDB Java driver 4.0.0 #7868

Merged (1 commit, Mar 24, 2020)
10 changes: 2 additions & 8 deletions bom/runtime/pom.xml
@@ -156,8 +156,7 @@
<snakeyaml.version>1.25</snakeyaml.version>
<osgi.version>6.0.0</osgi.version>
<neo4j-java-driver.version>4.0.0</neo4j-java-driver.version>
<mongo-client.version>3.12.0</mongo-client.version>
<mongo-reactivestreams-client.version>1.13.0</mongo-reactivestreams-client.version>
<mongo-client.version>4.0.0</mongo-client.version>
<mongo-crypt.version>1.0.0</mongo-crypt.version>
<artemis.version>2.11.0</artemis.version>
<proton-j.version>0.33.3</proton-j.version>
@@ -1637,11 +1636,6 @@
<artifactId>mongodb-driver-sync</artifactId>
<version>${mongo-client.version}</version>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongodb-driver-async</artifactId>
<version>${mongo-client.version}</version>
</dependency>
<!-- mongodb-driver-legacy is not needed for Quarkus but we add the dependency for backward compatibility -->
<dependency>
<groupId>org.mongodb</groupId>
@@ -1651,7 +1645,7 @@
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongodb-driver-reactivestreams</artifactId>
<version>${mongo-reactivestreams-client.version}</version>
<version>${mongo-client.version}</version>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
@@ -10,11 +10,35 @@

import com.mongodb.MongoNamespace;
import com.mongodb.bulk.BulkWriteResult;
import com.mongodb.client.model.*;
import com.mongodb.client.model.BulkWriteOptions;
import com.mongodb.client.model.CountOptions;
import com.mongodb.client.model.CreateIndexOptions;
import com.mongodb.client.model.DeleteOptions;
import com.mongodb.client.model.DropIndexOptions;
import com.mongodb.client.model.EstimatedDocumentCountOptions;
import com.mongodb.client.model.FindOneAndDeleteOptions;
import com.mongodb.client.model.FindOneAndReplaceOptions;
import com.mongodb.client.model.FindOneAndUpdateOptions;
import com.mongodb.client.model.IndexModel;
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.InsertManyOptions;
import com.mongodb.client.model.InsertOneOptions;
import com.mongodb.client.model.RenameCollectionOptions;
import com.mongodb.client.model.ReplaceOptions;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.model.WriteModel;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import com.mongodb.client.result.DeleteResult;
import com.mongodb.client.result.InsertManyResult;
import com.mongodb.client.result.InsertOneResult;
import com.mongodb.client.result.UpdateResult;
import com.mongodb.reactivestreams.client.*;
import com.mongodb.reactivestreams.client.AggregatePublisher;
import com.mongodb.reactivestreams.client.ChangeStreamPublisher;
import com.mongodb.reactivestreams.client.ClientSession;
import com.mongodb.reactivestreams.client.DistinctPublisher;
import com.mongodb.reactivestreams.client.FindPublisher;
import com.mongodb.reactivestreams.client.ListIndexesPublisher;
import com.mongodb.reactivestreams.client.MapReducePublisher;

/**
* A reactive API to interact with a Mongo collection.
@@ -478,7 +502,7 @@ <D> PublisherBuilder<D> distinct(ClientSession clientSession, String fieldName,
* @param pipeline the aggregate pipeline
* @return a stream containing the result of the aggregation operation
*/
AggregatePublisher<Document> aggregateAsPublisher(List<? extends Bson> pipeline);
AggregatePublisher<T> aggregateAsPublisher(List<? extends Bson> pipeline);
Member (cescoffier) commented:
I'm not sure whether this should be considered a breaking change. Initially it only contained Document; now it can be any type.

I would first use TDocument, as the driver does, to indicate that it is not just any type but must somehow be a document. The lack of a constraint in the driver is a bit odd.

Contributor replied:
@cescoffier TDocument means any type T that can be converted to a document. I agree it is a bit odd, as it doesn't convey any constraint. Since we register the POJO codec, almost any type can be converted automatically (except for some deep generic types), so maybe T is OK.
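
As a side note for readers, the automatic conversion mentioned above relies on the driver's POJO codec support. Below is a minimal sketch of how a codec registry with automatic POJO mapping is typically assembled with the 4.x driver; the class and method names are illustrative and not part of this PR.

import com.mongodb.MongoClientSettings;
import org.bson.codecs.configuration.CodecRegistries;
import org.bson.codecs.configuration.CodecRegistry;
import org.bson.codecs.pojo.PojoCodecProvider;

public class PojoCodecExample {

    public static CodecRegistry pojoAwareRegistry() {
        // Automatic POJO mapping: most plain Java classes can be encoded and
        // decoded without hand-written codecs, which is why an unconstrained
        // <T> (rather than a TDocument-style bound) is workable here.
        CodecRegistry pojoRegistry = CodecRegistries.fromProviders(
                PojoCodecProvider.builder().automatic(true).build());
        // Combine the driver's default codecs with the POJO provider.
        return CodecRegistries.fromRegistries(
                MongoClientSettings.getDefaultCodecRegistry(), pojoRegistry);
    }
}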


/**
* Aggregates documents according to the specified aggregation pipeline.
@@ -497,7 +521,7 @@ <D> PublisherBuilder<D> distinct(ClientSession clientSession, String fieldName,
* @param pipeline the aggregate pipeline
* @return a stream containing the result of the aggregation operation
*/
AggregatePublisher<Document> aggregateAsPublisher(ClientSession clientSession, List<? extends Bson> pipeline);
AggregatePublisher<T> aggregateAsPublisher(ClientSession clientSession, List<? extends Bson> pipeline);

/**
* Aggregates documents according to the specified aggregation pipeline.
@@ -516,7 +540,7 @@ <D> PublisherBuilder<D> distinct(ClientSession clientSession, String fieldName,
* @param pipeline the aggregate pipeline
* @return a stream containing the result of the aggregation operation
*/
PublisherBuilder<Document> aggregate(List<? extends Bson> pipeline);
PublisherBuilder<T> aggregate(List<? extends Bson> pipeline);

/**
* Aggregates documents according to the specified aggregation pipeline.
@@ -535,7 +559,7 @@ <D> PublisherBuilder<D> distinct(ClientSession clientSession, String fieldName,
* @param pipeline the aggregate pipeline
* @return a stream containing the result of the aggregation operation
*/
PublisherBuilder<Document> aggregate(ClientSession clientSession, List<? extends Bson> pipeline);
PublisherBuilder<T> aggregate(ClientSession clientSession, List<? extends Bson> pipeline);

/**
* Aggregates documents according to the specified aggregation pipeline.
@@ -557,7 +581,7 @@ <D> PublisherBuilder<D> distinct(ClientSession clientSession, String fieldName,
* @param options the stream options
* @return a stream containing the result of the aggregation operation
*/
PublisherBuilder<Document> aggregate(List<? extends Bson> pipeline, AggregateOptions options);
PublisherBuilder<T> aggregate(List<? extends Bson> pipeline, AggregateOptions options);

/**
* Aggregates documents according to the specified aggregation pipeline.
@@ -578,7 +602,7 @@ <D> PublisherBuilder<D> distinct(ClientSession clientSession, String fieldName,
* @param options the stream options
* @return a stream containing the result of the aggregation operation
*/
PublisherBuilder<Document> aggregate(ClientSession clientSession, List<? extends Bson> pipeline, AggregateOptions options);
PublisherBuilder<T> aggregate(ClientSession clientSession, List<? extends Bson> pipeline, AggregateOptions options);
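
As a usage illustration of the typed aggregate signatures above: this is a sketch only, assuming a ReactiveMongoCollection<Fruit> obtained elsewhere (the Fruit type is hypothetical, and the ReactiveMongoCollection import is omitted because the file's package is not shown in this diff). PublisherBuilder is the MicroProfile Reactive Streams Operators type, so the stream is materialized with toList().run().

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletionStage;

import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import org.bson.conversions.Bson;

public class TypedAggregateExample {

    // Hypothetical domain type used only for illustration.
    public static class Fruit {
        public String name;
        public String color;
    }

    public static CompletionStage<List<Fruit>> redFruits(ReactiveMongoCollection<Fruit> fruits) {
        List<Bson> pipeline = Arrays.asList(
                Aggregates.match(Filters.eq("color", "red")),
                Aggregates.limit(10));
        // Before this change the aggregation produced Document instances; with
        // the typed signature the results carry the collection's type parameter.
        return fruits.aggregate(pipeline).toList().run();
    }
}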

/**
* Aggregates documents according to the specified aggregation pipeline.
@@ -825,28 +849,28 @@ PublisherBuilder<ChangeStreamDocument<Document>> watch(ClientSession clientSessi
<D> PublisherBuilder<ChangeStreamDocument<D>> watch(ClientSession clientSession, List<? extends Bson> pipeline,
Class<D> clazz, ChangeStreamOptions options);

MapReducePublisher<Document> mapReduceAsPublisher(String mapFunction, String reduceFunction);
MapReducePublisher<T> mapReduceAsPublisher(String mapFunction, String reduceFunction);

<D> MapReducePublisher<D> mapReduceAsPublisher(String mapFunction, String reduceFunction, Class<D> clazz);

MapReducePublisher<Document> mapReduceAsPublisher(ClientSession clientSession, String mapFunction, String reduceFunction);
MapReducePublisher<T> mapReduceAsPublisher(ClientSession clientSession, String mapFunction, String reduceFunction);

<D> MapReducePublisher<D> mapReduceAsPublisher(ClientSession clientSession, String mapFunction, String reduceFunction,
Class<D> clazz);

PublisherBuilder<Document> mapReduce(String mapFunction, String reduceFunction);
PublisherBuilder<T> mapReduce(String mapFunction, String reduceFunction);

<D> PublisherBuilder<D> mapReduce(String mapFunction, String reduceFunction, Class<D> clazz);

PublisherBuilder<Document> mapReduce(ClientSession clientSession, String mapFunction, String reduceFunction);
PublisherBuilder<T> mapReduce(ClientSession clientSession, String mapFunction, String reduceFunction);

<D> PublisherBuilder<D> mapReduce(ClientSession clientSession, String mapFunction, String reduceFunction, Class<D> clazz);

PublisherBuilder<Document> mapReduce(String mapFunction, String reduceFunction, MapReduceOptions options);
PublisherBuilder<T> mapReduce(String mapFunction, String reduceFunction, MapReduceOptions options);

<D> PublisherBuilder<D> mapReduce(String mapFunction, String reduceFunction, Class<D> clazz, MapReduceOptions options);

PublisherBuilder<Document> mapReduce(ClientSession clientSession, String mapFunction, String reduceFunction,
PublisherBuilder<T> mapReduce(ClientSession clientSession, String mapFunction, String reduceFunction,
MapReduceOptions options);

<D> PublisherBuilder<D> mapReduce(ClientSession clientSession, String mapFunction, String reduceFunction, Class<D> clazz,
@@ -896,7 +920,7 @@ CompletionStage<BulkWriteResult> bulkWrite(ClientSession clientSession, List<? e
* @return a completion stage completed successfully when the operation completes, or completed exceptionally with
* either a {@link com.mongodb.DuplicateKeyException} or {@link com.mongodb.MongoException}
*/
CompletionStage<Void> insertOne(T document);
CompletionStage<InsertOneResult> insertOne(T document);

/**
* Inserts the provided document. If the document is missing an identifier, the driver should generate one.
@@ -906,7 +930,7 @@ CompletionStage<BulkWriteResult> bulkWrite(ClientSession clientSession, List<? e
* @return a completion stage completed successfully when the operation completes, or completed exceptionally with
* either a {@link com.mongodb.DuplicateKeyException} or {@link com.mongodb.MongoException}
*/
CompletionStage<Void> insertOne(T document, InsertOneOptions options);
CompletionStage<InsertOneResult> insertOne(T document, InsertOneOptions options);

/**
* Inserts the provided document. If the document is missing an identifier, the driver should generate one.
@@ -916,7 +940,7 @@ CompletionStage<BulkWriteResult> bulkWrite(ClientSession clientSession, List<? e
* @return a completion stage completed successfully when the operation completes, or completed exceptionally with
* either a {@link com.mongodb.DuplicateKeyException} or {@link com.mongodb.MongoException}
*/
CompletionStage<Void> insertOne(ClientSession clientSession, T document);
CompletionStage<InsertOneResult> insertOne(ClientSession clientSession, T document);

/**
* Inserts the provided document. If the document is missing an identifier, the driver should generate one.
@@ -927,7 +951,7 @@ CompletionStage<BulkWriteResult> bulkWrite(ClientSession clientSession, List<? e
* @return a completion stage completed successfully when the operation completes, or completed exceptionally with
* either a {@link com.mongodb.DuplicateKeyException} or {@link com.mongodb.MongoException}
*/
CompletionStage<Void> insertOne(ClientSession clientSession, T document, InsertOneOptions options);
CompletionStage<InsertOneResult> insertOne(ClientSession clientSession, T document, InsertOneOptions options);

/**
* Inserts a batch of documents. The preferred way to perform bulk inserts is to use the BulkWrite API.
@@ -936,7 +960,7 @@ CompletionStage<BulkWriteResult> bulkWrite(ClientSession clientSession, List<? e
* @return a completion stage completed successfully when the operation completes, or completed exceptionally with
* either a {@link com.mongodb.DuplicateKeyException} or {@link com.mongodb.MongoException}
*/
CompletionStage<Void> insertMany(List<? extends T> documents);
CompletionStage<InsertManyResult> insertMany(List<? extends T> documents);

/**
* Inserts a batch of documents. The preferred way to perform bulk inserts is to use the BulkWrite API.
@@ -946,7 +970,7 @@ CompletionStage<BulkWriteResult> bulkWrite(ClientSession clientSession, List<? e
* @return a completion stage completed successfully when the operation completes, or completed exceptionally with
* either a {@link com.mongodb.DuplicateKeyException} or {@link com.mongodb.MongoException}
*/
CompletionStage<Void> insertMany(List<? extends T> documents, InsertManyOptions options);
CompletionStage<InsertManyResult> insertMany(List<? extends T> documents, InsertManyOptions options);

/**
* Inserts a batch of documents. The preferred way to perform bulk inserts is to use the BulkWrite API.
@@ -956,7 +980,7 @@ CompletionStage<BulkWriteResult> bulkWrite(ClientSession clientSession, List<? e
* @return a completion stage completed successfully when the operation completes, or completed exceptionally with
* either a {@link com.mongodb.DuplicateKeyException} or {@link com.mongodb.MongoException}
*/
CompletionStage<Void> insertMany(ClientSession clientSession, List<? extends T> documents);
CompletionStage<InsertManyResult> insertMany(ClientSession clientSession, List<? extends T> documents);

/**
* Inserts a batch of documents. The preferred way to perform bulk inserts is to use the BulkWrite API.
@@ -967,7 +991,8 @@ CompletionStage<BulkWriteResult> bulkWrite(ClientSession clientSession, List<? e
* @return a completion stage completed successfully when the operation completes, or completed exceptionally with
* either a {@link com.mongodb.DuplicateKeyException} or {@link com.mongodb.MongoException}
*/
CompletionStage<Void> insertMany(ClientSession clientSession, List<? extends T> documents, InsertManyOptions options);
CompletionStage<InsertManyResult> insertMany(ClientSession clientSession, List<? extends T> documents,
InsertManyOptions options);

/**
* Removes at most one document from the collection that matches the given filter.
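
To close, a short sketch of how the new insert return types might be consumed. It assumes InsertOneResult#getInsertedId() and InsertManyResult#getInsertedIds() from the 4.x driver and a ReactiveMongoCollection<T> obtained elsewhere (import again omitted); the helper class is illustrative only.

import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletionStage;

import com.mongodb.client.result.InsertManyResult;
import com.mongodb.client.result.InsertOneResult;
import org.bson.BsonValue;

public class InsertResultExample {

    // insertOne used to complete with Void; the completion value now carries
    // the generated identifier (assuming InsertOneResult#getInsertedId()).
    public static <T> CompletionStage<BsonValue> insertAndGetId(
            ReactiveMongoCollection<T> collection, T entity) {
        return collection.insertOne(entity).thenApply(InsertOneResult::getInsertedId);
    }

    // insertMany now reports the ids of all inserted documents, keyed by their
    // position in the input list (assuming InsertManyResult#getInsertedIds()).
    public static <T> CompletionStage<Map<Integer, BsonValue>> insertAllAndGetIds(
            ReactiveMongoCollection<T> collection, List<? extends T> entities) {
        return collection.insertMany(entities).thenApply(InsertManyResult::getInsertedIds);
    }
}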