Fix bugs related to JDBC driver precedence and PostgreSQL JDBC subprotocol #143
Conversation
@yhuai @marmbrus, the JDBC driver precedence issue is not necessarily a bug given our user-facing documentation (if you carefully read what the documentation actually promises), but the current behavior is nonintuitive to me and I think that the new behavior implemented here is easier to reason about and more in line with what users might expect. I'd like to discuss whether we should make a similar change in Spark's JDBC data source, since I think that the new behavior implemented in this patch would allow us to work around some of the other JDBC driver precedence issues that we've seen in Spark (such as the issue with the Redshift driver taking precedence over the Postgres one). This change was motivated by the difficulties that I had while trying to write the Postgres integration suite that's included in this patch.
Looks good to me.
Cool. Let me take a look at the README / code comments to make sure that they reflect this change, then I'll merge. I'm going to open a PR against Spark so that we mirror the same behavior there.
// At the same time, we don't want to create a driver-per-connection, so we use the
// DriverManager's driver instances to handle that singleton logic for us.
val driver: Driver = DriverManager.getDrivers.asScala.collectFirst {
  case d if d.getClass.getCanonicalName == driverClass => d
I just realized that there's a bit of a bug here: if `d` is an instance of `DriverWrapper` then we need to compare against `d.wrapped.getClass.getCanonicalName`. I'll open a followup PR to fix this.
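For illustration, here is a rough sketch of the comparison such a followup might use. The `DriverWrapper` name check and the `wrapped` accessor are assumptions taken from the comment above, resolved reflectively because the wrapper is a Spark-internal class that may not be on the compile-time classpath:

```scala
import java.sql.Driver

// Hedged sketch: when a registered driver is a DriverWrapper, compare against the
// class of the driver it wraps rather than the wrapper itself. The name check and
// the `wrapped` accessor are assumptions based on the review comment above.
def underlyingDriverClassName(d: Driver): String = {
  val clazz = d.getClass
  if (clazz.getName.endsWith("DriverWrapper")) {
    // Resolve the wrapped driver reflectively, since DriverWrapper is Spark-internal.
    val wrapped = clazz.getMethod("wrapped").invoke(d)
    wrapped.getClass.getCanonicalName
  } else {
    clazz.getCanonicalName
  }
}

// The match in the diff above would then become:
//   case d if underlyingDriverClassName(d) == driverClass => d
```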
Spark SQL's JDBC data source allows users to specify an explicit JDBC driver to load (using the `driver` argument), but in the current code it's possible that the user-specified driver will not be used when it comes time to actually create a JDBC connection. In a nutshell, the problem is that you might have multiple JDBC drivers on the classpath that claim to be able to handle the same subprotocol, so simply registering the user-provided driver class with our `DriverRegistry` and JDBC's `DriverManager` is not sufficient to ensure that it's actually used when creating the JDBC connection.

This patch addresses this issue by first registering the user-specified driver with the DriverManager, then iterating over the driver manager's loaded drivers in order to obtain the correct driver and use it to create a connection (previously, we just called `DriverManager.getConnection()` directly). If a user did not specify a JDBC driver to use, then we call `DriverManager.getDriver` to figure out the class of the driver to use, then pass that class's name to executors; this guards against corner-case bugs in situations where the driver and executor JVMs might have different sets of JDBC drivers on their classpaths (previously, there was the (rare) potential for `DriverManager.getConnection()` to use different drivers on the driver and executors if the user had not explicitly specified a JDBC driver class and the classpaths were different).

This patch is inspired by a similar patch that I made to the `spark-redshift` library (databricks/spark-redshift#143), which contains its own modified fork of some of Spark's JDBC data source code (for cross-Spark-version compatibility reasons).

Author: Josh Rosen <[email protected]>

Closes apache#10519 from JoshRosen/jdbc-driver-precedence.
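As a concrete illustration of the lookup described in that commit message, here is a minimal sketch; the method name and the `userSpecifiedDriverClass` parameter are hypothetical stand-ins, not the exact Spark code:

```scala
import java.sql.{Connection, Driver, DriverManager}
import java.util.Properties
import scala.collection.JavaConverters._

// Hedged sketch of the connection logic described above (names are illustrative).
def createConnection(
    url: String,
    properties: Properties,
    userSpecifiedDriverClass: Option[String]): Connection = {
  // Loading the class is enough to register a standard JDBC driver with DriverManager.
  userSpecifiedDriverClass.foreach { name => Class.forName(name) }

  // If no driver was specified, ask the DriverManager which driver handles the URL
  // and remember its class name, so the driver and executors resolve the same class.
  val driverClass = userSpecifiedDriverClass.getOrElse(
    DriverManager.getDriver(url).getClass.getCanonicalName)

  // Scan the registered drivers for the requested class instead of calling
  // DriverManager.getConnection(), which would use whichever registered driver
  // first claims the subprotocol.
  val driver: Driver = DriverManager.getDrivers.asScala.collectFirst {
    case d if d.getClass.getCanonicalName == driverClass => d
  }.getOrElse(throw new IllegalStateException(
    s"Did not find registered driver with class $driverClass"))

  driver.connect(url, properties)
}
```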
…etConnector() This is a followup to #143 which fixes a corner-case bug in our scanning of registered JDBC drivers: we need to properly handle Spark's `DriverWrapper` drivers, which are used to wrap JDBC drivers in order to make them accessible from the root classloader so that the `DriverManager` can find them. A simpler, reflection-free version of this change was incorporated into apache/spark#10519 Author: Josh Rosen <[email protected]> Closes #147 from JoshRosen/jdbc-driver-precedence-round-2.
This patch fixes two bugs related to JDBC drivers:

- The `RedshiftJDBCWrapper.getDriverClass` method mistakenly used `postgres` as the PostgreSQL JDBC subprotocol, whereas the correct subprotocol is actually `postgresql` (see java.lang.IllegalArgumentException: Unsupported JDBC protocol: 'postgresql' #126). This was straightforward to fix.
- The user-specified JDBC driver was not guaranteed to be the driver actually used when opening connections, since other drivers on the classpath could claim the same subprotocol; connections are now created by looking up the requested driver class among the `DriverManager`'s registered drivers (see the discussion above).

Fixes #126.
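For illustration only, a minimal sketch of the kind of subprotocol-to-driver-class mapping the subprotocol fix implies; the driver class strings below are common defaults, not necessarily the exact ones the library uses:

```scala
// Hedged sketch: map a JDBC URL's subprotocol to a default driver class name.
// The key point from the fix above: PostgreSQL's subprotocol is "postgresql",
// not "postgres". Driver class names here are illustrative defaults.
def getDriverClass(jdbcUrl: String): String = {
  val subprotocol = jdbcUrl.stripPrefix("jdbc:").split(":").head
  subprotocol match {
    case "postgresql" => "org.postgresql.Driver"
    case "redshift"   => "com.amazon.redshift.jdbc41.Driver"
    case other =>
      throw new IllegalArgumentException(s"Unsupported JDBC protocol: '$other'")
  }
}
```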