Remove usage of getter method for ShardingSphereMetaData.databases #33943

Merged · 4 commits · Dec 6, 2024
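The theme of this PR is replacing call sites of the raw map getter `ShardingSphereMetaData.getDatabases()` with intention-revealing methods such as `getAllDatabases()`, `containsDatabase(name)`, and `getDatabase(name)`, so the backing map stops leaking out of the metadata object. The pattern can be pictured with the following minimal sketch; `DatabaseSketch` and `MetaDataSketch` are hypothetical simplified stand-ins, not the real ShardingSphere API:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for ShardingSphereDatabase: only a name is modeled.
final class DatabaseSketch {

    private final String name;

    DatabaseSketch(final String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }
}

// Hypothetical stand-in for ShardingSphereMetaData: the backing map stays
// private, and callers go through intention-revealing methods instead of
// a map getter.
final class MetaDataSketch {

    private final Map<String, DatabaseSketch> databases = new ConcurrentHashMap<>();

    MetaDataSketch(final Collection<DatabaseSketch> databases) {
        databases.forEach(each -> this.databases.put(each.getName(), each));
    }

    // Replaces call sites that did getDatabases().values().
    Collection<DatabaseSketch> getAllDatabases() {
        return Collections.unmodifiableCollection(databases.values());
    }

    // Replaces call sites that did getDatabases().containsKey(name).
    boolean containsDatabase(final String name) {
        return databases.containsKey(name);
    }

    DatabaseSketch getDatabase(final String name) {
        return databases.get(name);
    }

    void dropDatabase(final String name) {
        databases.remove(name);
    }
}

public class Main {

    public static void main(final String[] args) {
        MetaDataSketch metaData = new MetaDataSketch(List.of(new DatabaseSketch("foo_db"), new DatabaseSketch("bar_db")));
        if (!metaData.containsDatabase("foo_db") || 2 != metaData.getAllDatabases().size()) {
            throw new AssertionError("unexpected metadata state");
        }
        metaData.dropDatabase("foo_db");
        System.out.println(metaData.getAllDatabases().size()); // prints 1
    }
}
```

A side effect visible throughout the diffs below: returning an unmodifiable collection instead of the map means callers can no longer index databases by key, which is why the tests switch from `containsKey(...)` assertions to `containsDatabase(...)`.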
18 changes: 9 additions & 9 deletions docs/document/content/features/sharding/limitation.cn.md
@@ -91,6 +91,10 @@ MySQL、PostgreSQL 和 openGauss 都支持 LIMIT 分页,无需子查询:
SELECT * FROM t_order o ORDER BY id LIMIT ? OFFSET ?
```

+### 聚合查询
+
+支持 `MAX`, `MIN`, `SUM`, `COUNT`, `AVG`, `BIT_XOR`, `GROUP_CONCAT` 聚合语法。
+
### 运算表达式中包含分片键

当分片键处于运算表达式中时,无法通过 SQL `字面` 提取用于分片的值,将导致全路由。
@@ -113,10 +117,6 @@ SELECT * FROM t_order WHERE to_date(create_time, 'yyyy-mm-dd') = '2019-01-01';
5. 支持基于广播表和单表创建、修改和删除视图;
6. 支持 MySQL `SHOW CREATE TABLE viewName` 查看视图的创建语句。

-### 聚合查询
-
-支持 MySQL `MAX`, `MIN`, `SUM`, `COUNT`, `AVG`, `BIT_XOR`, `GROUP_CONCAT` 聚合语法
-
## 实验性支持

实验性支持特指使用 Federation 执行引擎提供支持。 该引擎处于快速开发中,用户虽基本可用,但仍需大量优化,是实验性产品。
@@ -159,22 +159,22 @@ SELECT * FROM t_user u RIGHT JOIN t_user_role r ON u.user_id = r.user_id WHERE u

### CASE WHEN

-以下 CASE WHEN 语句不支持:
+以下 `CASE WHEN` 语句不支持:

-- CASE WHEN 中包含子查询
-- CASE WHEN 中使用逻辑表名(请使用表别名)
+- `CASE WHEN` 中包含子查询
+- `CASE WHEN` 中使用逻辑表名(请使用表别名)

### 分页查询

Oracle 和 SQLServer 由于分页查询较为复杂,目前有部分分页查询不支持,具体如下:

- Oracle

-目前不支持 rownum + BETWEEN 的分页方式。
+目前不支持 `rownum + BETWEEN` 的分页方式。

- SQLServer

-目前不支持使用 WITH xxx AS (SELECT …) 的方式进行分页。由于 Hibernate 自动生成的 SQLServer 分页语句使用了 WITH 语句,因此目前并不支持基于 Hibernate 的 SQLServer 分页。 目前也不支持使用两个 TOP + 子查询的方式实现分页。
+目前不支持使用 `WITH xxx AS (SELECT …)` 的方式进行分页。由于 Hibernate 自动生成的 SQLServer 分页语句使用了 `WITH` 语句,因此目前并不支持基于 Hibernate 的 SQLServer 分页。 目前也不支持使用两个 TOP + 子查询的方式实现分页。

### LOAD DATA / LOAD XML

6 changes: 5 additions & 1 deletion docs/document/content/features/sharding/limitation.en.md
@@ -85,6 +85,10 @@ SELECT * FROM t_order o ORDER BY id OFFSET ? ROW FETCH NEXT ? ROWS ONLY
SELECT * FROM t_order o ORDER BY id LIMIT ? OFFSET ?
```

+### Aggregation
+
+Support `MAX`, `MIN`, `SUM`, `COUNT`, `AVG`, `BIT_XOR`, `GROUP_CONCAT` and so on.
+
### Shard keys included in operation expressions

When the sharding key is contained in an expression, the value used for sharding cannot be extracted through the SQL letters and will result in full routing.
@@ -161,7 +165,7 @@ Due to the complexity of paging queries, there are currently some paging queries
The paging method of rownum + BETWEEN is not supported at present

- SQLServer
-Currently, pagination with WITH xxx AS (SELECT ...) is not supported. Since the SQLServer paging statement automatically generated by Hibernate uses the WITH statement, Hibernate-based SQLServer paging is not supported at this moment. Pagination using two TOP + subquery also cannot be supported at this time.
+Currently, pagination with `WITH xxx AS (SELECT ...)` is not supported. Since the SQLServer paging statement automatically generated by Hibernate uses the `WITH` statement, Hibernate-based SQLServer paging is not supported at this moment. Pagination using two TOP + subquery also cannot be supported at this time.

### LOAD DATA / LOAD XML

@@ -87,7 +87,7 @@ void assertDropDatabase() {
ShardingSphereMetaData metaData = new ShardingSphereMetaData(new LinkedList<>(Collections.singleton(mockDatabase(resourceMetaData, dataSource, databaseRule1, databaseRule2))),
mock(ResourceMetaData.class), new RuleMetaData(Collections.singleton(globalRule)), new ConfigurationProperties(new Properties()));
metaData.dropDatabase("foo_db");
-assertTrue(metaData.getDatabases().isEmpty());
+assertTrue(metaData.getAllDatabases().isEmpty());
Awaitility.await().pollDelay(10L, TimeUnit.MILLISECONDS).until(dataSource::isClosed);
assertTrue(dataSource.isClosed());
verify(globalRule).refresh(metaData.getAllDatabases(), GlobalRuleChangedType.DATABASE_CHANGED);
@@ -110,8 +110,8 @@ void assertCreateWithJDBCInstanceMetaData() throws SQLException {
try (MetaDataContexts actual = MetaDataContextsFactory.create(metaDataPersistService, createContextManagerBuilderParameter(), computeNodeInstanceContext)) {
assertThat(actual.getMetaData().getGlobalRuleMetaData().getRules().size(), is(1));
assertThat(actual.getMetaData().getGlobalRuleMetaData().getRules().iterator().next(), instanceOf(MockedRule.class));
-assertTrue(actual.getMetaData().getDatabases().containsKey("foo_db"));
-assertThat(actual.getMetaData().getDatabases().size(), is(1));
+assertTrue(actual.getMetaData().containsDatabase("foo_db"));
+assertThat(actual.getMetaData().getAllDatabases().size(), is(1));
}
}

@@ -122,8 +122,8 @@ void assertCreateWithProxyInstanceMetaData() throws SQLException {
try (MetaDataContexts actual = MetaDataContextsFactory.create(metaDataPersistService, createContextManagerBuilderParameter(), mock(ComputeNodeInstanceContext.class, RETURNS_DEEP_STUBS))) {
assertThat(actual.getMetaData().getGlobalRuleMetaData().getRules().size(), is(1));
assertThat(actual.getMetaData().getGlobalRuleMetaData().getRules().iterator().next(), instanceOf(MockedRule.class));
-assertTrue(actual.getMetaData().getDatabases().containsKey("foo_db"));
-assertThat(actual.getMetaData().getDatabases().size(), is(1));
+assertTrue(actual.getMetaData().containsDatabase("foo_db"));
+assertThat(actual.getMetaData().getAllDatabases().size(), is(1));
}
}

@@ -18,7 +18,6 @@
package org.apache.shardingsphere.proxy.backend.handler.database;

import org.apache.shardingsphere.infra.exception.dialect.exception.syntax.database.DatabaseCreateExistsException;
-import org.apache.shardingsphere.infra.metadata.database.ShardingSphereDatabase;
import org.apache.shardingsphere.mode.manager.ContextManager;
import org.apache.shardingsphere.mode.metadata.MetaDataContexts;
import org.apache.shardingsphere.proxy.backend.context.ProxyContext;
@@ -84,7 +83,7 @@ void assertExecuteCreateExistDatabaseWithIfNotExists() throws SQLException {
private ContextManager mockContextManager() {
ContextManager result = mock(ContextManager.class, RETURNS_DEEP_STUBS);
MetaDataContexts metaDataContexts = mock(MetaDataContexts.class, RETURNS_DEEP_STUBS);
-when(metaDataContexts.getMetaData().getDatabases()).thenReturn(Collections.singletonMap("foo_db", mock(ShardingSphereDatabase.class)));
+when(metaDataContexts.getMetaData().getAllDatabases()).thenReturn(Collections.singleton(mock()));
when(result.getMetaDataContexts()).thenReturn(metaDataContexts);
return result;
}
@@ -18,8 +18,8 @@
package org.apache.shardingsphere.proxy.backend.handler.database;

import org.apache.shardingsphere.authority.rule.AuthorityRule;
-import org.apache.shardingsphere.infra.exception.dialect.exception.syntax.database.DatabaseDropNotExistsException;
import org.apache.shardingsphere.infra.database.core.type.DatabaseType;
+import org.apache.shardingsphere.infra.exception.dialect.exception.syntax.database.DatabaseDropNotExistsException;
import org.apache.shardingsphere.infra.metadata.database.ShardingSphereDatabase;
import org.apache.shardingsphere.infra.metadata.database.rule.RuleMetaData;
import org.apache.shardingsphere.infra.spi.type.typed.TypedSPILoader;
@@ -41,9 +41,8 @@
import org.mockito.quality.Strictness;

import java.sql.SQLException;
+import java.util.Arrays;
import java.util.Collections;
-import java.util.HashMap;
-import java.util.Map;

import static org.hamcrest.CoreMatchers.instanceOf;
import static org.hamcrest.MatcherAssert.assertThat;
@@ -79,15 +78,15 @@ void setUp() {
}

private ContextManager mockContextManager() {
-Map<String, ShardingSphereDatabase> databases = new HashMap<>(2, 1F);
-ShardingSphereDatabase database = mock(ShardingSphereDatabase.class, RETURNS_DEEP_STUBS);
-databases.put("foo_db", database);
-databases.put("bar_db", database);
+ShardingSphereDatabase database1 = mock(ShardingSphereDatabase.class, RETURNS_DEEP_STUBS);
+when(database1.getName()).thenReturn("foo_db");
+ShardingSphereDatabase database2 = mock(ShardingSphereDatabase.class, RETURNS_DEEP_STUBS);
+when(database2.getName()).thenReturn("bar_db");
MetaDataContexts metaDataContexts = mock(MetaDataContexts.class, RETURNS_DEEP_STUBS);
-when(metaDataContexts.getMetaData().getDatabases()).thenReturn(databases);
-when(metaDataContexts.getMetaData().getDatabase("foo_db")).thenReturn(database);
-when(metaDataContexts.getMetaData().getDatabase("bar_db")).thenReturn(database);
-when(metaDataContexts.getMetaData().getDatabase("test_not_exist_db")).thenReturn(database);
+when(metaDataContexts.getMetaData().getAllDatabases()).thenReturn(Arrays.asList(database1, database2));
+when(metaDataContexts.getMetaData().getDatabase("foo_db")).thenReturn(database1);
+when(metaDataContexts.getMetaData().getDatabase("bar_db")).thenReturn(database2);
+when(metaDataContexts.getMetaData().getDatabase("test_not_exist_db")).thenReturn(mock(ShardingSphereDatabase.class, RETURNS_DEEP_STUBS));
when(metaDataContexts.getMetaData().getGlobalRuleMetaData()).thenReturn(new RuleMetaData(Collections.singleton(mock(AuthorityRule.class))));
ContextManager result = mock(ContextManager.class, RETURNS_DEEP_STUBS);
when(result.getMetaDataContexts()).thenReturn(metaDataContexts);
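The test changes above show a consequence of the API switch: because `getAllDatabases()` returns a collection rather than a map keyed by name, each mocked database must now stub `getName()` so that name-based lookups still work. The lookup side can be pictured with this hypothetical simplified sketch (`NamedDatabase` and `findDatabase` are illustrative names, not the actual ShardingSphere code):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Optional;

// Hypothetical stand-in: once the keyed map getter is gone, a name lookup over
// getAllDatabases() only works if every database object reports its own name,
// which is why the test now stubs database1.getName() -> "foo_db" and so on.
final class NamedDatabase {

    private final String name;

    NamedDatabase(final String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }
}

public class Main {

    // Linear scan by name over the collection returned by getAllDatabases().
    static Optional<NamedDatabase> findDatabase(final Collection<NamedDatabase> all, final String name) {
        return all.stream().filter(each -> each.getName().equals(name)).findFirst();
    }

    public static void main(final String[] args) {
        Collection<NamedDatabase> all = Arrays.asList(new NamedDatabase("foo_db"), new NamedDatabase("bar_db"));
        System.out.println(findDatabase(all, "foo_db").isPresent()); // prints true
        System.out.println(findDatabase(all, "test_not_exist_db").isPresent()); // prints false
    }
}
```

This also explains why the `test_not_exist_db` stub above returns a fresh mock instead of a shared one: a database that is absent from the collection must not alias an existing entry.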
@@ -139,6 +139,7 @@ private void init(final String databaseName) {
private ContextManager mockContextManager(final String databaseName) {
ContextManager result = mock(ContextManager.class, RETURNS_DEEP_STUBS);
ShardingSphereDatabase database = mock(ShardingSphereDatabase.class, RETURNS_DEEP_STUBS);
+when(database.getName()).thenReturn(databaseName);
ResourceMetaData resourceMetaData = mock(ResourceMetaData.class);
when(database.getResourceMetaData()).thenReturn(resourceMetaData);
when(database.getProtocolType()).thenReturn(TypedSPILoader.getService(DatabaseType.class, "FIXTURE"));
@@ -148,7 +149,7 @@ private ContextManager mockContextManager(final String databaseName) {
when(database.getResourceMetaData().getStorageUnits()).thenReturn(new HashMap<>(Collections.singletonMap("foo_ds", storageUnit)));
when(database.getResourceMetaData().getDataSourceMap()).thenReturn(Collections.singletonMap("foo_ds", dataSource));
when(database.getRuleMetaData().getAttributes(DataSourceMapperRuleAttribute.class)).thenReturn(Collections.emptyList());
-when(result.getMetaDataContexts().getMetaData().getDatabases()).thenReturn(Collections.singletonMap(databaseName, database));
+when(result.getMetaDataContexts().getMetaData().getAllDatabases()).thenReturn(Collections.singleton(database));
when(result.getMetaDataContexts().getMetaData().getDatabase(databaseName)).thenReturn(database);
when(result.getMetaDataContexts().getMetaData().getProps()).thenReturn(new ConfigurationProperties(createProperties()));
return result;
@@ -128,10 +128,11 @@ private ContextManager mockContextManager(final String feature) {
.thenReturn(new ConfigurationProperties(PropertiesBuilder.build(new Property(ConfigurationPropertyKey.PROXY_FRONTEND_DATABASE_PROTOCOL_TYPE.getKey(), "MySQL"))));
if (null != feature) {
ShardingSphereDatabase database = mock(ShardingSphereDatabase.class, RETURNS_DEEP_STUBS);
+when(database.getName()).thenReturn(feature);
when(database.getSchema("foo_db")).thenReturn(new ShardingSphereSchema("foo_db", createTables(), Collections.emptyList()));
Map<String, StorageUnit> storageUnits = createStorageUnits();
when(database.getResourceMetaData().getStorageUnits()).thenReturn(storageUnits);
-when(result.getMetaDataContexts().getMetaData().getDatabases()).thenReturn(Collections.singletonMap(feature, database));
+when(result.getMetaDataContexts().getMetaData().getAllDatabases()).thenReturn(Collections.singleton(database));
when(result.getMetaDataContexts().getMetaData().getDatabase(feature)).thenReturn(database);
}
return result;
@@ -68,7 +68,7 @@ void assertSetVersionWhenStorageTypeDifferentWithProtocolType() throws SQLExcept
private ContextManager mockContextManager(final String databaseProductName, final String databaseProductVersion) throws SQLException {
ContextManager result = mock(ContextManager.class, RETURNS_DEEP_STUBS);
ShardingSphereDatabase database = mockDatabase(databaseProductName, databaseProductVersion);
-when(result.getMetaDataContexts().getMetaData().getDatabases()).thenReturn(Collections.singletonMap("foo_db", database));
+when(result.getMetaDataContexts().getMetaData().getAllDatabases()).thenReturn(Collections.singleton(database));
return result;
}

@@ -344,8 +344,8 @@ void assertDescribeSelectPreparedStatement() throws SQLException {
List<Integer> parameterIndexes = IntStream.range(0, sqlStatement.getParameterCount()).boxed().collect(Collectors.toList());
ConnectionContext connectionContext = mockConnectionContext();
when(connectionSession.getConnectionContext()).thenReturn(connectionContext);
-connectionSession.getServerPreparedStatementRegistry().addPreparedStatement(statementId,
-new PostgreSQLServerPreparedStatement(sql, sqlStatementContext, new HintValueContext(), parameterTypes, parameterIndexes));
+connectionSession.getServerPreparedStatementRegistry().addPreparedStatement(
+statementId, new PostgreSQLServerPreparedStatement(sql, sqlStatementContext, new HintValueContext(), parameterTypes, parameterIndexes));
Collection<DatabasePacket> actual = executor.execute();
assertThat(actual.size(), is(2));
Iterator<DatabasePacket> actualPacketsIterator = actual.iterator();
@@ -387,7 +387,6 @@ private ContextManager mockContextManager() {
RuleMetaData globalRuleMetaData = new RuleMetaData(Arrays.asList(
new SQLTranslatorRule(new DefaultSQLTranslatorRuleConfigurationBuilder().build()), new LoggingRule(new DefaultLoggingRuleConfigurationBuilder().build())));
when(result.getMetaDataContexts().getMetaData().getGlobalRuleMetaData()).thenReturn(globalRuleMetaData);
-when(result.getMetaDataContexts().getMetaData().getDatabases()).thenReturn(Collections.singletonMap(DATABASE_NAME, mock(ShardingSphereDatabase.class, RETURNS_DEEP_STUBS)));
Collection<ShardingSphereColumn> columnMetaData = Arrays.asList(
new ShardingSphereColumn("id", Types.INTEGER, true, false, false, true, false, false),
new ShardingSphereColumn("k", Types.INTEGER, true, false, false, true, false, false),
@@ -401,8 +400,7 @@ private ContextManager mockContextManager() {
when(result.getMetaDataContexts().getMetaData().getDatabase(DATABASE_NAME).getProtocolType()).thenReturn(TypedSPILoader.getService(DatabaseType.class, "PostgreSQL"));
StorageUnit storageUnit = mock(StorageUnit.class, RETURNS_DEEP_STUBS);
when(storageUnit.getStorageType()).thenReturn(TypedSPILoader.getService(DatabaseType.class, "PostgreSQL"));
-when(result.getMetaDataContexts().getMetaData().getDatabase(DATABASE_NAME).getResourceMetaData().getStorageUnits())
-.thenReturn(Collections.singletonMap("ds_0", storageUnit));
+when(result.getMetaDataContexts().getMetaData().getDatabase(DATABASE_NAME).getResourceMetaData().getStorageUnits()).thenReturn(Collections.singletonMap("ds_0", storageUnit));
when(result.getMetaDataContexts().getMetaData().containsDatabase(DATABASE_NAME)).thenReturn(true);
when(result.getMetaDataContexts().getMetaData().getDatabase(DATABASE_NAME).containsSchema("public")).thenReturn(true);
when(result.getMetaDataContexts().getMetaData().getDatabase(DATABASE_NAME).getSchema("public").containsTable(TABLE_NAME)).thenReturn(true);