
Commit

[Feature][Core] Rename result_table_name/source_table_name to `plugin_input/plugin_output` (apache#8072)
hailin0 authored Nov 23, 2024
1 parent b6e7a42 commit c7bbd32
Showing 638 changed files with 1,745 additions and 1,553 deletions.
2 changes: 1 addition & 1 deletion config/v2.batch.config.template
@@ -29,7 +29,7 @@ source {
# This is a example source plugin **only for test and demonstrate the feature source plugin**
FakeSource {
parallelism = 2
result_table_name = "fake"
plugin_output = "fake"
row.num = 16
schema = {
fields {
2 changes: 1 addition & 1 deletion config/v2.streaming.conf.template
@@ -29,7 +29,7 @@ source {
# This is a example source plugin **only for test and demonstrate the feature source plugin**
FakeSource {
parallelism = 2
result_table_name = "fake"
plugin_output = "fake"
row.num = 16
schema = {
fields {
52 changes: 29 additions & 23 deletions docs/en/concept/config.md
@@ -19,6 +19,12 @@ config directory.

The config file is similar to the below one:

:::warn

The old configuration name `source_table_name`/`result_table_name` is deprecated, please migrate to the new name `plugin_input`/`plugin_output` as soon as possible.

:::
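
As a minimal migration sketch (a hypothetical FakeSource-to-Console job; the option values are placeholders), a job that previously wired plugins together with the old keys:

```hocon
source {
  FakeSource {
    result_table_name = "fake"   # old key, deprecated
    row.num = 16
  }
}

sink {
  Console {
    source_table_name = "fake"   # old key, deprecated
  }
}
```

is now written with the new keys:

```hocon
source {
  FakeSource {
    plugin_output = "fake"       # replaces result_table_name
    row.num = 16
  }
}

sink {
  Console {
    plugin_input = "fake"        # replaces source_table_name
  }
}
```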

### hocon

```hocon
@@ -28,7 +34,7 @@ env {
source {
FakeSource {
result_table_name = "fake"
plugin_output = "fake"
row.num = 100
schema = {
fields {
@@ -42,8 +48,8 @@ source {
transform {
Filter {
source_table_name = "fake"
result_table_name = "fake1"
plugin_input = "fake"
plugin_output = "fake1"
fields = [name, card]
}
}
@@ -56,7 +62,7 @@ sink {
fields = ["name", "card"]
username = "default"
password = ""
source_table_name = "fake1"
plugin_input = "fake1"
}
}
```
@@ -80,7 +86,7 @@ Source is used to define where SeaTunnel needs to fetch data, and use the fetche
Multiple sources can be defined at the same time. The supported source can be found
in [Source of SeaTunnel](../connector-v2/source). Each source has its own specific parameters to define how to
fetch data, and SeaTunnel also extracts the parameters that each source will use, such as
- the `result_table_name` parameter, which is used to specify the name of the data generated by the current
+ the `plugin_output` parameter, which is used to specify the name of the data generated by the current
source, which is convenient for follow-up used by other modules.
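
For instance, a minimal sketch (hypothetical stream names; the MySQL-CDC connection options are omitted here) of two sources whose outputs are named independently so that downstream plugins can select a stream explicitly:

```hocon
source {
  FakeSource {
    plugin_output = "fake_stream"   # downstream plugins refer to this name
    row.num = 16
  }
  MySQL-CDC {
    plugin_output = "cdc_stream"    # a second, independently named stream
    # ... connection options omitted in this sketch
  }
}

sink {
  Console {
    plugin_input = "cdc_stream"     # explicitly selects the MySQL-CDC stream
  }
}
```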

### transform
@@ -96,7 +102,7 @@ env {
source {
FakeSource {
result_table_name = "fake"
plugin_output = "fake"
row.num = 100
schema = {
fields {
@@ -116,7 +122,7 @@ sink {
fields = ["name", "age", "card"]
username = "default"
password = ""
source_table_name = "fake"
plugin_input = "fake"
}
}
```
@@ -134,11 +140,11 @@ and efficiently. Sink and source are very similar, but the difference is reading
### Other Information

You will find that when multiple sources and multiple sinks are defined, which data is read by each sink, and
- which is the data read by each transform? We introduce two key configurations called `result_table_name` and
- `source_table_name`. Each source module will be configured with a `result_table_name` to indicate the name of the
- data source generated by the data source, and other transform and sink modules can use `source_table_name` to
+ which is the data read by each transform? We introduce two key configurations called `plugin_output` and
+ `plugin_input`. Each source module will be configured with a `plugin_output` to indicate the name of the
+ data source generated by the data source, and other transform and sink modules can use `plugin_input` to
refer to the corresponding data source name, indicating that I want to read the data for processing. Then
- transform, as an intermediate processing module, can use both `result_table_name` and `source_table_name`
+ transform, as an intermediate processing module, can use both `plugin_output` and `plugin_input`
configurations at the same time. But you will find that in the above example config, not every module is
configured with these two parameters, because in SeaTunnel, there is a default convention, if these two
parameters are not configured, then the generated data from the last module of the previous node will be used.
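
A minimal sketch of that default convention (hypothetical single-source job): neither `plugin_output` nor `plugin_input` is declared, so the sink simply consumes the output of the only source.

```hocon
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  FakeSource {
    row.num = 16    # no plugin_output: the output is handed to the next step implicitly
  }
}

sink {
  Console {
    # no plugin_input: consumes the data produced by the previous step
  }
}
```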
@@ -170,7 +176,7 @@ Before writing the config file, please make sure that the name of the config fil
"source": [
{
"plugin_name": "FakeSource",
"result_table_name": "fake",
"plugin_output": "fake",
"row.num": 100,
"schema": {
"fields": {
@@ -184,8 +190,8 @@ Before writing the config file, please make sure that the name of the config fil
"transform": [
{
"plugin_name": "Filter",
"source_table_name": "fake",
"result_table_name": "fake1",
"plugin_input": "fake",
"plugin_output": "fake1",
"fields": ["name", "card"]
}
],
@@ -198,7 +204,7 @@ Before writing the config file, please make sure that the name of the config fil
"fields": ["name", "card"],
"username": "default",
"password": "",
"source_table_name": "fake1"
"plugin_input": "fake1"
}
]
}
@@ -234,7 +240,7 @@ env {
source {
FakeSource {
result_table_name = "${resName:fake_test}_table"
plugin_output = "${resName:fake_test}_table"
row.num = "${rowNum:50}"
string.template = ${strTemplate}
int.template = [20, 21]
@@ -249,16 +255,16 @@ source {
transform {
sql {
source_table_name = "${resName:fake_test}_table"
result_table_name = "sql"
plugin_input = "${resName:fake_test}_table"
plugin_output = "sql"
query = "select * from ${resName:fake_test}_table where name = '${nameVal}' "
}
}
sink {
Console {
source_table_name = "sql"
plugin_input = "sql"
username = ${username}
password = ${password}
}
@@ -291,7 +297,7 @@ env {
source {
FakeSource {
result_table_name = "fake_test_table"
plugin_output = "fake_test_table"
row.num = 50
string.template = ['abc','d~f','hi']
int.template = [20, 21]
@@ -306,16 +312,16 @@ source {
transform {
sql {
source_table_name = "fake_test_table"
result_table_name = "sql"
plugin_input = "fake_test_table"
plugin_output = "sql"
query = "select * from fake_test_table where name = 'abc' "
}
}
sink {
Console {
source_table_name = "sql"
plugin_input = "sql"
username = "seatunnel=2.3.1"
password = "$a^b%c.d~e0*9("
}
8 changes: 4 additions & 4 deletions docs/en/concept/schema-evolution.md
@@ -75,7 +75,7 @@ env {
source {
# This is a example source plugin **only for test and demonstrate the feature source plugin**
Oracle-CDC {
result_table_name = "customers"
plugin_output = "customers"
username = "dbzuser"
password = "dbz"
database-names = ["ORCLCDB"]
@@ -93,7 +93,7 @@ source {
sink {
Jdbc {
source_table_name = "customers"
plugin_input = "customers"
driver = "oracle.jdbc.driver.OracleDriver"
url = "jdbc:oracle:thin:@oracle-host:1521/ORCLCDB"
user = "dbzuser"
@@ -120,7 +120,7 @@ env {
source {
# This is a example source plugin **only for test and demonstrate the feature source plugin**
Oracle-CDC {
result_table_name = "customers"
plugin_output = "customers"
username = "dbzuser"
password = "dbz"
database-names = ["ORCLCDB"]
@@ -138,7 +138,7 @@ source {
sink {
jdbc {
source_table_name = "customers"
plugin_input = "customers"
url = "jdbc:mysql://oracle-host:3306/oracle_sink"
driver = "com.mysql.cj.jdbc.Driver"
user = "st_user_sink"
4 changes: 2 changes & 2 deletions docs/en/concept/schema-feature.md
@@ -180,7 +186,7 @@ constraintKeys = [
source {
FakeSource {
parallelism = 2
result_table_name = "fake"
plugin_output = "fake"
row.num = 16
schema {
table = "FakeDatabase.FakeTable"
@@ -234,7 +240,7 @@ If you only need to define the column, you can use fields to define the column,
source {
FakeSource {
parallelism = 2
result_table_name = "fake"
plugin_output = "fake"
row.num = 16
schema = {
fields {
4 changes: 2 additions & 2 deletions docs/en/connector-v2/Config-Encryption-Decryption.md
@@ -42,7 +42,7 @@ Next, I'll show how to quickly use SeaTunnel's own `base64` encryption:
source {
MySQL-CDC {
result_table_name = "fake"
plugin_output = "fake"
parallelism = 1
server-id = 5656
port = 56725
@@ -96,7 +96,7 @@ Next, I'll show how to quickly use SeaTunnel's own `base64` encryption:
"port" : 56725,
"database-name" : "inventory_vwyw0n",
"parallelism" : 1,
"result_table_name" : "fake",
"plugin_output" : "fake",
"table-name" : "products",
"plugin_name" : "MySQL-CDC",
"server-id" : 5656,
6 changes: 3 additions & 3 deletions docs/en/connector-v2/formats/avro.md
@@ -51,7 +51,7 @@ source {
}
}
}
result_table_name = "fake"
plugin_output = "fake"
}
}

@@ -76,7 +76,7 @@ source {
Kafka {
bootstrap.servers = "kafkaCluster:9092"
topic = "test_avro_topic"
result_table_name = "kafka_table"
plugin_output = "kafka_table"
start_mode = "earliest"
format = avro
format_error_handle_way = skip
@@ -104,7 +104,7 @@ source {

sink {
Console {
source_table_name = "kafka_table"
plugin_input = "kafka_table"
}
}
```
2 changes: 1 addition & 1 deletion docs/en/connector-v2/formats/canal-json.md
@@ -85,7 +85,7 @@ source {
Kafka {
bootstrap.servers = "kafkaCluster:9092"
topic = "products_binlog"
result_table_name = "kafka_name"
plugin_output = "kafka_name"
start_mode = earliest
schema = {
fields {
4 changes: 2 additions & 2 deletions docs/en/connector-v2/formats/cdc-compatible-debezium-json.md
@@ -17,7 +17,7 @@ env {

source {
MySQL-CDC {
result_table_name = "table1"
plugin_output = "table1"

base-url="jdbc:mysql://localhost:3306/test"
"startup.mode"=INITIAL
@@ -43,7 +43,7 @@ source {

sink {
Kafka {
source_table_name = "table1"
plugin_input = "table1"

bootstrap.servers = "localhost:9092"

2 changes: 1 addition & 1 deletion docs/en/connector-v2/formats/debezium-json.md
@@ -84,7 +84,7 @@ source {
Kafka {
bootstrap.servers = "kafkaCluster:9092"
topic = "products_binlog"
result_table_name = "kafka_name"
plugin_output = "kafka_name"
start_mode = earliest
schema = {
fields {
@@ -16,7 +16,7 @@ source {
Kafka {
bootstrap.servers = "localhost:9092"
topic = "jdbc_source_record"
result_table_name = "kafka_table"
plugin_output = "kafka_table"
start_mode = earliest
schema = {
fields {
2 changes: 1 addition & 1 deletion docs/en/connector-v2/formats/maxwell-json.md
@@ -62,7 +62,7 @@ source {
Kafka {
bootstrap.servers = "kafkaCluster:9092"
topic = "products_binlog"
result_table_name = "kafka_name"
plugin_output = "kafka_name"
start_mode = earliest
schema = {
fields {
2 changes: 1 addition & 1 deletion docs/en/connector-v2/formats/ogg-json.md
@@ -66,7 +66,7 @@ source {
Kafka {
bootstrap.servers = "127.0.0.1:9092"
topic = "ogg"
result_table_name = "kafka_name"
plugin_output = "kafka_name"
start_mode = earliest
schema = {
fields {
6 changes: 3 additions & 3 deletions docs/en/connector-v2/formats/protobuf.md
@@ -17,7 +17,7 @@ env {
source {
FakeSource {
parallelism = 1
result_table_name = "fake"
plugin_output = "fake"
row.num = 16
schema = {
fields {
@@ -151,13 +151,13 @@ source {
}
bootstrap.servers = "kafkaCluster:9092"
start_mode = "earliest"
result_table_name = "kafka_table"
plugin_output = "kafka_table"
}
}
sink {
Console {
source_table_name = "kafka_table"
plugin_input = "kafka_table"
}
}
```