diff --git a/docs/en/connector-v2/formats/avro.md b/docs/en/connector-v2/formats/avro.md
index b9ee961dafd3..638657b34567 100644
--- a/docs/en/connector-v2/formats/avro.md
+++ b/docs/en/connector-v2/formats/avro.md
@@ -2,7 +2,7 @@
Avro is very popular in streaming data pipeline. Now seatunnel supports Avro format in kafka connector.
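As a quick, hedged sketch (the broker address and topic name below are placeholders), enabling the format is a single option inside the Kafka connector block:

```hocon
sink {
  Kafka {
    bootstrap.servers = "localhost:9092"   # placeholder broker address
    topic = "test_avro_topic"              # placeholder topic name
    format = avro                          # serialize records with the Avro format
  }
}
```

The complete job configuration is shown in the example below.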
-# How to use Avro format
+# How To Use
## Kafka uses example
diff --git a/docs/en/connector-v2/formats/canal-json.md b/docs/en/connector-v2/formats/canal-json.md
index 9412e1c5f289..1697a8c61893 100644
--- a/docs/en/connector-v2/formats/canal-json.md
+++ b/docs/en/connector-v2/formats/canal-json.md
@@ -15,14 +15,14 @@ SeaTunnel also supports to encode the INSERT/UPDATE/DELETE messages in SeaTunnel
# Format Options
-| option | default | required | Description |
+| Option | Default | Required | Description |
|--------------------------------|---------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| format | (none) | yes | Specify what format to use, here should be 'canal_json'. |
| canal_json.ignore-parse-errors | false | no | Skip fields and rows with parse errors instead of failing. Fields are set to null in case of errors. |
| canal_json.database.include | (none) | no | An optional regular expression to only read the specific databases changelog rows by regular matching the "database" meta field in the Canal record. The pattern string is compatible with Java's Pattern. |
| canal_json.table.include | (none) | no | An optional regular expression to only read the specific tables changelog rows by regular matching the "table" meta field in the Canal record. The pattern string is compatible with Java's Pattern. |
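For orientation, here is a hedged sketch of a Kafka source wired up with these options (broker, topic, regex and schema fields are placeholders; a full job appears in the example below):

```hocon
source {
  Kafka {
    bootstrap.servers = "localhost:9092"     # placeholder broker address
    topic = "products_binlog"                # placeholder topic carrying Canal records
    result_table_name = "kafka_table"
    format = canal_json                      # parse records as Canal JSON
    canal_json.ignore-parse-errors = true    # set unparsable fields to null instead of failing
    canal_json.table.include = "products"    # only keep changelog rows for this table
    schema = {
      fields {
        id = "int"
        name = "string"
      }
    }
  }
}
```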
-# How to use Canal format
+# How To Use
## Kafka uses example
diff --git a/docs/en/connector-v2/formats/cdc-compatible-debezium-json.md b/docs/en/connector-v2/formats/cdc-compatible-debezium-json.md
index 86683090f631..b35501a62a70 100644
--- a/docs/en/connector-v2/formats/cdc-compatible-debezium-json.md
+++ b/docs/en/connector-v2/formats/cdc-compatible-debezium-json.md
@@ -1,12 +1,12 @@
-# CDC compatible debezium-json
+# CDC Compatible Debezium-json
SeaTunnel supports to interpret cdc record as Debezium-JSON messages publish to mq(kafka) system.
This is useful in many cases to leverage this feature, such as compatible with the debezium ecosystem.
-# How to use
+# How To Use
-## MySQL-CDC output to Kafka
+## MySQL-CDC Sink to Kafka
```bash
env {
diff --git a/docs/en/connector-v2/formats/debezium-json.md b/docs/en/connector-v2/formats/debezium-json.md
index 73813d2a836c..a01e6c70d65b 100644
--- a/docs/en/connector-v2/formats/debezium-json.md
+++ b/docs/en/connector-v2/formats/debezium-json.md
@@ -15,14 +15,14 @@ Seatunnel also supports to encode the INSERT/UPDATE/DELETE messages in Seatunnel
# Format Options
-| option | default | required | Description |
+| Option | Default | Required | Description |
|-----------------------------------|---------|----------|------------------------------------------------------------------------------------------------------|
| format | (none) | yes | Specify what format to use, here should be 'debezium_json'. |
| debezium-json.ignore-parse-errors | false | no | Skip fields and rows with parse errors instead of failing. Fields are set to null in case of errors. |
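A comparable hedged source sketch using these two options (connection details and schema fields are placeholders):

```hocon
source {
  Kafka {
    bootstrap.servers = "localhost:9092"          # placeholder broker address
    topic = "products_binlog"                     # placeholder topic carrying Debezium records
    result_table_name = "kafka_table"
    format = debezium_json                        # parse records as Debezium JSON
    debezium-json.ignore-parse-errors = true      # set unparsable fields to null instead of failing
    schema = {
      fields {
        id = "int"
        name = "string"
      }
    }
  }
}
```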
-# How to use Debezium format
+# How To Use
-## Kafka uses example
+## Kafka Uses Example
Debezium provides a unified format for changelog, here is a simple example for an update operation captured from a MySQL products table:
diff --git a/docs/en/connector-v2/formats/kafka-compatible-kafkaconnect-json.md b/docs/en/connector-v2/formats/kafka-compatible-kafkaconnect-json.md
index af5e23d426bf..def638367ca5 100644
--- a/docs/en/connector-v2/formats/kafka-compatible-kafkaconnect-json.md
+++ b/docs/en/connector-v2/formats/kafka-compatible-kafkaconnect-json.md
@@ -2,9 +2,9 @@
Seatunnel connector kafka supports parsing data extracted through kafka connect source, especially data extracted from kafka connect jdbc and kafka connect debezium
-# How to use
+# How To Use
-## Kafka output to mysql
+## Kafka Sink to Mysql
```bash
env {
diff --git a/docs/en/connector-v2/formats/ogg-json.md b/docs/en/connector-v2/formats/ogg-json.md
index e01817cec961..629edde72e5d 100644
--- a/docs/en/connector-v2/formats/ogg-json.md
+++ b/docs/en/connector-v2/formats/ogg-json.md
@@ -13,7 +13,7 @@ Seatunnel also supports to encode the INSERT/UPDATE/DELETE messages in Seatunnel
# Format Options
-| option | default | required | Description |
+| Option | Default | Required | Description |
|------------------------------|---------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| format | (none) | yes | Specify what format to use, here should be '-json'. |
| ogg_json.ignore-parse-errors | false | no | Skip fields and rows with parse errors instead of failing. Fields are set to null in case of errors. |
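A hedged sketch for this format as well (placeholders as above; the format value is assumed to be `ogg_json`, matching the `ogg_json.*` option prefix):

```hocon
source {
  Kafka {
    bootstrap.servers = "localhost:9092"    # placeholder broker address
    topic = "ogg_topic"                     # placeholder topic carrying Ogg records
    result_table_name = "kafka_table"
    format = ogg_json                       # assumed value, consistent with the ogg_json.* options
    ogg_json.ignore-parse-errors = true     # set unparsable fields to null instead of failing
    schema = {
      fields {
        id = "int"
        name = "string"
      }
    }
  }
}
```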
diff --git a/docs/en/connector-v2/sink/AmazonDynamoDB.md b/docs/en/connector-v2/sink/AmazonDynamoDB.md
index 6e880fb4af42..63211077c740 100644
--- a/docs/en/connector-v2/sink/AmazonDynamoDB.md
+++ b/docs/en/connector-v2/sink/AmazonDynamoDB.md
@@ -6,13 +6,13 @@
Write data to Amazon DynamoDB
-## Key features
+## Key Features
- [ ] [exactly-once](../../concept/connector-v2-features.md)
## Options
-| name | type | required | default value |
+| Name | Type | Required | Default |
|-------------------|--------|----------|---------------|
| url | string | yes | - |
| region | string | yes | - |
diff --git a/docs/en/connector-v2/sink/Assert.md b/docs/en/connector-v2/sink/Assert.md
index dff2657eafce..8257ff8f653c 100644
--- a/docs/en/connector-v2/sink/Assert.md
+++ b/docs/en/connector-v2/sink/Assert.md
@@ -6,43 +6,43 @@
A flink sink plugin which can assert illegal data by user defined rules
-## Key features
+## Key Features
- [ ] [exactly-once](../../concept/connector-v2-features.md)
## Options
-| name | type | required | default value |
-|------------------------------------------------------------------------------------------------|------------|----------|---------------|
-| rules | ConfigMap | yes | - |
-| rules.field_rules | string | yes | - |
-| rules.field_rules.field_name | string | yes | - |
-| rules.field_rules.field_type | string | no | - |
-| rules.field_rules.field_value | ConfigList | no | - |
-| rules.field_rules.field_value.rule_type | string | no | - |
-| rules.field_rules.field_value.rule_value | double | no | - |
-| rules.row_rules | string | yes | - |
-| rules.row_rules.rule_type | string | no | - |
-| rules.row_rules.rule_value | string | no | - |
-| rules.catalog_table_rule | ConfigMap | no | - |
-| rules.catalog_table_rule.primary_key_rule | ConfigMap | no | - |
-| rules.catalog_table_rule.primary_key_rule.primary_key_name | string | no | - |
-| rules.catalog_table_rule.primary_key_rule.primary_key_columns | list | no | - |
-| rules.catalog_table_rule.constraint_key_rule | ConfigList | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_name | string | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_type | string | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns | ConfigList | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_column_name | string | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_sort_type | string | no | - |
-| rules.catalog_table_rule.column_rule | ConfigList | no | - |
-| rules.catalog_table_rule.column_rule.name | string | no | - |
-| rules.catalog_table_rule.column_rule.type | string | no | - |
-| rules.catalog_table_rule.column_rule.column_length | int | no | - |
-| rules.catalog_table_rule.column_rule.nullable | boolean | no | - |
-| rules.catalog_table_rule.column_rule.default_value | string | no | - |
-| rules.catalog_table_rule.column_rule.comment | comment | no | - |
-| rules.table-names | list | no | - |
-| common-options | | no | - |
+| Name | Type | Required | Default |
+|------------------------------------------------------------------------------------------------|------------|----------|---------|
+| rules | ConfigMap | yes | - |
+| rules.field_rules | string | yes | - |
+| rules.field_rules.field_name | string | yes | - |
+| rules.field_rules.field_type | string | no | - |
+| rules.field_rules.field_value | ConfigList | no | - |
+| rules.field_rules.field_value.rule_type | string | no | - |
+| rules.field_rules.field_value.rule_value | double | no | - |
+| rules.row_rules | string | yes | - |
+| rules.row_rules.rule_type | string | no | - |
+| rules.row_rules.rule_value | string | no | - |
+| rules.catalog_table_rule | ConfigMap | no | - |
+| rules.catalog_table_rule.primary_key_rule | ConfigMap | no | - |
+| rules.catalog_table_rule.primary_key_rule.primary_key_name | string | no | - |
+| rules.catalog_table_rule.primary_key_rule.primary_key_columns | list | no | - |
+| rules.catalog_table_rule.constraint_key_rule | ConfigList | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_name | string | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_type | string | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns | ConfigList | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_column_name | string | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_sort_type | string | no | - |
+| rules.catalog_table_rule.column_rule | ConfigList | no | - |
+| rules.catalog_table_rule.column_rule.name | string | no | - |
+| rules.catalog_table_rule.column_rule.type | string | no | - |
+| rules.catalog_table_rule.column_rule.column_length | int | no | - |
+| rules.catalog_table_rule.column_rule.nullable | boolean | no | - |
+| rules.catalog_table_rule.column_rule.default_value | string | no | - |
+| rules.catalog_table_rule.column_rule.comment | comment | no | - |
+| rules.table-names | list | no | - |
+| common-options | | no | - |
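Because the option paths above are deeply nested, a hedged sketch of how a few of them nest in a real config may help (field names and thresholds are illustrative only):

```hocon
sink {
  Assert {
    rules {
      row_rules = [
        {
          rule_type = MAX_ROW      # illustrative row-count rule
          rule_value = 100
        }
      ]
      field_rules = [
        {
          field_name = name
          field_type = string
          field_value = [
            {
              rule_type = NOT_NULL   # assert the field is never null
            }
          ]
        }
      ]
    }
  }
}
```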
### rules [ConfigMap]
diff --git a/docs/en/connector-v2/sink/Clickhouse.md b/docs/en/connector-v2/sink/Clickhouse.md
index 2b2b55e1a6a6..3798e2baae34 100644
--- a/docs/en/connector-v2/sink/Clickhouse.md
+++ b/docs/en/connector-v2/sink/Clickhouse.md
@@ -30,7 +30,7 @@ They can be downloaded via install-plugin.sh or from the Maven central repositor
## Data Type Mapping
-| SeaTunnel Data type | Clickhouse Data type |
+| SeaTunnel Data Type | Clickhouse Data Type |
|---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
| STRING | String / Int128 / UInt128 / Int256 / UInt256 / Point / Ring / Polygon MultiPolygon |
| INT | Int8 / UInt8 / Int16 / UInt16 / Int32 |
diff --git a/docs/en/connector-v2/sink/ClickhouseFile.md b/docs/en/connector-v2/sink/ClickhouseFile.md
index cf53ce8b3dc5..ebafbc016282 100644
--- a/docs/en/connector-v2/sink/ClickhouseFile.md
+++ b/docs/en/connector-v2/sink/ClickhouseFile.md
@@ -20,7 +20,7 @@ Write data to Clickhouse can also be done using JDBC
## Options
-| name | type | required | default value |
+| Name | Type | Required | Default |
|------------------------|---------|----------|----------------------------------------|
| host | string | yes | - |
| database | string | yes | - |
diff --git a/docs/en/connector-v2/sink/Console.md b/docs/en/connector-v2/sink/Console.md
index f23d6d92403f..5d83c8102635 100644
--- a/docs/en/connector-v2/sink/Console.md
+++ b/docs/en/connector-v2/sink/Console.md
@@ -18,7 +18,7 @@ Used to send data to Console. Both support streaming and batch mode.
> For example, if the data from upstream is [`age: 12, name: jared`], the content send to console is the following: `{"name":"jared","age":17}`
-## Key features
+## Key Features
- [ ] [exactly-once](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/sink/CosFile.md b/docs/en/connector-v2/sink/CosFile.md
index 0535401734c8..f0d6517a055b 100644
--- a/docs/en/connector-v2/sink/CosFile.md
+++ b/docs/en/connector-v2/sink/CosFile.md
@@ -16,7 +16,7 @@ To use this connector you need put hadoop-cos-{hadoop.version}-{version}.jar and
:::
-## Key features
+## Key Features
- [x] [exactly-once](../../concept/connector-v2-features.md)
@@ -32,7 +32,7 @@ By default, we use 2PC commit to ensure `exactly-once`
## Options
-| name | type | required | default value | remarks |
+| Name | Type | Required | Default | Description |
|----------------------------------|---------|----------|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
| path | string | yes | - | |
| tmp_path | string | no | /tmp/seatunnel | The result file will write to a tmp path first and then use `mv` to submit tmp dir to target dir. Need a COS dir. |
diff --git a/docs/en/connector-v2/sink/DB2.md b/docs/en/connector-v2/sink/DB2.md
index 583dd0021d25..72baba870327 100644
--- a/docs/en/connector-v2/sink/DB2.md
+++ b/docs/en/connector-v2/sink/DB2.md
@@ -34,7 +34,7 @@ semantics (using XA transaction guarantee).
## Data Type Mapping
-| DB2 Data type | SeaTunnel Data type |
+| DB2 Data Type | SeaTunnel Data Type |
|------------------------------------------------------------------------------------------------------|---------------------|
| BOOLEAN | BOOLEAN |
| SMALLINT | SHORT |
diff --git a/docs/en/connector-v2/sink/Doris.md b/docs/en/connector-v2/sink/Doris.md
index a485eaf8c702..620e9e8fa563 100644
--- a/docs/en/connector-v2/sink/Doris.md
+++ b/docs/en/connector-v2/sink/Doris.md
@@ -2,6 +2,12 @@
> Doris sink connector
+## Support Doris Version
+
+- exactly-once & cdc supported `Doris version is >= 1.1.x`
+- Array data type supported `Doris version is >= 1.2.x`
+- Map data type will be support in `Doris version is 2.x`
+
## Support Those Engines
> Spark
@@ -18,18 +24,6 @@
Used to send data to Doris. Both support streaming and batch mode.
The internal implementation of Doris sink connector is cached and imported by stream load in batches.
-## Supported DataSource Info
-
-:::tip
-
-Version Supported
-
-* exactly-once & cdc supported `Doris version is >= 1.1.x`
-* Array data type supported `Doris version is >= 1.2.x`
-* Map data type will be support in `Doris version is 2.x`
-
-:::
-
## Sink Options
| Name | Type | Required | Default | Description |
@@ -120,7 +114,7 @@ You can use the following placeholders
## Data Type Mapping
-| Doris Data type | SeaTunnel Data type |
+| Doris Data Type | SeaTunnel Data Type |
|-----------------|-----------------------------------------|
| BOOLEAN | BOOLEAN |
| TINYINT | TINYINT |
diff --git a/docs/en/connector-v2/sink/Feishu.md b/docs/en/connector-v2/sink/Feishu.md
index 5573086db3e4..b965d8413f0f 100644
--- a/docs/en/connector-v2/sink/Feishu.md
+++ b/docs/en/connector-v2/sink/Feishu.md
@@ -23,7 +23,7 @@ Used to launch Feishu web hooks using data.
## Data Type Mapping
-| Seatunnel Data type | Feishu Data type |
+| SeaTunnel Data Type | Feishu Data Type |
|-----------------------------|------------------|
| ROW<br/>MAP | Json |
| NULL | null |
diff --git a/docs/en/connector-v2/sink/FtpFile.md b/docs/en/connector-v2/sink/FtpFile.md
index 3233fc3c6d63..cdc3512485e6 100644
--- a/docs/en/connector-v2/sink/FtpFile.md
+++ b/docs/en/connector-v2/sink/FtpFile.md
@@ -30,7 +30,7 @@ By default, we use 2PC commit to ensure `exactly-once`
## Options
-| name | type | required | default value | remarks |
+| Name | Type | Required | Default | Description |
|----------------------------------|---------|----------|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
| host | string | yes | - | |
| port | int | yes | - | |
diff --git a/docs/en/connector-v2/sink/Greenplum.md b/docs/en/connector-v2/sink/Greenplum.md
index acddeb9763ab..6d4622b437d2 100644
--- a/docs/en/connector-v2/sink/Greenplum.md
+++ b/docs/en/connector-v2/sink/Greenplum.md
@@ -6,7 +6,7 @@
Write data to Greenplum using [Jdbc connector](Jdbc.md).
-## Key features
+## Key Features
- [ ] [exactly-once](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/sink/IoTDB.md b/docs/en/connector-v2/sink/IoTDB.md
index ebf1a9e38f1d..8ace6724cb8a 100644
--- a/docs/en/connector-v2/sink/IoTDB.md
+++ b/docs/en/connector-v2/sink/IoTDB.md
@@ -35,7 +35,7 @@ There is a conflict of thrift version between IoTDB and Spark.Therefore, you nee
## Data Type Mapping
-| IotDB Data type | SeaTunnel Data type |
+| IotDB Data Type | SeaTunnel Data Type |
|-----------------|---------------------|
| BOOLEAN | BOOLEAN |
| INT32 | TINYINT |
@@ -98,9 +98,6 @@ source {
}
}
}
-
-...
-
```
Upstream SeaTunnelRow data format is the following:
diff --git a/docs/en/connector-v2/sink/Jdbc.md b/docs/en/connector-v2/sink/Jdbc.md
index 4e4a8b704eda..ef7458014aff 100644
--- a/docs/en/connector-v2/sink/Jdbc.md
+++ b/docs/en/connector-v2/sink/Jdbc.md
@@ -15,7 +15,7 @@ e.g. If you use MySQL, should download and copy `mysql-connector-java-xxx.jar` t
:::
-## Key features
+## Key Features
- [x] [exactly-once](../../concept/connector-v2-features.md)
@@ -26,7 +26,7 @@ support `Xa transactions`. You can set `is_exactly_once=true` to enable it.
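A hedged sketch of enabling this (connection details and the SQL are placeholders; `xa_data_source_class_name` is assumed to be the XA data source class of the target database, MySQL in this sketch):

```hocon
sink {
  Jdbc {
    url = "jdbc:mysql://localhost:3306/test"                   # placeholder connection URL
    driver = "com.mysql.cj.jdbc.Driver"                        # placeholder driver class
    user = "root"                                              # placeholder credentials
    password = "123456"
    query = "insert into test_table(name, age) values(?, ?)"   # placeholder insert statement
    is_exactly_once = true                                     # enable XA-based exactly-once delivery
    xa_data_source_class_name = "com.mysql.cj.jdbc.MysqlXADataSource"
  }
}
```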
## Options
-| name | type | required | default value |
+| Name | Type | Required | Default |
|-------------------------------------------|---------|----------|------------------------------|
| url | String | Yes | - |
| driver | String | Yes | - |
diff --git a/docs/en/connector-v2/sink/Kingbase.md b/docs/en/connector-v2/sink/Kingbase.md
index c2204d0209e2..361ca9a728dd 100644
--- a/docs/en/connector-v2/sink/Kingbase.md
+++ b/docs/en/connector-v2/sink/Kingbase.md
@@ -36,7 +36,7 @@
## Data Type Mapping
-| Kingbase Data type | SeaTunnel Data type |
+| Kingbase Data Type | SeaTunnel Data Type |
|----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
| BOOL | BOOLEAN |
| INT2 | SHORT |
diff --git a/docs/en/connector-v2/sink/Kudu.md b/docs/en/connector-v2/sink/Kudu.md
index 08518d7c7294..aa43a72522dd 100644
--- a/docs/en/connector-v2/sink/Kudu.md
+++ b/docs/en/connector-v2/sink/Kudu.md
@@ -12,14 +12,14 @@
> Flink
> SeaTunnel Zeta
-## Key features
+## Key Features
- [ ] [exactly-once](../../concept/connector-v2-features.md)
- [x] [cdc](../../concept/connector-v2-features.md)
## Data Type Mapping
-| SeaTunnel Data type | kudu Data type |
+| SeaTunnel Data Type | Kudu Data Type |
|---------------------|--------------------------|
| BOOLEAN | BOOL |
| INT | INT8<br/>INT16<br/>INT32 |
diff --git a/docs/en/connector-v2/sink/LocalFile.md b/docs/en/connector-v2/sink/LocalFile.md
index e9d798505129..2f88f0fe720c 100644
--- a/docs/en/connector-v2/sink/LocalFile.md
+++ b/docs/en/connector-v2/sink/LocalFile.md
@@ -14,7 +14,7 @@ If you use SeaTunnel Engine, It automatically integrated the hadoop jar when you
:::
-## Key features
+## Key Features
- [x] [exactly-once](../../concept/connector-v2-features.md)
@@ -30,7 +30,7 @@ By default, we use 2PC commit to ensure `exactly-once`
## Options
-| name | type | required | default value | remarks |
+| Name | Type | Required | Default | Description |
|----------------------------------|---------|----------|--------------------------------------------|---------------------------------------------------------------------------------------------------|
| path | string | yes | - | |
| tmp_path | string | no | /tmp/seatunnel | The result file will write to a tmp path first and then use `mv` to submit tmp dir to target dir. |
diff --git a/docs/en/connector-v2/sink/MongoDB.md b/docs/en/connector-v2/sink/MongoDB.md
index 31dc46743a30..e1cfd34ebad0 100644
--- a/docs/en/connector-v2/sink/MongoDB.md
+++ b/docs/en/connector-v2/sink/MongoDB.md
@@ -8,8 +8,7 @@
> Flink
> SeaTunnel Zeta
-Key Features
-------------
+## Key Features
- [x] [exactly-once](../../concept/connector-v2-features.md)
- [x] [cdc](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/sink/Mysql.md b/docs/en/connector-v2/sink/Mysql.md
index 10dd1c526dcf..ab18ca2dc335 100644
--- a/docs/en/connector-v2/sink/Mysql.md
+++ b/docs/en/connector-v2/sink/Mysql.md
@@ -38,7 +38,7 @@ semantics (using XA transaction guarantee).
## Data Type Mapping
-| Mysql Data type | SeaTunnel Data type |
+| Mysql Data Type | SeaTunnel Data Type |
|-----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| BIT(1)<br/>INT UNSIGNED | BOOLEAN |
| TINYINT<br/>TINYINT UNSIGNED<br/>SMALLINT<br/>SMALLINT UNSIGNED<br/>MEDIUMINT<br/>MEDIUMINT UNSIGNED<br/>INT<br/>INTEGER<br/>YEAR | INT |
diff --git a/docs/en/connector-v2/sink/Oracle.md b/docs/en/connector-v2/sink/Oracle.md
index e99b9ba89d3c..0d2b7ab50418 100644
--- a/docs/en/connector-v2/sink/Oracle.md
+++ b/docs/en/connector-v2/sink/Oracle.md
@@ -35,7 +35,7 @@ semantics (using XA transaction guarantee).
## Data Type Mapping
-| Oracle Data type | SeaTunnel Data type |
+| Oracle Data Type | SeaTunnel Data Type |
|--------------------------------------------------------------------------------------|---------------------|
| INTEGER | INT |
| FLOAT | DECIMAL(38, 18) |
diff --git a/docs/en/connector-v2/sink/OssFile.md b/docs/en/connector-v2/sink/OssFile.md
index f9e817ba562b..7cbab4347de4 100644
--- a/docs/en/connector-v2/sink/OssFile.md
+++ b/docs/en/connector-v2/sink/OssFile.md
@@ -39,7 +39,7 @@ If write to `csv`, `text` file type, All column will be string.
### Orc File Type
-| SeaTunnel Data type | Orc Data type |
+| SeaTunnel Data Type | Orc Data Type |
|----------------------|-----------------------|
| STRING | STRING |
| BOOLEAN | BOOLEAN |
@@ -61,7 +61,7 @@ If write to `csv`, `text` file type, All column will be string.
### Parquet File Type
-| SeaTunnel Data type | Parquet Data type |
+| SeaTunnel Data Type | Parquet Data Type |
|----------------------|-----------------------|
| STRING | STRING |
| BOOLEAN | BOOLEAN |
@@ -83,7 +83,7 @@ If write to `csv`, `text` file type, All column will be string.
## Options
-| name | type | required | default value | remarks |
+| Name | Type | Required | Default | Description |
|----------------------------------|---------|----------|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
| path | string | yes | The oss path to write file in. | |
| tmp_path | string | no | /tmp/seatunnel | The result file will write to a tmp path first and then use `mv` to submit tmp dir to target dir. Need a OSS dir. |
diff --git a/docs/en/connector-v2/sink/OssJindoFile.md b/docs/en/connector-v2/sink/OssJindoFile.md
index eb4e81a8fbd9..40441ea83ec6 100644
--- a/docs/en/connector-v2/sink/OssJindoFile.md
+++ b/docs/en/connector-v2/sink/OssJindoFile.md
@@ -36,7 +36,7 @@ By default, we use 2PC commit to ensure `exactly-once`
## Options
-| name | type | required | default value | remarks |
+| Name | Type | Required | Default | Description |
|----------------------------------|---------|----------|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
| path | string | yes | - | |
| tmp_path | string | no | /tmp/seatunnel | The result file will write to a tmp path first and then use `mv` to submit tmp dir to target dir. Need a OSS dir. |
diff --git a/docs/en/connector-v2/sink/PostgreSql.md b/docs/en/connector-v2/sink/PostgreSql.md
index 0868d64dc520..3e056376bd20 100644
--- a/docs/en/connector-v2/sink/PostgreSql.md
+++ b/docs/en/connector-v2/sink/PostgreSql.md
@@ -36,7 +36,7 @@ semantics (using XA transaction guarantee).
## Data Type Mapping
-| PostgreSQL Data type | SeaTunnel Data type |
+| PostgreSQL Data Type | SeaTunnel Data Type |
|--------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
| BOOL<br/> | BOOLEAN |
| _BOOL<br/> | ARRAY&lt;BOOLEAN&gt; |
diff --git a/docs/en/connector-v2/sink/RocketMQ.md b/docs/en/connector-v2/sink/RocketMQ.md
index 60ccf49c4cf5..a31534ec26bb 100644
--- a/docs/en/connector-v2/sink/RocketMQ.md
+++ b/docs/en/connector-v2/sink/RocketMQ.md
@@ -12,7 +12,7 @@
> Flink
> SeaTunnel Zeta
-## Key features
+## Key Features
- [x] [exactly-once](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/sink/Snowflake.md b/docs/en/connector-v2/sink/Snowflake.md
index 62f9bd86ea7d..b6da5f6ed2e6 100644
--- a/docs/en/connector-v2/sink/Snowflake.md
+++ b/docs/en/connector-v2/sink/Snowflake.md
@@ -8,7 +8,7 @@
> Flink
> SeaTunnel Zeta
-## Key features
+## Key Features
- [ ] [exactly-once](../../concept/connector-v2-features.md)
- [x] [cdc](../../concept/connector-v2-features.md)
@@ -19,7 +19,7 @@ Write data through jdbc. Support Batch mode and Streaming mode, support concurre
## Supported DataSource list
-| datasource | supported versions | driver | url | maven |
+| Datasource | Supported Versions | Driver | Url | Maven |
|------------|----------------------------------------------------------|-------------------------------------------|--------------------------------------------------------|-----------------------------------------------------------------------------|
| snowflake | Different dependency version has different driver class. | net.snowflake.client.jdbc.SnowflakeDriver | jdbc:snowflake://.snowflakecomputing.com | [Download](https://mvnrepository.com/artifact/net.snowflake/snowflake-jdbc) |
@@ -30,7 +30,7 @@ Write data through jdbc. Support Batch mode and Streaming mode, support concurre
## Data Type Mapping
-| Snowflake Data type | SeaTunnel Data type |
+| Snowflake Data Type | SeaTunnel Data Type |
|-----------------------------------------------------------------------------|---------------------|
| BOOLEAN | BOOLEAN |
| TINYINT<br/>SMALLINT<br/>BYTEINT<br/> | SHORT_TYPE |
@@ -48,26 +48,26 @@ Write data through jdbc. Support Batch mode and Streaming mode, support concurre
## Options
-| name | type | required | default value | description |
-|-------------------------------------------|---------|----------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| url | String | Yes | - | The URL of the JDBC connection. Refer to a case: jdbc:snowflake://.snowflakecomputing.com |
-| driver | String | Yes | - | The jdbc class name used to connect to the remote data source,<br/> if you use Snowflake the value is `net.snowflake.client.jdbc.SnowflakeDriver`. |
-| user | String | No | - | Connection instance user name |
-| password | String | No | - | Connection instance password |
-| query | String | No | - | Use this sql write upstream input datas to database. e.g `INSERT ...`,`query` have the higher priority |
-| database | String | No | - | Use this `database` and `table-name` auto-generate sql and receive upstream input datas write to database.<br/>This option is mutually exclusive with `query` and has a higher priority. |
-| table | String | No | - | Use database and this table-name auto-generate sql and receive upstream input datas write to database.<br/>This option is mutually exclusive with `query` and has a higher priority. |
-| primary_keys | Array | No | - | This option is used to support operations such as `insert`, `delete`, and `update` when automatically generate sql. |
-| support_upsert_by_query_primary_key_exist | Boolean | No | false | Choose to use INSERT sql, UPDATE sql to process update events(INSERT, UPDATE_AFTER) based on query primary key exists. This configuration is only used when database unsupport upsert syntax. **Note**: that this method has low performance |
-| connection_check_timeout_sec | Int | No | 30 | The time in seconds to wait for the database operation used to validate the connection to complete. |
-| max_retries | Int | No | 0 | The number of retries to submit failed (executeBatch) |
-| batch_size | Int | No | 1000 | For batch writing, when the number of buffered records reaches the number of `batch_size` or the time reaches `checkpoint.interval`<br/>, the data will be flushed into the database |
-| max_commit_attempts | Int | No | 3 | The number of retries for transaction commit failures |
-| transaction_timeout_sec | Int | No | -1 | The timeout after the transaction is opened, the default is -1 (never timeout). Note that setting the timeout may affect<br/>exactly-once semantics |
-| auto_commit | Boolean | No | true | Automatic transaction commit is enabled by default |
-| properties | Map | No | - | Additional connection configuration parameters,when properties and URL have the same parameters, the priority is determined by the<br/>specific implementation of the driver. For example, in MySQL, properties take precedence over the URL. |
-| common-options | | No | - | Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details |
-| enable_upsert | Boolean | No | true | Enable upsert by primary_keys exist, If the task has no key duplicate data, setting this parameter to `false` can speed up data import |
+| Name | Type | Required | Default | Description |
+|-------------------------------------------|---------|----------|---------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| url | String | Yes | - | The URL of the JDBC connection. Refer to a case: jdbc:snowflake://.snowflakecomputing.com |
+| driver | String | Yes | - | The jdbc class name used to connect to the remote data source,<br/> if you use Snowflake the value is `net.snowflake.client.jdbc.SnowflakeDriver`. |
+| user | String | No | - | Connection instance user name |
+| password | String | No | - | Connection instance password |
+| query | String | No | - | Use this sql write upstream input datas to database. e.g `INSERT ...`,`query` have the higher priority |
+| database | String | No | - | Use this `database` and `table-name` auto-generate sql and receive upstream input datas write to database.<br/>This option is mutually exclusive with `query` and has a higher priority. |
+| table | String | No | - | Use database and this table-name auto-generate sql and receive upstream input datas write to database.<br/>This option is mutually exclusive with `query` and has a higher priority. |
+| primary_keys | Array | No | - | This option is used to support operations such as `insert`, `delete`, and `update` when automatically generate sql. |
+| support_upsert_by_query_primary_key_exist | Boolean | No | false | Choose to use INSERT sql, UPDATE sql to process update events(INSERT, UPDATE_AFTER) based on query primary key exists. This configuration is only used when database unsupport upsert syntax. **Note**: that this method has low performance |
+| connection_check_timeout_sec | Int | No | 30 | The time in seconds to wait for the database operation used to validate the connection to complete. |
+| max_retries | Int | No | 0 | The number of retries to submit failed (executeBatch) |
+| batch_size | Int | No | 1000 | For batch writing, when the number of buffered records reaches the number of `batch_size` or the time reaches `checkpoint.interval`<br/>, the data will be flushed into the database |
+| max_commit_attempts | Int | No | 3 | The number of retries for transaction commit failures |
+| transaction_timeout_sec | Int | No | -1 | The timeout after the transaction is opened, the default is -1 (never timeout). Note that setting the timeout may affect<br/>exactly-once semantics |
+| auto_commit | Boolean | No | true | Automatic transaction commit is enabled by default |
+| properties | Map | No | - | Additional connection configuration parameters,when properties and URL have the same parameters, the priority is determined by the<br/>specific implementation of the driver. For example, in MySQL, properties take precedence over the URL. |
+| common-options | | No | - | Sink plugin common parameters, please refer to [Sink Common Options](common-options.md) for details |
+| enable_upsert | Boolean | No | true | Enable upsert by primary_keys exist, If the task has no key duplicate data, setting this parameter to `false` can speed up data import |
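Putting a few of these options together, a hedged sink sketch (account, credentials and SQL are placeholders, and the JDBC sink plugin is assumed as for the other JDBC-based sinks):

```hocon
sink {
  Jdbc {
    url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"   # placeholder account URL
    driver = "net.snowflake.client.jdbc.SnowflakeDriver"
    user = "seatunnel_user"                                          # placeholder credentials
    password = "********"
    query = "insert into test_table(name, age) values(?, ?)"         # placeholder insert statement
  }
}
```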
## tips
diff --git a/docs/en/connector-v2/sink/SqlServer.md b/docs/en/connector-v2/sink/SqlServer.md
index 761af9c1ee2e..72ad7ff29f5f 100644
--- a/docs/en/connector-v2/sink/SqlServer.md
+++ b/docs/en/connector-v2/sink/SqlServer.md
@@ -12,7 +12,7 @@
> Flink
> Seatunnel Zeta
-## Key features
+## Key Features
- [x] [exactly-once](../../concept/connector-v2-features.md)
- [x] [cdc](../../concept/connector-v2-features.md)
@@ -27,7 +27,7 @@ semantics (using XA transaction guarantee).
## Supported DataSource Info
-| datasource | supported versions | driver | url | maven |
+| Datasource | Supported Versions | Driver | Url | Maven |
|------------|-------------------------|----------------------------------------------|---------------------------------|-----------------------------------------------------------------------------------|
| SQL Server | support version >= 2008 | com.microsoft.sqlserver.jdbc.SQLServerDriver | jdbc:sqlserver://localhost:1433 | [Download](https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc) |
@@ -38,7 +38,7 @@ semantics (using XA transaction guarantee).
## Data Type Mapping
-| SQLserver Data type | Seatunnel Data type |
+| SQLserver Data Type | SeaTunnel Data Type |
|-----------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|
| BIT | BOOLEAN |
| TINYINT<br/>SMALLINT | SHORT |
diff --git a/docs/en/connector-v2/sink/Vertica.md b/docs/en/connector-v2/sink/Vertica.md
index f8c6c4a74682..620e8c045760 100644
--- a/docs/en/connector-v2/sink/Vertica.md
+++ b/docs/en/connector-v2/sink/Vertica.md
@@ -34,7 +34,7 @@ semantics (using XA transaction guarantee).
## Data Type Mapping
-| Vertica Data type | SeaTunnel Data type |
+| Vertica Data Type | SeaTunnel Data Type |
|-----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| BIT(1)<br/>INT UNSIGNED | BOOLEAN |
| TINYINT<br/>TINYINT UNSIGNED<br/>SMALLINT<br/>SMALLINT UNSIGNED<br/>MEDIUMINT<br/>MEDIUMINT UNSIGNED<br/>INT<br/>INTEGER<br/>YEAR | INT |
diff --git a/docs/en/connector-v2/source/Clickhouse.md b/docs/en/connector-v2/source/Clickhouse.md
index 284b7b14cb22..c23b25e92e79 100644
--- a/docs/en/connector-v2/source/Clickhouse.md
+++ b/docs/en/connector-v2/source/Clickhouse.md
@@ -34,7 +34,7 @@ They can be downloaded via install-plugin.sh or from the Maven central repositor
## Data Type Mapping
-| Clickhouse Data type | SeaTunnel Data type |
+| Clickhouse Data Type | SeaTunnel Data Type |
|-----------------------------------------------------------------------------------------------------------------------------------------------|---------------------|
| String / Int128 / UInt128 / Int256 / UInt256 / Point / Ring / Polygon MultiPolygon | STRING |
| Int8 / UInt8 / Int16 / UInt16 / Int32 | INT |
diff --git a/docs/en/connector-v2/source/DB2.md b/docs/en/connector-v2/source/DB2.md
index 0c512588ac8d..a5a69928451a 100644
--- a/docs/en/connector-v2/source/DB2.md
+++ b/docs/en/connector-v2/source/DB2.md
@@ -36,7 +36,7 @@ Read external data source data through JDBC.
## Data Type Mapping
-| DB2 Data type | SeaTunnel Data type |
+| DB2 Data Type | SeaTunnel Data Type |
|------------------------------------------------------------------------------------------------------|---------------------|---|
| BOOLEAN | BOOLEAN |
| SMALLINT | SHORT |
diff --git a/docs/en/connector-v2/source/FakeSource.md b/docs/en/connector-v2/source/FakeSource.md
index dff5e61bfaa2..43cc8dc671ed 100644
--- a/docs/en/connector-v2/source/FakeSource.md
+++ b/docs/en/connector-v2/source/FakeSource.md
@@ -2,6 +2,12 @@
> FakeSource connector
+## Support Those Engines
+
+> Spark
+> Flink
+> SeaTunnel Zeta
+
## Description
The FakeSource is a virtual data source, which randomly generates the number of rows according to the data structure of the user-defined schema,
@@ -371,14 +377,20 @@ rows = [
### Options `table-names` Case
-```agsl
-FakeSource {
- table-names = ["test.table1", "test.table2"]
+```hocon
+
+source {
+ # This is a example source plugin **only for test and demonstrate the feature source plugin**
+ FakeSource {
+ table-names = ["test.table1", "test.table2", "test.table3"]
+ parallelism = 1
schema = {
- table = "database.schema.table"
- ...
+ fields {
+ name = "string"
+ age = "int"
+ }
}
- ...
+ }
}
```
diff --git a/docs/en/connector-v2/source/Hive-jdbc.md b/docs/en/connector-v2/source/Hive-jdbc.md
index b301ea02f53c..e30db04d323b 100644
--- a/docs/en/connector-v2/source/Hive-jdbc.md
+++ b/docs/en/connector-v2/source/Hive-jdbc.md
@@ -41,7 +41,7 @@ Read external data source data through JDBC.
## Data Type Mapping
-| Hive Data type | SeaTunnel Data type |
+| Hive Data Type | SeaTunnel Data Type |
|-------------------------------------------------------------------------------------------|---------------------|
| BOOLEAN | BOOLEAN |
| TINYINT<br/>SMALLINT | SHORT |
diff --git a/docs/en/connector-v2/source/Hudi.md b/docs/en/connector-v2/source/Hudi.md
index 46a2815b5cef..353142a8e403 100644
--- a/docs/en/connector-v2/source/Hudi.md
+++ b/docs/en/connector-v2/source/Hudi.md
@@ -33,7 +33,7 @@ In order to use this connector, You must ensure your spark/flink cluster already
## Data Type Mapping
-| Hudi Data type | Seatunnel Data type |
+| Hudi Data Type | SeaTunnel Data Type |
|----------------|---------------------|
| ALL TYPE | STRING |
diff --git a/docs/en/connector-v2/source/IoTDB.md b/docs/en/connector-v2/source/IoTDB.md
index 1dda73e59c0d..7969f366f9ea 100644
--- a/docs/en/connector-v2/source/IoTDB.md
+++ b/docs/en/connector-v2/source/IoTDB.md
@@ -38,7 +38,7 @@ There is a conflict of thrift version between IoTDB and Spark.Therefore, you nee
## Data Type Mapping
-| IotDB Data type | SeaTunnel Data type |
+| IotDB Data Type | SeaTunnel Data Type |
|-----------------|---------------------|
| BOOLEAN | BOOLEAN |
| INT32 | TINYINT |
diff --git a/docs/en/connector-v2/source/Kudu.md b/docs/en/connector-v2/source/Kudu.md
index ac836b970aec..4d834e5e2d67 100644
--- a/docs/en/connector-v2/source/Kudu.md
+++ b/docs/en/connector-v2/source/Kudu.md
@@ -28,7 +28,7 @@ The tested kudu version is 1.11.1.
## Data Type Mapping
-| kudu Data type | SeaTunnel Data type |
+| Kudu Data Type | SeaTunnel Data Type |
|--------------------------|---------------------|
| BOOL | BOOLEAN |
| INT8<br/>INT16<br/>INT32 | INT |
@@ -75,14 +75,14 @@ env {
source {
# This is a example source plugin **only for test and demonstrate the feature source plugin**
- kudu{
- kudu_masters = "kudu-master:7051"
- table_name = "kudu_source_table"
- result_table_name = "kudu"
- enable_kerberos = true
- kerberos_principal = "xx@xx.COM"
- kerberos_keytab = "xx.keytab"
-}
+ kudu {
+ kudu_masters = "kudu-master:7051"
+ table_name = "kudu_source_table"
+ result_table_name = "kudu"
+ enable_kerberos = true
+ kerberos_principal = "xx@xx.COM"
+ kerberos_keytab = "xx.keytab"
+ }
}
transform {
@@ -93,14 +93,15 @@ sink {
source_table_name = "kudu"
}
- kudu{
+ kudu {
source_table_name = "kudu"
kudu_masters = "kudu-master:7051"
table_name = "kudu_sink_table"
enable_kerberos = true
kerberos_principal = "xx@xx.COM"
kerberos_keytab = "xx.keytab"
- }
+ }
+}
```
### Multiple Table
diff --git a/docs/en/connector-v2/source/MongoDB-CDC.md b/docs/en/connector-v2/source/MongoDB-CDC.md
index 14e240f50a39..a7bd980b6d32 100644
--- a/docs/en/connector-v2/source/MongoDB-CDC.md
+++ b/docs/en/connector-v2/source/MongoDB-CDC.md
@@ -75,7 +75,7 @@ db.createUser(
The following table lists the field data type mapping from MongoDB BSON type to Seatunnel data type.
-| MongoDB BSON type | Seatunnel Data type |
+| MongoDB BSON Type | SeaTunnel Data Type |
|-------------------|---------------------|
| ObjectId | STRING |
| String | STRING |
@@ -92,7 +92,7 @@ The following table lists the field data type mapping from MongoDB BSON type to
For specific types in MongoDB, we use Extended JSON format to map them to Seatunnel STRING type.
-| MongoDB BSON type | Seatunnel STRING |
+| MongoDB BSON Type | SeaTunnel STRING |
|-------------------|----------------------------------------------------------------------------------------------|
| Symbol | {"_value": {"$symbol": "12"}} |
| RegularExpression | {"_value": {"$regularExpression": {"pattern": "^9$", "options": "i"}}} |
diff --git a/docs/en/connector-v2/source/MySQL-CDC.md b/docs/en/connector-v2/source/MySQL-CDC.md
index bc562213a2d0..499830f7fa90 100644
--- a/docs/en/connector-v2/source/MySQL-CDC.md
+++ b/docs/en/connector-v2/source/MySQL-CDC.md
@@ -55,7 +55,7 @@ mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIE
mysql> FLUSH PRIVILEGES;
```
-### Enabling the MySQL binlog
+### Enabling the MySQL Binlog
You must enable binary logging for MySQL replication. The binary logs record transaction updates for replication tools to propagate changes.
@@ -127,7 +127,7 @@ When an initial consistent snapshot is made for large databases, your establishe
## Data Type Mapping
-| Mysql Data type | SeaTunnel Data type |
+| Mysql Data Type | SeaTunnel Data Type |
|------------------------------------------------------------------------------------------|---------------------|
| BIT(1)<br/>TINYINT(1) | BOOLEAN |
| TINYINT | TINYINT |
diff --git a/docs/en/connector-v2/source/Mysql.md b/docs/en/connector-v2/source/Mysql.md
index e1fe6a3e8eda..216d3874bf8f 100644
--- a/docs/en/connector-v2/source/Mysql.md
+++ b/docs/en/connector-v2/source/Mysql.md
@@ -41,7 +41,7 @@ Read external data source data through JDBC.
## Data Type Mapping
-| Mysql Data type | SeaTunnel Data type |
+| Mysql Data Type | SeaTunnel Data Type |
|-----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| BIT(1)<br/>INT UNSIGNED | BOOLEAN |
| TINYINT<br/>TINYINT UNSIGNED<br/>SMALLINT<br/>SMALLINT UNSIGNED<br/>MEDIUMINT<br/>MEDIUMINT UNSIGNED<br/>INT<br/>INTEGER<br/>YEAR | INT |
diff --git a/docs/en/connector-v2/source/Oracle.md b/docs/en/connector-v2/source/Oracle.md
index 46d17619673d..eec999fbcfb8 100644
--- a/docs/en/connector-v2/source/Oracle.md
+++ b/docs/en/connector-v2/source/Oracle.md
@@ -25,7 +25,7 @@ Read external data source data through JDBC.
## Supported DataSource Info
-| Datasource | Supported versions | Driver | Url | Maven |
+| Datasource | Supported Versions | Driver | Url | Maven |
|------------|----------------------------------------------------------|--------------------------|----------------------------------------|--------------------------------------------------------------------|
| Oracle | Different dependency version has different driver class. | oracle.jdbc.OracleDriver | jdbc:oracle:thin:@datasource01:1523:xe | https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8 |
@@ -37,7 +37,7 @@ Read external data source data through JDBC.
## Data Type Mapping
-| Oracle Data type | SeaTunnel Data type |
+| Oracle Data Type | SeaTunnel Data Type |
|--------------------------------------------------------------------------------------|---------------------|
| INTEGER | INT |
| FLOAT | DECIMAL(38, 18) |
diff --git a/docs/en/connector-v2/source/PostgreSQL.md b/docs/en/connector-v2/source/PostgreSQL.md
index e991f22c1f8e..34dcd5ec103a 100644
--- a/docs/en/connector-v2/source/PostgreSQL.md
+++ b/docs/en/connector-v2/source/PostgreSQL.md
@@ -25,7 +25,7 @@ Read external data source data through JDBC.
## Supported DataSource Info
-| Datasource | Supported versions | Driver | Url | Maven |
+| Datasource | Supported Versions | Driver | Url | Maven |
|------------|------------------------------------------------------------|-----------------------|---------------------------------------|--------------------------------------------------------------------------|
| PostgreSQL | Different dependency version has different driver class. | org.postgresql.Driver | jdbc:postgresql://localhost:5432/test | [Download](https://mvnrepository.com/artifact/org.postgresql/postgresql) |
| PostgreSQL | If you want to manipulate the GEOMETRY type in PostgreSQL. | org.postgresql.Driver | jdbc:postgresql://localhost:5432/test | [Download](https://mvnrepository.com/artifact/net.postgis/postgis-jdbc) |
diff --git a/docs/en/connector-v2/source/RocketMQ.md b/docs/en/connector-v2/source/RocketMQ.md
index 4e903dc900b8..d496a259bdb6 100644
--- a/docs/en/connector-v2/source/RocketMQ.md
+++ b/docs/en/connector-v2/source/RocketMQ.md
@@ -12,7 +12,7 @@
> Flink
> SeaTunnel Zeta
-## Key features
+## Key Features
- [x] [batch](../../concept/connector-v2-features.md)
- [x] [stream](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/source/SftpFile.md b/docs/en/connector-v2/source/SftpFile.md
index 05b3bc4f38f3..b6096606a21f 100644
--- a/docs/en/connector-v2/source/SftpFile.md
+++ b/docs/en/connector-v2/source/SftpFile.md
@@ -8,7 +8,7 @@
> Flink
> SeaTunnel Zeta
-## Key features
+## Key Features
- [x] [batch](../../concept/connector-v2-features.md)
- [ ] [stream](../../concept/connector-v2-features.md)
diff --git a/docs/en/connector-v2/source/SqlServer-CDC.md b/docs/en/connector-v2/source/SqlServer-CDC.md
index 62b788ac155a..02cc4c21ac78 100644
--- a/docs/en/connector-v2/source/SqlServer-CDC.md
+++ b/docs/en/connector-v2/source/SqlServer-CDC.md
@@ -37,7 +37,7 @@ Please download and put SqlServer driver in `${SEATUNNEL_HOME}/lib/` dir. For ex
## Data Type Mapping
-| SQLserver Data type | SeaTunnel Data type |
+| SQLserver Data Type | SeaTunnel Data Type |
|---------------------------------------------------------------------------------------------------|----------------------------------------------------|
| CHAR<br/>VARCHAR<br/>NCHAR<br/>NVARCHAR<br/>STRUCT<br/>CLOB<br/>LONGVARCHAR<br/>LONGNVARCHAR<br/> | STRING |
| BLOB | BYTES |
diff --git a/docs/en/connector-v2/source/Vertica.md b/docs/en/connector-v2/source/Vertica.md
index 9e945356ffa5..c78625dab09d 100644
--- a/docs/en/connector-v2/source/Vertica.md
+++ b/docs/en/connector-v2/source/Vertica.md
@@ -36,7 +36,7 @@ Read external data source data through JDBC.
## Data Type Mapping
-| Vertical Data type | SeaTunnel Data type |
+| Vertical Data Type | SeaTunnel Data Type |
|-----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| BIT | BOOLEAN |
| TINYINT<br/>TINYINT UNSIGNED<br/>SMALLINT<br/>SMALLINT UNSIGNED<br/>MEDIUMINT<br/>MEDIUMINT UNSIGNED<br/>INT<br/>INTEGER<br/>YEAR | INT |