[Improve][Connector-V2][doc] Modify some document title specifications
zhilinli123 authored and chaorongzhi committed Aug 21, 2024
1 parent 4cd2197 commit 78bdf9e
Showing 48 changed files with 153 additions and 150 deletions.
2 changes: 1 addition & 1 deletion docs/en/connector-v2/formats/avro.md
@@ -2,7 +2,7 @@

Avro is very popular in streaming data pipelines. SeaTunnel now supports the Avro format in the Kafka connector.

-# How to use Avro format
+# How To Use

## Kafka uses example

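
The original example under this heading is truncated in the diff view. For orientation, a minimal job using `format = avro` in the Kafka source might look like the following sketch; the broker address, topic name, and schema fields are placeholders, not values taken from this change:

```bash
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  Kafka {
    bootstrap.servers = "localhost:9092"  # placeholder broker address
    topic = "test_avro_topic"             # placeholder topic
    format = avro                         # the Avro format described above
    schema = {
      fields {
        id = bigint
        name = string
      }
    }
  }
}

sink {
  Console {}
}
```
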
4 changes: 2 additions & 2 deletions docs/en/connector-v2/formats/canal-json.md
@@ -15,14 +15,14 @@ SeaTunnel also supports to encode the INSERT/UPDATE/DELETE messages in SeaTunnel

# Format Options

-| option | default | required | Description |
+| Option | Default | Required | Description |
|--------------------------------|---------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| format | (none) | yes | Specify what format to use, here should be 'canal_json'. |
| canal_json.ignore-parse-errors | false | no | Skip fields and rows with parse errors instead of failing. Fields are set to null in case of errors. |
| canal_json.database.include | (none) | no | An optional regular expression to only read the specific databases changelog rows by regular matching the "database" meta field in the Canal record. The pattern string is compatible with Java's Pattern. |
| canal_json.table.include | (none) | no | An optional regular expression to only read the specific tables changelog rows by regular matching the "table" meta field in the Canal record. The pattern string is compatible with Java's Pattern. |

-# How to use Canal format
+# How to use

## Kafka uses example

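
The example under this heading is truncated in the diff view. As a reference, a Kafka source reading Canal changelog records with `format = canal_json` might look like the sketch below; the broker, topic, and schema fields are placeholders:

```bash
source {
  Kafka {
    bootstrap.servers = "localhost:9092"  # placeholder broker address
    topic = "products_binlog"             # placeholder topic carrying Canal records
    format = canal_json
    schema = {
      fields {
        id = bigint
        name = string
        weight = double
      }
    }
  }
}
```
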
6 changes: 3 additions & 3 deletions docs/en/connector-v2/formats/cdc-compatible-debezium-json.md
@@ -1,12 +1,12 @@
-# CDC compatible debezium-json
+# CDC Compatible Debezium-json

SeaTunnel supports interpreting CDC records as Debezium-JSON messages and publishing them to MQ (Kafka) systems.

This feature is useful in many cases, for example to stay compatible with the Debezium ecosystem.

-# How to use
+# How To Use

-## MySQL-CDC output to Kafka
+## MySQL-CDC Sink Kafka

```bash
env {
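  # The original example is truncated at this point in the diff view.
  # The lines below are only a sketch of how such a job typically continues;
  # connection details, table names, and option names are placeholders or
  # assumptions, not part of this commit.
  parallelism = 1
  job.mode = "STREAMING"
}

source {
  MySQL-CDC {
    base-url = "jdbc:mysql://127.0.0.1:3306/test"  # placeholder MySQL instance
    username = "root"
    password = "root@123"
    table-names = ["test.products"]
    format = compatible_debezium_json              # assumed: emit Debezium-compatible records
  }
}

sink {
  Kafka {
    bootstrap.servers = "127.0.0.1:9092"           # placeholder broker address
    topic = "test.products"                        # placeholder topic
    format = compatible_debezium_json
  }
}
```
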
6 changes: 3 additions & 3 deletions docs/en/connector-v2/formats/debezium-json.md
@@ -15,14 +15,14 @@ Seatunnel also supports to encode the INSERT/UPDATE/DELETE messages in Seatunnel

# Format Options

-| option | default | required | Description |
+| Option | Default | Required | Description |
|-----------------------------------|---------|----------|------------------------------------------------------------------------------------------------------|
| format | (none) | yes | Specify what format to use, here should be 'debezium_json'. |
| debezium-json.ignore-parse-errors | false | no | Skip fields and rows with parse errors instead of failing. Fields are set to null in case of errors. |

-# How to use Debezium format
+# How To Use

-## Kafka uses example
+## Kafka Uses example

Debezium provides a unified format for changelogs; here is a simple example of an update operation captured from a MySQL products table:

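
The Debezium message example referenced above is truncated in this diff view. For reference, a Kafka source that consumes such messages with `format = debezium_json` might look like this minimal sketch; the broker, topic, and schema fields are placeholders:

```bash
source {
  Kafka {
    bootstrap.servers = "localhost:9092"  # placeholder broker address
    topic = "products_binlog"             # placeholder topic carrying Debezium messages
    format = debezium_json
    schema = {
      fields {
        id = int
        name = string
        description = string
        weight = string
      }
    }
  }
}
```
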
@@ -2,9 +2,9 @@

The SeaTunnel Kafka connector supports parsing data extracted through Kafka Connect sources, especially data extracted from Kafka Connect JDBC and Kafka Connect Debezium.

-# How to use
+# How To Use

-## Kafka output to mysql
+## Kafka Sink Mysql

```bash
env {
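  # The original example is truncated at this point in the diff view.
  # The lines below are only a sketch of how such a job typically continues;
  # the format value, connection details, and table names are assumptions,
  # not part of this commit.
  parallelism = 1
  job.mode = "STREAMING"
}

source {
  Kafka {
    bootstrap.servers = "localhost:9092"    # placeholder broker address
    topic = "jdbc_source_record"            # placeholder topic with Kafka Connect records
    format = COMPATIBLE_KAFKA_CONNECT_JSON  # assumed format name for Kafka Connect JSON
    schema = {
      fields {
        id = int
        name = string
      }
    }
  }
}

sink {
  Jdbc {
    driver = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://localhost:3306/seatunnel"  # placeholder MySQL instance
    user = "root"
    password = "123456"
    generate_sink_sql = true
    database = "seatunnel"
    table = "jdbc_sink"
  }
}
```
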
2 changes: 1 addition & 1 deletion docs/en/connector-v2/formats/ogg-json.md
@@ -13,7 +13,7 @@ Seatunnel also supports to encode the INSERT/UPDATE/DELETE messages in Seatunnel

# Format Options

-| option | default | required | Description |
+| Option | Default | Required | Description |
|------------------------------|---------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| format | (none) | yes | Specify what format to use, here should be 'ogg_json'. |
| ogg_json.ignore-parse-errors | false | no | Skip fields and rows with parse errors instead of failing. Fields are set to null in case of errors. |
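
The rest of this file is truncated in the diff view. For reference, a Kafka source reading Oracle GoldenGate (OGG) changelog records with `format = ogg_json` might look like this minimal sketch; the broker, topic, and schema fields are placeholders:

```bash
source {
  Kafka {
    bootstrap.servers = "localhost:9092"  # placeholder broker address
    topic = "ogg_products"                # placeholder topic carrying OGG records
    format = ogg_json
    schema = {
      fields {
        id = int
        name = string
        weight = string
      }
    }
  }
}
```
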
4 changes: 2 additions & 2 deletions docs/en/connector-v2/sink/AmazonDynamoDB.md
@@ -6,13 +6,13 @@

Write data to Amazon DynamoDB

-## Key features
+## Key Features

- [ ] [exactly-once](../../concept/connector-v2-features.md)

## Options

-| name | type | required | default value |
+| Name | Type | Required | Default value |
|-------------------|--------|----------|---------------|
| url | string | yes | - |
| region | string | yes | - |
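
The options table is truncated at this point in the diff view. For orientation, a minimal AmazonDynamoDB sink block might look like the sketch below. Only `url` and `region` appear in the visible rows, so the credential and table option names here are assumptions:

```bash
sink {
  AmazonDynamoDB {
    url = "http://127.0.0.1:8000"       # placeholder endpoint
    region = "us-east-1"                # placeholder region
    access_key_id = "dummy_key"         # assumed option name
    secret_access_key = "dummy_secret"  # assumed option name
    table = "TableName"                 # assumed option name
  }
}
```
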
64 changes: 32 additions & 32 deletions docs/en/connector-v2/sink/Assert.md
@@ -6,43 +6,43 @@

A flink sink plugin which can assert illegal data by user defined rules

-## Key features
+## Key Features

- [ ] [exactly-once](../../concept/connector-v2-features.md)

## Options

-| name | type | required | default value |
-|------|------|----------|---------------|
-| rules | ConfigMap | yes | - |
-| rules.field_rules | string | yes | - |
-| rules.field_rules.field_name | string | yes | - |
-| rules.field_rules.field_type | string | no | - |
-| rules.field_rules.field_value | ConfigList | no | - |
-| rules.field_rules.field_value.rule_type | string | no | - |
-| rules.field_rules.field_value.rule_value | double | no | - |
-| rules.row_rules | string | yes | - |
-| rules.row_rules.rule_type | string | no | - |
-| rules.row_rules.rule_value | string | no | - |
-| rules.catalog_table_rule | ConfigMap | no | - |
-| rules.catalog_table_rule.primary_key_rule | ConfigMap | no | - |
-| rules.catalog_table_rule.primary_key_rule.primary_key_name | string | no | - |
-| rules.catalog_table_rule.primary_key_rule.primary_key_columns | list | no | - |
-| rules.catalog_table_rule.constraint_key_rule | ConfigList | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_name | string | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_type | string | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns | ConfigList | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_column_name | string | no | - |
-| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_sort_type | string | no | - |
-| rules.catalog_table_rule.column_rule | ConfigList | no | - |
-| rules.catalog_table_rule.column_rule.name | string | no | - |
-| rules.catalog_table_rule.column_rule.type | string | no | - |
-| rules.catalog_table_rule.column_rule.column_length | int | no | - |
-| rules.catalog_table_rule.column_rule.nullable | boolean | no | - |
-| rules.catalog_table_rule.column_rule.default_value | string | no | - |
-| rules.catalog_table_rule.column_rule.comment | comment | no | - |
-| rules.table-names | list | no | - |
-| common-options | | no | - |
+| Name | Type | Required | Default |
+|------|------|----------|---------|
+| rules | ConfigMap | yes | - |
+| rules.field_rules | string | yes | - |
+| rules.field_rules.field_name | string | yes | - |
+| rules.field_rules.field_type | string | no | - |
+| rules.field_rules.field_value | ConfigList | no | - |
+| rules.field_rules.field_value.rule_type | string | no | - |
+| rules.field_rules.field_value.rule_value | double | no | - |
+| rules.row_rules | string | yes | - |
+| rules.row_rules.rule_type | string | no | - |
+| rules.row_rules.rule_value | string | no | - |
+| rules.catalog_table_rule | ConfigMap | no | - |
+| rules.catalog_table_rule.primary_key_rule | ConfigMap | no | - |
+| rules.catalog_table_rule.primary_key_rule.primary_key_name | string | no | - |
+| rules.catalog_table_rule.primary_key_rule.primary_key_columns | list | no | - |
+| rules.catalog_table_rule.constraint_key_rule | ConfigList | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_name | string | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_type | string | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns | ConfigList | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_column_name | string | no | - |
+| rules.catalog_table_rule.constraint_key_rule.constraint_key_columns.constraint_key_sort_type | string | no | - |
+| rules.catalog_table_rule.column_rule | ConfigList | no | - |
+| rules.catalog_table_rule.column_rule.name | string | no | - |
+| rules.catalog_table_rule.column_rule.type | string | no | - |
+| rules.catalog_table_rule.column_rule.column_length | int | no | - |
+| rules.catalog_table_rule.column_rule.nullable | boolean | no | - |
+| rules.catalog_table_rule.column_rule.default_value | string | no | - |
+| rules.catalog_table_rule.column_rule.comment | comment | no | - |
+| rules.table-names | list | no | - |
+| common-options | | no | - |

### rules [ConfigMap]

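
Because the options above describe a deeply nested rule structure, a small Assert sink sketch may help orient the reader; the field names, rule types, and thresholds below are illustrative only:

```bash
sink {
  Assert {
    rules {
      row_rules = [
        { rule_type = MAX_ROW, rule_value = 10 },
        { rule_type = MIN_ROW, rule_value = 5 }
      ]
      field_rules = [
        {
          field_name = name
          field_type = string
          field_value = [
            { rule_type = NOT_NULL }
          ]
        },
        {
          field_name = age
          field_type = int
          field_value = [
            { rule_type = MIN, rule_value = 0 },
            { rule_type = MAX, rule_value = 100 }
          ]
        }
      ]
    }
  }
}
```
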
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/Clickhouse.md
@@ -30,7 +30,7 @@ They can be downloaded via install-plugin.sh or from the Maven central repository.

## Data Type Mapping

-| SeaTunnel Data type | Clickhouse Data type |
+| SeaTunnel Data Type | Clickhouse Data Type |
|---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
| STRING | String / Int128 / UInt128 / Int256 / UInt256 / Point / Ring / Polygon MultiPolygon |
| INT | Int8 / UInt8 / Int16 / UInt16 / Int32 |
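
The mapping table is truncated here. As a companion reference, a minimal Clickhouse sink block might look like the following sketch; host, database, table, and credentials are placeholders:

```bash
sink {
  Clickhouse {
    host = "localhost:8123"
    database = "default"
    table = "test_table"
    username = "default"
    password = ""
  }
}
```
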
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/ClickhouseFile.md
@@ -20,7 +20,7 @@ Write data to Clickhouse can also be done using JDBC

## Options

-| name | type | required | default value |
+| Name | Type | Required | Default |
|------------------------|---------|----------|----------------------------------------|
| host | string | yes | - |
| database | string | yes | - |
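
The options table is truncated here. For orientation, a ClickhouseFile sink block might look like the sketch below. Only `host` and `database` appear in the visible rows; the table, credential, and clickhouse-local path option names are assumptions:

```bash
sink {
  ClickhouseFile {
    host = "localhost:8123"
    database = "default"
    table = "test_table"                                 # assumed option name
    username = "default"                                 # assumed option name
    password = ""                                        # assumed option name
    clickhouse_local_path = "/usr/bin/clickhouse-local"  # assumed option name
  }
}
```
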
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/Console.md
@@ -18,7 +18,7 @@ Used to send data to Console. Both support streaming and batch mode.

> For example, if the data from upstream is [`age: 12, name: jared`], the content sent to the console is the following: `{"name":"jared","age":12}`
-## Key features
+## Key Features

- [ ] [exactly-once](../../concept/connector-v2-features.md)

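
For reference, the Console sink needs no connector-specific options in its simplest form; a minimal block is just:

```bash
sink {
  Console {}
}
```
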
4 changes: 2 additions & 2 deletions docs/en/connector-v2/sink/CosFile.md
@@ -16,7 +16,7 @@ To use this connector you need put hadoop-cos-{hadoop.version}-{version}.jar and

:::

-## Key features
+## Key Features

- [x] [exactly-once](../../concept/connector-v2-features.md)

@@ -32,7 +32,7 @@ By default, we use 2PC commit to ensure `exactly-once`

## Options

-| name | type | required | default value | remarks |
+| Name | Type | Required | Default | Description |
|----------------------------------|---------|----------|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
| path | string | yes | - | |
| tmp_path | string | no | /tmp/seatunnel | The result file will write to a tmp path first and then use `mv` to submit tmp dir to target dir. Need a COS dir. |
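
The options table is truncated here. For orientation, a CosFile sink block might look like the sketch below. Only `path` and `tmp_path` appear in the visible rows; the bucket, credential, region, and format option names are assumptions:

```bash
sink {
  CosFile {
    path = "/seatunnel/sink"                # target COS directory
    tmp_path = "/tmp/seatunnel"             # staging dir, moved to `path` on commit
    bucket = "cosn://my-bucket-1250000000"  # assumed option name, placeholder bucket
    secret_id = "your_secret_id"            # assumed option name
    secret_key = "your_secret_key"          # assumed option name
    region = "ap-guangzhou"                 # assumed option name, placeholder region
    file_format_type = "text"               # assumed option name
  }
}
```
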
2 changes: 1 addition & 1 deletion docs/en/connector-v2/sink/DB2.md
@@ -34,7 +34,7 @@ semantics (using XA transaction guarantee).
## Data Type Mapping

-| DB2 Data type | SeaTunnel Data type |
+| DB2 Data Type | SeaTunnel Data Type |
|------------------------------------------------------------------------------------------------------|---------------------|
| BOOLEAN | BOOLEAN |
| SMALLINT | SHORT |
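
The type mapping is truncated here. The DB2 sink is JDBC-based (the hunk above mentions XA transaction guarantees), so a job typically uses the generic Jdbc sink plugin; a minimal sketch with placeholder connection details and an illustrative insert statement:

```bash
sink {
  Jdbc {
    url = "jdbc:db2://localhost:50000/testdb"  # placeholder DB2 instance
    driver = "com.ibm.db2.jcc.DB2Driver"
    user = "db2inst1"
    password = "password"
    query = "insert into sink_table(name, age) values(?, ?)"
  }
}
```
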
20 changes: 7 additions & 13 deletions docs/en/connector-v2/sink/Doris.md
@@ -2,6 +2,12 @@

> Doris sink connector
+## Support Doris Version
+
+- exactly-once & cdc supported `Doris version is >= 1.1.x`
+- Array data type supported `Doris version is >= 1.2.x`
+- Map data type will be support in `Doris version is 2.x`
+
## Support Those Engines

> Spark<br/>
@@ -18,18 +24,6 @@
Used to send data to Doris. It supports both streaming and batch mode.
Internally, the Doris sink connector caches data and imports it in batches via Stream Load.

-## Supported DataSource Info
-
-:::tip
-
-Version Supported
-
-* exactly-once & cdc supported `Doris version is >= 1.1.x`
-* Array data type supported `Doris version is >= 1.2.x`
-* Map data type will be support in `Doris version is 2.x`
-
-:::
-
## Sink Options

| Name | Type | Required | Default | Description |
@@ -120,7 +114,7 @@ You can use the following placeholders

## Data Type Mapping

-| Doris Data type | SeaTunnel Data type |
+| Doris Data Type | SeaTunnel Data Type |
|-----------------|-----------------------------------------|
| BOOLEAN | BOOLEAN |
| TINYINT | TINYINT |
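
The type mapping and remaining examples are truncated here. For orientation, a Doris sink block might look like the sketch below. The FE address, credentials, and names are placeholders, and some option names (for example `database`/`table` versus the older `table.identifier`) vary across connector versions, so treat them as assumptions:

```bash
sink {
  Doris {
    fenodes = "doris-fe:8030"   # placeholder FE HTTP address
    username = "root"
    password = ""
    database = "test_db"        # assumed option name
    table = "sink_table"        # assumed option name
    sink.enable-2pc = "true"
    sink.label-prefix = "seatunnel"
    doris.config = {
      format = "json"
      read_json_by_line = "true"
    }
  }
}
```
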
