From 18d2d41229ad32e6b06dbb15d4e9296b26951d3a Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Wed, 15 Jun 2022 10:34:33 +0800 Subject: [PATCH] deploy using tiup: align three PRs (#8603) --- clinic/clinic-user-guide-for-tiup.md | 2 +- hardware-and-software-requirements.md | 2 +- migration-tools.md | 2 +- production-deployment-using-tiup.md | 212 ++++++++++++++------------ scale-tidb-using-tiup.md | 151 ++++++++++-------- tiup/tiup-cluster.md | 4 +- upgrade-tidb-using-tiup.md | 2 +- 7 files changed, 212 insertions(+), 163 deletions(-) diff --git a/clinic/clinic-user-guide-for-tiup.md b/clinic/clinic-user-guide-for-tiup.md index 485d2f9a74e2d..2ad1f98043924 100644 --- a/clinic/clinic-user-guide-for-tiup.md +++ b/clinic/clinic-user-guide-for-tiup.md @@ -48,7 +48,7 @@ Before using PingCAP Clinic, you need to install Diag (a component to collect da > **Note:** > - > - For clusters without an internet connection, you need to deploy Diag offline. For details, refer to [Deploy TiUP offline: Method 2](/production-deployment-using-tiup.md#method-2-deploy-tiup-offline). + > - For clusters without an internet connection, you need to deploy Diag offline. For details, refer to [Deploy TiUP offline: Method 2](/production-deployment-using-tiup.md#deploy-tiup-offline). > - Diag is **only** provided in the TiDB Server offline mirror package of v5.4.0 or later. 2. Get and set an access token (token) to upload data. diff --git a/hardware-and-software-requirements.md b/hardware-and-software-requirements.md index acca98f40672c..c4850c7036748 100644 --- a/hardware-and-software-requirements.md +++ b/hardware-and-software-requirements.md @@ -39,7 +39,7 @@ Other Linux OS versions such as Debian Linux and Fedora Linux might work but are > **Note:** > -> It is required that you [deploy TiUP on the control machine](/production-deployment-using-tiup.md#step-2-install-tiup-on-the-control-machine) to operate and manage TiDB clusters. +> It is required that you [deploy TiUP on the control machine](/production-deployment-using-tiup.md#step-2-deploy-tiup-on-the-control-machine) to operate and manage TiDB clusters. ### Target machines diff --git a/migration-tools.md b/migration-tools.md index c0e997ef7555e..823ba74ab74e9 100644 --- a/migration-tools.md +++ b/migration-tools.md @@ -101,5 +101,5 @@ tiup update --self && tiup update dm ## See also -- [Deploy TiUP offline](/production-deployment-using-tiup.md#method-2-deploy-tiup-offline) +- [Deploy TiUP offline](/production-deployment-using-tiup.md#deploy-tiup-offline) - [Download and install tools in binary](/download-ecosystem-tools.md) diff --git a/production-deployment-using-tiup.md b/production-deployment-using-tiup.md index 0e0d99ed65100..a6120da1ef1ac 100644 --- a/production-deployment-using-tiup.md +++ b/production-deployment-using-tiup.md @@ -13,22 +13,25 @@ TiUP supports deploying TiDB, TiFlash, TiDB Binlog, TiCDC, and the monitoring sy > > TiDB, TiUP and TiDB Dashboard share usage details with PingCAP to help understand how to improve the product. For details about what is shared and how to disable the sharing, see [Telemetry](/telemetry.md). -## Step 1: Prerequisites and precheck +## Step 1. Prerequisites and precheck Make sure that you have read the following documents: - [Hardware and software requirements](/hardware-and-software-requirements.md) - [Environment and system configuration check](/check-before-deployment.md) -## Step 2: Install TiUP on the control machine +## Step 2. 
Deploy TiUP on the control machine -You can install TiUP on the control machine in either of the two ways: online deployment and offline deployment. +You can deploy TiUP on the control machine in either of the two ways: online deployment and offline deployment. -### Method 1: Deploy TiUP online + +
-Log in to the control machine using a regular user account (take the `tidb` user as an example). All the following TiUP installation and cluster management operations can be performed by the `tidb` user. +### Deploy TiUP online -1. Install TiUP by executing the following command: +Log in to the control machine using a regular user account (take the `tidb` user as an example). Subsequent TiUP installation and cluster management can be performed by the `tidb` user. + +1. Install TiUP by running the following command: {{< copyable "shell-regular" >}} @@ -36,23 +39,23 @@ Log in to the control machine using a regular user account (take the `tidb` user curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh ``` -2. Set the TiUP environment variables: +2. Set TiUP environment variables: - Redeclare the global environment variables: + 1. Redeclare the global environment variables: - {{< copyable "shell-regular" >}} + {{< copyable "shell-regular" >}} - ```shell - source .bash_profile - ``` + ```shell + source .bash_profile + ``` - Confirm whether TiUP is installed: + 2. Confirm whether TiUP is installed: - {{< copyable "shell-regular" >}} + {{< copyable "shell-regular" >}} - ```shell - which tiup - ``` + ```shell + which tiup + ``` 3. Install the TiUP cluster component: @@ -70,7 +73,7 @@ Log in to the control machine using a regular user account (take the `tidb` user tiup update --self && tiup update cluster ``` - Expected output includes `"Update successfully!"`. + If `“Update successfully!”` is displayed, the TiUP cluster is updated successfully. 5. Verify the current version of your TiUP cluster: @@ -80,13 +83,17 @@ Log in to the control machine using a regular user account (take the `tidb` user tiup --binary cluster ``` -### Method 2: Deploy TiUP offline +
+ +
+ +### Deploy TiUP offline Perform the following steps in this section to deploy a TiDB cluster offline using TiUP: -#### Step 1: Prepare the TiUP offline component package +#### Prepare the TiUP offline component package -To prepare the TiUP offline component package, manually pack an offline component package using `tiup mirror clone`. +To prepare the TiUP offline component package, you can manually pack an offline component package using `tiup mirror clone`. 1. Install the TiUP package manager online. @@ -182,7 +189,7 @@ To prepare the TiUP offline component package, manually pack an offline componen 5. When the above steps are completed, check the result by running the `tiup list` command. In this document's example, the outputs of both `tiup list tiup` and `tiup list cluster` show that the corresponding components of `v1.10.0` are available. -#### Step 2: Deploy the offline TiUP component +#### Deploy the offline TiUP component After sending the package to the control machine of the target cluster, install the TiUP component by running the following commands: @@ -194,15 +201,16 @@ sh tidb-community-server-${version}-linux-amd64/local_install.sh && \ source /home/tidb/.bash_profile ``` -The `local_install.sh` script automatically executes the `tiup mirror set tidb-community-server-${version}-linux-amd64` command to set the current mirror address to `tidb-community-server-${version}-linux-amd64`. +The `local_install.sh` script automatically runs the `tiup mirror set tidb-community-server-${version}-linux-amd64` command to set the current mirror address to `tidb-community-server-${version}-linux-amd64`. -To switch the mirror to another directory, you can manually execute the `tiup mirror set ` command. To switch the mirror to the online environment, you can execute the `tiup mirror set https://tiup-mirrors.pingcap.com` command. +To switch the mirror to another directory, run the `tiup mirror set ` command. To switch the mirror to the online environment, run the `tiup mirror set https://tiup-mirrors.pingcap.com` command. -## Step 3: Initialize cluster topology file +
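To make the mirror switching concrete, here is a minimal sketch of the commands on the control machine. The local directory path is only an assumed example, and `tiup mirror show` is available only in TiUP releases that ship this subcommand:

{{< copyable "shell-regular" >}}

```shell
# Print the mirror address that TiUP currently uses.
tiup mirror show

# Point TiUP at another offline mirror directory (example path; adjust to your package location).
tiup mirror set /data/tidb-community-server-v6.1.0-linux-amd64

# Switch back to the official online mirror once internet access is available.
tiup mirror set https://tiup-mirrors.pingcap.com
```
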
+
-According to the intended cluster topology, you need to manually create and edit the cluster initialization configuration file. +## Step 3. Initialize cluster topology file -To create the cluster initialization configuration file, you can create a YAML-formatted configuration file on the control machine using TiUP: +Run the following command to create a cluster topology file: {{< copyable "shell-regular" >}} @@ -210,11 +218,27 @@ To create the cluster initialization configuration file, you can create a YAML-f tiup cluster template > topology.yaml ``` -> **Note:** -> -> For the hybrid deployment scenarios, you can also execute `tiup cluster template --full > topology.yaml` to create the recommended topology template. For the geo-distributed deployment scenarios, you can execute `tiup cluster template --multi-dc > topology.yaml` to create the recommended topology template. +In the following two common scenarios, you can generate recommended topology templates by running commands: -Execute `vi topology.yaml` to see the configuration file content: +- For hybrid deployment: Multiple instances are deployed on a single machine. For details, see [Hybrid Deployment Topology](/hybrid-deployment-topology.md). + + {{< copyable "shell-regular" >}} + + ```shell + tiup cluster template --full > topology.yaml + ``` + +- For geo-distributed deployment: TiDB clusters are deployed in geographically distributed data centers. For details, see [Geo-Distributed Deployment Topology](/geo-distributed-deployment-topology.md). + + {{< copyable "shell-regular" >}} + + ```shell + tiup cluster template --multi-dc > topology.yaml + ``` + +Run `vi topology.yaml` to see the configuration file content: + +{{< copyable "shell-regular" >}} ```shell global: @@ -243,51 +267,40 @@ alertmanager_servers: - host: 10.0.1.4 ``` -The following examples cover six common scenarios. You need to modify the configuration file (named `topology.yaml`) according to the topology description and templates in the corresponding links. For other scenarios, edit the configuration template accordingly. - -- [Minimal deployment topology](/minimal-deployment-topology.md) - - This is the basic cluster topology, including tidb-server, tikv-server, and pd-server. It is suitable for OLTP applications. - -- [TiFlash deployment topology](/tiflash-deployment-topology.md) - - This is to deploy TiFlash along with the minimal cluster topology. TiFlash is a columnar storage engine, and gradually becomes a standard cluster topology. It is suitable for real-time HTAP applications. - -- [TiCDC deployment topology](/ticdc-deployment-topology.md) +The following examples cover seven common scenarios. You need to modify the configuration file (named `topology.yaml`) according to the topology description and templates in the corresponding links. For other scenarios, edit the configuration template accordingly. - This is to deploy TiCDC along with the minimal cluster topology. TiCDC is a tool for replicating the incremental data of TiDB, introduced in TiDB 4.0. It supports multiple downstream platforms, such as TiDB, MySQL, and MQ. Compared with TiDB Binlog, TiCDC has lower latency and native high availability. After the deployment, start TiCDC and [create the replication task using `cdc cli`](/ticdc/manage-ticdc.md). - -- [TiDB Binlog deployment topology](/tidb-binlog-deployment-topology.md) - - This is to deploy TiDB Binlog along with the minimal cluster topology. TiDB Binlog is the widely used component for replicating incremental data. 
It provides near real-time backup and replication.
-
-- [TiSpark deployment topology](/tispark-deployment-topology.md)
-
-    This is to deploy TiSpark along with the minimal cluster topology. TiSpark is a component built for running Apache Spark on top of TiDB/TiKV to answer the OLAP queries. Currently, TiUP cluster's support for TiSpark is still **experimental**.
-
-- [Hybrid deployment topology](/hybrid-deployment-topology.md)
-
-    This is to deploy multiple instances on a single machine. You need to add extra configurations for the directory, port, resource ratio, and label.
-
-- [Geo-distributed deployment topology](/geo-distributed-deployment-topology.md)
-
-    This topology takes the typical architecture of three data centers in two cities as an example. It introduces the geo-distributed deployment architecture and the key configuration that requires attention.
+The following examples cover seven common scenarios. You need to modify the configuration file (named `topology.yaml`) according to the topology description and templates in the corresponding links. For other scenarios, edit the configuration template accordingly.
+
+| Application | Configuration task | Configuration file template | Topology description |
+| :-- | :-- | :-- | :-- |
+| OLTP | [Deploy minimal topology](/minimal-deployment-topology.md) | [Simple minimal configuration template](/config-templates/simple-mini.yaml) <br/> [Full minimal configuration template](/config-templates/complex-mini.yaml) | This is the basic cluster topology, including tidb-server, tikv-server, and pd-server. |
+| HTAP | [Deploy the TiFlash topology](/tiflash-deployment-topology.md) | [Simple TiFlash configuration template](/config-templates/simple-tiflash.yaml) <br/> [Full TiFlash configuration template](/config-templates/complex-tiflash.yaml) | This is to deploy TiFlash along with the minimal cluster topology. TiFlash is a columnar storage engine, and gradually becomes a standard cluster topology. |
+| Replicate incremental data using [TiCDC](/ticdc/ticdc-overview.md) | [Deploy the TiCDC topology](/ticdc-deployment-topology.md) | [Simple TiCDC configuration template](/config-templates/simple-cdc.yaml) <br/> [Full TiCDC configuration template](/config-templates/complex-cdc.yaml) | This is to deploy TiCDC along with the minimal cluster topology. TiCDC supports multiple downstream platforms, such as TiDB, MySQL, and MQ. |
+| Replicate incremental data using [TiDB Binlog](/tidb-binlog/tidb-binlog-overview.md) | [Deploy the TiDB Binlog topology](/tidb-binlog-deployment-topology.md) | [Simple TiDB Binlog configuration template (MySQL as downstream)](/config-templates/simple-tidb-binlog.yaml) <br/> [Simple TiDB Binlog configuration template (Files as downstream)](/config-templates/simple-file-binlog.yaml) <br/> [Full TiDB Binlog configuration template](/config-templates/complex-tidb-binlog.yaml) | This is to deploy TiDB Binlog along with the minimal cluster topology. |
+| Use OLAP on Spark | [Deploy the TiSpark topology](/tispark-deployment-topology.md) | [Simple TiSpark configuration template](/config-templates/simple-tispark.yaml) <br/> [Full TiSpark configuration template](/config-templates/complex-tispark.yaml) | This is to deploy TiSpark along with the minimal cluster topology. TiSpark is a component built for running Apache Spark on top of TiDB/TiKV to answer the OLAP queries. Currently, TiUP cluster's support for TiSpark is still **experimental**. |
+| Deploy multiple instances on a single machine | [Deploy a hybrid topology](/hybrid-deployment-topology.md) | [Simple configuration template for hybrid deployment](/config-templates/simple-multi-instance.yaml) <br/> [Full configuration template for hybrid deployment](/config-templates/complex-multi-instance.yaml) | The deployment topologies also apply when you need to add extra configurations for the directory, port, resource ratio, and label. |
+| Deploy TiDB clusters across data centers | [Deploy a geo-distributed deployment topology](/geo-distributed-deployment-topology.md) | [Configuration template for geo-distributed deployment](/config-templates/geo-redundancy-deployment.yaml) | This topology takes the typical architecture of three data centers in two cities as an example. It introduces the geo-distributed deployment architecture and the key configuration that requires attention. |

 > **Note:**
 >
 > - For parameters that should be globally effective, configure these parameters of corresponding components in the `server_configs` section of the configuration file.
 > - For parameters that should be effective on a specific node, configure these parameters in the `config` of this node.
 > - Use `.` to indicate the subcategory of the configuration, such as `log.slow-threshold`. For more formats, see [TiUP configuration template](https://github.com/pingcap/tiup/blob/master/embed/examples/cluster/topology.example.yaml).
-> - For more parameter description, see [TiDB `config.toml.example`](https://github.com/pingcap/tidb/blob/master/config/config.toml.example), [TiKV `config.toml.example`](https://github.com/tikv/tikv/blob/master/etc/config-template.toml), [PD `config.toml.example`](https://github.com/pingcap/pd/blob/master/conf/config.toml), and [TiFlash configuration](/tiflash/tiflash-configuration.md).
+> - If you need to specify the user group name to be created on the target machine, see [this example](https://github.com/pingcap/tiup/blob/master/embed/examples/cluster/topology.example.yaml#L7).
+
+For more configuration description, see the following configuration examples:
+
+- [TiDB `config.toml.example`](https://github.com/pingcap/tidb/blob/master/config/config.toml.example)
+- [TiKV `config.toml.example`](https://github.com/tikv/tikv/blob/master/etc/config-template.toml)
+- [PD `config.toml.example`](https://github.com/pingcap/pd/blob/master/conf/config.toml)
+- [TiFlash `config.toml.example`](https://github.com/pingcap/tiflash/blob/master/etc/config-template.toml)

-## Step 4: Execute the deployment command
+## Step 4. Run the deployment command

 > **Note:**
 >
 > You can use secret keys or interactive passwords for security authentication when you deploy TiDB using TiUP:
 >
-> - If you use secret keys, you can specify the path of the keys through `-i` or `--identity_file`;
-> - If you use passwords, add the `-p` flag to enter the password interaction window;
+> - If you use secret keys, specify the path of the keys through `-i` or `--identity_file`.
+> - If you use passwords, add the `-p` flag to enter the password interaction window.
 > - If password-free login to the target machine has been configured, no authentication is required.
 >
 > In general, TiUP creates the user and group specified in the `topology.yaml` file on the target machine, with the following exceptions:
 >
 > - The user name configured in `topology.yaml` already exists on the target machine.
 > - You have used the `--skip-create-user` option in the command line to explicitly skip the step of creating the user.

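As a sketch of how these options combine, the following command assumes an environment where the `tidb` user already exists on every target host and a private key is used instead of an interactive password. The cluster name, version, and key path are placeholders to adapt, and the full command syntax is explained in the steps below:

{{< copyable "shell-regular" >}}

```shell
# Reuse the existing user, skip user creation, and authenticate with a private key.
tiup cluster deploy tidb-test v6.1.0 ./topology.yaml \
  --user tidb --skip-create-user -i /home/tidb/.ssh/id_rsa
```
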
-Before you execute the `deploy` command, use the `check` and `check --apply` commands to detect and automatically repair the potential risks in the cluster:
+Before you run the `deploy` command, use the `check` and `check --apply` commands to detect and automatically repair potential risks in the cluster:

-{{< copyable "shell-regular" >}}
+1. Check for potential risks:

-```shell
-tiup cluster check ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
-tiup cluster check ./topology.yaml --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]
-```
+    {{< copyable "shell-regular" >}}

-Then execute the `deploy` command to deploy the TiDB cluster:
+    ```shell
+    tiup cluster check ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
+    ```

-{{< copyable "shell-regular" >}}
+2. Enable automatic repair:

-```shell
-tiup cluster deploy tidb-test v6.1.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
-```
+    {{< copyable "shell-regular" >}}
+
+    ```shell
+    tiup cluster check ./topology.yaml --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]
+    ```
+
+3. Deploy a TiDB cluster:
+
+    {{< copyable "shell-regular" >}}
+
+    ```shell
+    tiup cluster deploy tidb-test v6.1.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
+    ```

-In the above command:
+In the `tiup cluster deploy` command above:

-- The name of the deployed TiDB cluster is `tidb-test`.
-- You can see the latest supported versions by running `tiup list tidb`. This document takes `v6.1.0` as an example.
-- The initialization configuration file is `topology.yaml`.
-- `--user root`: Log in to the target machine through the `root` key to complete the cluster deployment, or you can use other users with `ssh` and `sudo` privileges to complete the deployment.
-- `[-i]` and `[-p]`: optional. If you have configured login to the target machine without password, these parameters are not required. If not, choose one of the two parameters. `[-i]` is the private key of the `root` user (or other users specified by `--user`) that has access to the target machine. `[-p]` is used to input the user password interactively.
-- If you need to specify the user group name to be created on the target machine, see [this example](https://github.com/pingcap/tiup/blob/master/embed/examples/cluster/topology.example.yaml#L7).
+- `tidb-test` is the name of the TiDB cluster to be deployed.
+- `v6.1.0` is the version of the TiDB cluster to be deployed. You can see the latest supported versions by running `tiup list tidb`.
+- `topology.yaml` is the initialization configuration file.
+- `--user root` indicates logging in to the target machine as the `root` user to complete the cluster deployment. The `root` user is expected to have `ssh` and `sudo` privileges to the target machine. Alternatively, you can use other users with `ssh` and `sudo` privileges to complete the deployment.
+- `[-i]` and `[-p]` are optional. If you have configured login to the target machine without password, these parameters are not required. If not, choose one of the two parameters. `[-i]` is the private key of the root user (or other users specified by `--user`) that has access to the target machine. `[-p]` is used to input the user password interactively.

 At the end of the output log, you will see ```Deployed cluster `tidb-test` successfully```. This indicates that the deployment is successful.

-## Step 5: Check the clusters managed by TiUP
+## Step 5. 
Check the clusters managed by TiUP {{< copyable "shell-regular" >}} @@ -331,18 +352,11 @@ At the end of the output log, you will see ```Deployed cluster `tidb-test` succe tiup cluster list ``` -TiUP supports managing multiple TiDB clusters. The command above outputs information of all the clusters currently managed by TiUP, including the name, deployment user, version, and secret key information: - -```log -Starting /home/tidb/.tiup/components/cluster/v1.5.0/cluster list -Name User Version Path PrivateKey ----- ---- ------- ---- ---------- -tidb-test tidb v5.3.0 /home/tidb/.tiup/storage/cluster/clusters/tidb-test /home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa -``` +TiUP supports managing multiple TiDB clusters. The preceding command outputs information of all the clusters currently managed by TiUP, including the cluster name, deployment user, version, and secret key information: -## Step 6: Check the status of the deployed TiDB cluster +## Step 6. Check the status of the deployed TiDB cluster -For example, execute the following command to check the status of the `tidb-test` cluster: +For example, run the following command to check the status of the `tidb-test` cluster: {{< copyable "shell-regular" >}} @@ -352,7 +366,7 @@ tiup cluster display tidb-test Expected output includes the instance ID, role, host, listening port, and status (because the cluster is not started yet, so the status is `Down`/`inactive`), and directory information. -## Step 7: Start a TiDB cluster +## Step 7. Start a TiDB cluster Since TiUP cluster v1.9.0, safe start is introduced as a new start method. Starting a database using this method improves the security of the database. It is recommended that you use this method. @@ -394,11 +408,17 @@ tiup cluster start tidb-test If the output log includes ```Started cluster `tidb-test` successfully```, the start is successful. After standard start, you can log in to a database using a root user without a password. -## Step 8: Verify the running status of the TiDB cluster +## Step 8. Verify the running status of the TiDB cluster + +{{< copyable "shell-regular" >}} + +```shell +tiup cluster display tidb-test +``` -For the specific operations, see [Verify Cluster Status](/post-installation-check.md). +If the output log shows `Up` status, the cluster is running properly. -## What's next +## See also If you have deployed [TiFlash](/tiflash/tiflash-overview.md) along with the TiDB cluster, see the following documents: diff --git a/scale-tidb-using-tiup.md b/scale-tidb-using-tiup.md index 8c7880b33ce0a..2f1dd8756f4b7 100644 --- a/scale-tidb-using-tiup.md +++ b/scale-tidb-using-tiup.md @@ -1,13 +1,13 @@ --- -title: Scale the TiDB Cluster Using TiUP +title: Scale a TiDB Cluster Using TiUP summary: Learn how to scale the TiDB cluster using TiUP. --- -# Scale the TiDB Cluster Using TiUP +# Scale a TiDB Cluster Using TiUP The capacity of a TiDB cluster can be increased or decreased without interrupting the online services. -This document describes how to scale the TiDB, TiKV, PD, TiCDC, or TiFlash cluster using TiUP. If you have not installed TiUP, refer to the steps in [Install TiUP on the control machine](/production-deployment-using-tiup.md#step-2-install-tiup-on-the-control-machine). +This document describes how to scale the TiDB, TiKV, PD, TiCDC, or TiFlash cluster using TiUP. If you have not installed TiUP, refer to the steps in [Step 2. Deploy TiUP on the control machine](/production-deployment-using-tiup.md#step-2-deploy-tiup-on-the-control-machine). 
To view the current cluster name list, run `tiup cluster list`. @@ -23,11 +23,11 @@ For example, if the original topology of the cluster is as follows: ## Scale out a TiDB/PD/TiKV cluster -If you want to add a TiDB node to the `10.0.1.5` host, take the following steps. +This section exemplifies how to add a TiDB node to the `10.0.1.5` host. > **Note:** > -> You can take similar steps to add the PD node. Before you add the TiKV node, it is recommended that you adjust the PD scheduling parameters in advance according to the cluster load. +> You can take similar steps to add a PD node. Before you add a TiKV node, it is recommended that you adjust the PD scheduling parameters in advance according to the cluster load. 1. Configure the scale-out topology: @@ -35,7 +35,7 @@ If you want to add a TiDB node to the `10.0.1.5` host, take the following steps. > > * The port and directory information is not required by default. > * If multiple instances are deployed on a single machine, you need to allocate different ports and directories for them. If the ports or directories have conflicts, you will receive a notification during deployment or scaling. - > * Since TiUP v1.0.0, the scale-out configuration will inherit the global configuration of the original cluster. + > * Since TiUP v1.0.0, the scale-out configuration inherits the global configuration of the original cluster. Add the scale-out topology configuration in the `scale-out.yaml` file: @@ -90,29 +90,41 @@ If you want to add a TiDB node to the `10.0.1.5` host, take the following steps. To view the configuration of the current cluster, run `tiup cluster edit-config `. Because the parameter configuration of `global` and `server_configs` is inherited by `scale-out.yaml` and thus also takes effect in `scale-out.yaml`. - After the configuration, the current topology of the cluster is as follows: +2. Run the scale-out command: - | Host IP | Service | - |:---|:----| - | 10.0.1.3 | TiDB + TiFlash | - | 10.0.1.4 | TiDB + PD | - | 10.0.1.5 | **TiDB** + TiKV + Monitor | - | 10.0.1.1 | TiKV | - | 10.0.1.2 | TiKV | + Before you run the `scale-out` command, use the `check` and `check --apply` commands to detect and automatically repair potential risks in the cluster: -2. Run the scale-out command: + 1. Check for potential risks: - {{< copyable "shell-regular" >}} + {{< copyable "shell-regular" >}} - ```shell - tiup cluster scale-out scale-out.yaml - ``` + ```shell + tiup cluster check scale-out.yaml --cluster --user root [-p] [-i /home/root/.ssh/gcp_rsa] + ``` - > **Note:** - > - > The command above is based on the assumption that the mutual trust has been configured for the user to execute the command and the new machine. If the mutual trust cannot be configured, use the `-p` option to enter the password of the new machine, or use the `-i` option to specify the private key file. + 2. Enable automatic repair: - If you see the `Scaled cluster out successfully`, the scale-out operation is successfully completed. + {{< copyable "shell-regular" >}} + + ```shell + tiup cluster check scale-out.yaml --cluster --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa] + ``` + + 3. Run the `scale-out` command: + + {{< copyable "shell-regular" >}} + + ```shell + tiup cluster scale-out scale-out.yaml [-p] [-i /home/root/.ssh/gcp_rsa] + ``` + + In the preceding commands: + + - `scale-out.yaml` is the scale-out configuration file. + - `--user root` indicates logging in to the target machine as the `root` user to complete the cluster scale out. 
The `root` user is expected to have `ssh` and `sudo` privileges to the target machine. Alternatively, you can use other users with `ssh` and `sudo` privileges to complete the deployment. + - `[-i]` and `[-p]` are optional. If you have configured login to the target machine without password, these parameters are not required. If not, choose one of the two parameters. `[-i]` is the private key of the root user (or other users specified by `--user`) that has access to the target machine. `[-p]` is used to input the user password interactively. + + If you see `Scaled cluster out successfully`, the scale-out operation succeeds. 3. Check the cluster status: @@ -136,14 +148,14 @@ After the scale-out, the cluster topology is as follows: ## Scale out a TiFlash cluster -If you want to add a TiFlash node to the `10.0.1.4` host, take the following steps. +This section exemplifies how to add a TiFlash node to the `10.0.1.4` host. > **Note:** > -> When adding a TiFlash node to an existing TiDB cluster, you need to note the following things: +> When adding a TiFlash node to an existing TiDB cluster, note the following: > -> 1. Confirm that the current TiDB version supports using TiFlash. Otherwise, upgrade your TiDB cluster to v5.0 or later versions. -> 2. Execute the `tiup ctl: pd -u http://: config set enable-placement-rules true` command to enable the Placement Rules feature. Or execute the corresponding command in [pd-ctl](/pd-control.md). +> - Confirm that the current TiDB version supports using TiFlash. Otherwise, upgrade your TiDB cluster to v5.0 or later versions. +> - Run the `tiup ctl: pd -u http://: config set enable-placement-rules true` command to enable the Placement Rules feature. Or run the corresponding command in [pd-ctl](/pd-control.md). 1. Add the node information to the `scale-out.yaml` file: @@ -153,10 +165,10 @@ If you want to add a TiFlash node to the `10.0.1.4` host, take the following ste ```ini tiflash_servers: - - host: 10.0.1.4 + - host: 10.0.1.4 ``` - Currently, you can only add IP but not domain name. + Currently, you can only add IP addresses but not domain names. 2. Run the scale-out command: @@ -168,7 +180,7 @@ If you want to add a TiFlash node to the `10.0.1.4` host, take the following ste > **Note:** > - > The command above is based on the assumption that the mutual trust has been configured for the user to execute the command and the new machine. If the mutual trust cannot be configured, use the `-p` option to enter the password of the new machine, or use the `-i` option to specify the private key file. + > The preceding command is based on the assumption that the mutual trust has been configured for the user to run the command and the new machine. If the mutual trust cannot be configured, use the `-p` option to enter the password of the new machine, or use the `-i` option to specify the private key file. 3. View the cluster status: @@ -192,7 +204,7 @@ After the scale-out, the cluster topology is as follows: ## Scale out a TiCDC cluster -If you want to add two TiCDC nodes to the `10.0.1.3` and `10.0.1.4` hosts, take the following steps. +This section exemplifies how to add two TiCDC nodes to the `10.0.1.3` and `10.0.1.4` hosts. 1. Add the node information to the `scale-out.yaml` file: @@ -220,7 +232,7 @@ If you want to add two TiCDC nodes to the `10.0.1.3` and `10.0.1.4` hosts, take > **Note:** > - > The command above is based on the assumption that the mutual trust has been configured for the user to execute the command and the new machine. 
If the mutual trust cannot be configured, use the `-p` option to enter the password of the new machine, or use the `-i` option to specify the private key file. + > The preceding command is based on the assumption that the mutual trust has been configured for the user to run the command and the new machine. If the mutual trust cannot be configured, use the `-p` option to enter the password of the new machine, or use the `-i` option to specify the private key file. 3. View the cluster status: @@ -244,16 +256,13 @@ After the scale-out, the cluster topology is as follows: ## Scale in a TiDB/PD/TiKV cluster -If you want to remove a TiKV node from the `10.0.1.5` host, take the following steps. +This section exemplifies how to remove a TiKV node from the `10.0.1.5` host. > **Note:** > -> - You can take similar steps to remove the TiDB and PD node. +> - You can take similar steps to remove a TiDB or PD node. > - Because the TiKV, TiFlash, and TiDB Binlog components are taken offline asynchronously and the stopping process takes a long time, TiUP takes them offline in different methods. For details, see [Particular handling of components' offline process](/tiup/tiup-component-cluster-scale-in.md#particular-handling-of-components-offline-process). - -> **Note:** -> -> The PD Client in TiKV caches the list of PD nodes. The current version of TiKV has a mechanism to automatically and regularly update PD nodes, which can help mitigate the issue of an expired list of PD nodes cached by TiKV. However, after scaling out PD, you should try to avoid directly removing all PD nodes at once that exist before the scaling. If necessary, before making all the previously existing PD nodes offline, make sure to switch the PD leader to a newly added PD node. +> - The PD Client in TiKV caches the list of PD nodes. The current version of TiKV has a mechanism to automatically and regularly update PD nodes, which can help mitigate the issue of an expired list of PD nodes cached by TiKV. However, after scaling out PD, you should try to avoid directly removing all PD nodes at once that exist before the scaling. If necessary, before making all the previously existing PD nodes offline, make sure to switch the PD leader to a newly added PD node. 1. View the node ID information: @@ -295,13 +304,11 @@ If you want to remove a TiKV node from the `10.0.1.5` host, take the following s The `--node` parameter is the ID of the node to be taken offline. - If you see the `Scaled cluster in successfully`, the scale-in operation is successfully completed. + If you see `Scaled cluster in successfully`, the scale-in operation succeeds. 3. Check the cluster status: - The scale-in process takes some time. If the status of the node to be scaled in becomes `Tombstone`, that means the scale-in operation is successful. - - To check the scale-in status, run the following command: + The scale-in process takes some time. You can run the following command to check the scale-in status: {{< copyable "shell-regular" >}} @@ -309,6 +316,8 @@ If you want to remove a TiKV node from the `10.0.1.5` host, take the following s tiup cluster display ``` + If the node to be scaled in becomes `Tombstone`, the scale-in operation succeeds. + Access the monitoring platform at using your browser, and view the status of the cluster. The current topology is as follows: @@ -323,29 +332,29 @@ The current topology is as follows: ## Scale in a TiFlash cluster -If you want to remove a TiFlash node from the `10.0.1.4` host, take the following steps. 
+This section exemplifies how to remove a TiFlash node from the `10.0.1.4` host. ### 1. Adjust the number of replicas of the tables according to the number of remaining TiFlash nodes Before the node goes down, make sure that the number of remaining nodes in the TiFlash cluster is no smaller than the maximum number of replicas of all tables. Otherwise, modify the number of TiFlash replicas of the related tables. -1. For all tables whose replicas are greater than the number of remaining TiFlash nodes in the cluster, execute the following command in the TiDB client: +1. For all tables whose replicas are greater than the number of remaining TiFlash nodes in the cluster, run the following command in the TiDB client: {{< copyable "sql" >}} ```sql - alter table . set tiflash replica 0; + ALTER TABLE . SET tiflash replica 0; ``` 2. Wait for the TiFlash replicas of the related tables to be deleted. [Check the table replication progress](/tiflash/use-tiflash.md#check-replication-progress) and the replicas are deleted if the replication information of the related tables is not found. ### 2. Perform the scale-in operation -Next, perform the scale-in operation with one of the following solutions. +Perform the scale-in operation with one of the following solutions. -#### Solution 1: Use TiUP to remove a TiFlash node +#### Solution 1. Use TiUP to remove a TiFlash node -1. First, confirm the name of the node to be taken down: +1. Confirm the name of the node to be taken down: {{< copyable "shell-regular" >}} @@ -361,7 +370,7 @@ Next, perform the scale-in operation with one of the following solutions. tiup cluster scale-in --node 10.0.1.4:9000 ``` -#### Solution 2: Manually remove a TiFlash node +#### Solution 2. Manually remove a TiFlash node In special cases (such as when a node needs to be forcibly taken down), or if the TiUP scale-in operation fails, you can manually remove a TiFlash node with the following steps. @@ -371,15 +380,15 @@ In special cases (such as when a node needs to be forcibly taken down), or if th * If you use TiUP deployment, replace `pd-ctl` with `tiup ctl pd`: - {{< copyable "shell-regular" >}} + {{< copyable "shell-regular" >}} - ```shell - tiup ctl: pd -u http://: store - ``` + ```shell + tiup ctl: pd -u http://: store + ``` - > **Note:** - > - > If multiple PD instances exist in the cluster, you only need to specify the IP address:port of an active PD instance in the above command. + > **Note:** + > + > If multiple PD instances exist in the cluster, you only need to specify the IP address:port of an active PD instance in the above command. 2. Remove the TiFlash node in pd-ctl: @@ -393,13 +402,13 @@ In special cases (such as when a node needs to be forcibly taken down), or if th tiup ctl: pd -u http://: store delete ``` - > **Note:** - > - > If multiple PD instances exist in the cluster, you only need to specify the IP address:port of an active PD instance in the above command. + > **Note:** + > + > If multiple PD instances exist in the cluster, you only need to specify the IP address:port of an active PD instance in the above command. 3. Wait for the store of the TiFlash node to disappear or for the `state_name` to become `Tombstone` before you stop the TiFlash process. -4. Manually delete TiFlash data files (whose location can be found in the `data_dir` directory under the TiFlash configuration of the cluster topology file). +4. 
Manually delete TiFlash data files (the location can be found in the `data_dir` directory under the TiFlash configuration of the cluster topology file). 5. Manually update TiUP's cluster configuration file (delete the information of the TiFlash node that goes down in edit mode). @@ -454,9 +463,29 @@ The steps to manually clean up the replication rules in PD are below: curl -v -X DELETE http://:/pd/api/v1/config/rule/tiflash/table-45-r ``` +3. View the cluster status: + + {{< copyable "shell-regular" >}} + + ```shell + tiup cluster display + ``` + + Access the monitoring platform at using your browser, and view the status of the cluster and the new nodes. + +After the scale-out, the cluster topology is as follows: + +| Host IP | Service | +|:----|:----| +| 10.0.1.3 | TiDB + TiFlash + TiCDC | +| 10.0.1.4 | TiDB + PD + TiCDC **(TiFlash is deleted)** | +| 10.0.1.5 | TiDB+ Monitor | +| 10.0.1.1 | TiKV | +| 10.0.1.2 | TiKV | + ## Scale in a TiCDC cluster -If you want to remove the TiCDC node from the `10.0.1.4` host, take the following steps: + This section exemplifies how to remove the TiCDC node from the `10.0.1.4` host. 1. Take the node offline: diff --git a/tiup/tiup-cluster.md b/tiup/tiup-cluster.md index cad91ec2ec375..4366916205d00 100644 --- a/tiup/tiup-cluster.md +++ b/tiup/tiup-cluster.md @@ -219,7 +219,7 @@ For the PD component, `|L` or `|UI` might be appended to `Up` or `Down`. `|L` in > **Note:** > -> This section describes only the syntax of the scale-in command. For detailed steps of online scaling, refer to [Scale the TiDB Cluster Using TiUP](/scale-tidb-using-tiup.md). +> This section describes only the syntax of the scale-in command. For detailed steps of online scaling, refer to [Scale a TiDB Cluster Using TiUP](/scale-tidb-using-tiup.md). Scaling in a cluster means making some node(s) offline. This operation removes the specific node(s) from the cluster and deletes the remaining files. @@ -288,7 +288,7 @@ After PD schedules the data on the node to other TiKV nodes, this node will be d > **Note:** > -> This section describes only the syntax of the scale-out command. For detailed steps of online scaling, refer to [Scale the TiDB Cluster Using TiUP](/scale-tidb-using-tiup.md). +> This section describes only the syntax of the scale-out command. For detailed steps of online scaling, refer to [Scale a TiDB Cluster Using TiUP](/scale-tidb-using-tiup.md). The scale-out operation has an inner logic similar to that of deployment: the TiUP cluster component firstly ensures the SSH connection of the node, creates the required directories on the target node, then executes the deployment operation, and starts the node service. diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 39b1edb59cc98..010bf1d4803c4 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -73,7 +73,7 @@ Before upgrading your TiDB cluster, you first need to upgrade TiUP or TiUP mirro > > If the cluster to upgrade was deployed not using the offline method, skip this step. -Refer to [Deploy a TiDB Cluster Using TiUP - Deploy TiUP offline](/production-deployment-using-tiup.md#method-2-deploy-tiup-offline) to download the TiUP mirror of the new version and upload it to the control machine. After executing `local_install.sh`, TiUP will complete the overwrite upgrade. +Refer to [Deploy a TiDB Cluster Using TiUP - Deploy TiUP offline](/production-deployment-using-tiup.md#deploy-tiup-offline) to download the TiUP mirror of the new version and upload it to the control machine. 
After you run `local_install.sh`, TiUP completes the overwrite upgrade.

{{< copyable "shell-regular" >}}