# restore: update the definition of the parameter --load-stats and the usage of pitr id map (#21078) #22200

base: `release-8.5`
When entering the log restore phase during the initial restore, `br` creates a `__TiDB_BR_Temporary_Log_Restore_Checkpoint` database in the target cluster. This database records checkpoint data, the upstream cluster ID, and the restore time range (`start-ts` and `restored-ts`). If restore fails during this phase, you need to specify the same `start-ts` and `restored-ts` as recorded in the checkpoint database when retrying. Otherwise, `br` reports an error and prompts that the currently specified restore time range or upstream cluster ID is different from the checkpoint record. If the restore cluster has been cleaned, you can manually delete the `__TiDB_BR_Temporary_Log_Restore_Checkpoint` database and retry with a different backup.
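The consistency check described above can be sketched as follows; the timestamp values and variable names are illustrative placeholders, not output from `br`:

```shell
# Illustrative sketch of the retry check: the time range specified on retry must match
# the range recorded in the __TiDB_BR_Temporary_Log_Restore_Checkpoint database.
checkpoint_start_ts=446789123456789    # recorded during the first attempt
checkpoint_restored_ts=446789999999999
retry_start_ts=446789123456789         # values specified on the retry command line
retry_restored_ts=446789999999999
if [ "${retry_start_ts}" = "${checkpoint_start_ts}" ] && \
   [ "${retry_restored_ts}" = "${checkpoint_restored_ts}" ]; then
    result="resume from checkpoint"
else
    result="error: restore time range differs from the checkpoint record"
fi
echo "${result}"
```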
Note that before entering the log restore phase during the initial restore, `br` constructs a mapping of upstream and downstream cluster database and table IDs at the `restored-ts` time point. This mapping is persisted in the system table `mysql.tidb_pitr_id_map` to prevent duplicate allocation of database and table IDs. **Do not delete data from `mysql.tidb_pitr_id_map`; doing so might lead to inconsistent PITR restore data.**
> **Note:**
>
> To ensure compatibility with clusters of earlier versions, starting from v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
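As a sketch, the file name in the note above is composed from the downstream cluster ID and `restored-ts`; the values below are illustrative placeholders:

```shell
# Illustrative values; the real ones come from the restore cluster and the restore command.
downstream_cluster_id=7412
restored_ts=446789999999999
pitr_id_map_file="pitr_id_maps/pitr_id_map.cluster_id:${downstream_cluster_id}.restored_ts:${restored_ts}"
echo "${pitr_id_map_file}"
```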
## Implementation details: store checkpoint data in the external storage
> **Note:**
>
> Starting from v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter. For example:
>
> ```shell
> ./br restore full -s "s3://backup-bucket/backup-prefix" --checkpoint-storage "s3://temp-bucket/checkpoints"
> ```
In the external storage, the directory structure of the checkpoint data is as follows:
- The root path `restore-{downstream-cluster-ID}` uses the downstream cluster ID to distinguish between different restore clusters.
- The path `restore-{downstream-cluster-ID}/log` stores log file checkpoint data during the log restore phase.
- The path `restore-{downstream-cluster-ID}/sst` stores checkpoint data of the SST files that are not backed up by log backup during the log restore phase.
- The path `restore-{downstream-cluster-ID}/snapshot` stores checkpoint data during the snapshot restore phase.
```
.
`-- restore-{downstream-cluster-ID}
    |-- log
    |   |-- checkpoint.meta
    |   |-- data
    |   |   |-- {uuid}.cpt
    |   |   |-- {uuid}.cpt
    |   |   `-- {uuid}.cpt
    |   |-- ingest_index.meta
    |   `-- progress.meta
    |-- snapshot
    |   |-- checkpoint.meta
    |   |-- checksum
    |   |   |-- {uuid}.cpt
    |   |   |-- {uuid}.cpt
    |   |   `-- {uuid}.cpt
    |   `-- data
    |       |-- {uuid}.cpt
    |       |-- {uuid}.cpt
    |       `-- {uuid}.cpt
    `-- sst
        `-- checkpoint.meta
```
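All of the paths above derive from the downstream cluster ID alone; a minimal sketch with an illustrative ID:

```shell
# Illustrative downstream cluster ID; br obtains the real one from the target cluster.
downstream_cluster_id=7412
root="restore-${downstream_cluster_id}"
for sub in log sst snapshot; do
    echo "${root}/${sub}"
done
```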
Checkpoint restore operations are divided into two parts: snapshot restore and PITR restore.
### Snapshot restore
During the initial restore, `br` creates a `restore-{downstream-cluster-ID}/snapshot` path in the specified external storage. This path records checkpoint data, the upstream cluster ID, and the BackupTS of the backup data.
If the restore fails, you can retry it using the same command. `br` automatically reads the checkpoint information from the specified external storage path and resumes from the last restore point.
If the restore fails and you try to restore backup data with different checkpoint information to the same cluster, `br` reports an error indicating that the current upstream cluster ID or BackupTS is different from the checkpoint record. If the restore cluster has been cleaned, you can manually clean up the checkpoint data in the external storage, or specify another external storage path for checkpoint data, and then retry with a different backup.
### PITR restore
[PITR (Point-in-time recovery)](/br/br-pitr-guide.md) consists of the snapshot restore and log restore phases.
During the initial restore, `br` first enters the snapshot restore phase. In the `restore-{downstream-cluster-ID}/snapshot` path, BR records the checkpoint data, the upstream cluster ID, the BackupTS of the backup data (that is, the start time point `start-ts` of log restore), and the restored time point `restored-ts` of log restore. If restore fails during this phase, you cannot adjust the `start-ts` and `restored-ts` of log restore when resuming checkpoint restore.
When entering the log restore phase during the initial restore, `br` creates a `restore-{downstream-cluster-ID}/log` path in the specified external storage. This path records checkpoint data, the upstream cluster ID, and the restore time range (`start-ts` and `restored-ts`). If restore fails during this phase, you need to specify the same `start-ts` and `restored-ts` as recorded in the checkpoint data when retrying. Otherwise, `br` reports an error and prompts that the currently specified restore time range or upstream cluster ID is different from the checkpoint record. If the restore cluster has been cleaned, you can manually clean up the checkpoint data in the external storage, or specify another external storage path for checkpoint data, and then retry with a different backup.
Note that before entering the log restore phase during the initial restore, `br` constructs a mapping of the database and table IDs in the upstream and downstream clusters at the `restored-ts` time point. This mapping is persisted in the checkpoint storage with the file name `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}` to prevent duplicate allocation of database and table IDs. **Do not delete files from the `pitr_id_maps` directory; doing so might lead to inconsistent PITR restore data.**
> **Note:**
>
> To ensure compatibility with clusters of earlier versions, starting from v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster and the `--checkpoint-storage` parameter is not specified, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
```shell
tiup br restore full \
    --storage local:///br_data/ --pd "${PD_IP}:2379" --log-file restore.log
```
> **Note:**
>
> Starting from v9.0.0, when the `--load-stats` parameter is set to `false`, BR no longer writes statistics for the restored tables to the `mysql.stats_meta` table. After the restore is complete, you can manually execute the [`ANALYZE TABLE`](/sql-statements/sql-statement-analyze-table.md) SQL statement to update the relevant statistics.
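For example, after restoring with `--load-stats=false`, you might generate the statements to refresh statistics as follows; the database and table names are placeholders:

```shell
# Placeholder table list; replace it with the tables you actually restored.
tables="db1.orders db1.users"
sql="$(for tbl in ${tables}; do echo "ANALYZE TABLE ${tbl};"; done)"
echo "${sql}"
```

The generated statements can then be run through any MySQL-compatible client connected to the TiDB cluster.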
When the backup and restore feature backs up data, it stores statistics in JSON format within the `backupmeta` file. When restoring data, it loads statistics in JSON format into the cluster. For more information, see [LOAD STATS](/sql-statements/sql-statement-load-stats.md).
Starting from v9.0.0, BR introduces the `--fast-load-sys-tables` parameter, which is enabled by default. When you restore data to a new cluster using the `br` command-line tool and the table and partition IDs of the upstream cluster can be reused in the downstream cluster, enabling `--fast-load-sys-tables` lets BR first restore the statistics-related system tables to the temporary system database `__TiDB_BR_Temporary_mysql`, and then atomically swap these tables with the corresponding tables in the `mysql` database using the `RENAME TABLE` statement. If the IDs cannot be reused, BR automatically falls back to loading statistics logically.
The following is an example:
```shell
tiup br restore full \
    --storage local:///br_data/ --pd "${PD_IP}:2379" --log-file restore.log --load-stats --fast-load-sys-tables
```
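The atomic swap performed by `--fast-load-sys-tables` can be sketched with a single statistics table; the table and database names below are illustrative, and the actual set of swapped tables is decided by BR:

```shell
# Compose the kind of multi-table RENAME TABLE statement used for the atomic swap
# (illustrative; BR generates the real statement internally).
tmp_db="__TiDB_BR_Temporary_mysql"
tbl="stats_meta"
swap_stmt="RENAME TABLE mysql.${tbl} TO ${tmp_db}.${tbl}_deleted, ${tmp_db}.${tbl} TO mysql.${tbl};"
echo "${swap_stmt}"
```

Because `RENAME TABLE` accepts multiple renames in one statement, the swap takes effect as a single atomic operation.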
## Encrypt the backup data
BR supports encrypting backup data at the backup side and [at the storage side when backing up to Amazon S3](/br/backup-and-restore-storages.md#amazon-s3-server-side-encryption). You can choose either encryption method as required.
During restore, a progress bar is displayed in the terminal as shown below.

```
Full Restore <---------/...............................................> 17.12%.
```
Starting from TiDB v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore statistics physically in a new cluster:
```shell
tiup br restore full \
    --pd "${PD_IP}:2379" \
    --with-sys-table \
    --fast-load-sys-tables \
    --storage "s3://${backup_collection_addr}/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}" \
    --ratelimit 128 \
    --log-file restorefull.log
```
> **Note:**
>
> Unlike the logical restoration of system tables using the `REPLACE INTO` SQL statement, physical restoration completely overwrites the existing data in the system tables.
## Restore a database or a table
You can use `br` to restore partial data of a specified database or table from backup data. This feature allows you to filter out data that you do not need during the restore.