Conversation

@ti-chi-bot

This is an automated cherry-pick of #21109

First-time contributors' checklist

What is changed, added or deleted? (Required)

Which TiDB version(s) do your changes apply to? (Required)

Tips for choosing the affected version(s):

By default, CHOOSE MASTER ONLY so your changes will be applied to the next TiDB major or minor releases. If your PR involves a product feature behavior change or a compatibility change, CHOOSE THE AFFECTED RELEASE BRANCH(ES) AND MASTER.

For details, see tips for choosing the affected versions.

  • master (the latest development version)
  • v9.0 (TiDB 9.0 versions)
  • v8.5 (TiDB 8.5 versions)
  • v8.4 (TiDB 8.4 versions)
  • v8.3 (TiDB 8.3 versions)
  • v8.1 (TiDB 8.1 versions)
  • v7.5 (TiDB 7.5 versions)
  • v7.1 (TiDB 7.1 versions)
  • v6.5 (TiDB 6.5 versions)
  • v6.1 (TiDB 6.1 versions)
  • v5.4 (TiDB 5.4 versions)

What is the related PR or file link(s)?

Do your changes match any of the following descriptions?

  • Delete files
  • Change aliases
  • Need modification after applied to another branch
  • Might cause conflicts after applied to another branch

Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
@ti-chi-bot added the following labels on Dec 18, 2025: `do-not-merge/hold` (indicates that a PR should not merge because someone has issued a /hold command), `lgtm`, `size/L` (denotes a PR that changes 100-499 lines, ignoring generated files), `type/cherry-pick-for-release-8.5` (this PR is cherry-picked to release-8.5 from a source PR), `type/compatibility-or-feature-change` (this PR involves compatibility changes or feature behavior changes), and `v9.0-beta.2` (this PR/issue applies to TiDB v9.0-beta.2).
@ti-chi-bot

ti-chi-bot bot commented Dec 18, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign jackysp for approval. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gemini-code-assist

Summary of Changes

Hello @ti-chi-bot, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the TiDB documentation for Point-in-Time Recovery (PITR) to reflect new features and operational enhancements introduced in TiDB v9.0.0. The changes provide comprehensive guidance on filtering data during PITR, executing concurrent restore tasks, and managing compatibility with ongoing log backups. Additionally, it introduces a new command for gracefully aborting failed restore processes, improving the overall robustness and user experience of backup and restore operations.

Highlights

  • PITR Data Filtering: Documentation has been added for using filters during Point-in-Time Recovery (PITR) to restore specific databases or tables, providing fine-grained control over the restore process. This feature is available from TiDB v9.0.0.
  • Concurrent PITR Restores: The documentation now includes details on performing multiple PITR restore tasks concurrently, enhancing efficiency for large-scale data recovery scenarios in TiDB v9.0.0 and later.
  • Log Backup and Restore Compatibility: New sections explain the compatibility between ongoing log backup tasks and both snapshot and PITR restore operations, outlining conditions for seamless integration and handling potential data inconsistencies.
  • Abort Restore Operations: Instructions for using the tiup br abort command have been added, detailing how to clean up metadata and checkpoint data for failed restore operations.

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Click here to enable Memory in your admin console.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page. Here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@lilin90 self-assigned this Dec 18, 2025

@gemini-code-assist bot left a comment


Code Review

This pull request adds documentation for new PITR features available in TiDB v9.0.0, such as filtering, concurrent restores, and aborting operations. My review includes a critical fix for unresolved merge conflict markers that were left in the document. I have also provided several suggestions to improve clarity and conciseness, aligning with the repository's style guide for technical documentation.

Comment on lines +501 to +650
<<<<<<< HEAD
=======
### Restore data using filters

Starting from TiDB v9.0.0, you can use filters during PITR to restore specific databases or tables, enabling more fine-grained control over the data to be restored.

The filter patterns follow the same [table filtering syntax](/table-filter.md) as other BR operations:

- `'*.*'`: matches all databases and tables.
- `'db1.*'`: matches all tables in the database `db1`.
- `'db1.table1'`: matches the specific table `table1` in the database `db1`.
- `'db*.tbl*'`: matches databases starting with `db` and tables starting with `tbl`.
- `'!mysql.*'`: excludes all tables in the `mysql` database.

Usage examples:

```shell
# restore specific databases
tiup br restore point --pd="${PD_IP}:2379" \
--storage='s3://backup-101/logbackup?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--full-backup-storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--start-ts "2025-06-02 00:00:00+0800" \
--restored-ts "2025-06-03 18:00:00+0800" \
--filter 'db1.*' --filter 'db2.*'
# restore specific tables
tiup br restore point --pd="${PD_IP}:2379" \
--storage='s3://backup-101/logbackup?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--full-backup-storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--start-ts "2025-06-02 00:00:00+0800" \
--restored-ts "2025-06-03 18:00:00+0800" \
--filter 'db1.users' --filter 'db1.orders'
# restore using pattern matching
tiup br restore point --pd="${PD_IP}:2379" \
--storage='s3://backup-101/logbackup?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--full-backup-storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--start-ts "2025-06-02 00:00:00+0800" \
--restored-ts "2025-06-03 18:00:00+0800" \
--filter 'db*.tbl*'
```

> **Note:**
>
> - Before restoring data using filters, ensure that the target cluster does not contain any databases or tables that match the filter. Otherwise, the restore will fail with an error.
> - The filter options apply during the restore phase for both snapshot and log backups.
> - You can specify multiple `--filter` options to include or exclude different patterns.
> - PITR filtering does not support system tables yet. If you need to restore specific system tables, use the `br restore full` command with filters instead. Note that this command restores only the snapshot backup data (not log backup data).
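The note above points to `br restore full` as the workaround for system tables. A hedged sketch of that command, reusing the snapshot storage URI from the examples above (the `mysql.users` filter and the `--with-sys-table` flag are illustrative; check your BR version's help output for the exact flags):

```shell
# restore a specific system table from the snapshot backup only
tiup br restore full --pd="${PD_IP}:2379" \
--storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--filter 'mysql.users' \
--with-sys-table
```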
### Concurrent restore operations

Starting from TiDB v9.0.0, you can run multiple PITR restore tasks concurrently. This feature allows you to restore different datasets in parallel, improving efficiency for large-scale restore scenarios.

Usage example for concurrent restores:

```shell
# terminal 1 - restore database db1
tiup br restore point --pd="${PD_IP}:2379" \
--storage='s3://backup-101/logbackup?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--full-backup-storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--start-ts "2025-06-02 00:00:00+0800" \
--restored-ts "2025-06-03 18:00:00+0800" \
--filter 'db1.*'
# terminal 2 - restore database db2 (can run simultaneously)
tiup br restore point --pd="${PD_IP}:2379" \
--storage='s3://backup-101/logbackup?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--full-backup-storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--start-ts "2025-06-02 00:00:00+0800" \
--restored-ts "2025-06-03 18:00:00+0800" \
--filter 'db2.*'
```

> **Note:**
>
> Each concurrent restore operation must target a different database or a non-overlapping set of tables. Attempting to restore overlapping datasets concurrently will result in an error.

### Compatibility between ongoing log backup and snapshot restore

Starting from v9.0.0, when a log backup task is running, if all of the following conditions are met, you can still perform snapshot restore (`br restore [full|database|table]`) and allow the restored data to be properly recorded by the ongoing log backup (hereinafter referred to as "log backup"):

- The node performing backup and restore operations has the following necessary permissions:
    - Read access to the external storage containing the backup source, for snapshot restore
    - Write access to the target external storage used by the log backup
- The target external storage for the log backup is Amazon S3 (`s3://`), Google Cloud Storage (`gcs://`), or Azure Blob Storage (`azblob://`).
- The data to be restored uses the same type of external storage as the target storage for the log backup.
- Neither the data to be restored nor the log backup has enabled local encryption. For details, see [log backup encryption](#encrypt-the-log-backup-data) and [snapshot backup encryption](/br/br-snapshot-manual.md#encrypt-the-backup-data).

If any of the above conditions are not met, you can restore the data by following these steps:

1. [Stop the log backup task](#stop-a-log-backup-task).
2. Perform the data restore.
3. After the restore is complete, perform a new snapshot backup.
4. [Restart the log backup task](#restart-a-log-backup-task).

> **Note:**
>
> When restoring a log backup that contains records of snapshot (full) restore data, you must use BR v9.0.0 or later. Otherwise, restoring the recorded full restore data might fail.
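The four-step fallback above can be sketched with `tiup br` commands as follows. This is a hedged sketch: the task name `pitr` and the new snapshot URI are placeholders, not values from this document.

```shell
# 1. Stop the log backup task (task name is a placeholder)
tiup br log stop --task-name=pitr --pd="${PD_IP}:2379"

# 2. Perform the data restore
tiup br restore full --pd="${PD_IP}:2379" \
--storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}'

# 3. After the restore completes, take a new snapshot backup (placeholder URI)
tiup br backup full --pd="${PD_IP}:2379" \
--storage='s3://backup-101/snapshot-20250701000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}'

# 4. Start a new log backup task based on the new snapshot
tiup br log start --task-name=pitr --pd="${PD_IP}:2379" \
--storage='s3://backup-101/logbackup?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}'
```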
### Compatibility between ongoing log backup and PITR operations

Starting from TiDB v9.0.0, you can perform PITR operations while a log backup task is running by default. The system automatically handles compatibility between these operations.

#### Important limitation for PITR with ongoing log backup

When you perform PITR operations while a log backup is running, the restored data is also recorded in the ongoing log backup. However, due to the nature of log restore operations, data inconsistencies might occur within the restore window. The system writes metadata to external storage to mark both the time range and data range where consistency cannot be guaranteed.

If such inconsistency occurs during the time range `[t1, t2)`, you cannot directly restore data from this period. Instead, choose one of the following alternatives:

- Restore data up to `t1` (to retrieve data before the inconsistent period).
- Perform a new snapshot backup after `t2`, and use it as the base for future PITR operations.
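For example, if BR reports an inconsistent window `[t1, t2)`, you can restore only up to the consistent point by setting `--restored-ts` to `t1`. The timestamp below is a placeholder for `t1`, not a value from this document:

```shell
tiup br restore point --pd="${PD_IP}:2379" \
--storage='s3://backup-101/logbackup?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--full-backup-storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--restored-ts "2025-06-03 12:00:00+0800"
```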
### Abort restore operations

If a restore operation fails, you can use the `tiup br abort` command to clean up registry entries and checkpoint data. This command automatically locates and removes relevant metadata based on the original restore parameters, including entries in the `mysql.tidb_restore_registry` table and checkpoint data (regardless of whether it is stored in a local database or external storage).

> **Note:**
>
> The `abort` command only cleans up metadata. You need to manually delete any actual restored data from the cluster.

The examples of aborting restore operations using the same parameters as the original restore command are as follows:

```shell
# Abort a PITR operation
tiup br abort restore point --pd="${PD_IP}:2379" \
--storage='s3://backup-101/logbackup?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--full-backup-storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}'
# Abort a PITR operation with filters
tiup br abort restore point --pd="${PD_IP}:2379" \
--storage='s3://backup-101/logbackup?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--full-backup-storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--filter 'db1.*'
# Abort a full restore
tiup br abort restore full --pd="${PD_IP}:2379" \
--storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}'
# Abort a database restore
tiup br abort restore db --pd="${PD_IP}:2379" \
--storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--db database_name
# Abort a table restore
tiup br abort restore table --pd="${PD_IP}:2379" \
--storage='s3://backup-101/snapshot-20250602000000?access-key=${ACCESS-KEY}&secret-access-key=${SECRET-ACCESS-KEY}' \
--db database_name --table table_name
```
>>>>>>> 5182861b86 (br: pitr filter feature release doc (#21109))


critical

This block contains unresolved merge conflict markers (<<<<<<< HEAD, =======, >>>>>>>). These markers must be removed to ensure the document renders correctly.

### Compatibility between ongoing log backup and snapshot restore
Starting from v9.0.0, when a log backup task is running, if all of the following conditions are met, you can still perform snapshot restore (`br restore [full|database|table]`) and allow the restored data to be properly recorded by the ongoing log backup (hereinafter referred to as "log backup"):


low

This sentence is a bit long and complex. For better readability, consider rephrasing it to be more direct, in line with the style guide's emphasis on clarity and simplicity.

Suggested change
Starting from v9.0.0, when a log backup task is running, if all of the following conditions are met, you can still perform snapshot restore (`br restore [full|database|table]`) and allow the restored data to be properly recorded by the ongoing log backup (hereinafter referred to as "log backup"):
Starting from v9.0.0, you can perform a snapshot restore (`br restore [full|database|table]`) while a log backup task is running. For the restored data to be correctly recorded by the ongoing log backup, all of the following conditions must be met:
References
  1. The style guide recommends writing for clarity and simplicity, and avoiding unnecessary words. (link)

Starting from v9.0.0, when a log backup task is running, if all of the following conditions are met, you can still perform snapshot restore (`br restore [full|database|table]`) and allow the restored data to be properly recorded by the ongoing log backup (hereinafter referred to as "log backup"):
- The node performing backup and restore operations has the following necessary permissions:


low

This line has a trailing whitespace. Please remove it for consistency and clean formatting.

Suggested change
- The node performing backup and restore operations has the following necessary permissions:
- The node performing backup and restore operations has the following necessary permissions:

>
> The `abort` command only cleans up metadata. You need to manually delete any actual restored data from the cluster.
The examples of aborting restore operations using the same parameters as the original restore command are as follows:


low

This sentence is a bit wordy. Consider a more concise phrasing to improve readability, as suggested by the style guide.

Suggested change
The examples of aborting restore operations using the same parameters as the original restore command are as follows:
The following examples show how to abort restore operations using the same parameters as the original restore command:
References
  1. The style guide recommends avoiding unnecessary words. (link)

@ti-chi-bot

ti-chi-bot bot commented Dec 18, 2025

@ti-chi-bot: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-verify | 35d2e38 | link | true | `/test pull-verify` |

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
