Restore Management
Updated at: 2025-10-27
You can restore backed-up Kubernetes resources to a target cluster. This document explains how to perform a restore for clusters that have completed backup tasks.
Prerequisites
- The CCE Backup Controller backup component has been successfully installed in the target cluster. For details, see CCE Backup Controller Description.
- The source cluster contains a successfully completed backup task.
Usage restrictions
- Only Kubernetes resource objects can be restored; cloud resources like Cloud Disk Service (CDS) and Baidu Load Balance (BLB) are not recoverable.
Operation steps
- Sign in to the Cloud Container Engine (CCE) console.
- On the cluster list page, click the name of the target cluster. Then, in the left navigation bar, select Operations and Management - Application Backup - Recovery Management.
- On the recovery task page, click Create Recovery Task and complete the configurations described in the table below (a sketch of the equivalent resource object follows these steps).
| Configuration item | Required/Optional | Description |
|---|---|---|
| Task name | Required | Set a custom name for the recovery task. |
| Backup repository | Required | Select the backup repository where the recovery data is stored. The associated backup tasks will be filtered through this repository. |
| Select backup | Required | Choose the specific backup task that you want to recover. |
| Recovery scope | Required | All namespaces: Recover resource objects in all namespaces included in the backup task. Specify namespaces: Select specific namespaces from the backup task to recover resources. |
| Exclude namespaces | Required | Required only when Recovery scope is set to All namespaces; used to filter out namespaces that do not need to be recovered. |
| Conflict handling | Required | Do not overwrite (recommended): if a resource with the same name already exists in the target namespace, the recovery task leaves the existing resource unchanged. Update: if a resource with the same name already exists in the target namespace, the recovery task attempts to update it with the backed-up version. |
- Click OK to complete creation.
Note:
- Recovery tasks cannot guarantee a 100% success rate.
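The console configuration above maps onto a recovery (restore) object inside the target cluster. As a minimal sketch only, assuming the CCE Backup Controller follows a Velero-style `Restore` custom resource (the actual API group, namespace, and field names used by the component may differ), an equivalent task could be created programmatically like this; `restore-demo` and `backup-20251027` are placeholder names:

```python
# Sketch only: assumes a Velero-compatible Restore CRD; verify the real API
# group and namespace used by the CCE Backup Controller before relying on this.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig of the target (recovery) cluster
api = client.CustomObjectsApi()

restore = {
    "apiVersion": "velero.io/v1",
    "kind": "Restore",
    "metadata": {"name": "restore-demo", "namespace": "velero"},  # placeholder task name/namespace
    "spec": {
        "backupName": "backup-20251027",        # "Select backup": the completed backup task
        "includedNamespaces": ["*"],            # "Recovery scope": all namespaces
        "excludedNamespaces": ["kube-system"],  # "Exclude namespaces"
        "existingResourcePolicy": "none",       # "Conflict handling": do not overwrite ("update" to update)
    },
}

api.create_namespaced_custom_object(
    group="velero.io", version="v1", namespace="velero",
    plural="restores", body=restore,
)
```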
View recovery status
You can check the recovery status through the Status attribute in the Recovery Task List, or query it programmatically as sketched after the table below. The status descriptions are as follows:
| Status | Description |
|---|---|
| Initializing | The recovery task resource object is being created. |
| Execution in progress | The recovery task is currently in progress. |
| Succeeded | The recovery task has been successfully completed. |
| Partially succeeded | Some resources were successfully recovered, while others encountered issues. You can check the number of successful recoveries and the reasons for failures in the status field within the YAML on the console. |
| Failed | The recovery task failed to complete. You can find the reason for the failure in the console or in the status field of the YAML. |
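Outside the console, the same (assumed) Velero-style `Restore` object reports its state in its `status` block; the console phases above roughly correspond to `.status.phase` values. A minimal sketch, reusing the placeholder task name from the earlier example:

```python
# Sketch only: same Velero-style Restore CRD assumption as above.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

obj = api.get_namespaced_custom_object(
    group="velero.io", version="v1", namespace="velero",
    plural="restores", name="restore-demo",  # placeholder task name
)
status = obj.get("status", {})
print("phase:", status.get("phase"))
# On partial success or failure, the status block also carries counts and
# reasons, which is what the console surfaces in the YAML view.
print("errors:", status.get("errors"), "warnings:", status.get("warnings"))
```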
Precautions for backup recovery
- During the recovery process, make sure the container network mode (e.g., VPC routing or VPC-ENI) is consistent between the backup cluster and the recovery cluster.
- Cloud storage resources in the cluster (e.g., CDS/BOS/DPFS) are not supported for recovery. Pods requiring storage may remain in a pending state due to the absence of storage resources.
- To improve the chances of a successful recovery, prefer the "Specify namespaces" scope and recover a small number of namespaces at a time, running multiple recovery tasks if needed.
- Differences between Kubernetes versions may include incompatible changes (e.g., deprecated parameters or APIServer version updates). For better recovery outcomes, ensure the Kubernetes version of the recovery cluster matches, or is close to, that of the source cluster (e.g., backups from a 1.18 cluster can be recovered on a 1.18 or 1.20 cluster).
- It's recommended to verify the resource capacity of the recovery cluster before recovery; insufficient resources may leave Pods in a Pending state (see the pre-flight sketch after this list).
- If the backup cluster and target cluster are in different regions, ensure proper network connectivity to avoid image pull failures that can result in Pods remaining pending.
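As a minimal pre-flight sketch (not a product feature), the checks below compare the recovery cluster's Kubernetes version against the source cluster's and list per-node allocatable resources, covering the version and capacity precautions above; `SOURCE_CLUSTER_MINOR` is a placeholder for the source cluster's minor version:

```python
# Sketch only: run against the recovery (target) cluster before creating the task.
from kubernetes import client, config

SOURCE_CLUSTER_MINOR = 28  # placeholder: minor version of the source cluster

config.load_kube_config()  # kubeconfig of the recovery cluster

version = client.VersionApi().get_code()
print("recovery cluster version:", version.git_version)
if abs(int(version.minor.rstrip("+")) - SOURCE_CLUSTER_MINOR) > 2:
    print("warning: large version gap; some backed-up API objects may be incompatible")

# List allocatable CPU/memory per node to spot obvious capacity shortfalls
# that would otherwise leave recovered Pods in a Pending state.
for node in client.CoreV1Api().list_node().items:
    alloc = node.status.allocatable
    print(node.metadata.name, "cpu:", alloc["cpu"], "memory:", alloc["memory"])
```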
