          Relational Database Service

          Best Practices for Data Restoration of the Clone Instance

          Overview

          If data is deleted or tampered with by mistake while operating the database, you can recover it via the RDS "Clone Instance" feature. A clone instance supports two data recovery modes:

          • Recovery by time point: restores data to a snapshot at a designated time point (accurate to the second), which is suitable for recovering from incorrect operations. The recovery process consists of a full recovery followed by an incremental recovery.
          • Recovery by backup set: restores data to a designated backup set, which is suitable for scenarios such as load testing and cluster splitting or expansion. The recovery process involves only a full recovery.

          Operating Instructions

          Prerequisites

          • The RDS instance is in the normal running state.
          • The master instance's backup status is normal and the backup set is normal.

          Notices

          • Creating a clone instance requires downloading the backup data and the incremental binlog, then performing a full data recovery followed by an incremental data recovery. Therefore, the larger the data volume and the higher the write concurrency, the longer it takes to create a clone instance.
          • The data in a clone instance is a static snapshot at the selected time point; it does not include any incremental data written after that time point.
          • After the clone instance is created, export the affected table's data from it, verify the data, and compare it with the online data before extracting the rows to restore. Then, with great care, restore the data to the online production environment.
          • Common data recovery process (example):

            Time point   Operation log
            9:18 AM      Some data is deleted by an incorrect operation.
            10:10 AM     The data anomaly is noticed.
            10:15 AM     The RDS "Clone an Instance" feature is used, with the recovery time point set to 09:18.
            12:10 PM     The clone instance finishes recovery. Note: the clone instance contains no data written after 09:18.
            12:30 PM     After data comparison and verification, the mistakenly deleted data is restored to the online environment.

          Case study: data recovery with the "Clone Instance" feature

          1. Confirm the recovery scope of the "Clone an Instance" operation

            Due to an incorrect operation at 09:18:02, the rows matching "status=0" in the table "t_order" were deleted by mistake, and the plan is to restore the "t_order" table to its snapshot at 09:18:02. Therefore, use the "Recovery by time point" mode of "Clone an Instance" and set the recovery time point to 2019-07-02 09:18:02.

          2. Create the clone instance by time point (a successful historical backup must exist).

          3. After the clone instance is created, check whether its data meets expectations.
          4. After confirming that the recovered data snapshot meets expectations, restore the data to the online environment. To avoid directly replacing the online table, first rename the "t_order" table on the clone instance.

            rename table t_order to t_order_bk_091802;
          5. Export the data from the clone instance and import it into the online instance, for example:

            # Master database IP: 10.100.0.7
            # Clone instance IP: 10.100.0.14
            # Start a screen session on the BCC to avoid session interruption while exporting and importing a large table
            screen -S mysql_importdata
            # Disable the Linux session timeout so the process is not killed for being idle after the export
            export TMOUT=0
            # Export and import the data in one step through a pipeline
            # --set-gtid-purged=OFF: do not export GTIDs; otherwise the replica would skip the imported transactions as already executed, leaving the restored data only on the master
            # --skip-lock-tables: do not lock tables, to avoid locking the online tables
            # --quick: stream rows directly instead of buffering them, to avoid affecting online services
            # --hex-blob: dump BLOB data as hexadecimal to avoid inconsistent data after import
            # --single-transaction: ensure a consistent snapshot through a single transaction
            # --tz-utc=0: do not convert timestamps to UTC, to keep time data consistent
            mysqldump -h 10.100.0.14 -uXXX -pXXX --set-gtid-purged=OFF --skip-lock-tables --quick --hex-blob --single-transaction --tz-utc=0 baidu_dba t_order_bk_091802 | mysql -h 10.100.0.7 -uXXX -pXXX baidu_dba
          6. Data check

            Compare the online table with the table recovered from the clone instance and determine which rows need to be added back after recovery. The specific check method depends on the customer's service needs (for example, export the data to files and compare them with diff).

            In this example, verification shows that the rows with ids in the range 1 to 172631 were accidentally deleted and need to be restored.
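            As a minimal sketch of the file-based diff approach mentioned above (the file names and sample rows below are hypothetical, not from the actual incident), the rows present in the clone export but missing from the online export can be isolated with sort and comm:

```shell
# Hypothetical tab-separated exports (id, data, status) of the online table
# and of the renamed backup table on the clone instance
printf '2\tbbb\t1\n4\tddd\t1\n' > t_order_online.tsv
printf '1\taaa\t0\n2\tbbb\t1\n3\tccc\t0\n4\tddd\t1\n' > t_order_clone.tsv

# comm -13 keeps only lines unique to the second (clone) file:
# these are the rows that were deleted online and need restoring
comm -13 <(sort t_order_online.tsv) <(sort t_order_clone.tsv) > deleted_rows.tsv
cat deleted_rows.tsv   # → the rows with id 1 and id 3
```

            Because comm requires sorted input, both exports are sorted on the fly; for very large tables, sorting to temporary files first avoids re-reading the data.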
          7. Add the recovered data back to the original table.

            # Insert the restored rows from the backup table into the online table
            begin;
            insert into t_order(data,status) select data,status from t_order_bk_091802 where id>=1 and id<=172631;
            # Check the inserted rows
            select status, count(1), min(id), max(id) from t_order where id>=1 and id<=172631;
            # If the result meets expectations, commit the transaction
            commit;

          Common problems and solutions

          • An error is reported when running mysqldump on the BCC, for example: mysqldump: unknown variable 'set-gtid-purged=OFF'

            Cause: after ruling out issues such as mistyped options or wrong passwords, the main cause is an outdated mysqldump version that needs to be upgraded, for example:

            mysqldump --version
            mysqldump Ver 10.13 Distrib 5.1.73, for redhat-linux-gnu (x86_64) # This default version is too old and needs to be upgraded.

            Solution: download and compile the latest MySQL release on the BCC, or copy the mysqldump and mysql binaries from a BCC that already has a newer MySQL version installed.

          • The export command reports an error: mysqldump: Permission denied

            Cause: the mysqldump binary does not have execute permission.

            Solution: chmod a+x mysqldump

          • How to check data

            Solution: if the SQL statement of the incorrect operation is known, rewrite it as a query and compare the results. Alternatively, query the data, output it to text files, and compare the files.
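            For the first approach, the incorrect statement can often be turned from a modification into a query and run against the clone instance to preview exactly which rows were affected. A sketch using this document's example (the column names are illustrative):

```sql
-- Incorrect operation (example): DELETE FROM t_order WHERE status = 0;
-- Rewritten as a query against the renamed backup table on the clone instance:
SELECT id, data, status FROM t_order_bk_091802 WHERE status = 0;
```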

          References

          "Clone an instance" Operations Guide
