DynamoDB – Data Backup


Utilize Data Pipeline's import/export functionality to perform backups. How you execute a backup depends on whether you use the GUI console or the Data Pipeline API directly. When using the console, create a separate pipeline for each table; when using the API directly, you can export or import multiple tables in a single pipeline.
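If you take the API route, the same service is reachable through the AWS SDKs. As a minimal sketch, assuming boto3 and a placeholder region, you can check which pipelines already exist:

```python
# Minimal sketch: driving Data Pipeline from the API with boto3.
# The region is an assumption; substitute your own.
import boto3

dp = boto3.client('datapipeline', region_name='us-east-1')

# Each entry carries the pipeline's id and name.
for pipeline in dp.list_pipelines()['pipelineIdList']:
    print(pipeline['id'], pipeline['name'])
```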

Exporting and Importing Data

You must create an Amazon S3 bucket prior to performing an export. You can export from one or more tables.
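As a sketch of this prerequisite, assuming boto3 and a placeholder bucket name and region, the bucket can be created programmatically:

```python
# Hypothetical sketch: creating the export destination bucket with boto3.
# 'nameOfBucket' and the region are placeholders.
import boto3

s3 = boto3.client('s3', region_name='us-east-1')

# us-east-1 needs no LocationConstraint; any other region requires
# CreateBucketConfiguration={'LocationConstraint': '<region>'}.
s3.create_bucket(Bucket='nameOfBucket')
```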

Perform the following four-step process to execute an export −

Step 1 − Log in to the AWS Management Console and open the Data Pipeline console located at https://console.aws.amazon.com/datapipeline/

Step 2 − If you have no pipelines in the AWS region you are using, select Get started now. If you already have one or more, select Create new pipeline.

Step 3 − On the creation page, enter a name for your pipeline. Choose Build using a template for the Source parameter. Select Export DynamoDB table to S3 from the list. Enter the source table in the Source DynamoDB table name field.

Enter the destination S3 bucket in the Output S3 Folder text box using the following format: s3://nameOfBucket/region/nameOfFolder. Enter an S3 destination for the log file in the S3 location for logs text box.

Step 4 − Select Activate after entering all settings.
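The console steps above have an API equivalent. The sketch below, assuming boto3 and placeholder names, shows the create/validate/activate flow; note that the pipelineObjects here are a bare skeleton, since the console's Export DynamoDB table to S3 template also generates the EMR cluster and export activity objects, which are omitted.

```python
# Hypothetical sketch of the API flow behind the console steps.
import boto3

dp = boto3.client('datapipeline', region_name='us-east-1')

created = dp.create_pipeline(
    name='ExportMyTable',           # pipeline name, as in Step 3
    uniqueId='export-mytable-001',  # caller-chosen idempotency token
)
pipeline_id = created['pipelineId']

# Bare-bones definition; the console template adds the EMR cluster and
# export activity objects that do the actual work.
result = dp.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[{
        'id': 'Default',
        'name': 'Default',
        'fields': [
            {'key': 'scheduleType', 'stringValue': 'ONDEMAND'},
            {'key': 'failureAndRerunMode', 'stringValue': 'CASCADE'},
            # Log destination, as entered in the console's
            # "S3 location for logs" text box.
            {'key': 'pipelineLogUri', 'stringValue': 's3://nameOfBucket/logs/'},
        ],
    }],
)

# put_pipeline_definition reports validation problems rather than raising.
if result['errored']:
    print('Validation errors:', result['validationErrors'])
else:
    dp.activate_pipeline(pipelineId=pipeline_id)  # Step 4's Activate
```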

The pipeline may take several minutes to finish creating. Use the console to monitor its status, and confirm successful processing by viewing the exported file in the S3 console.
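The status can also be polled from the API rather than the console. A possible approach, with a placeholder pipeline id and the bucket layout from the export settings above:

```python
# Possible monitoring sketch: poll the pipeline state, then confirm the
# export landed in S3. Pipeline id, bucket, and prefix are placeholders.
import time
import boto3

dp = boto3.client('datapipeline', region_name='us-east-1')
s3 = boto3.client('s3', region_name='us-east-1')

def pipeline_state(pipeline_id):
    description = dp.describe_pipelines(pipelineIds=[pipeline_id])
    fields = description['pipelineDescriptionList'][0]['fields']
    return next(f['stringValue'] for f in fields if f['key'] == '@pipelineState')

# Terminal states assumed here; the console shows ERROR on failure.
while pipeline_state('df-EXAMPLE1234') not in ('FINISHED', 'ERROR'):
    time.sleep(30)

# Equivalent to viewing the exported file in the S3 console.
listing = s3.list_objects_v2(Bucket='nameOfBucket', Prefix='us-east-1/nameOfFolder/')
for obj in listing.get('Contents', []):
    print(obj['Key'], obj['Size'])
```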

Importing Data

An import can only succeed if all of the following conditions hold: the destination table exists, the destination and source tables use identical names, and they use identical key schemas.

The destination table may already contain data; however, the import replaces items whose keys match items in the source and also adds the remaining source items to the table. The destination can also reside in a different region.
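These preconditions can be checked up front. A minimal sketch, assuming boto3, a placeholder table name, and a cross-region destination:

```python
# Hypothetical precondition check for an import: the destination table must
# exist under the same name with an identical key schema.
import boto3

source = boto3.client('dynamodb', region_name='us-east-1')
target = boto3.client('dynamodb', region_name='eu-west-1')  # may differ

src = source.describe_table(TableName='MyTable')['Table']
try:
    dst = target.describe_table(TableName='MyTable')['Table']
except target.exceptions.ResourceNotFoundException:
    raise SystemExit('Destination table does not exist; create it first.')

if src['KeySchema'] != dst['KeySchema']:
    raise SystemExit('Key schemas differ; the import would fail.')
```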

Though you can export multiple tables, you can only import one per operation. You can perform an import by adhering to the following steps −

Step 1 − Log in to the AWS Management Console, and then open the Data Pipeline console.

Step 2 − If you intend to execute a cross-region import, select the destination region.

Step 3 − Select Create new pipeline.

Step 4 − Enter the pipeline name in the Name field. Choose Build using a template for the Source parameter, and in the template list, select Import DynamoDB backup data from S3.

Enter the location of the source file in the Input S3 Folder text box. Enter the destination table name in the Target DynamoDB table name field. Then enter the location for the log file in the S3 location for logs text box.

Step 5 − Select Activate after entering all settings.

The import starts immediately after the pipeline is created; the creation process itself may take several minutes.
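The create/validate/activate flow sketched for the export applies unchanged here, only with the import template's fields. Before activating, it may also be worth confirming that backup files actually exist at the Input S3 Folder location; a sketch with placeholder names:

```python
# Hypothetical sanity check before an import: verify the Input S3 Folder
# actually holds backup files. Bucket, prefix, and region are placeholders.
import boto3

s3 = boto3.client('s3', region_name='eu-west-1')  # destination region

listing = s3.list_objects_v2(Bucket='nameOfBucket', Prefix='us-east-1/nameOfFolder/')
if listing.get('KeyCount', 0) == 0:
    raise SystemExit('No backup files found at the Input S3 Folder location.')
```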

Errors

When errors occur, the Data Pipeline console displays ERROR as the pipeline status. Clicking a pipeline with an error takes you to its detail page, which reveals every step of the process and the point at which the failure occurred. The log files also provide some insight; the sketch after the list below pulls the same details from the API.

You can review the common causes of the errors as follows −

  • The destination table for an import does not exist, or its key schema does not match the source's.

  • The S3 bucket does not exist, or you do not have read/write permissions for it.

  • The pipeline timed out.

  • You do not have the necessary export/import permissions.

  • Your AWS account reached its resource limit.
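The same failure details shown on the console's detail page can also be pulled from the API. A possible sketch, with a placeholder pipeline id:

```python
# Possible sketch: list a failed pipeline's instances and dump their fields,
# which include status and error information. The pipeline id is a placeholder.
import boto3

dp = boto3.client('datapipeline', region_name='us-east-1')

ids = dp.query_objects(pipelineId='df-EXAMPLE1234', sphere='INSTANCE')['ids']
if ids:
    objects = dp.describe_objects(pipelineId='df-EXAMPLE1234', objectIds=ids)
    for obj in objects['pipelineObjects']:
        print(obj['name'])
        for field in obj['fields']:
            print(' ', field['key'], field.get('stringValue', field.get('refValue')))
```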
