DynamoDB – Error Handling

On unsuccessful processing of a request, DynamoDB throws an error. Each error consists of the following components: HTTP status code, exception name, and message. Error management rests on your SDK, which propagates errors, or on your own code.

Codes and Messages

Exceptions fall under different HTTP status codes. The 4xx and 5xx ranges hold errors related to request issues and to AWS, respectively.

A selection of exceptions in the HTTP 4xx category are as follows −

AccessDeniedException − The client failed to sign the request correctly.
ConditionalCheckFailedException − A condition evaluated to false.
IncompleteSignatureException − The request included an incomplete signature.

Exceptions in the HTTP 5xx category are as follows −

Internal Server Error
Service Unavailable

Retries and Backoff Algorithms

Errors come from a variety of sources such as servers, switches, load balancers, and other pieces of infrastructure. A common solution is a simple retry, which supports reliability. All SDKs include this logic automatically, and you can set retry parameters to suit your application needs. For example, the AWS SDK for Java offers a maxErrorRetry value to cap retries.

Amazon recommends using a backoff solution in addition to retries in order to control flow. This consists of progressively increasing wait periods between retries and eventually stopping after a fairly short period.

Note − SDKs perform automatic retries; verify whether your SDK version also applies exponential backoff, and tune it to match your needs.

The following program is an example of retry with backoff −

public enum Results {
   SUCCESS,
   NOT_READY,
   THROTTLED,
   SERVER_ERROR
}

public static void DoAndWaitExample() {
   try {
      // Kick off the asynchronous operation.
      long token = asyncOperation();
      int retries = 0;
      boolean retry = false;

      do {
         // Wait longer on each attempt, capped at MAX_WAIT_INTERVAL.
         long waitTime = Math.min(getWaitTime(retries), MAX_WAIT_INTERVAL);
         System.out.println(waitTime);

         // Pause before polling for the result.
         Thread.sleep(waitTime);

         // Get the result.
         Results result = getAsyncOperationResult(token);

         if (Results.SUCCESS == result) {
            retry = false;
         } else if (Results.NOT_READY == result
            || Results.THROTTLED == result
            || Results.SERVER_ERROR == result) {
            retry = true;
         } else {
            // Stop on any other error.
            retry = false;
         }
      } while (retry && (retries++ < MAX_RETRIES));
   } catch (Exception ex) {
      // Handle or log the failure; do not swallow it silently.
   }
}

public static long getWaitTime(int retryCount) {
   // Exponential backoff: 100ms, 300ms, 900ms, ...
   return (long) Math.pow(3, retryCount) * 100L;
}
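As a minimal sketch of tuning retry behavior in the AWS SDK for Java (v1), assuming you build the client yourself with a ClientConfiguration, you might cap automatic retries like this; the value 3 is only an illustration −

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class RetryConfigExample {
   public static void main(String[] args) {
      // Cap the SDK's automatic retries at three attempts (illustrative value).
      ClientConfiguration config = new ClientConfiguration();
      config.setMaxErrorRetry(3);

      AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
         .withClientConfiguration(config)
         .build();

      // Subsequent calls made through this client retry a throttled or
      // failed request at most three times before surfacing the error.
   }
}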

DynamoDB – Best Practices

Certain practices optimize code, prevent errors, and minimize throughput costs when working with various sources and elements. The following are some of the most important and commonly used best practices in DynamoDB.

Tables

The best approaches spread read/write activity evenly across all table items. Aim for uniform data access on table items. Optimal throughput usage rests on primary key selection and item workload patterns.

Spread the workload evenly across partition key values. Avoid a small number of heavily used partition key values; opt instead for a large quantity of distinct partition key values.

Gain an understanding of partition behavior. Estimate the partitions automatically allocated by DynamoDB.

DynamoDB offers burst throughput, which reserves unused throughput for "bursts" of power. Avoid heavy reliance on this option because bursts consume large amounts of throughput quickly; furthermore, it does not prove a reliable resource.

On uploads, distribute data in order to achieve better performance. Implement this by uploading to all allocated servers concurrently.

Cache frequently used items to offload read activity to the cache rather than the database.

Items

Throttling, performance, size, and access costs remain the biggest concerns with items.

Opt for one-to-many tables. Remove attributes and divide tables to match access patterns. You can improve efficiency dramatically through this simple approach.

Compress large values prior to storing them, using standard compression tools; a brief sketch appears at the end of this section.

Use alternate storage such as S3 for large attribute values. You can store the object in S3 and an identifier in the item.

Distribute large attributes across several items through virtual item pieces. This provides a workaround for the limitations of item size.

Queries and Scans

Queries and scans mainly suffer from throughput consumption challenges. Avoid bursts of read activity, which typically result from things like switching to a strongly consistent read. Use parallel scans in a low-resource way (i.e., a background function with no throttling), and only employ them with large tables, and in situations where you do not fully utilize throughput or where sequential scan operations offer poor performance.

Local Secondary Indices

Indexes present issues in the areas of throughput and storage costs, and the efficiency of queries. Avoid indexing unless you query the attributes often. In projections, choose wisely because they bloat indexes; select only those attributes which are heavily used. Utilize sparse indexes, meaning indexes in which sort keys do not appear in all table items; they benefit queries on attributes not present in most table items. Pay attention to item collection (all table items and their indexes) expansion. Add/update operations cause both tables and indexes to grow, and 10GB remains the limit per item collection.

Global Secondary Indices

Indexes present issues in the areas of throughput and storage costs, and the efficiency of queries. Opt for spreading key attributes, which, like read/write spreading in tables, provides workload uniformity. Choose attributes which spread data evenly. Also, utilize sparse indexes. Exploit global secondary indexes for fast searches in queries requesting a modest amount of data.
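As a minimal sketch of the compression practice mentioned above, assuming the large value is a string destined for a binary (B) attribute, the standard java.util.zip utilities are enough −

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class CompressAttributeExample {
   // GZIP-compress a large string so it can be stored as a binary (B) attribute value.
   public static ByteBuffer compress(String largeValue) throws Exception {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      try (GZIPOutputStream gzip = new GZIPOutputStream(bytes)) {
         gzip.write(largeValue.getBytes(StandardCharsets.UTF_8));
      }
      // Pass the result to new AttributeValue().withB(...) when writing the item.
      return ByteBuffer.wrap(bytes.toByteArray());
   }
}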

DynamoDB – Discussion

DynamoDB is a fully managed NoSQL database service designed to deliver fast and predictable performance. It draws on the Dynamo model at the core of its design and improves on those features. It began as a way to manage the website scalability challenges presented by holiday season load. This tutorial introduces the key DynamoDB concepts necessary for creating and deploying a highly scalable and performance-focused database.

DynamoDB – Quick Guide

DynamoDB – Overview

DynamoDB allows users to create databases capable of storing and retrieving any amount of data, and serving any amount of traffic. It automatically distributes data and traffic over servers to dynamically manage each customer's requests, and also maintains fast performance.

DynamoDB vs. RDBMS

DynamoDB uses a NoSQL model, which means it uses a non-relational system. The following comparison highlights the differences between an RDBMS and DynamoDB for common tasks −

Connect to the Source − An RDBMS uses a persistent connection and SQL commands; DynamoDB uses HTTP requests and API operations.
Create a Table − An RDBMS's fundamental structures are tables, which must be fully defined; DynamoDB requires only primary keys and no schema on creation, and works with various data sources.
Get Table Info − In an RDBMS, all table info remains accessible; in DynamoDB, only the primary keys are revealed.
Load Table Data − An RDBMS uses rows made of columns; DynamoDB uses items made of attributes within tables.
Read Table Data − An RDBMS uses SELECT statements and filtering clauses; DynamoDB uses GetItem, Query, and Scan.
Manage Indexes − An RDBMS uses standard indexes created through SQL statements and modified automatically on table changes; DynamoDB uses secondary indexes to achieve the same function, which require specification of a partition key and sort key.
Modify Table Data − An RDBMS uses an UPDATE statement; DynamoDB uses an UpdateItem operation.
Delete Table Data − An RDBMS uses a DELETE statement; DynamoDB uses a DeleteItem operation.
Delete a Table − An RDBMS uses a DROP TABLE statement; DynamoDB uses a DeleteTable operation.

Advantages

The two main advantages of DynamoDB are scalability and flexibility. It does not force the use of a particular data source and structure, allowing users to work with virtually anything, but in a uniform way. Its design also supports a wide range of use, from lighter tasks and operations to demanding enterprise functionality. It also allows simple use of multiple languages: Ruby, Java, Python, C#, Erlang, PHP, and Perl.

Limitations

DynamoDB does suffer from certain limitations; however, these limitations do not necessarily create huge problems or hinder solid development. You can review them in the following points −

Capacity Unit Sizes − A read capacity unit is a single consistent read per second for items no larger than 4KB. A write capacity unit is a single write per second for items no larger than 1KB. For example, a strongly consistent read of a 7KB item consumes two read capacity units, and a write of a 2.5KB item consumes three write capacity units.
Provisioned Throughput Min/Max − All tables and global secondary indexes have a minimum of one read and one write capacity unit. Maximums depend on region. In the US, 40K read and 40K write capacity units remain the cap per table (80K per account); other regions have a cap of 10K per table with a 20K account cap.
Provisioned Throughput Increase and Decrease − You can increase this as often as needed, but decreases remain limited to no more than four times daily per table.
Table Size and Quantity Per Account − Table sizes have no limits, but accounts have a 256-table limit unless you request a higher cap.
Secondary Indexes Per Table − Five local and five global are permitted.
Projected Secondary Index Attributes Per Table − DynamoDB allows 20 attributes.
Partition Key Length and Values − The minimum length sits at 1 byte, and the maximum at 2048 bytes; however, DynamoDB places no limit on values.
Sort Key Length and Values − The minimum length stands at 1 byte, and the maximum at 1024 bytes, with no limit on values unless the table uses a local secondary index.
Table and Secondary Index Names − Names must conform to a minimum of 3 characters in length, and a maximum of 255. They use the following characters: A-Z, a-z, 0-9, "_", "-", and ".".
Attribute Names − One character remains the minimum, and 64KB the maximum, with exceptions for keys and certain attributes.
Reserved Words − DynamoDB does not prevent the use of reserved words as names.
Expression Length − Expression strings have a 4KB limit. Attribute expressions have a 255-byte limit. Substitution variables of an expression have a 2MB limit.

DynamoDB – Basic Concepts

Before using DynamoDB, you must familiarize yourself with its basic components and ecosystem. In the DynamoDB ecosystem, you work with tables, attributes, and items. A table holds sets of items, and items hold sets of attributes. An attribute is a fundamental element of data requiring no further decomposition, i.e., a field.

Primary Key

Primary keys serve as the means of unique identification for table items, and secondary indexes provide query flexibility. DynamoDB streams record events involving modifications to table data.

Table creation requires not only a name, but also a primary key, which identifies table items. No two items share a key. DynamoDB uses two types of primary keys −

Partition Key − This simple primary key consists of a single attribute referred to as the "partition key." Internally, DynamoDB uses the key value as input to a hash function to determine storage.

Partition Key and Sort Key − This key, known as the "composite primary key", consists of two attributes: the partition key and the sort key. DynamoDB applies the first attribute to a hash function and stores items with the same partition key together, with their order determined by the sort key. Items can share partition keys, but not sort keys.

Primary key attributes only allow scalar (single) values and the string, number, or binary data types. Non-key attributes do not have these constraints.

Secondary Indexes

These indexes allow you to query table data with an alternate key. Though DynamoDB does not force their use, they optimize querying. DynamoDB uses two types of secondary indexes −

Global Secondary Index − This index possesses partition and sort keys which can differ from the table keys.
Local Secondary Index − This index possesses a partition key identical to the table's; however, its sort key differs.

API

The API operations offered by DynamoDB include those of the control plane, the data plane (e.g., creation, reading, updating, and deleting), and streams. In control plane operations, you create and manage tables with operations such as CreateTable, DescribeTable, ListTables, UpdateTable, and DeleteTable.
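As a minimal sketch of a control plane call, assuming the AWS SDK for Java (v1) with default credentials and region, listing the tables in an account looks like this −

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class ControlPlaneExample {
   public static void main(String[] args) {
      AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

      // ListTables is a control plane operation; it returns table names, not item data.
      client.listTables().getTableNames().forEach(System.out::println);
   }
}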

DynamoDB – Monitoring

Amazon offers CloudWatch for aggregating and analyzing DynamoDB performance through the CloudWatch console, the command line, or the CloudWatch API. You can also use it to set alarms and perform tasks: it executes specified actions when certain events occur.

CloudWatch Console

Utilize CloudWatch by accessing the Management Console, and then opening the CloudWatch console at https://console.aws.amazon.com/cloudwatch/. You can then perform the following steps −

Select Metrics from the navigation pane.
Under DynamoDB metrics within the CloudWatch Metrics by Category pane, choose Table Metrics.
Use the upper pane to scroll down and examine the entire list of table metrics. The Viewing list provides metrics options.

In the results interface, you can select/deselect each metric by selecting the checkbox beside the resource name and metric. You can then view graphs for each item.

API Integration

You can access CloudWatch with queries, and use metric values to perform CloudWatch actions.

Note − DynamoDB does not send metrics with a value of zero. It simply skips metrics for time periods where those metrics remain at that value.

The following are some of the most commonly used metrics −

ConditionalCheckFailedRequests − It tracks the quantity of failed attempts at conditional writes, such as conditional PutItem writes. A failed write increments this metric by one on evaluation to false, and also throws an HTTP 400 error.
ConsumedReadCapacityUnits − It quantifies the read capacity units used over a certain time period. You can use this to examine individual table and index consumption.
ConsumedWriteCapacityUnits − It quantifies the write capacity units used over a certain time period. You can use this to examine individual table and index consumption.
ReadThrottleEvents − It quantifies requests exceeding provisioned capacity units in table/index reads. It increments on each throttle, including batch operations with multiple throttles.
ReturnedBytes − It quantifies the bytes returned in retrieval operations within a certain time period.
ReturnedItemCount − It quantifies the items returned in Query and Scan operations over a certain time period. It counts only items returned, not those evaluated, which are typically quite different figures.

Note − Many more metrics exist, and most of them allow you to calculate averages, sums, maximum, minimum, and count.
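As a minimal sketch of retrieving one of these metrics through the CloudWatch API, assuming the AWS SDK for Java (v1) and a table named "Tools" (an illustrative name), the following fetches the ConsumedReadCapacityUnits sum for the last hour −

import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.Datapoint;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsRequest;
import java.util.Date;

public class ConsumedCapacityMetrics {
   public static void main(String[] args) {
      AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();

      GetMetricStatisticsRequest request = new GetMetricStatisticsRequest()
         .withNamespace("AWS/DynamoDB")
         .withMetricName("ConsumedReadCapacityUnits")
         .withDimensions(new Dimension().withName("TableName").withValue("Tools"))
         .withStartTime(new Date(System.currentTimeMillis() - 3600 * 1000))
         .withEndTime(new Date())
         .withPeriod(300)                    // five-minute buckets
         .withStatistics("Sum");

      for (Datapoint point : cloudWatch.getMetricStatistics(request).getDatapoints()) {
         System.out.println(point.getTimestamp() + " : " + point.getSum());
      }
   }
}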

DynamoDB – CloudTrail

DynamoDB includes CloudTrail integration. It captures low-level API requests from or for DynamoDB in an account, and sends log files to a specified S3 bucket. It targets calls made from the console and through the API. You can use this data to determine the requests made and their source, user, timestamp, and more.

When enabled, it tracks actions in log files, which also include records from other services. It supports eight table/capacity actions and two stream actions.

The eight actions are as follows −

CreateTable
DeleteTable
DescribeTable
ListTables
UpdateTable
DescribeReservedCapacity
DescribeReservedCapacityOfferings
PurchaseReservedCapacityOfferings

The two stream actions are −

DescribeStream
ListStreams

All the logs contain information about the accounts making requests. You can determine detailed information such as whether root or IAM users made the request, or whether it was made with temporary credentials or federated access. The log files remain in storage for however long you specify, with settings for archiving and deletion. The default creates encrypted logs. You can set alerts for new logs. You can also organize multiple logs, across regions and accounts, into a single bucket.

Interpreting Log Files

Each file contains one or multiple entries. Each entry consists of multiple JSON-format events. An entry represents a request and includes the associated information, with no guarantee of order. You can review the following sample log file −

{
   "Records": [
      {
         "eventVersion": "5.05",
         "userIdentity": {
            "type": "AssumedRole",
            "principalId": "AKTTIOSZODNN8SAMPLE:jane",
            "arn": "arn:aws:sts::155522255533:assumed-role/users/jane",
            "accountId": "155522255533",
            "accessKeyId": "AKTTIOSZODNN8SAMPLE",
            "sessionContext": {
               "attributes": {
                  "mfaAuthenticated": "false",
                  "creationDate": "2016-05-11T19:01:01Z"
               },
               "sessionIssuer": {
                  "type": "Role",
                  "principalId": "AKTTI44ZZ6DHBSAMPLE",
                  "arn": "arn:aws:iam::499955777666:role/admin-role",
                  "accountId": "499955777666",
                  "userName": "jill"
               }
            }
         },
         "eventTime": "2016-05-11T14:33:20Z",
         "eventSource": "dynamodb.amazonaws.com",
         "eventName": "DeleteTable",
         "awsRegion": "us-west-2",
         "sourceIPAddress": "192.0.2.0",
         "userAgent": "console.aws.amazon.com",
         "requestParameters": {"tableName": "Tools"},
         "responseElements": {
            "tableDescription": {
               "tableName": "Tools",
               "itemCount": 0,
               "provisionedThroughput": {
                  "writeCapacityUnits": 25,
                  "numberOfDecreasesToday": 0,
                  "readCapacityUnits": 25
               },
               "tableStatus": "DELETING",
               "tableSizeBytes": 0
            }
         },
         "requestID": "4D89G7D98GF7G8A7DF78FG89AS7GFSO5AEMVJF66Q9ASUAAJG",
         "eventID": "a954451c-c2fc-4561-8aea-7a30ba1fdf52",
         "eventType": "AwsApiCall",
         "apiVersion": "2013-04-22",
         "recipientAccountId": "155522255533"
      }
   ]
}
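As a minimal sketch of reading such an entry programmatically, assuming Jackson is on the classpath and the log has already been downloaded and unzipped to a local file (real CloudTrail deliveries arrive as gzipped JSON objects in your S3 bucket; the file name here is an assumption) −

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;

public class CloudTrailLogReader {
   public static void main(String[] args) throws Exception {
      ObjectMapper mapper = new ObjectMapper();
      JsonNode root = mapper.readTree(new File("cloudtrail-log.json"));

      // Print the time, action, and caller of each recorded DynamoDB call.
      for (JsonNode record : root.get("Records")) {
         System.out.println(record.get("eventTime").asText() + " "
            + record.get("eventName").asText() + " by "
            + record.get("userIdentity").get("arn").asText());
      }
   }
}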

DynamoDB – Aggregation

DynamoDB does not provide aggregation functions. You must make creative use of queries, scans, indexes, and assorted tools to perform these tasks. In all this, the throughput expense of the queries/scans involved can be heavy. You also have the option to use libraries and other tools for your preferred DynamoDB coding language. Ensure their compatibility with DynamoDB prior to using them.

Calculate Maximum or Minimum

Utilize the ascending/descending storage order of results, the Limit parameter, and any parameters which set order to find the highest and lowest values. For example −

Map<String, AttributeValue> eaval = new HashMap<>();
eaval.put(":v1", new AttributeValue().withS("hashval"));

DynamoDBQueryExpression<Table> queryExpression = new DynamoDBQueryExpression<Table>()
   .withIndexName("yourindexname")
   .withKeyConditionExpression("HK = :v1")
   .withExpressionAttributeValues(eaval)
   .withScanIndexForward(false);             //descending order

queryExpression.setLimit(1);                 //only the top item is needed

QueryResultPage<Table> res =
   dynamoDBMapper.queryPage(Table.class, queryExpression);

Calculate Count

Use DescribeTable to get a count of the table items; however, note that it provides stale data. Also, utilize the Java getScannedCount method, and utilize LastEvaluatedKey to page through all results. For example −

ScanRequest scanRequest = new ScanRequest().withTableName(yourtblName);
ScanResult yourresult = client.scan(scanRequest);
System.out.println("#items:" + yourresult.getScannedCount());

Calculating Average and Sum

Utilize indexes and a query/scan to retrieve and filter values before processing. Then simply operate on those values through an object, as in the sketch below.
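A minimal sketch of that approach, assuming the AWS SDK for Java (v1), a hypothetical table named "ProductCatalog", and a hypothetical numeric attribute named "Price" −

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;
import java.util.Map;

public class AverageExample {
   public static void main(String[] args) {
      AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

      double sum = 0;
      long count = 0;
      Map<String, AttributeValue> lastKey = null;

      do {
         // Page through the whole table; each scan page returns at most 1MB of data.
         ScanRequest scanRequest = new ScanRequest()
            .withTableName("ProductCatalog")
            .withExclusiveStartKey(lastKey);
         ScanResult result = client.scan(scanRequest);

         for (Map<String, AttributeValue> item : result.getItems()) {
            AttributeValue price = item.get("Price");
            if (price != null && price.getN() != null) {
               sum += Double.parseDouble(price.getN());
               count++;
            }
         }
         lastKey = result.getLastEvaluatedKey();
      } while (lastKey != null);

      System.out.println("Sum: " + sum);
      System.out.println("Average: " + (count > 0 ? sum / count : 0));
   }
}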

Local Secondary Indexes

Some applications only perform queries with the primary key, but some situations benefit from an alternate sort key. Allow your application a choice by creating one or more local secondary indexes.

Complex data access requirements, such as combing through millions of items, make it necessary to perform more efficient queries/scans. Local secondary indexes provide an alternate sort key for a partition key value. They also hold copies of all or some table attributes. They organize data by the table partition key, but use a different sort key. Using a local secondary index removes the need for a whole table scan, and allows a simple and quick query using a sort key.

All local secondary indexes must satisfy certain conditions −

A partition key identical to the source table partition key.
A sort key consisting of only one scalar attribute.
Projection of the source table sort key, acting as a non-key attribute.

All local secondary indexes automatically hold partition and sort keys from parent tables. In queries, this means efficient gathering of projected attributes, and also retrieval of attributes not projected. The storage limit for a local secondary index remains 10GB per partition key value, which includes all table items and index items sharing a partition key value.

Projecting an Attribute

Some operations require excess reads/fetching due to complexity. These operations can consume substantial throughput. Projection allows you to avoid costly fetching and perform rich queries by isolating these attributes. Remember, projections consist of attributes copied into a secondary index. When making a secondary index, you specify the attributes projected. Recall the three options provided by DynamoDB: KEYS_ONLY, INCLUDE, and ALL.

When opting for certain attributes in a projection, consider the associated cost tradeoffs −

If you project only a small set of necessary attributes, you dramatically reduce storage costs.
If you project frequently accessed non-key attributes, you offset scan costs with storage costs.
If you project most or all non-key attributes, this maximizes flexibility and reduces throughput (no retrievals); however, storage costs rise.
If you project KEYS_ONLY for frequent writes/updates and infrequent queries, it minimizes size, but maintains query preparation.

Local Secondary Index Creation

Use the LocalSecondaryIndex parameter of CreateTable to make one or more local secondary indexes. You must specify one non-key attribute for the sort key. You create local secondary indexes on table creation, and delete them on table deletion. Tables with a local secondary index must obey a limit of 10GB in size per partition key value, but can store any number of items.

Local Secondary Index Queries and Scans

A query operation on a local secondary index returns all items with a matching partition key value when multiple items in the index share sort key values. Matching items do not return in a certain order. Queries on local secondary indexes use either eventual or strong consistency, with strongly consistent reads delivering the latest values.

A scan operation returns all local secondary index data. Scans require you to provide a table and index name, and allow the use of a filter expression to discard data.

Item Writing

On creation of a local secondary index, you specify a sort key attribute and its data type. When you write an item, its type must match the data type of the key schema if the item defines an attribute of an index key.
DynamoDB imposes no one-to-one relationship requirements on table items and local secondary index items. Tables with multiple local secondary indexes carry higher write costs than those with fewer.

Throughput Considerations in Local Secondary Indexes

Read capacity consumption of a query depends on the nature of data access. Queries use either eventual or strong consistency, with strongly consistent reads using one unit compared to half a unit for eventually consistent reads. Result limitations include a 1MB size maximum. Result sizes come from the sum of the matching index item sizes rounded up to the nearest 4KB, plus the matching table item sizes, also rounded up to the nearest 4KB.

Write capacity consumption remains within provisioned units. Calculate the total provisioned cost by finding the sum of the units consumed in table writing and the units consumed in updating indexes. You can also consider the key factors influencing cost, some of which are −

When you write an item defining an indexed attribute, or update an item to define a previously undefined indexed attribute, a single write operation occurs.
When a table update changes an indexed key attribute value, two writes occur: one to delete and one to add an item.
When a write causes the deletion of an indexed attribute, one write occurs to remove the old item projection.
When an item does not exist within the index prior to or after an update, no writes occur.

Local Secondary Index Storage

On a table item write, DynamoDB automatically copies the right attribute set to the required local secondary indexes. This charges your account. The space used results from the sum of the table primary key byte size, the index key attribute byte size, any present projected attribute byte size, and 100 bytes of overhead for each index item. Estimate storage by estimating the average index item size and multiplying by the table item quantity.

Using Java to Work with Local Secondary Indexes

Create a local secondary index by first creating a DynamoDB class instance. Then, create a CreateTableRequest class instance with the necessary request information. Finally, use the createTable method.
Example

DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(
   new ProfileCredentialsProvider()));
String tableName = "Tools";

CreateTableRequest createTableRequest =
   new CreateTableRequest().withTableName(tableName);

//Provisioned Throughput
createTableRequest.setProvisionedThroughput(
   new ProvisionedThroughput()
   .withReadCapacityUnits((long)5)
   .withWriteCapacityUnits((long)5));

//Attributes
ArrayList<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
attributeDefinitions.add(new AttributeDefinition()
   .withAttributeName("Make")
   .withAttributeType("S"));

attributeDefinitions.add(new AttributeDefinition()
   .withAttributeName("Model")
   .withAttributeType("S"));

attributeDefinitions.add(new AttributeDefinition()
   .withAttributeName("Line")
   .withAttributeType("S"));

createTableRequest.setAttributeDefinitions(attributeDefinitions);

//Key Schema
ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
tableKeySchema.add(new KeySchemaElement()
   .withAttributeName("Make")
   .withKeyType(KeyType.HASH));              //Partition key

tableKeySchema.add(new KeySchemaElement()
   .withAttributeName("Model")
   .withKeyType(KeyType.RANGE));             //Sort key

createTableRequest.setKeySchema(tableKeySchema);

ArrayList<KeySchemaElement> indexKeySchema = new ArrayList<KeySchemaElement>();
indexKeySchema.add(new KeySchemaElement()
   .withAttributeName("Make")
   .withKeyType(KeyType.HASH));              //Partition key

indexKeySchema.add(new KeySchemaElement()
   .withAttributeName("Line")
   .withKeyType(KeyType.RANGE));             //Sort key

Projection projection = new Projection()
   .withProjectionType(ProjectionType.INCLUDE);

//The original example breaks off here; the remainder is a sketch of the usual wiring,
//with the projected attribute and index name chosen only for illustration.
ArrayList<String> nonKeyAttributes = new ArrayList<String>();
nonKeyAttributes.add("Year");                //hypothetical projected attribute
projection.setNonKeyAttributes(nonKeyAttributes);

LocalSecondaryIndex localSecondaryIndex = new LocalSecondaryIndex()
   .withIndexName("Make-Line-Index")         //hypothetical index name
   .withKeySchema(indexKeySchema)
   .withProjection(projection);

createTableRequest.setLocalSecondaryIndexes(Arrays.asList(localSecondaryIndex));
Table table = dynamoDB.createTable(createTableRequest);
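A minimal sketch of querying the index defined above with the document API, carrying over the hypothetical index name "Make-Line-Index" and using sample key values that are assumptions for illustration −

//Query the local secondary index through the document API.
Table toolsTable = dynamoDB.getTable("Tools");
Index lineIndex = toolsTable.getIndex("Make-Line-Index");

QuerySpec spec = new QuerySpec()
   .withKeyConditionExpression("#make = :make and begins_with(#line, :line)")
   .withNameMap(new NameMap().with("#make", "Make").with("#line", "Line"))
   .withValueMap(new ValueMap()
      .withString(":make", "Acme")           //sample partition key value
      .withString(":line", "Pro"));          //sample sort key prefix

ItemCollection<QueryOutcome> items = lineIndex.query(spec);
for (Item item : items) {
   System.out.println(item.toJSONPretty());
}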

DynamoDB – Data Backup

Utilize Data Pipeline's import/export functionality to perform backups. How you execute a backup depends on whether you use the GUI console or use Data Pipeline directly (API). Either create separate pipelines for each table when using the console, or import/export multiple tables in a single pipeline when using the direct option.

Exporting and Importing Data

You must create an Amazon S3 bucket prior to performing an export. You can export from one or more tables. Perform the following four-step process to execute an export −

Step 1 − Log in to the AWS Management Console and open the Data Pipeline console located at https://console.aws.amazon.com/datapipeline/

Step 2 − If you have no pipelines in the AWS region used, select Get started now. If you have one or more, select Create new pipeline.

Step 3 − On the creation page, enter a name for your pipeline. Choose Build using a template for the Source parameter. Select Export DynamoDB table to S3 from the list. Enter the source table in the Source DynamoDB table name field. Enter the destination S3 bucket in the Output S3 Folder text box using the following format: s3://nameOfBucket/region/nameOfFolder. Enter an S3 destination for the log file in the S3 location for logs text box.

Step 4 − Select Activate after entering all settings.

The pipeline may take several minutes to finish its creation process. Use the console to monitor its status. Confirm successful processing with the S3 console by viewing the exported file.

Importing Data

Successful imports can only happen if the following conditions are true: you created a destination table, the destination and source use identical names, and the destination and source use identical key schemas. You can use a populated destination table; however, imports replace data items sharing a key with source items, and also add excess items to the table. The destination can also use a different region. Though you can export multiple sources, you can only import one per operation. You can perform an import by adhering to the following steps −

Step 1 − Log in to the AWS Management Console, and then open the Data Pipeline console.

Step 2 − If you intend to execute a cross-region import, select the destination region.

Step 3 − Select Create new pipeline.

Step 4 − Enter the pipeline name in the Name field. Choose Build using a template for the Source parameter, and in the template list, select Import DynamoDB backup data from S3. Enter the location of the source file in the Input S3 Folder text box. Enter the destination table name in the Target DynamoDB table name field. Then enter the location for the log file in the S3 location for logs text box.

Step 5 − Select Activate after entering all settings. The import starts immediately after the pipeline creation. It may take several minutes for the pipeline to complete the creation process.

Errors

When errors occur, the Data Pipeline console displays ERROR as the pipeline status. Clicking the pipeline with an error takes you to its detail page, which reveals every step of the process and the point at which the failure occurred. Log files within also provide some insight.

You can review the common causes of errors as follows −

The destination table for an import does not exist, or does not use an identical key schema to the source.
The S3 bucket does not exist, or you do not have read/write permissions for it.
The pipeline timed out.
You do not have the necessary export/import permissions.
Your AWS account reached its resource limit.