Web Identity Federation

DynamoDB – Web Identity Federation

Web Identity Federation simplifies authentication and authorization for large user groups. Instead of creating individual accounts, users log in to an identity provider to get temporary credentials or tokens. AWS Security Token Service (STS) manages these credentials, and applications use the tokens to interact with services. Web Identity Federation supports identity providers such as Amazon, Google, and Facebook.

Function

In use, Web Identity Federation first calls an identity provider for user and app authentication, and the provider returns a token. The app then calls AWS STS, passing the token as input. STS authorizes the app and grants it temporary access credentials, which allow the app to assume an IAM role and access resources according to policy.

Implementing Web Identity Federation

You must perform the following three steps prior to use −

- Register as a developer with a supported third-party identity provider.
- Register your application with the provider to obtain an app ID.
- Create one or more IAM roles, including policy attachment. You must use one role per provider per app.

To use Web Identity Federation, your app assumes one of these IAM roles and performs a three-step process −

- Authentication
- Credential acquisition
- Resource access

In the first step, your app uses its own interface to call the provider and obtain a token. In the second step, your app sends an AssumeRoleWithWebIdentity request to AWS STS. The request holds the token from the first step, the provider's app ID, and the ARN of the IAM role. STS then provides credentials set to expire after a certain period. In the final step, your app receives a response from STS containing access information for DynamoDB resources. It consists of access credentials, an expiration time, the role, and the role ID.

DynamoDB – Aggregation

DynamoDB – Aggregation

DynamoDB does not provide aggregation functions. You must make creative use of queries, scans, indexes, and assorted tools to perform these tasks, and the throughput expense of the queries/scans involved can be heavy.

You also have the option to use libraries and other tools for your preferred DynamoDB coding language. Ensure their compatibility with DynamoDB prior to using them.

Calculate Maximum or Minimum

Utilize the ascending/descending storage order of results, the Limit parameter, and any parameters which set order to find the highest and lowest values. For example −

Map<String, AttributeValue> eaval = new HashMap<>();
eaval.put(":v1", new AttributeValue().withS("hashval"));

DynamoDBQueryExpression<Table> queryExpression = new DynamoDBQueryExpression<Table>()
   .withIndexName("yourindexname")
   .withKeyConditionExpression("HK = :v1")
   .withExpressionAttributeValues(eaval)
   .withScanIndexForward(false);    //descending order

queryExpression.setLimit(1);
QueryResultPage<Table> res =
   dynamoDBMapper.queryPage(Table.class, queryExpression);

Calculate Count

Use DescribeTable to get a count of table items; however, note that it provides stale data. You can also utilize the Java getScannedCount method, using LastEvaluatedKey to ensure the scan delivers all results. For example −

ScanRequest scanRequest = new ScanRequest().withTableName(yourtblName);
ScanResult yourresult = client.scan(scanRequest);
System.out.println("#items:" + yourresult.getScannedCount());

Calculating Average and Sum

Utilize indexes and a query/scan to retrieve and filter values before processing. Then simply operate on those values through an object.
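That last step is plain application code. The following sketch assumes the numeric attribute values (for example, prices) have already been retrieved by a query or scan into a list; the sample values are placeholders:

```java
import java.util.Arrays;
import java.util.List;

public class AggregateSketch {
    // Sums numeric attribute values already retrieved by a query/scan.
    static double sum(List<Double> values) {
        double total = 0;
        for (double v : values) total += v;
        return total;
    }

    // Average is the sum divided by the item count.
    static double average(List<Double> values) {
        return values.isEmpty() ? 0 : sum(values) / values.size();
    }

    public static void main(String[] args) {
        // Stand-in for values pulled from DynamoDB items
        List<Double> prices = Arrays.asList(10.0, 20.0, 30.0);
        System.out.println("Sum: " + sum(prices));         // Sum: 60.0
        System.out.println("Average: " + average(prices)); // Average: 20.0
    }
}
```

Because DynamoDB bills a scan by the data it reads, filtering values server-side (with a key condition or filter expression) before aggregating client-side keeps the cost down.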

Local Secondary Indexes

DynamoDB – Local Secondary Indexes

Some applications only perform queries with the primary key, but some situations benefit from an alternate sort key. Allow your application a choice by creating one or more local secondary indexes.

Complex data access requirements, such as combing through millions of items, make it necessary to perform more efficient queries/scans. Local secondary indexes provide an alternate sort key for a partition key value. They also hold copies of all or some table attributes. They organize data by table partition key, but use a different sort key. Using a local secondary index removes the need for a whole table scan, and allows a simple and quick query using a sort key.

All local secondary indexes must satisfy certain conditions −

- The partition key must be identical to the source table partition key.
- The sort key must consist of exactly one scalar attribute.
- The source table sort key must project into the index as a non-key attribute.

All local secondary indexes automatically hold partition and sort keys from their parent tables. In queries, this means efficient gathering of projected attributes, and also retrieval of attributes not projected. The storage limit for a local secondary index remains 10GB per partition key value, which includes all table items and index items sharing a partition key value.

Projecting an Attribute

Some operations require excess reads/fetching due to complexity, and these operations can consume substantial throughput. Projection allows you to avoid costly fetching and perform rich queries by isolating these attributes. Remember that projections consist of attributes copied into a secondary index. When making a secondary index, you specify the attributes projected. Recall the three options provided by DynamoDB: KEYS_ONLY, INCLUDE, and ALL.

When opting for certain attributes in projection, consider the associated cost tradeoffs −

- If you project only a small set of necessary attributes, you dramatically reduce storage costs.
- If you project frequently accessed non-key attributes, you offset scan costs with storage costs.
- If you project most or all non-key attributes, this maximizes flexibility and reduces throughput (no retrievals); however, storage costs rise.
- If you project KEYS_ONLY for frequent writes/updates and infrequent queries, it minimizes size, but maintains query preparation.

Local Secondary Index Creation

Use the LocalSecondaryIndex parameter of CreateTable to make one or more local secondary indexes. You must specify one non-key attribute for the sort key. You create local secondary indexes on table creation, and they are deleted on table deletion. Tables with a local secondary index must obey a limit of 10GB in size per partition key value, but can store any number of items.

Local Secondary Index Queries and Scans

A query operation on local secondary indexes returns all items with a matching partition key value when multiple items in the index share sort key values. Matching items do not return in any particular order. Queries for local secondary indexes use either eventual or strong consistency, with strongly consistent reads delivering the latest values.

A scan operation returns all local secondary index data. Scans require you to provide a table and index name, and allow the use of a filter expression to discard data.

Item Writing

On creation of a local secondary index, you specify a sort key attribute and its data type. When you write an item, its type must match the key schema's data type if the item defines an attribute of an index key. DynamoDB imposes no one-to-one relationship requirements on table items and local secondary index items. Tables with multiple local secondary indexes carry higher write costs than those with fewer.

Throughput Considerations in Local Secondary Indexes

Read capacity consumption of a query depends on the nature of data access.
Queries use either eventual or strong consistency, with strongly consistent reads using one capacity unit compared to half a unit for eventually consistent reads. Result limitations include a 1MB size maximum. Result sizes come from the sum of matching index item sizes rounded up to the nearest 4KB, and matching table item sizes also rounded up to the nearest 4KB.

Write capacity consumption remains within provisioned units. Calculate the total provisioned cost by finding the sum of consumed units in table writing and consumed units in updating indexes. Key factors influencing cost include the following −

- When you write an item defining an indexed attribute, or update an item to define a previously undefined indexed attribute, a single write operation occurs.
- When a table update changes an indexed key attribute value, two writes occur: one to delete the old item from the index and one to add the updated item.
- When a write causes the deletion of an indexed attribute, one write occurs to remove the old item projection.
- When an item does not exist within the index prior to or after an update, no writes occur.

Local Secondary Index Storage

On a table item write, DynamoDB automatically copies the right attribute set to the required local secondary indexes. This charges your account. The space used results from the sum of the table primary key byte size, the index key attribute byte size, any present projected attribute byte size, and 100 bytes of overhead for each index item. Estimate storage needs by estimating the average index item size and multiplying by the table item quantity.

Using Java to Work with Local Secondary Indexes

Create a local secondary index by first creating a DynamoDB class instance. Then, create a CreateTableRequest class instance with the necessary request information. Finally, use the createTable method.
Example

DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(
   new ProfileCredentialsProvider()));
String tableName = "Tools";

CreateTableRequest createTableRequest =
   new CreateTableRequest().withTableName(tableName);

//Provisioned Throughput
createTableRequest.setProvisionedThroughput(new ProvisionedThroughput()
   .withReadCapacityUnits((long)5)
   .withWriteCapacityUnits((long)5));

//Attributes
ArrayList<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
attributeDefinitions.add(new AttributeDefinition()
   .withAttributeName("Make")
   .withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition()
   .withAttributeName("Model")
   .withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition()
   .withAttributeName("Line")
   .withAttributeType("S"));
createTableRequest.setAttributeDefinitions(attributeDefinitions);

//Key Schema
ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
tableKeySchema.add(new KeySchemaElement()
   .withAttributeName("Make")
   .withKeyType(KeyType.HASH));    //Partition key
tableKeySchema.add(new KeySchemaElement()
   .withAttributeName("Model")
   .withKeyType(KeyType.RANGE));   //Sort key
createTableRequest.setKeySchema(tableKeySchema);

ArrayList<KeySchemaElement> indexKeySchema = new ArrayList<KeySchemaElement>();
indexKeySchema.add(new KeySchemaElement()
   .withAttributeName("Make")
   .withKeyType(KeyType.HASH));    //Partition key
indexKeySchema.add(new KeySchemaElement()
   .withAttributeName("Line")
   .withKeyType(KeyType.RANGE));   //Sort key

Projection projection = new Projection()
   .withProjectionType(ProjectionType.INCLUDE);

//"Style" is an illustrative non-key attribute; the source text was cut off
//at this point, so the remainder follows the usual CreateTable pattern.
ArrayList<String> nonKeyAttributes = new ArrayList<String>();
nonKeyAttributes.add("Style");
projection.setNonKeyAttributes(nonKeyAttributes);

LocalSecondaryIndex localSecondaryIndex = new LocalSecondaryIndex()
   .withIndexName("LineIndex")
   .withKeySchema(indexKeySchema)
   .withProjection(projection);

ArrayList<LocalSecondaryIndex> localSecondaryIndexes = new ArrayList<LocalSecondaryIndex>();
localSecondaryIndexes.add(localSecondaryIndex);
createTableRequest.setLocalSecondaryIndexes(localSecondaryIndexes);

Table table = dynamoDB.createTable(createTableRequest);
System.out.println(table.getDescription());
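The read-unit and storage arithmetic stated in the throughput and storage discussions above can be checked with plain Java. This is a sketch of the stated rules (4KB read rounding, halved cost for eventually consistent reads, 100-byte index item overhead), not an SDK call; the byte sizes in main are illustrative:

```java
public class LsiCostSketch {
    // Strongly consistent query: one read unit per 4 KB of result,
    // rounded up to the nearest 4 KB.
    static long readUnitsStrong(long resultBytes) {
        return (resultBytes + 4095) / 4096;
    }

    // Eventually consistent reads cost half as much.
    static double readUnitsEventual(long resultBytes) {
        return readUnitsStrong(resultBytes) / 2.0;
    }

    // Per-item index storage: table primary key bytes + index key bytes
    // + projected attribute bytes + 100 bytes of overhead.
    static long indexItemBytes(long tableKeyBytes, long indexKeyBytes, long projectedBytes) {
        return tableKeyBytes + indexKeyBytes + projectedBytes + 100;
    }

    public static void main(String[] args) {
        System.out.println(readUnitsStrong(6000));       // 2 (6000 B rounds up to 8 KB)
        System.out.println(readUnitsEventual(6000));     // 1.0
        System.out.println(indexItemBytes(20, 20, 400)); // 540
    }
}
```

Multiplying the per-item figure by the expected item count per partition key value tells you how close a partition is to the 10GB limit.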

DynamoDB – Data Backup

DynamoDB – Data Backup

Utilize Data Pipeline's import/export functionality to perform backups. How you execute a backup depends on whether you use the GUI console or use Data Pipeline directly (API). Either create separate pipelines for each table when using the console, or import/export multiple tables in a single pipeline if using the direct option.

Exporting and Importing Data

You must create an Amazon S3 bucket prior to performing an export. You can export from one or more tables. Perform the following four-step process to execute an export −

Step 1 − Log in to the AWS Management Console and open the Data Pipeline console located at https://console.aws.amazon.com/datapipeline/

Step 2 − If you have no pipelines in the AWS region used, select Get started now. If you have one or more, select Create new pipeline.

Step 3 − On the creation page, enter a name for your pipeline. Choose Build using a template for the Source parameter. Select Export DynamoDB table to S3 from the list. Enter the source table in the Source DynamoDB table name field. Enter the destination S3 bucket in the Output S3 Folder text box using the following format: s3://nameOfBucket/region/nameOfFolder. Enter an S3 destination for the log file in the S3 location for logs text box.

Step 4 − Select Activate after entering all settings. The pipeline may take several minutes to finish its creation process. Use the console to monitor its status. Confirm successful processing with the S3 console by viewing the exported file.

Importing Data

Successful imports can only happen if the following conditions are true: you created a destination table, the destination and source use identical names, and the destination and source use identical key schema. You can use a populated destination table; however, imports replace data items sharing a key with source items, and also add excess items to the table. The destination can also use a different region.
Though you can export multiple sources, you can only import one per operation. You can perform an import by adhering to the following steps −

Step 1 − Log in to the AWS Management Console, and then open the Data Pipeline console.

Step 2 − If you intend to execute a cross-region import, select the destination region.

Step 3 − Select Create new pipeline.

Step 4 − Enter the pipeline name in the Name field. Choose Build using a template for the Source parameter, and in the template list, select Import DynamoDB backup data from S3. Enter the location of the source file in the Input S3 Folder text box. Enter the destination table name in the Target DynamoDB table name field. Then enter the location for the log file in the S3 location for logs text box.

Step 5 − Select Activate after entering all settings. The import starts immediately after the pipeline creation. It may take several minutes for the pipeline to complete the creation process.

Errors

When errors occur, the Data Pipeline console displays ERROR as the pipeline status. Clicking the pipeline with an error takes you to its detail page, which reveals every step of the process and the point at which the failure occurred. Log files within also provide some insight.

You can review the common causes of errors as follows −

- The destination table for an import does not exist, or does not use a key schema identical to the source.
- The S3 bucket does not exist, or you do not have read/write permissions for it.
- The pipeline timed out.
- You do not have the necessary export/import permissions.
- Your AWS account reached its resource limit.

Global Secondary Indexes

DynamoDB – Global Secondary Indexes

Applications requiring various query types with different attributes can use one or more global secondary indexes to perform these detailed queries. For example − a system keeping track of users, their login status, and their time logged in. The growth of data in this example slows queries on it.

Global secondary indexes accelerate queries by organizing a selection of attributes from a table. They employ primary keys in sorting data, and require neither key table attributes nor a key schema identical to the table.

All global secondary indexes must include a partition key, with the option of a sort key. The index key schema can differ from the table, and index key attributes can use any top-level string, number, or binary table attributes. In a projection, you can use other table attributes; however, queries do not retrieve from parent tables.

Attribute Projections

Projections consist of an attribute set copied from a table to a secondary index. A projection always occurs with the table partition key and sort key. In queries, projections allow DynamoDB access to any attribute of the projection; they essentially exist as their own table.

In a secondary index creation, you must specify attributes for projection. DynamoDB offers three ways to perform this task −

- KEYS_ONLY − All index items consist of table partition and sort key values, and index key values. This creates the smallest index.
- INCLUDE − It includes KEYS_ONLY attributes and specified non-key attributes.
- ALL − It includes all source table attributes, creating the largest possible index.

Note the tradeoffs in projecting attributes into a global secondary index, which relate to throughput and storage cost. Consider the following points −

- If you only need access to a few attributes, with low latency, project only those you need. This reduces storage and write costs.
- If an application frequently accesses certain non-key attributes, project them, because the storage costs pale in comparison to scan consumption.
- You can project large sets of frequently accessed attributes; however, this carries a high storage cost.
- Use KEYS_ONLY for infrequent table queries and frequent writes/updates. This controls size, but still offers good performance on queries.

Global Secondary Index Queries and Scans

You can utilize queries for accessing one or more items in an index. You must specify the index and table name, the desired attributes, and conditions, with the option to return results in ascending or descending order. You can also utilize scans to get all index data. This requires the table and index name. You utilize a filter expression to retrieve specific data.

Table and Index Data Synchronization

DynamoDB automatically performs synchronization on indexes with their parent table. Each modifying operation on items causes asynchronous updates; however, applications do not write to indexes directly.

You need to understand the impact of DynamoDB maintenance on indexes. On creation of an index, you specify key attributes and data types, which means that on a write, those data types must match key schema data types. On item creation or deletion, indexes update in an eventually consistent manner; however, updates to data propagate in a fraction of a second (unless a system failure of some type occurs). You must account for this delay in applications.

Throughput Considerations in Global Secondary Indexes

Multiple global secondary indexes impact throughput. Index creation requires capacity unit specifications, which exist separate from the table, resulting in operations consuming index capacity units rather than table units. This can result in throttling if a query or write exceeds provisioned throughput. View throughput settings by using DescribeTable.

Read Capacity

Global secondary indexes deliver eventual consistency.
In queries, DynamoDB performs provision calculations identical to those used for tables, with the lone difference of using index entry size rather than item size. The limit of a query return remains 1MB, which includes attribute name sizes and values across every returned item.

Write Capacity

When write operations occur, the affected index consumes write units. Write throughput costs are the sum of write capacity units consumed in table writes and units consumed in index updates. A successful write operation requires sufficient capacity, or it results in throttling.

Write costs also remain dependent on certain factors, some of which are as follows −

- New items defining indexed attributes, or item updates defining previously undefined indexed attributes, use a single write operation to add the item to the index.
- Updates changing an indexed key attribute value use two writes to delete an item and write a new one.
- A table write triggering deletion of an indexed attribute uses a single write to erase the old item projection in the index.
- Items absent from the index prior to and after an update operation use no writes.
- Updates changing only a projected attribute value, and not an indexed key attribute value, use one write to update the values of projected attributes in the index.

All these factors assume an item size of less than or equal to 1KB.

Global Secondary Index Storage

On an item write, DynamoDB automatically copies the right set of attributes to any indexes where the attributes must exist. This impacts your account by charging it for table item storage and attribute storage. The space used results from the sum of these quantities −

- Byte size of the table primary key
- Byte size of the index key attribute
- Byte size of the projected attributes
- 100 bytes of overhead per index item

You can estimate storage needs by estimating the average item size and multiplying by the quantity of table items with the global secondary index key attributes.
DynamoDB does not write item data for a table item with an undefined attribute defined as an index partition or sort key.

Global Secondary Index CRUD

Create a table with global secondary indexes by using the CreateTable operation paired with the GlobalSecondaryIndexes parameter. You must specify an attribute to serve as the index partition key, and can optionally use another for the index sort key. All index key attributes must be string, number, or binary scalars. You must also specify provisioned throughput settings for each global secondary index.
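The write-cost factors listed above can be condensed into a small decision helper. This is an illustrative sketch of the stated rules (assuming index entries of 1KB or less), not an SDK call:

```java
public class GsiWriteCostSketch {
    // Index write units consumed by one table write, following the
    // factors above: whether the item appears in the index before and
    // after the write, and whether an indexed key attribute value changed.
    static int indexWrites(boolean inIndexBefore, boolean inIndexAfter,
                           boolean keyValueChanged) {
        if (!inIndexBefore && !inIndexAfter) return 0; // never indexed: no writes
        if (!inIndexBefore && inIndexAfter)  return 1; // add new index entry
        if (inIndexBefore && !inIndexAfter)  return 1; // delete old projection
        return keyValueChanged ? 2 : 1;                // delete + re-add, or in-place update
    }

    public static void main(String[] args) {
        System.out.println(indexWrites(false, true, false)); // 1
        System.out.println(indexWrites(true, true, true));   // 2
        System.out.println(indexWrites(false, false, false));// 0
    }
}
```

Summing this figure with the table's own write cost gives the total provisioned write units a single operation consumes.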

DynamoDB – Conditions

DynamoDB – Conditions

In granting permissions, DynamoDB allows specifying conditions for them through a detailed IAM policy with condition keys. This supports settings like access to specific items and attributes.

Note − DynamoDB does not support tags.

Detailed Control

Several conditions allow specificity down to items and attributes, like granting read-only access to specific items based on user account. Implement this level of control with conditioned IAM policies, which manage the security credentials. Then simply apply the policy to the desired users, groups, and roles. Web Identity Federation, a topic discussed later, also provides a way to control user access through Amazon, Facebook, and Google logins.

The condition element of an IAM policy implements access control. You simply add it to a policy. An example of its use consists of denying or permitting access to table items and attributes. The condition element can also employ condition keys to limit permissions. You can review the following two examples of condition keys −

- dynamodb:LeadingKeys − It prevents item access by users without an ID matching the partition key value.
- dynamodb:Attributes − It prevents users from accessing or operating on attributes outside of those listed.

On evaluation, IAM policies result in a true or false value. If any part evaluates to false, the whole policy evaluates to false, which results in denial of access. Be sure to specify all required information in condition keys to ensure users have appropriate access.

Predefined Condition Keys

AWS offers a collection of predefined condition keys, which apply to all services. They support a broad range of uses and fine detail in examining users and access.

Note − Condition keys are case sensitive.

You can review a selection of the following service-specific keys −

- dynamodb:LeadingKeys − It represents a table's first key attribute: the partition key. Use the ForAllValues modifier in conditions.
- dynamodb:Select − It represents a query/scan request's Select parameter. It must be of the value ALL_ATTRIBUTES, ALL_PROJECTED_ATTRIBUTES, SPECIFIC_ATTRIBUTES, or COUNT.
- dynamodb:Attributes − It represents an attribute name list within a request, or attributes returned from a request. Its values and their functions resemble API action parameters, e.g., BatchGetItem uses AttributesToGet.
- dynamodb:ReturnValues − It represents a request's ReturnValues parameter, and can use these values: ALL_OLD, UPDATED_OLD, ALL_NEW, UPDATED_NEW, and NONE.
- dynamodb:ReturnConsumedCapacity − It represents a request's ReturnConsumedCapacity parameter, and can use these values: TOTAL and NONE.
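A policy combining these keys might look like the following sketch. The account ID, table name, and identity-provider substitution variable are placeholders; a real policy would use your own values:

```json
{
   "Version": "2012-10-17",
   "Statement": [{
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/GameScores",
      "Condition": {
         "ForAllValues:StringEquals": {
            "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"],
            "dynamodb:Attributes": ["UserId", "GameTitle", "TopScore"]
         }
      }
   }]
}
```

Here dynamodb:LeadingKeys restricts each user to items whose partition key matches their provider-supplied ID, and dynamodb:Attributes restricts reads to the three listed attributes.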

DynamoDB – Indexes

DynamoDB – Indexes

DynamoDB uses indexes for primary key attributes to improve accesses. They accelerate application accesses and data retrieval, and support better performance by reducing application lag.

Secondary Index

A secondary index holds an attribute subset and an alternate key. You use it through either a query or scan operation, which targets the index. Its contents include attributes you project, or copy. In creation, you define an alternate key for the index, and any attributes you wish to project into the index. DynamoDB then performs a copy of the attributes into the index, including primary key attributes sourced from the table. After performing these tasks, you simply use a query/scan as if performing on a table.

DynamoDB automatically maintains all secondary indexes. On item operations, such as adding or deleting, it updates any indexes on the target table.

DynamoDB offers two types of secondary indexes −

- Global Secondary Index − This index includes a partition key and sort key, which may differ from the source table. It uses the label "global" due to the capability of queries/scans on the index to span all table data, across all partitions.
- Local Secondary Index − This index shares a partition key with the table, but uses a different sort key. Its "local" nature results from all of its partitions scoping to a table partition with an identical partition key value.

The best type of index to use depends on application needs. Consider the following differences between the two −

Key Schema − A global secondary index uses a simple or composite primary key. A local secondary index always uses a composite primary key.

Key Attributes − The global index's partition key and sort key can be any string, number, or binary table attributes. The local index's partition key is the attribute shared with the table partition key, while its sort key can be any string, number, or binary table attribute.

Size Limits Per Partition Key Value − Global indexes carry no size limitations. Local indexes impose a 10GB maximum limit on the total size of indexed items associated with a partition key value.

Online Index Operations − You can spawn global indexes at table creation, add them to existing tables, or delete existing ones. You must create local indexes at table creation, and cannot delete them or add them to existing tables.

Queries − A global index allows queries covering the entire table, and every partition. Local indexes address single partitions through the partition key value provided in the query.

Consistency − Queries of global indexes only offer the eventually consistent option. Queries of local indexes offer the choice of eventually consistent or strongly consistent reads.

Throughput Cost − A global index includes its own throughput settings for reads and writes; queries/scans consume capacity from the index, not the table, which also applies to table write updates. Local index queries/scans consume table read capacity; table writes update local indexes and consume table capacity units.

Projection − Global index queries/scans can only request attributes projected into the index, with no retrievals of table attributes. Local index queries/scans can request attributes not projected, with automatic fetches of them.

When creating multiple tables with secondary indexes, do it sequentially, meaning make a table and wait for it to reach ACTIVE state before creating another, and again wait. DynamoDB does not permit concurrent creation.

Each secondary index requires certain specifications −

- Type − Specify local or global.
- Name − It uses naming rules identical to tables.
- Key Schema − Only top-level string, number, or binary types are permitted, with the index type determining other requirements.
- Attributes for Projection − DynamoDB automatically projects them, and allows any data type.
- Throughput − Specify read/write capacity for global secondary indexes.

The limit for indexes remains 5 global and 5 local per table. You can access detailed information about indexes with DescribeTable. It returns the name, size, and item count.

Note − These values update every 6 hours.

In queries or scans used to access index data, provide the table and index names, the desired attributes for the result, and any conditional statements. DynamoDB offers the option to return results in either ascending or descending order.

Note − The deletion of a table also deletes all of its indexes.

DynamoDB – Scan

DynamoDB – Scan

Scan operations read all table items or secondary indexes. The default behavior returns all data attributes of all items within an index or table. Employ the ProjectionExpression parameter to filter attributes.

Every scan returns a result set, even on finding no matches, which results in an empty set. Scans retrieve no more than 1MB, with the option to filter data.

Note − The parameters and filtering of scans also apply to querying.

Types of Scan Operations

Filtering − Scan operations offer fine filtering through filter expressions, which modify data after scans or queries, before returning results. The expressions use comparison operators. Their syntax resembles condition expressions, with the exception of key attributes, which filter expressions do not permit. You cannot use a partition or sort key in a filter expression.

Note − The 1MB limit applies prior to any application of filtering.

Throughput Specifications − Scans consume throughput; however, consumption focuses on item size rather than returned data. The consumption remains the same whether you request every attribute or only a few, and using or not using a filter expression also does not impact consumption.

Pagination − DynamoDB paginates results, dividing them into specific pages. The 1MB limit applies to returned results, and when you exceed it, another scan becomes necessary to gather the rest of the data. The LastEvaluatedKey value allows you to perform this subsequent scan. Simply apply the value to ExclusiveStartKey. When the LastEvaluatedKey value becomes null, the operation has completed all pages of data. However, a non-null value does not automatically mean more data remains. Only a null value indicates status.

The Limit Parameter − The Limit parameter manages the result size. DynamoDB uses it to establish the number of items to process before returning data, and it does not work outside of that scope.
If you set a value of x, DynamoDB returns the first x matching items. The LastEvaluatedKey value also applies in cases of limit parameters yielding partial results. Use it to complete scans.

Result Count − Responses to queries and scans also include information related to ScannedCount and Count, which quantify items scanned/queried and items returned, respectively. If you do not filter, their values are identical. When you exceed 1MB, the counts represent only the portion processed.

Consistency − Query results and scan results are eventually consistent reads; however, you can request strongly consistent reads as well. Use the ConsistentRead parameter to change this setting.

Note − Consistent read settings impact consumption by using double the capacity units when set to strongly consistent.

Performance − Queries offer better performance than scans, because scans crawl the full table or secondary index, resulting in a sluggish response and heavy throughput consumption. Scans work best for small tables and searches with fewer filters; however, you can design lean scans by obeying a few best practices, such as avoiding sudden, accelerated read activity and exploiting parallel scans. A query finds a certain range of keys satisfying a given condition, with performance dictated by the amount of data it retrieves rather than the volume of keys. The parameters of the operation and the number of matches specifically impact performance.

Parallel Scan

Scan operations perform processing sequentially by default. They return data in 1MB portions, which prompts the application to fetch the next portion. This results in long scans for large tables and indexes.

This characteristic also means scans may not always fully exploit the available throughput. DynamoDB distributes table data across multiple partitions, and scan throughput remains limited to a single partition due to its single-partition operation.
A solution for this problem comes from logically dividing tables or indices into segments, with “workers” scanning the segments in parallel (concurrently). A parallel scan uses the Segment and TotalSegments parameters to specify the segment scanned by a certain worker and the total quantity of segments processed.

Worker Number

You must experiment with worker values (the Segment parameter) to achieve the best application performance.

Note − Parallel scans with large sets of workers impact throughput by possibly consuming all of it. Manage this issue with the Limit parameter, which you can use to stop a single worker from consuming all the throughput.

The following is a scan example.

Note − The following program may assume a previously created data source. Before attempting to execute, acquire supporting libraries and create necessary data sources (tables with required characteristics, or other referenced sources). This example also uses the Eclipse IDE, an AWS credentials file, and the AWS Toolkit within an Eclipse AWS Java Project.
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.ScanOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;

public class ScanOpSample {
   static DynamoDB dynamoDB = new DynamoDB(
      new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
   static String tableName = "ProductList";

   public static void main(String[] args) throws Exception {
      findProductsUnderOneHun();   // finds products under 100 dollars
   }

   private static void findProductsUnderOneHun() {
      Table table = dynamoDB.getTable(tableName);
      Map<String, Object> expressionAttributeValues = new HashMap<String, Object>();
      expressionAttributeValues.put(":pr", 100);

      ItemCollection<ScanOutcome> items = table.scan(
         "Price < :pr",                                // FilterExpression
         "ID, Nomenclature, ProductCategory, Price",   // ProjectionExpression
         null,                                         // No ExpressionAttributeNames
         expressionAttributeValues);

      System.out.println("Scanned " + tableName + " to find items under $100.");
      Iterator<Item> iterator = items.iterator();

      while (iterator.hasNext()) {
         System.out.println(iterator.next().toJSONPretty());
      }
   }
}
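The LastEvaluatedKey/ExclusiveStartKey handshake described in the Pagination section can be sketched without the AWS SDK. The sketch below is a minimal simulation over an in-memory list, where an integer index stands in for the key and a small page size stands in for the 1MB limit; the class and method names (`ScanPaginationSketch`, `scanPage`) are illustrative, not part of the DynamoDB API.

```java
import java.util.ArrayList;
import java.util.List;

// Simulates the scan pagination handshake: each "page" returns up to
// `limit` items plus a lastEvaluatedKey; a null key signals completion.
public class ScanPaginationSketch {
   static final List<String> ITEMS = new ArrayList<>();
   static { for (int i = 0; i < 25; i++) ITEMS.add("item-" + i); }

   // Returns { pageItems, lastEvaluatedKey } starting at exclusiveStartKey
   // (modeled here as a plain index; null means start from the beginning).
   static Object[] scanPage(Integer exclusiveStartKey, int limit) {
      int start = (exclusiveStartKey == null) ? 0 : exclusiveStartKey;
      int end = Math.min(start + limit, ITEMS.size());
      List<String> page = new ArrayList<>(ITEMS.subList(start, end));
      Integer lastEvaluatedKey = (end < ITEMS.size()) ? end : null; // null => done
      return new Object[] { page, lastEvaluatedKey };
   }

   @SuppressWarnings("unchecked")
   public static void main(String[] args) {
      List<String> all = new ArrayList<>();
      Integer key = null;
      int pages = 0;
      do {
         Object[] result = scanPage(key, 10);   // limit of 10 per "page"
         all.addAll((List<String>) result[0]);
         key = (Integer) result[1];             // feed back as ExclusiveStartKey
         pages++;
      } while (key != null);                    // stop only on a null key
      System.out.println(pages + " pages, " + all.size() + " items");
      // prints "3 pages, 25 items"
   }
}
```

The loop terminates only when the returned key is null, mirroring the rule that only a null LastEvaluatedKey indicates the scan has completed all pages.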

DynamoDB – Update Items

Updating an item in DynamoDB mainly consists of specifying the full primary key and table name for the item, plus a new value for each attribute you modify. The operation uses UpdateItem, which modifies an existing item, or creates a new item if none exists.

In updates, you might want to track the changes by displaying the original and new values, before and after the operation. UpdateItem uses the ReturnValues parameter to achieve this.

Note − The operation does not report capacity unit consumption, but you can use the ReturnConsumedCapacity parameter.

Use the GUI console, Java, or any other tool to perform this task.

How to Update Items Using GUI Tools?

Navigate to the console. In the navigation pane on the left side, select Tables. Choose the table needed, and then select the Items tab. Choose the item desired for an update, and select Actions | Edit. Modify any attributes or values necessary in the Edit Item window.

Update Items Using Java

Using Java in item update operations requires creating a Table class instance and calling its updateItem method. Then you specify the item's primary key and provide an UpdateExpression detailing attribute modifications.
The following is an example of the same −

DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(
   new ProfileCredentialsProvider()));
Table table = dynamoDB.getTable("ProductList");

Map<String, String> expressionAttributeNames = new HashMap<String, String>();
expressionAttributeNames.put("#M", "Make");
expressionAttributeNames.put("#P", "Price");
expressionAttributeNames.put("#N", "ID");

Map<String, Object> expressionAttributeValues = new HashMap<String, Object>();
expressionAttributeValues.put(":val1",
   new HashSet<String>(Arrays.asList("Make1", "Make2")));
expressionAttributeValues.put(":val2", 1);   // Price

UpdateItemOutcome outcome = table.updateItem(
   "internalID",                                    // key attribute name
   111,                                             // key attribute value
   "add #M :val1 set #P = #P - :val2 remove #N",    // UpdateExpression
   expressionAttributeNames,
   expressionAttributeValues);

The updateItem method also allows for specifying conditions, which can be seen in the following example −

Table table = dynamoDB.getTable("ProductList");
Map<String, String> expressionAttributeNames = new HashMap<String, String>();
expressionAttributeNames.put("#P", "Price");

Map<String, Object> expressionAttributeValues = new HashMap<String, Object>();
expressionAttributeValues.put(":val1", 44);   // change Price to 44
expressionAttributeValues.put(":val2", 15);   // only if currently 15

UpdateItemOutcome outcome = table.updateItem(
   new PrimaryKey("internalID", 111),
   "set #P = :val1",    // UpdateExpression
   "#P = :val2",        // ConditionExpression
   expressionAttributeNames,
   expressionAttributeValues);

Update Items Using Counters

DynamoDB allows atomic counters, which means using UpdateItem to increment or decrement attribute values without interfering with other requests; furthermore, the counter updates always succeed. The following is an example that explains how it can be done.

Note − The following sample may assume a previously created data source.
Before attempting to execute, acquire supporting libraries and create necessary data sources (tables with required characteristics, or other referenced sources). This sample also uses the Eclipse IDE, an AWS credentials file, and the AWS Toolkit within an Eclipse AWS Java Project.

package com.amazonaws.codesamples.document;

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.UpdateItemOutcome;
import com.amazonaws.services.dynamodbv2.document.spec.UpdateItemSpec;
import com.amazonaws.services.dynamodbv2.document.utils.NameMap;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import com.amazonaws.services.dynamodbv2.model.ReturnValue;

public class UpdateItemOpSample {
   static DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(
      new ProfileCredentialsProvider()));
   static String tblName = "ProductList";

   public static void main(String[] args) throws IOException {
      createItems();
      updateAddNewAttribute();
   }

   private static void createItems() {
      Table table = dynamoDB.getTable(tblName);
      try {
         Item item = new Item()
            .withPrimaryKey("ID", 303)
            .withString("Nomenclature", "Polymer Blaster 4000")
            .withStringSet("Manufacturers",
               new HashSet<String>(Arrays.asList("XYZ Inc.", "LMNOP Inc.")))
            .withNumber("Price", 50000)
            .withBoolean("InProduction", true)
            .withString("Category", "Laser Cutter");
         table.putItem(item);

         item = new Item()
            .withPrimaryKey("ID", 313)
            .withString("Nomenclature", "Agitatatron 2000")
            .withStringSet("Manufacturers",
               new HashSet<String>(Arrays.asList("XYZ Inc.", "CDE Inc.")))
            .withNumber("Price", 40000)
            .withBoolean("InProduction", true)
            .withString("Category", "Agitator");
         table.putItem(item);
      } catch (Exception e) {
         System.err.println("Cannot create items.");
         System.err.println(e.getMessage());
      }
   }

   private static void updateAddNewAttribute() {
      Table table = dynamoDB.getTable(tblName);
      try {
         UpdateItemSpec updateItemSpec = new UpdateItemSpec()
            .withPrimaryKey("ID", 303)
            .withUpdateExpression("set #na = :val1")
            .withNameMap(new NameMap()
               .with("#na", "NewAttribute"))
            .withValueMap(new ValueMap()
               .withString(":val1", "A value"))
            .withReturnValues(ReturnValue.ALL_NEW);
         UpdateItemOutcome outcome = table.updateItem(updateItemSpec);

         // Confirm
         System.out.println("Displaying updated item...");
         System.out.println(outcome.getItem().toJSONPretty());
      } catch (Exception e) {
         System.err.println("Cannot add an attribute in " + tblName);
         System.err.println(e.getMessage());
      }
   }
}
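A conditional updateItem applies the write only when its condition expression holds; otherwise the request fails and the item is left untouched. As a rough illustration of that semantics, not an AWS SDK call, the pure-Java sketch below applies a new value to an in-memory item only when the attribute currently equals an expected value; `ConditionalUpdateSketch` and `conditionalSet` are hypothetical names used for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Mimics UpdateItem with a condition expression: the write is applied
// only if the attribute currently equals the expected value; otherwise
// the "request" fails and the item is left unchanged.
public class ConditionalUpdateSketch {
   static boolean conditionalSet(Map<String, Object> item, String attr,
                                 Object newValue, Object expected) {
      Object current = item.get(attr);
      if (current == null || !current.equals(expected)) {
         return false;              // analogue of ConditionalCheckFailedException
      }
      item.put(attr, newValue);     // condition held: apply the update
      return true;
   }

   public static void main(String[] args) {
      Map<String, Object> item = new HashMap<>();
      item.put("ID", 111);
      item.put("Price", 15);

      // Mirrors the conditional example: set Price to 44 only if it is 15.
      boolean first = conditionalSet(item, "Price", 44, 15);    // succeeds
      boolean second = conditionalSet(item, "Price", 44, 15);   // fails: now 44
      System.out.println(first + " " + second + " Price=" + item.get("Price"));
      // prints "true false Price=44"
   }
}
```

The second call fails because the first already changed the value, which is exactly how a condition expression protects against concurrent or repeated writes.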

DynamoDB – Getting Items

Retrieving an item in DynamoDB requires using GetItem and specifying the table name and item primary key. Be sure to include the complete primary key rather than omitting a portion, for example, omitting the sort key of a composite key.

GetItem behaviour conforms to three defaults −

It executes as an eventually consistent read.
It provides all attributes.
It does not detail its capacity unit consumption.

Parameters exist to override each of these defaults.

Retrieve an Item

DynamoDB ensures reliability by maintaining multiple copies of each item across multiple servers. Each successful write creates these copies, but it takes substantial time for all of them to converge, which is what makes reads eventually consistent: a read attempted immediately after a write may not return the item just written. You can change the default eventually consistent read of GetItem; however, the cost of more current data is the consumption of more capacity units, specifically, two times as much.

Note − DynamoDB typically achieves consistency across every copy within a second.

You can use the GUI console, Java, or another tool to perform this task.

Item Retrieval Using Java

Using Java in item retrieval operations requires creating a DynamoDB class instance and a Table class instance, and calling the Table instance's getItem method. Then specify the primary key of the item. You can review the following example −

DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(
   new ProfileCredentialsProvider()));
Table table = dynamoDB.getTable("ProductList");
Item item = table.getItem("IDnum", 109);

In some cases, you need to specify the parameters for this operation.
The following example uses GetItemSpec with .withProjectionExpression to specify the retrieval −

GetItemSpec spec = new GetItemSpec()
   .withPrimaryKey("IDnum", 122)
   .withProjectionExpression("IDnum, EmployeeName, Department")
   .withConsistentRead(true);
Item item = table.getItem(spec);
System.out.println(item.toJSONPretty());

You can also review the following bigger example for better understanding.

Note − The following sample may assume a previously created data source. Before attempting to execute, acquire supporting libraries and create necessary data sources (tables with required characteristics, or other referenced sources). This sample also uses the Eclipse IDE, an AWS credentials file, and the AWS Toolkit within an Eclipse AWS Java Project.

package com.amazonaws.codesamples.document;

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;

public class GetItemOpSample {
   static DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(
      new ProfileCredentialsProvider()));
   static String tblName = "ProductList";

   public static void main(String[] args) throws IOException {
      createItems();
      retrieveItem();
   }

   private static void createItems() {
      Table table = dynamoDB.getTable(tblName);
      try {
         Item item = new Item()
            .withPrimaryKey("ID", 303)
            .withString("Nomenclature", "Polymer Blaster 4000")
            .withStringSet("Manufacturers",
               new HashSet<String>(Arrays.asList("XYZ Inc.", "LMNOP Inc.")))
            .withNumber("Price", 50000)
            .withBoolean("InProduction", true)
            .withString("Category", "Laser Cutter");
         table.putItem(item);

         item = new Item()
            .withPrimaryKey("ID", 313)
            .withString("Nomenclature", "Agitatatron 2000")
            .withStringSet("Manufacturers",
               new HashSet<String>(Arrays.asList("XYZ Inc.", "CDE Inc.")))
            .withNumber("Price", 40000)
            .withBoolean("InProduction", true)
            .withString("Category", "Agitator");
         table.putItem(item);
      } catch (Exception e) {
         System.err.println("Cannot create items.");
         System.err.println(e.getMessage());
      }
   }

   private static void retrieveItem() {
      Table table = dynamoDB.getTable(tblName);
      try {
         Item item = table.getItem("ID", 303,
            "ID, Nomenclature, Manufacturers", null);
         System.out.println("Displaying retrieved items...");
         System.out.println(item.toJSONPretty());
      } catch (Exception e) {
         System.err.println("Cannot retrieve items.");
         System.err.println(e.getMessage());
      }
   }
}
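The capacity-unit cost mentioned in this chapter (a strongly consistent read consumes twice the capacity of an eventually consistent one) follows simple arithmetic: DynamoDB rounds the item size up to the next 4 KB block and charges one read capacity unit per block for a strongly consistent read, or half a unit for an eventually consistent read. A minimal sketch of that calculation, where `readCapacityUnits` is an illustrative helper and not an SDK method:

```java
public class ReadCapacitySketch {
   // One RCU covers a strongly consistent read of up to 4 KB;
   // an eventually consistent read of the same data costs half as much.
   static double readCapacityUnits(int itemSizeBytes, boolean stronglyConsistent) {
      int fourKbBlocks = (itemSizeBytes + 4095) / 4096;   // round up to 4 KB blocks
      return stronglyConsistent ? fourKbBlocks : fourKbBlocks / 2.0;
   }

   public static void main(String[] args) {
      // A 6 KB item occupies two 4 KB blocks.
      System.out.println(readCapacityUnits(6144, true));    // prints 2.0
      System.out.println(readCapacityUnits(6144, false));   // prints 1.0
   }
}
```

This is why setting ConsistentRead (or GetItemSpec.withConsistentRead(true)) doubles the consumption of an otherwise identical read.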