AWS – Discussion

Discuss Amazon Web Services

Amazon Web Services (AWS) is Amazon's cloud web hosting platform that offers flexible, reliable, scalable, easy-to-use, and cost-effective solutions. This tutorial covers various important topics illustrating how AWS works and how it is beneficial to run your website on Amazon Web Services.

AWS – Data Pipeline

Amazon Web Services – Data Pipeline

AWS Data Pipeline is a web service designed to make it easier for users to integrate data spread across multiple AWS services and analyze it from a single location. Using AWS Data Pipeline, data can be accessed from the source, processed, and then the results can be efficiently transferred to the respective AWS services.

How to Set Up Data Pipeline?
Following are the steps to set up a data pipeline −
Step 1 − Create the pipeline using the following steps.
Sign in to your AWS account.
Use this link to open the AWS Data Pipeline console − https://console.aws.amazon.com/datapipeline/
Select the region in the navigation bar.
Click the Create New Pipeline button.
Fill in the required details in the respective fields.
In the Source field, choose Build using a template and then select the template Getting Started using ShellCommandActivity. The Parameters section opens only when a template is selected. Leave the S3 input folder and Shell command to run with their default values.
Click the folder icon next to S3 output folder and select the bucket.
In Schedule, leave the values as default.
In Pipeline Configuration, leave logging enabled. Click the folder icon under S3 location for logs and select the bucket.
In Security/Access, leave the IAM roles set to their default values.
Click the Activate button.

How to Delete a Pipeline?
Deleting a pipeline also deletes all its associated objects.
Step 1 − Select the pipeline from the pipelines list.
Step 2 − Click the Actions button and then choose Delete.
Step 3 − A confirmation prompt window opens. Click Delete.

Features of AWS Data Pipeline
Simple and cost-efficient − Its drag-and-drop features make it easy to create a pipeline on the console. Its visual pipeline creator provides a library of pipeline templates. These templates make it easier to create pipelines for tasks like processing log files, archiving data to Amazon S3, etc.
Reliable − Its infrastructure is designed for fault-tolerant execution of activities. If failures occur in the activity logic or data sources, AWS Data Pipeline automatically retries the activity. If the failure persists, it sends a failure notification. These notification alerts can be configured for situations like successful runs, failures, delays in activities, etc.
Flexible − AWS Data Pipeline provides various features like scheduling, tracking, and error handling. It can be configured to take actions like running Amazon EMR jobs, executing SQL queries directly against databases, and executing custom applications running on Amazon EC2.
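Pipelines can also be created, activated, and deleted programmatically. The following is a minimal sketch using Python and boto3; the region, pipeline name, and unique ID are placeholder assumptions, and a real pipeline would additionally need activities and schedules supplied through put_pipeline_definition before activation.

   import boto3

   # Assumed region and names; replace with your own values.
   client = boto3.client("datapipeline", region_name="us-east-1")

   # Create an empty pipeline shell; uniqueId guards against accidental duplicates.
   pipeline = client.create_pipeline(name="demo-pipeline", uniqueId="demo-pipeline-001")
   pipeline_id = pipeline["pipelineId"]

   # A definition of activities, schedules, and resources would be added here
   # with client.put_pipeline_definition before the pipeline can do useful work.

   # Activate the pipeline so the scheduler starts running it.
   client.activate_pipeline(pipelineId=pipeline_id)

   # Deleting a pipeline also deletes all of its associated objects.
   client.delete_pipeline(pipelineId=pipeline_id)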

AWS – Storage Gateway

Amazon Web Services – Storage Gateway

AWS Storage Gateway provides integration between the on-premises IT environment and the AWS storage infrastructure. The user can store data in the AWS cloud for scalable, secure, and cost-efficient storage. AWS Storage Gateway offers two types of storage, i.e. volume based and tape based.

Volume Gateways
This storage type provides cloud-backed storage volumes which can be mounted as Internet Small Computer System Interface (iSCSI) devices from on-premises application servers.

Gateway-cached Volumes
AWS Storage Gateway stores all the on-premises application data in a storage volume in Amazon S3. A storage volume ranges from 1 GB to 32 TB in size, and a gateway supports up to 20 volumes with a total storage of 150 TB. We can attach these volumes as iSCSI devices from on-premises application servers. It uses two categories of local disks −
Cache storage disk − Every application requires storage volumes to store its data. This disk initially stores data that is to be written to the storage volumes in AWS; from there, the data waits in the upload buffer to be uploaded to Amazon S3. The cache storage disk also keeps the most recently accessed data for low-latency access: when the application needs data, the cache storage disk is checked before Amazon S3. There are a few guidelines to determine the amount of disk space to allocate for cache storage − allocate at least 20% of the existing file store size as cache storage, and it should be larger than the upload buffer.
Upload buffer disk − This type of storage disk is used to stage the data before it is uploaded to Amazon S3. The storage gateway uploads the data from the upload buffer to AWS over an SSL connection.
Snapshots − Sometimes we need to back up storage volumes in Amazon S3. These backups are incremental and are known as snapshots. The snapshots are stored in Amazon S3 as Amazon EBS snapshots. Incremental means that a new snapshot backs up only the data that has changed since the last snapshot. We can take snapshots either at a scheduled interval or on demand.

Gateway-stored Volumes
When the Virtual Machine (VM) is activated, gateway volumes are created and mapped to the on-premises direct-attached storage disks. Hence, when applications write/read data from the gateway storage volumes, the data is written to and read from the mapped on-premises disk. A gateway-stored volume allows you to store primary data locally and provides on-premises applications with low-latency access to the entire dataset. We can mount them as iSCSI devices on the on-premises application servers. A volume ranges from 1 GB to 16 TB in size, and a gateway supports up to 12 volumes with a maximum storage of 192 TB.

Gateway-Virtual Tape Library (VTL)
This storage type provides a virtual tape infrastructure that scales seamlessly with your business needs and eliminates the operational burden of provisioning, scaling, and maintaining a physical tape infrastructure. Each gateway-VTL is preconfigured with a media changer and tape drives, which are available to the existing client backup applications as iSCSI devices. Tape cartridges can be added later as required to archive the data. A few terms used in the architecture are explained below.
Virtual Tape − A virtual tape is similar to a physical tape cartridge. It is stored in the AWS cloud. We can create virtual tapes in two ways: by using the AWS Storage Gateway console or by using the AWS Storage Gateway API. The size of each virtual tape ranges from 100 GB to 2.5 TB. One gateway can store up to 150 TB and can hold a maximum of 1500 tapes at a time.
Virtual Tape Library (VTL) − Each gateway-VTL comes with one VTL. A VTL is similar to a physical tape library available on-premises with tape drives. The gateway first stores data locally, then asynchronously uploads it to the virtual tapes of the VTL.
Tape Drive − A VTL tape drive is similar to a physical tape drive that can perform I/O operations on a tape. Each VTL consists of 10 tape drives that are exposed to backup applications as iSCSI devices.
Media Changer − A VTL media changer is similar to the robot that moves tapes around in a physical tape library's storage slots and tape drives. Each VTL comes with one media changer that is exposed to backup applications as an iSCSI device.
Virtual Tape Shelf (VTS) − A VTS is used to archive tapes from a gateway-VTL to the VTS and vice versa.
Archiving Tapes − When the backup software ejects a tape, the gateway moves the tape to the VTS for storage. It is used for data archiving and backups.
Retrieving Tapes − Tapes archived to the VTS cannot be read directly. To read an archived tape, we need to retrieve it to the gateway-VTL either by using the AWS Storage Gateway console or by using the AWS Storage Gateway API.
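Snapshots of gateway volumes can also be taken programmatically. Below is a minimal sketch using Python and boto3; the region and the volume ARN are placeholder assumptions, and in practice the ARN would come from a list_volumes call against your gateway.

   import boto3

   client = boto3.client("storagegateway", region_name="us-east-1")  # assumed region

   # List gateways registered in this account and region.
   for gw in client.list_gateways()["Gateways"]:
       print(gw["GatewayARN"])

   # Placeholder ARN; replace with a real volume ARN from list_volumes().
   volume_arn = ("arn:aws:storagegateway:us-east-1:123456789012:"
                 "gateway/sgw-12345678/volume/vol-12345678")

   # Take a point-in-time, incremental snapshot (stored as an EBS snapshot).
   resp = client.create_snapshot(
       VolumeARN=volume_arn,
       SnapshotDescription="Scheduled backup of cached volume",
   )
   print(resp["SnapshotId"])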

AWS – Elastic Block Store

Amazon Web Services – Elastic Block Store

Amazon Elastic Block Store (EBS) is a block storage system used to store persistent data. Amazon EBS provides highly available, block-level storage volumes for use with EC2 instances. It has three types of volumes, i.e. General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. These three volume types differ in performance, characteristics, and cost.

EBS Volume Types
Following are the three types.

EBS General Purpose (SSD)
This volume type is suitable for small and medium workloads like root disk EC2 volumes, small and medium database workloads, workloads that access logs frequently, etc. By default, SSD supports 3 IOPS (Input/Output Operations Per Second) per GB, which means a 1 GB volume gives 3 IOPS and a 10 GB volume gives 30 IOPS. A single volume ranges from 1 GB to 1 TB in size. The cost is $0.10 per GB per month.

Provisioned IOPS (SSD)
This volume type is suitable for the most demanding I/O-intensive and transactional workloads, and for large relational, EMR, and Hadoop workloads, etc. By default, Provisioned IOPS SSD supports 30 IOPS per GB, which means a 10 GB volume gives 300 IOPS. A single volume ranges from 10 GB to 1 TB in size. The cost is $0.125 per GB per month for provisioned storage and $0.10 per provisioned IOPS per month.

EBS Magnetic Volumes
Formerly known as standard volumes, this volume type is suitable for workloads where data is accessed infrequently, e.g. data backups for recovery, log storage, etc. A single volume ranges from 10 GB to 1 TB in size. The cost is $0.05 per GB per month for provisioned storage and $0.05 per million I/O requests.

Volumes Attached to One Instance
Each account is limited to 20 EBS volumes. For a requirement of more than 20 EBS volumes, contact Amazon's support team. We can attach up to 20 volumes to a single instance, and each volume ranges from 1 GB to 1 TB in size.
In EC2 instances, we store data in local storage which is available only while the instance is running; when we shut down the instance, the data gets lost. Thus, when we need to persist anything, it is advised to save it on Amazon EBS, as we can access and read EBS volumes anytime once we attach the volume to an EC2 instance.

Amazon EBS Benefits
Reliable and secure storage − Each EBS volume is automatically replicated within its Availability Zone to protect against component failure.
Secure − Amazon's flexible access control policies allow you to specify who can access which EBS volumes. Access control plus encryption offers a strong defense-in-depth security strategy for data.
Higher performance − Amazon EBS uses SSD technology to deliver consistent I/O performance for applications.
Easy data backup − Data can be backed up by taking point-in-time snapshots of Amazon EBS volumes.

How to Set Up Amazon EBS?
Step 1 − Create an Amazon EBS volume using the following steps.
Open the Amazon EC2 console.
Select the region in the navigation bar where the volume is to be created.
In the navigation pane, select Volumes, then select Create Volume.
Provide the required information like volume type, size, IOPS, Availability Zone, etc., then click the Create button. The volume name can be seen in the volumes list.
Step 2 − Restore an EBS volume from a snapshot using the following steps.
Repeat steps 1 to 4 above to create a volume.
Type the ID of the snapshot from which the volume is to be restored in the Snapshot ID field, and select it from the list of suggested options.
If more storage is required, change the storage size in the Size field.
Click the Yes, Create button.
Step 3 − Attach an EBS volume to an instance using the following steps.
Open the Amazon EC2 console.
Select Volumes in the navigation pane.
Choose a volume and click the Attach Volume option. An Attach Volume dialog box will open.
Enter the name/ID of the instance to attach the volume to in the Instance field, or select it from the list of suggested options.
Click the Attach button.
Connect to the instance and make the volume available.
Step 4 − Detach a volume from an instance using the following steps.
First, unmount the device from within the instance (for example, with umount /dev/sdh on Linux).
Open the Amazon EC2 console.
In the navigation pane, select the Volumes option.
Choose a volume and click the Detach Volume option. A confirmation dialog box opens.
Click the Yes, Detach button to confirm.
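The same lifecycle (create, attach, snapshot, detach) can be scripted. Below is a minimal sketch using Python and boto3; the region, Availability Zone, and instance ID are placeholder assumptions.

   import boto3

   ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

   # Create a 10 GB General Purpose (SSD) volume in an Availability Zone.
   vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=10, VolumeType="gp2")
   volume_id = vol["VolumeId"]

   # Wait until the volume is available before attaching it.
   ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

   # Attach the volume to an instance (placeholder instance ID).
   ec2.attach_volume(VolumeId=volume_id, InstanceId="i-0123456789abcdef0",
                     Device="/dev/sdh")

   # Take a point-in-time snapshot for backup.
   ec2.create_snapshot(VolumeId=volume_id, Description="Nightly backup")

   # Detach after unmounting the device inside the instance.
   ec2.detach_volume(VolumeId=volume_id)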

AWS – Amazon Kinesis

Amazon Web Services – Kinesis

Amazon Kinesis is a managed, scalable, cloud-based service that allows real-time processing of large amounts of streaming data per second. It is designed for real-time applications and allows developers to take in any amount of data from several sources, scaling up and down as needed, and it can be run on EC2 instances. It is used to capture, store, and process data from large, distributed streams such as event logs and social media feeds. After processing the data, Kinesis distributes it to multiple consumers simultaneously.

When to Use Amazon Kinesis?
Amazon Kinesis is used in situations where we require rapidly moving data and its continuous processing. It can be used in the following situations −
Data log and data feed intake − We need not wait to batch up the data; we can push data to an Amazon Kinesis stream as soon as the data is produced. It also protects against data loss in case the data producer fails. For example, system and application logs can be continuously added to a stream and be available in seconds when required.
Real-time graphs − We can extract graphs/metrics from an Amazon Kinesis stream to create report results. We need not wait for data batches.
Real-time data analytics − We can run real-time streaming data analytics by using Amazon Kinesis.

Limits of Amazon Kinesis
Following are certain limits that should be kept in mind while using Amazon Kinesis Streams −
Records of a stream are accessible for up to 24 hours by default; this can be extended up to 7 days by enabling extended data retention.
The maximum size of a data blob (the data payload before Base64-encoding) in one record is 1 megabyte (MB).
One shard supports up to 1000 PUT records per second.
For more information related to limits, visit the following link − https://docs.aws.amazon.com/kinesis/latest/dev/service-sizes-and-limits.html

How to Use Amazon Kinesis?
Following are the steps to use Amazon Kinesis −
Step 1 − Set up a Kinesis stream using the following steps.
Sign in to your AWS account.
Select Amazon Kinesis from the Amazon Management Console.
Click Create stream and fill in the required fields such as stream name and number of shards. Click the Create button.
The stream will now be visible in the stream list.
Step 2 − Set up users on the Kinesis stream. Create new users and assign a policy to each user. (The procedure to create users and assign policies is discussed in the Account section.)
Step 3 − Connect your application to Amazon Kinesis; here we are connecting Zoomdata to Amazon Kinesis. Following are the steps to connect.
Log in to Zoomdata as Administrator and click Sources in the menu.
Select the Kinesis icon and fill in the required details. Click the Next button.
Select the desired stream on the Stream tab.
On the Fields tab, create unique label names as required and click the Next button.
On the Charts tab, enable the charts for data. Customize the settings as required and then click the Finish button to save the settings.

Features of Amazon Kinesis
Real-time processing − It allows us to collect and analyze information in real time, like stock trade prices; otherwise, we would need to wait for a data-out report.
Easy to use − Using Amazon Kinesis, we can create a new stream, set its requirements, and start streaming data quickly.
High throughput, elastic − Amazon Kinesis seamlessly scales to match the data throughput rate and volume of your data, from megabytes to terabytes per hour.
Integrates with other Amazon services − It can be integrated with Amazon Redshift, Amazon S3, and Amazon DynamoDB.
Build Kinesis applications − Amazon Kinesis provides developers with client libraries that enable the design and operation of real-time data processing applications. Add the Amazon Kinesis Client Library to your Java application and it notifies you when new data is available for processing.
Cost-efficient − Amazon Kinesis is cost-efficient for workloads of any scale. Pay as you go for the resources used and pay hourly for the throughput required.
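Streams can also be created and used from code. Below is a minimal producer/consumer sketch using Python and boto3; the region, stream name, and partition key are placeholder assumptions, and a production consumer would normally use the Kinesis Client Library instead of raw get_records calls.

   import boto3
   import json

   kinesis = boto3.client("kinesis", region_name="us-east-1")  # assumed region

   # Create a stream with one shard (one shard supports up to 1000 PUT records/s).
   kinesis.create_stream(StreamName="demo-stream", ShardCount=1)
   kinesis.get_waiter("stream_exists").wait(StreamName="demo-stream")

   # Producer: push a record as soon as it is generated; no batching needed.
   kinesis.put_record(StreamName="demo-stream",
                      Data=json.dumps({"event": "page_view"}).encode(),
                      PartitionKey="user-42")

   # Consumer: read records from the beginning of the first shard.
   shards = kinesis.describe_stream(StreamName="demo-stream")
   shard_id = shards["StreamDescription"]["Shards"][0]["ShardId"]
   it = kinesis.get_shard_iterator(StreamName="demo-stream", ShardId=shard_id,
                                   ShardIteratorType="TRIM_HORIZON")["ShardIterator"]
   print(kinesis.get_records(ShardIterator=it)["Records"])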

AWS – Elastic MapReduce

Amazon Web Services – Elastic MapReduce

Amazon Elastic MapReduce (EMR) is a web service that provides a managed framework to run data processing frameworks such as Apache Hadoop, Apache Spark, and Presto in an easy, cost-effective, and secure manner. It is used for data analysis, web indexing, data warehousing, financial analysis, scientific simulation, etc.

How to Set Up Amazon EMR?
Follow these steps to set up Amazon EMR −
Step 1 − Sign in to your AWS account and select Amazon EMR on the management console.
Step 2 − Create an Amazon S3 bucket for cluster logs and output data. (The procedure is explained in detail in the Amazon S3 section.)
Step 3 − Launch an Amazon EMR cluster. Following are the steps to create a cluster and launch it to EMR.
Use this link to open the Amazon EMR console − https://console.aws.amazon.com/elasticmapreduce/home
Select Create cluster and provide the required details on the Cluster Configuration page.
Leave the Tags section options as default and proceed.
On the Software Configuration section, leave the options as default.
On the File System Configuration section, leave the options for EMRFS as set by default. EMRFS is an implementation of HDFS that allows Amazon EMR clusters to store data on Amazon S3.
On the Hardware Configuration section, select m3.xlarge in the EC2 instance type field and leave the other settings as default. Click the Next button.
On the Security and Access section, select the pair from the list in the EC2 key pair field and leave the other settings as default.
On the Bootstrap Actions section, leave the fields as set by default and click the Add button. Bootstrap actions are scripts that are executed during setup, before Hadoop starts, on every cluster node.
On the Steps section, leave the settings as default and proceed.
Click the Create Cluster button and the Cluster Details page opens. This is where we run the Hive script as a cluster step and use the Hue web interface to query the data.
Step 4 − Run the Hive script using the following steps.
Open the Amazon EMR console and select the desired cluster.
Move to the Steps section and expand it. Then click the Add step button.
The Add Step dialog box opens. Fill in the required fields, then click the Add button.
To view the output of the Hive script, use the following steps −
Open the Amazon S3 console and select the S3 bucket used for the output data.
Select the output folder. The query writes the results into a separate folder.
Select os_requests. The output is stored in a text file which can be downloaded.

Benefits of Amazon EMR
Following are the benefits of Amazon EMR −
Easy to use − Amazon EMR is easy to use, i.e. it is easy to set up a cluster, Hadoop configuration, node provisioning, etc.
Reliable − It is reliable in the sense that it retries failed tasks and automatically replaces poorly performing instances.
Elastic − Amazon EMR allows you to provision as many instances as needed to process data at any scale, and to easily increase or decrease the number of instances.
Secure − It automatically configures Amazon EC2 firewall settings, controls network access to instances, launches clusters in an Amazon VPC, etc.
Flexible − It allows complete control over the clusters and root access to every instance. It also allows installation of additional applications and customization of your cluster as per requirement.
Cost-efficient − Its pricing is easy to estimate. It charges hourly for every instance used.
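The same launch-and-run-step flow can be scripted. Below is a minimal sketch using Python and boto3; the cluster name, release label, log bucket, key pair, script location, and the hive invocation are placeholder assumptions for illustration only.

   import boto3

   emr = boto3.client("emr", region_name="us-east-1")  # assumed region

   # Launch a small cluster; all names are placeholders.
   resp = emr.run_job_flow(
       Name="demo-cluster",
       ReleaseLabel="emr-5.36.0",            # assumed release label
       Applications=[{"Name": "Hadoop"}, {"Name": "Hive"}],
       LogUri="s3://my-log-bucket/emr-logs/",
       Instances={
           "MasterInstanceType": "m3.xlarge",
           "SlaveInstanceType": "m3.xlarge",
           "InstanceCount": 3,
           "Ec2KeyName": "my-key-pair",
           "KeepJobFlowAliveWhenNoSteps": True,
       },
       JobFlowRole="EMR_EC2_DefaultRole",
       ServiceRole="EMR_DefaultRole",
   )
   cluster_id = resp["JobFlowId"]

   # Add a Hive script as a cluster step, mirroring Step 4 above.
   emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[{
       "Name": "Run Hive script",
       "ActionOnFailure": "CONTINUE",
       "HadoopJarStep": {
           "Jar": "command-runner.jar",
           # Assumed invocation: run a Hive script stored in S3.
           "Args": ["hive", "-f", "s3://my-bucket/scripts/query.q"],
       },
   }])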

AWS – Amazon S3

Amazon Web Services – Amazon S3

Amazon S3 (Simple Storage Service) is a scalable, high-speed, low-cost web-based service designed for online backup and archiving of data and application programs. It allows you to upload, store, and download any type of file up to 5 TB in size. This service allows subscribers to access the same systems that Amazon uses to run its own websites. The subscriber has control over the accessibility of data, i.e. privately/publicly accessible.

How to Configure S3?
Following are the steps to configure an S3 account.
Step 1 − Open the Amazon S3 console using this link − https://console.aws.amazon.com/s3/home
Step 2 − Create a bucket using the following steps.
Click the Create Bucket button at the bottom of the page. The Create a Bucket dialog box will open.
Fill in the required details and click the Create button.
The bucket is created successfully in Amazon S3. The console displays the list of buckets and their properties.
Select the Static Website Hosting option. Click the Enable website hosting radio button and fill in the required details.
Step 3 − Add an object to a bucket using the following steps.
Open the Amazon S3 console using the following link − https://console.aws.amazon.com/s3/home
Click the Upload button.
Click the Add files option. Select the files to be uploaded from the system and then click the Open button.
Click the Start Upload button. The files will get uploaded into the bucket.
To open/download an object − In the Amazon S3 console, in the Objects & Folders list, right-click on the object to be opened/downloaded. Then, select the required option.

How to Move S3 Objects?
Following are the steps to move S3 objects.
Step 1 − Open the Amazon S3 console.
Step 2 − Select the Files & Folders option in the panel. Right-click on the object that is to be moved and click the Cut option.
Step 3 − Open the location where we want this object. Right-click on the folder/bucket where the object is to be moved and click the Paste into option.

How to Delete an Object?
Step 1 − Open the Amazon S3 console.
Step 2 − Select the Files & Folders option in the panel. Right-click on the object that is to be deleted. Select the Delete option.
Step 3 − A pop-up window will open for confirmation. Click OK.

How to Empty a Bucket?
Step 1 − Open the Amazon S3 console.
Step 2 − Right-click on the bucket that is to be emptied and click the Empty Bucket option.
Step 3 − A confirmation message will appear in the pop-up window. Read it carefully and click the Empty bucket button to confirm.

Amazon S3 Features
Low cost and easy to use − Using Amazon S3, the user can store a large amount of data at very low charges.
Secure − Amazon S3 supports data transfer over SSL and the data gets encrypted automatically once it is uploaded. The user has complete control over their data by configuring bucket policies using AWS IAM.
Scalable − Using Amazon S3, there need not be any worry about storage capacity. We can store as much data as we have and access it anytime.
Higher performance − Amazon S3 is integrated with Amazon CloudFront, which distributes content to end users with low latency and provides high data transfer speeds without any minimum usage commitments.
Integrated with AWS services − Amazon S3 integrates with AWS services including Amazon CloudFront, Amazon CloudWatch, Amazon Kinesis, Amazon RDS, Amazon Route 53, Amazon VPC, AWS Lambda, Amazon EBS, Amazon DynamoDB, etc.
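The console operations above map directly onto API calls. Below is a minimal sketch using Python and boto3; the bucket name and object keys are placeholder assumptions (bucket names must be globally unique). Note that S3 has no native "move": moving an object is a copy followed by a delete, which is also what the console's Cut/Paste does.

   import boto3

   s3 = boto3.client("s3", region_name="us-east-1")  # assumed region

   # Create a bucket (placeholder name).
   s3.create_bucket(Bucket="my-demo-bucket-12345")

   # Upload a local file as an object, then download it back.
   s3.upload_file("report.csv", "my-demo-bucket-12345", "data/report.csv")
   s3.download_file("my-demo-bucket-12345", "data/report.csv", "report-copy.csv")

   # "Move" an object: copy to the new key, then delete the original.
   s3.copy_object(
       Bucket="my-demo-bucket-12345",
       CopySource={"Bucket": "my-demo-bucket-12345", "Key": "data/report.csv"},
       Key="archive/report.csv",
   )
   s3.delete_object(Bucket="my-demo-bucket-12345", Key="data/report.csv")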

AWS – Console Mobile App

Amazon Web Services – Console Mobile App

The AWS Console mobile app, provided by Amazon Web Services, allows its users to view resources for select services and also supports a limited set of management functions for select resource types. Following are the various services and supported functions that can be accessed using the mobile app.

EC2 (Elastic Compute Cloud)
Browse, filter, and search instances.
View configuration details.
Check the status of CloudWatch metrics and alarms.
Perform operations on instances, like start, stop, reboot, and terminate.
Manage security group rules.
Manage Elastic IP addresses.
View block devices.

Elastic Load Balancing
Browse, filter, and search load balancers.
View configuration details of attached instances.
Add and remove instances from load balancers.

S3
Browse buckets and view their properties.
View properties of objects.

Route 53
Browse and view hosted zones.
Browse and view details of record sets.

RDS (Relational Database Service)
Browse, filter, search, and reboot instances.
View configuration details, security, and network settings.

Auto Scaling
View group details, policies, metrics, and alarms.
Manage the number of instances as per the situation.

Elastic Beanstalk
View applications and events.
View environment configuration and swap environment CNAMEs.
Restart app servers.

DynamoDB
View tables and their details like metrics, indexes, alarms, etc.

CloudFormation
View stack status, tags, parameters, output, events, and resources.

OpsWorks
View configuration details of stacks, layers, instances, and applications.
View instances and their logs, and reboot them.

CloudWatch
View CloudWatch graphs of resources.
List CloudWatch alarms by status and time.
View action configurations for alarms.

Services Dashboard
Provides information on available services and their status.
Shows all information related to the billing of the user.
Switches between users to see the resources in multiple accounts.

Features of AWS Mobile App
To have access to the AWS Mobile App, we must have an existing AWS account. Simply create an identity using the account credentials and select the region in the menu. This app allows us to stay signed in to multiple identities at the same time.
For security reasons, it is recommended to secure the device with a passcode and to use an IAM user's credentials to log in to the app. In case the device is lost, the IAM user can be deactivated to prevent unauthorized access. The root account cannot be deactivated via the mobile console.
While using AWS Multi-Factor Authentication (MFA), it is recommended to use either a hardware MFA device or a virtual MFA on a separate mobile device for account security reasons.
The latest version is 1.14. There is a feedback link in the app's menu to share our experiences and for any queries.

AWS – Account

Amazon Web Services – Account

How to Use an AWS Account?
Following are the steps to access AWS services −
Create an AWS account.
Sign up for AWS services.
Create your password and access your account credentials.
Activate your services in the credits section.

Create an AWS Account
Amazon provides a fully functional free account for one year for users to use and learn the different components of AWS. You get access to AWS services like EC2, S3, DynamoDB, etc. for free. However, there are certain limitations based on the resources consumed.
Step 1 − To create an AWS account, open this link https://aws.amazon.com, sign up for a new account, and enter the required details. If we already have an account, we can sign in using the existing AWS password.
Step 2 − After providing an email address, complete this form. Amazon uses this information for billing, invoicing, and identifying the account. After creating the account, sign up for the services needed.
Step 3 − To sign up for the services, enter the payment information. Amazon executes a minimal transaction against the card on file to check that it is valid. This charge varies with the region.
Step 4 − Next is identity verification. Amazon makes a callback to verify the provided contact number.
Step 5 − Choose a support plan. Subscribe to one of the plans: Basic, Developer, Business, or Enterprise. The Basic plan costs nothing and has limited resources, which is good to get familiar with AWS.
Step 6 − The final step is confirmation. Click the link to log in again and it redirects to the AWS management console. Now the account is created and can be used to avail AWS services.

AWS Account Identifiers
AWS assigns two unique IDs to each AWS account −
An AWS account ID
A canonical user ID

AWS Account ID
It is a 12-digit number like 123456789000 and is used to construct Amazon Resource Names (ARNs). This ID helps to distinguish our resources from resources in other AWS accounts. To know the AWS account number, click Support on the upper right side of the navigation bar in the AWS management console.

Canonical User ID
It is a long string of alphanumeric characters like 1234abcdef1234. This ID is used in Amazon S3 bucket policies for cross-account access, i.e. to access resources in another AWS account.

Account Alias
An account alias is the URL for your sign-in page and contains the account ID by default. We can customize this URL with the company name and even overwrite the previous one.

How to Create/Delete Your Own AWS Account Alias?
Step 1 − Sign in to the AWS management console and open the IAM console using the following link − https://console.aws.amazon.com/iam/
Step 2 − Select the customize link and create an alias of choice.
Step 3 − To delete the alias, click the customize link, then click the Yes, Delete button. This deletes the alias and it reverts to the account ID.

Multi-Factor Authentication
Multi-Factor Authentication (MFA) provides additional security by requiring users to enter a unique authentication code from an approved authentication device or SMS text message when they access AWS websites or services. If the MFA code is correct, the user can access AWS services; otherwise, access is denied.

Requirements
To use MFA services, the user has to assign a device (hardware or virtual) to the IAM user or the AWS root account. Each MFA device assigned to a user must be unique, i.e. a user cannot enter a code from another user's device to authenticate.

How to Enable an MFA Device?
Step 1 − Open the following link − https://console.aws.amazon.com/iam/
Step 2 − On the web page, choose Users from the navigation pane to view the list of user names.
Step 3 − Scroll down to Security Credentials and choose MFA. Click Activate MFA.
Step 4 − Follow the instructions and the MFA device will get activated with the account.
There are three ways to enable an MFA device −

SMS MFA Device
In this method, MFA requires us to configure the IAM user with the phone number of the user's SMS-compatible mobile device. When the user signs in, AWS sends a six-digit code by SMS text message to the user's mobile device. The user is required to enter the same code on a second web page during sign-in to authenticate the right user. This SMS-based MFA cannot be used with the AWS root account.

Hardware MFA Device
In this method, MFA requires us to assign a hardware MFA device to the IAM user or the AWS root account. The device generates a six-digit numeric code based upon a time-synchronized one-time password algorithm. The user has to enter the same code from the device on a second web page during sign-in to authenticate the right user.

Virtual MFA Device
In this method, MFA requires us to assign a virtual MFA device to the IAM user or the AWS root account. A virtual device is a software application (mobile app) running on a mobile device that emulates a physical device. The device generates a six-digit numeric code based upon a time-synchronized one-time password algorithm. The user has to enter the same code from the device on a second web page during sign-in to authenticate the right user.

AWS Identity & Access Management (IAM)
An IAM user is an entity that we create in AWS to represent a person who uses it, with limited access to resources. Hence, we do not have to use the root account in our day-to-day activities, as the root account has unrestricted access to our AWS resources.

How to Create Users in IAM?
Step 1 − Open the link https://console.aws.amazon.com/iam/ to sign in to the AWS Management Console.
Step 2 − Select the Users option in the left navigation pane to open the list of all users.
Step 3 − We can also create new users using the Create New Users button.
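IAM users and account aliases can also be managed through the API. Below is a minimal sketch using Python and boto3; the user name, alias, and the ReadOnlyAccess policy choice are placeholder assumptions.

   import boto3

   iam = boto3.client("iam")  # IAM is a global service, so no region is needed

   # Create an IAM user for day-to-day work instead of using the root account.
   iam.create_user(UserName="demo-user")

   # Grant permissions by attaching an AWS managed policy (read-only here).
   iam.attach_user_policy(UserName="demo-user",
                          PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess")

   # Optional: set a custom sign-in URL by creating an account alias.
   iam.create_account_alias(AccountAlias="my-company-demo")

   # Deleting the alias reverts the sign-in URL to the account ID.
   iam.delete_account_alias(AccountAlias="my-company-demo")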

AWS – Redshift

Amazon Web Services – Redshift

Amazon Redshift is a fully managed data warehouse service in the cloud. Its datasets range from hundreds of gigabytes to a petabyte. The initial process to create a data warehouse is to launch a set of compute resources called nodes, which are organized into groups called clusters. After that, you can process your queries.

How to Set Up Amazon Redshift?
Following are the steps to set up Amazon Redshift.
Step 1 − Sign in and launch a Redshift cluster using the following steps.
Sign in to the AWS Management Console and use the following link to open the Amazon Redshift console − https://console.aws.amazon.com/redshift/
Select the region where the cluster is to be created using the Region menu on the top right corner of the screen.
Click the Launch Cluster button. The Cluster Details page opens.
Provide the required details and click the Continue button until the review page.
A confirmation page opens. Click the Close button to finish, so that the cluster is visible in the Clusters list.
Select the cluster in the list and review the Cluster Status information. The page shows the cluster status.
Step 2 − Configure a security group to authorize client connections to the cluster. How access to Redshift is authorized depends on whether the client runs on an EC2 instance or not. Follow these steps to configure a security group on the EC2-VPC platform.
Open the Amazon Redshift console and click Clusters in the navigation pane.
Select the desired cluster. Its Configuration tab opens.
Click the security group.
Once the security group page opens, click the Inbound tab.
Click the Edit button. Set the fields as shown below and click the Save button.
Type − Custom TCP Rule.
Protocol − TCP.
Port Range − Type the same port number used while launching the cluster. The default port for Amazon Redshift is 5439.
Source − Select Custom IP, then type 0.0.0.0/0.
Step 3 − Connect to the Redshift cluster. There are two ways to connect to a Redshift cluster − directly or via SSL. Following are the steps to connect directly.
Connect to the cluster by using a SQL client tool. Redshift supports SQL client tools that are compatible with PostgreSQL JDBC or ODBC drivers. Use the following links to download them −
JDBC − https://jdbc.postgresql.org/download/postgresql-8.4-703.jdbc4.jar
ODBC − https://ftp.postgresql.org/pub/odbc/versions/msi/psqlodbc_08_04_0200.zip or http://ftp.postgresql.org/pub/odbc/versions/msi/psqlodbc_09_00_0101x64.zip for 64-bit machines
Use the following steps to get the connection string.
Open the Amazon Redshift console and select Clusters in the navigation pane.
Select the cluster of choice and click the Configuration tab. The JDBC URL appears under Cluster Database Properties. Copy the URL.
Use the following steps to connect the cluster with SQL Workbench/J.
Open SQL Workbench/J. Select File, then the Connect window.
Select Create a new connection profile and fill in the required details like name, etc.
Click Manage Drivers; the Manage Drivers dialog box opens. Click the Create a new entry button and fill in the required details.
Click the folder icon and navigate to the driver location. Finally, click the Open button.
Leave the Classname box and Sample URL box blank. Click OK.
Choose the driver from the list.
In the URL field, paste the JDBC URL copied earlier.
Enter the username and password in their respective fields.
Select the Autocommit box and click Save profile list.
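Because Redshift is compatible with the PostgreSQL wire protocol, any PostgreSQL-compatible driver can connect once the security group allows it. Below is a minimal sketch in Python using the psycopg2 driver; the endpoint, database name, and credentials are placeholder assumptions that would come from the JDBC URL on the cluster's Configuration tab.

   import psycopg2

   # Endpoint, database, and credentials are placeholders; copy the real
   # values from the JDBC URL on the cluster's Configuration tab.
   conn = psycopg2.connect(
       host="demo-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",
       port=5439,                # default Redshift port
       dbname="dev",
       user="masteruser",
       password="MySecretPass1",
   )
   conn.autocommit = True        # mirrors the Autocommit box in SQL Workbench/J

   with conn.cursor() as cur:
       cur.execute("SELECT current_database(), version();")
       print(cur.fetchone())

   conn.close()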
Features of Amazon Redshift
Following are the features of Amazon Redshift −
Supports VPC − Users can launch Redshift within a VPC and control access to the cluster through the virtual networking environment.
Encryption − Data stored in Redshift can be encrypted; this is configured while creating tables in Redshift.
SSL − SSL encryption is used to encrypt connections between clients and Redshift.
Scalable − With a few simple clicks, the number of nodes in your Redshift data warehouse can be scaled as per requirement. It also allows you to scale storage capacity without any loss in performance.
Cost-effective − Amazon Redshift is a cost-effective alternative to traditional data warehousing practices. There are no up-front costs, no long-term commitments, and an on-demand pricing structure.