AWS Compute Fundamentals
AWS Storage and Database Fundamentals
Services at a Glance
The ‘Foundations for Solutions Architect–Associate on AWS’ course is designed to walk you through the AWS compute, storage, and service offerings you need to be familiar with for the AWS Solutions Architect–Associate exam. This course provides a snapshot of each service, covering just what you need to know and giving you a good, high-level starting point for exam preparation. It includes coverage of:
Amazon Elastic Compute Cloud (EC2)
Amazon EC2 Container Service (ECS)
Storage and Database
Amazon Simple Storage Service (S3)
Amazon Elastic Block Store (EBS)
Amazon Relational Database Service (RDS)
Amazon Elastic MapReduce (EMR)
Amazon Simple Queue Service (SQS)
Amazon Simple Notification Service (SNS)
Amazon Simple Workflow Service (SWF)
Amazon Simple Email Service (SES)
Amazon API Gateway
AWS Data Pipeline
- Review AWS services relevant to the Solutions Architect–Associate exam
- Illustrate how each service can be used in an AWS-based solution
This course is for anyone preparing for the Solutions Architect–Associate for AWS certification exam. We assume you have some existing knowledge and familiarity with AWS, and are specifically looking to get ready to take the certification exam.
If you are new to cloud computing, I recommend you take our introduction to cloud computing courses first. These courses will give you a basic introduction to the Cloud and to Amazon Web Services. We have two courses that I recommend: What is Cloud Computing? and Technical Fundamentals for AWS.
The What is Cloud Computing? lecture is part of the Introduction to Cloud Computing learning path. I recommend doing this learning path if you want a good basic understanding of why you might consider using AWS Cloud Services. If you feel comfortable with Cloud but would like to learn more about Amazon Web Services, then I recommend completing the Technical Fundamentals for AWS course to build your knowledge of Amazon Web Services and the value the services bring to customers.
If you have any questions or concerns about where to start please email us at firstname.lastname@example.org so we can help you with your personal learning path.
Ok so on to our certification learning path!
Solution Architect Associate for AWS Learning Path
This Course Includes:
- 7 video lectures
- Snapshots of 24 key AWS services
What You'll Learn
|Lecture Group|What you'll learn|
|---|---|
|Compute Fundamentals|Amazon Elastic Compute Cloud (EC2), Amazon EC2 Container Service (ECS)|
|Storage and Database Fundamentals|Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR)|
|Services at a Glance|Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), Amazon Simple Workflow Service (SWF), Amazon Simple Email Service (SES), Amazon API Gateway, AWS Data Pipeline|
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Hello, and welcome to this lecture, where I shall be talking about EBS: another method of providing storage to your EC2 instances, one with different benefits from those of instance store volumes.
Much like instance store volumes, EBS provides block-level storage to your EC2 instances. However, it also offers persistent and durable data storage, which is a huge plus. As a result, you have far more flexibility with the data stored on an EBS volume than with data stored on instance store volumes. EBS volumes can be attached to your EC2 instances, and are primarily used for data that is rapidly changing and that perhaps requires specific IOPS. As they provide persistent storage to your instances, they are ideally suited for retaining important data, and as such can be used to store personally identifiable information. In any environment where this is the case, it's essential that the data is encrypted to protect it from malicious activity.
EBS volumes are independent of the EC2 instance, meaning that they exist as two different resources. They are network-attached to the instance rather than directly attached like instance store volumes. However, a single EBS volume can only ever be attached to a single EC2 instance, although multiple EBS volumes can be attached to a single instance. Because the data persists, it doesn't matter if your instances are intentionally or unintentionally stopped, restarted, or even terminated, settings permitting: the data will remain intact. EBS offers the ability to take point-in-time backups of the entire volume as and when you need to. These backups are known as snapshots. You can manually invoke a snapshot of your volume at any time, or write some code to perform this automatically on a scheduled basis. The snapshots themselves are stored on Amazon S3 and so are very durable and reliable. Snapshots are incremental, meaning that each snapshot only copies data that has changed since the previous snapshot was taken. Once you have a snapshot of an EBS volume, you can then create a new volume from that snapshot. So, if for any reason you lost access to your EBS volume through some form of incident or disaster, you could recreate the data volume from an existing snapshot and attach that volume to a new EC2 instance. To add additional flexibility and resilience, it is possible to copy a snapshot from one region to another.
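To make the idea of incremental snapshots concrete, here is a toy sketch in Python. This is not how EBS is actually implemented; it simply illustrates the principle that each snapshot stores only the blocks that changed since the previous one, and that the full volume can be rebuilt by replaying the snapshot chain. The block size and function names are invented for illustration.

```python
BLOCK = 4  # toy block size in bytes; real snapshots operate on much larger blocks

def snapshot(volume: bytes, base: dict) -> dict:
    """Store only the blocks that differ from the base snapshot (incremental)."""
    delta = {}
    for i in range(0, len(volume), BLOCK):
        block = volume[i:i + BLOCK]
        if base.get(i) != block:
            delta[i] = block
    return delta

def restore(snapshots: list) -> bytes:
    """Rebuild the full volume by replaying the snapshot chain in order."""
    merged = {}
    for snap in snapshots:
        merged.update(snap)
    return b"".join(merged[i] for i in sorted(merged))

vol1 = b"AAAABBBBCCCC"
s1 = snapshot(vol1, {})        # first snapshot: copies all three blocks
vol2 = b"AAAAXXXXCCCC"
s2 = snapshot(vol2, s1)        # second snapshot: copies only the changed block
```

Restoring from `[s1, s2]` reproduces `vol2` exactly, even though `s2` holds a single block, which is why incremental snapshots keep S3 storage costs down.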
Looking at the subject of high availability and resiliency, your EBS volumes are created with reliability in mind by default. Every write to an EBS volume is replicated multiple times within the same availability zone of your region to help prevent the complete loss of data. This means that your EBS volume itself is only available in a single availability zone. As a result, should your availability zone fail, you will lose access to your EBS volume. Should this occur, you can simply recreate the volume from your previous snapshot, which is accessible from all availability zones within that region, and attach it to another instance in another availability zone.
There are two types of EBS volume available, each with its own characteristics: SSD-backed storage (solid state drive) and HDD-backed storage (hard disk drive). This allows you to optimize your storage to fit your requirements from a cost and performance perspective. SSD-backed storage is better suited to scenarios that work with smaller blocks, such as databases using transactional workloads, or as boot volumes for your EC2 instances. HDD-backed volumes, on the other hand, are designed for workloads that require a higher rate of throughput in megabytes per second, such as processing big data and logging information: essentially, working with larger blocks of data.
Looking at these volume types further, SSD and HDD volumes can be broken down into the following: General Purpose SSD, known as GP2, and Provisioned IOPS SSD, known as IO1; and Cold HDD, known as SC1, and Throughput Optimized HDD, known as ST1. The key points of the General Purpose SSD volume include: it provides single-digit millisecond latency; it can burst up to 3,000 IOPS; it has a baseline performance of three IOPS per gigabyte, up to a maximum of 10,000 IOPS; and it offers a throughput of 128 megabytes per second for volumes up to 170 gigabytes. Above this, throughput increases at 768 kilobytes per second per gigabyte, up to a maximum of 160 megabytes per second.
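The GP2 figures above can be expressed as two small formulas. The sketch below encodes them as stated in this lecture (baseline of 3 IOPS per GB capped at 10,000; 128 MB/s of throughput up to 170 GB, then 768 KB/s per additional GB capped at 160 MB/s); the function names are invented for illustration, and the current service limits may differ.

```python
def gp2_baseline_iops(size_gb: int) -> int:
    # Baseline of 3 IOPS per GB, capped at 10,000 IOPS (per the lecture figures);
    # volumes can also burst up to 3,000 IOPS regardless of their baseline.
    return min(3 * size_gb, 10_000)

def gp2_throughput_mb_s(size_gb: int) -> float:
    # 128 MB/s for volumes up to 170 GB; above that, throughput grows at
    # 768 KB/s per GB, capped at 160 MB/s.
    if size_gb <= 170:
        return 128.0
    return min(size_gb * 768 / 1024, 160.0)
```

For example, a 200 GB volume would have a 600 IOPS baseline and 150 MB/s of throughput under these formulas, while anything past roughly 214 GB hits the 160 MB/s throughput ceiling.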
Provisioned IOPS SSD volumes deliver enhanced, predictable performance for applications with I/O-intensive workloads. They also give you the ability to specify an IOPS rate during the creation of a new EBS volume. When the volume is attached to an EBS-optimized instance, EBS will deliver the defined IOPS to within 10%, 99.9% of the time throughout the year. These volumes range from four gigabytes to sixteen terabytes in size, and the maximum possible per volume is 20,000 IOPS.
The Cold HDD volumes offer the lowest cost of all the EBS volume types. They are suited to workloads that are large in size and accessed infrequently. Their key performance attribute is throughput in megabytes per second. They can burst up to 80 megabytes per second per terabyte, with a maximum burst capacity for each volume of 250 megabytes per second. They will deliver the expected throughput 99% of the time over a given year, and due to the nature of these volumes, it's not possible to use them as boot volumes for your EC2 instances.
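The SC1 burst behaviour is a simple linear scale with a per-volume cap, which a one-line sketch makes explicit. This encodes the figures quoted in the lecture (80 MB/s per terabyte, capped at 250 MB/s per volume); the function name is invented for illustration.

```python
def sc1_burst_mb_s(size_tb: float) -> float:
    # Burst throughput scales at 80 MB/s per TB of volume size,
    # capped at 250 MB/s per volume (figures as stated in the lecture).
    return min(80 * size_tb, 250)
```

So a 1 TB Cold HDD volume can burst to 80 MB/s, while anything above roughly 3.1 TB is limited by the 250 MB/s per-volume ceiling.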
Throughput Optimized HDD volumes are designed for frequently accessed data and are ideally suited to large data sets with throughput-intensive workloads, such as data streaming, big data, and log processing. They can burst up to 250 megabytes per second, with a maximum burst of 500 megabytes per second per volume. Again, these will deliver the expected throughput 99% of the time over a given year, and again, due to the nature of these volumes, it's not possible to use them as boot volumes for your instances.
One great feature of EBS is its ability to enhance the security of your data, both at rest and in transit, through data encryption. This is especially useful when you have sensitive data, including personally identifiable information, stored on your EBS volume; in this case you may be required to have some form of encryption from a regulatory or governance perspective. EBS offers a very simple encryption mechanism — simple in the fact that you don't have to worry about managing the data keys to perform the encryption process yourself. It's all managed and implemented by EBS. All you are required to do is select whether you want the volume encrypted during its creation, via a checkbox. The encryption process uses the AES-256 encryption algorithm and works by interacting with another AWS service: the Key Management Service, known as KMS. KMS uses customer master keys (CMKs) to create data encryption keys (DEKs), enabling the encryption of data across a range of AWS services, such as EBS in this instance. Any snapshot taken from an encrypted volume will also be encrypted, as will any volume created from that encrypted snapshot. You should also be aware that this encryption option is only available on selected instance types. For a detailed overview of exactly how this encryption process works, please take a look at the following blog post.
As EBS volumes are separate from EC2 instances, you can create an EBS volume in a couple of different ways from within the management console: during the creation of a new instance, attaching it at the time of launch, or from within the EC2 dashboard of the AWS management console as a standalone volume, ready to be attached to an instance when required. When creating an EBS volume during an EC2 instance launch, at step four of creating that instance you are presented with the storage configuration options. Here you can either create a new blank volume or create one from an existing snapshot. You can also specify the size in gigabytes and the volume type, which we discussed previously. Importantly, you can decide what happens to the volume when the instance terminates: you can either have the volume deleted with the termination of the EC2 instance, or retain it, allowing you to keep the data and attach the volume to another EC2 instance. Lastly, you also have the option of encrypting the data if required. And as I said, you can also create the EBS volume as a standalone volume. By selecting the Volumes option under EBS from within the EC2 dashboard of the management console, you can create a new EBS volume, where you'll be presented with the following screen. Here you will have many of the same options; however, you can specify which availability zone the volume will exist in, allowing you to attach it to any EC2 instance within that same availability zone. As you'll remember, EBS volumes can only be attached to EC2 instances that exist within the same availability zone.
EBS volumes also offer the additional flexibility of being able to resize them elastically should the requirement arise. Perhaps you're running out of disk space and need to scale up your volume. This can be achieved by modifying the volume within the console or via the AWS CLI. Once the volume has been increased, you would then need to extend the filesystem on the OS of the EC2 instance so it can see the new capacity. You could also perform the same resize of a volume by creating a snapshot of your existing volume and then creating a new volume from that snapshot with an increased capacity.
Whereas with Amazon S3 you pay only for the storage you consume, with Amazon EBS you are charged for the storage provisioned to you per month. Meaning, if you provisioned an eight-gigabyte EBS volume, you would be charged for that eight gigabytes regardless of whether you are only using one gigabyte of storage on that volume. There is a different cost for each volume type I discussed earlier, and again, the cost is region dependent. Based on the London region, the costs are as follows, covering the General Purpose SSD, the Provisioned IOPS, the Throughput Optimized, and the Cold HDD volumes. For these volumes, the charges are billed on a per-second basis and are charged pro rata, depending on how many seconds you have the storage provisioned for within that month. You must also remember that when an EBS snapshot is created and stored on Amazon S3, you will also be charged for that storage. For the London region, this would be at $0.053 per gigabyte-month of data stored.
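The billing model above — pay for provisioned capacity, pro rata by the second — can be sketched as a small calculation. The volume price per GB-month is a placeholder (the lecture only quotes the London snapshot rate of $0.053 per GB-month); the 30-day month and the function names are simplifications for illustration.

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # simplified 30-day month for illustration

def ebs_volume_cost(size_gb: float, price_per_gb_month: float,
                    seconds_provisioned: int) -> float:
    """Pro-rata charge: you pay for provisioned capacity, not capacity used."""
    return size_gb * price_per_gb_month * seconds_provisioned / SECONDS_PER_MONTH

def snapshot_cost(data_gb: float, price_per_gb_month: float = 0.053) -> float:
    """Snapshot storage on S3, at the London-region rate quoted in the lecture."""
    return data_gb * price_per_gb_month

# An 8 GB volume at a hypothetical $0.10 per GB-month, kept for half a month,
# costs the same whether you use 1 GB of it or all 8 GB.
half_month = ebs_volume_cost(8, 0.10, SECONDS_PER_MONTH // 2)
```

Note that deleting an unused volume promptly matters precisely because the per-second clock runs on the provisioned size, not on actual usage.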
As we can see, EBS offers a number of benefits over EC2 instance store volumes. But EBS is not well suited to all storage requirements. For example, if you only needed temporary storage, or storage accessed by multiple instances, then EBS is either not recommended or not possible, as EBS volumes can only be accessed by one instance at a time. Also, if you needed very high durability and availability of data storage, then you would be better suited to Amazon S3 or EFS, the Elastic File System.
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.