The ‘Foundations for Solutions Architect–Associate on AWS’ course is designed to walk you through the AWS compute, storage, and service offerings you need to be familiar with for the AWS Solutions Architect–Associate exam. This course provides you with a snapshot of each service, covering just what you need to know and giving you a good, high-level starting point for exam preparation. It includes coverage of:
Compute
Amazon Elastic Compute Cloud (EC2)
Amazon EC2 Container Service (ECS)
AWS Lambda
Amazon Lightsail
AWS Batch
Storage and Database
Amazon Simple Storage Service (S3)
Amazon Elastic Block Store (EBS)
Amazon Relational Database Service (RDS)
Amazon Glacier
Amazon DynamoDB
Amazon ElastiCache
Amazon Redshift
Amazon Elastic MapReduce (EMR)
Services
Amazon Simple Queue Service (SQS)
Amazon Simple Notification Service (SNS)
Amazon Simple Workflow Service (SWF)
Amazon Simple Email Service (SES)
Amazon CloudSearch
Amazon API Gateway
Amazon AppStream
Amazon WorkSpaces
AWS Data Pipeline
Amazon Kinesis
AWS OpsWorks
AWS CloudFormation
Course Objectives
- Review AWS services relevant to the Solutions Architect–Associate exam
- Illustrate how each service can be used in an AWS-based solution
Intended Audience
This course is for anyone preparing for the Solutions Architect–Associate for AWS certification exam. We assume you have some existing knowledge and familiarity with AWS, and are specifically looking to get ready to take the certification exam.
Pre-Requisites
If you are new to cloud computing, I recommend you complete our introductory cloud computing courses first. These courses will give you a basic introduction to the Cloud and to Amazon Web Services. We have two courses that I recommend: What is Cloud Computing? and Technical Fundamentals for AWS.
The What is Cloud Computing? lecture is part of the Introduction to Cloud Computing learning path. I recommend this learning path if you want a good, basic understanding of why you might consider using AWS Cloud Services. If you feel comfortable with the Cloud but would like to learn more about Amazon Web Services, then I recommend completing the Technical Fundamentals for AWS course to build your knowledge of Amazon Web Services and the value its services bring to customers.
If you have any questions or concerns about where to start please email us at support@cloudacademy.com so we can help you with your personal learning path.
Ok so on to our certification learning path!
Solution Architect Associate for AWS Learning Path
This Course Includes:
- 7 video lectures
- Snapshots of 24 key AWS services
What You'll Learn
Lecture Group | What you'll learn
---|---
Compute Fundamentals | Amazon Elastic Compute Cloud (EC2), Amazon EC2 Container Service (ECS), AWS Lambda
Storage Fundamentals | Amazon Simple Storage Service (S3)
Services at a Glance | Amazon Simple Queue Service (SQS)
If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.
Lecture Transcript
Hello and welcome to this lecture, where we'll introduce the Amazon Simple Storage Service, commonly known as S3. Amazon S3 is probably the most widely used storage service provided by AWS, simply because it suits many different use cases and is called upon by many other AWS services.
Amazon S3 is a fully managed, object-based storage service that is highly available, highly durable, very cost effective, and widely accessible. S3 is also promoted as having unlimited storage capacity, making the service extremely scalable, far more scalable than your own on-premises storage solution could ever be. There are, however, limits to the size of an individual file that it can support: the smallest object size it supports is zero bytes, and the largest is five terabytes. Although there is this cap on maximum file size, it is unlikely to be an inhibitor in the majority of use cases.
When data is uploaded to S3, you as the customer are required to specify the regional location for that data to be placed in. By specifying a region for your data, Amazon S3 will then store and duplicate your uploaded data multiple times across multiple Availability Zones within that region to increase both its durability and availability. For more information on regions, Availability Zones, and other AWS global infrastructure components, please visit the following blog page.
Objects stored in S3 have a durability of eleven nines (99.999999999%), so the likelihood of losing data is extremely low; this is down to the fact that S3 stores numerous copies of the same data in different Availability Zones. The availability of S3 data objects is currently four nines (99.99%). The difference between availability and durability is this: the availability figure means AWS ensures the uptime of Amazon S3 is 99.99%, so that you can access your stored data, while the durability percentage refers to the probability of maintaining your data without it being lost through corruption, degradation, or other potentially damaging effects. When uploading data to S3, a few specific constructs are used to manage it.
To store objects in S3, you first need to define and create a bucket. You can think of a bucket as a parent folder for your data. The bucket name must be completely unique, not just within the region you specify, but globally across all other S3 buckets that exist, of which there are many millions. Once you have created your bucket, you can begin to upload your data. You can upload your data directly into that bucket or, if required, create folders under your bucket to store your data in for easier management. By default, there is a limit of 100 buckets that you are allowed to create within your AWS account, but this can be increased if requested through AWS.
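To make the bucket and folder concepts concrete, here is a minimal sketch using the AWS SDK for Python (boto3); the bucket name, region, and object key are hypothetical, and bucket names must be globally unique as noted above.

```python
import boto3

s3 = boto3.client('s3', region_name='eu-west-2')

# Create a bucket; the name must be globally unique across all of S3.
# Outside us-east-1 you must supply a LocationConstraint for the region.
s3.create_bucket(
    Bucket='my-example-bucket-1234',
    CreateBucketConfiguration={'LocationConstraint': 'eu-west-2'}
)

# Upload an object. "Folders" are simply key prefixes such as 'reports/'.
s3.put_object(
    Bucket='my-example-bucket-1234',
    Key='reports/2018/summary.csv',
    Body=b'column1,column2\n1,2\n'
)
```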
Objects that are then stored in these buckets have a unique object key that identifies the object across the flat address space of S3. Although folders can provide additional help from a data-organization point of view, Amazon S3 is not a file system, and so specific features of Amazon S3 work at the bucket level rather than at specific folder levels. Let's now take a closer look at some of the features offered by Amazon S3, starting with an overview of the different storage classes that are available.
There are a number of different storage classes within S3, all of which offer different performance characteristics and costs, and it's down to you to select the storage class you require for your data. These classes are: Standard, Standard-Infrequent Access, and Reduced Redundancy. It's best to review the differences between these classes in a table to understand the key points of difference. As you can see, the main difference between the classes is the durability and availability percentages. So when selecting a class for your data, you really need to be asking yourself the following questions: How critical is my data? Does it require the highest level of durability? How reproducible is the data? Can it be easily created again if need be? And how often is the data likely to be accessed? When looking at the data within your bucket, the S3 console will also show which storage class each object belongs to. Amazon Glacier is another storage class; however, it is also a separate service from Amazon S3. There are interactions between the two services, where S3 allows you to use lifecycle rules to move data from S3 to Glacier as a different storage class for archival purposes. But I shall cover more on Amazon Glacier in another lecture.
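As a hedged example of choosing a storage class at upload time, here is a small boto3 sketch with a hypothetical bucket and key; the class values shown are the S3 API names for the classes just discussed.

```python
import boto3

s3 = boto3.client('s3')

# Store an object in Standard-Infrequent Access instead of Standard.
s3.put_object(
    Bucket='my-example-bucket-1234',
    Key='archive/monthly-report.csv',
    Body=b'example data',
    StorageClass='STANDARD_IA'  # other values include 'STANDARD' and 'REDUCED_REDUNDANCY'
)

# The storage class is visible when you inspect the object.
response = s3.head_object(Bucket='my-example-bucket-1234', Key='archive/monthly-report.csv')
print(response.get('StorageClass', 'STANDARD'))  # S3 omits the field for Standard objects
```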
Let me now talk a little bit about the different security features offered by S3, starting with bucket policies. Bucket policies allow you to impose and set access controls over who or what can access the data within a specific bucket. The policy itself is written in JavaScript Object Notation, known as JSON, and these policies are very similar to Identity and Access Management policies; however, they only control access to the data in the bucket the policy is associated with. More information on Identity and Access Management policies can be found here within this existing course. Bucket permissions can be very detailed and specific, allowing, for example, only a specific user within your account to access the data, within a specified time range, and only when coming from a specific IP address. As that example shows, there is a huge amount of granularity that can be applied within these bucket permissions.
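To illustrate that level of granularity, here is a minimal sketch of applying such a policy with boto3; the account ID, user name, bucket name, IP range, and dates are all hypothetical.

```python
import json
import boto3

s3 = boto3.client('s3')

# A hypothetical policy: allow one IAM user to read objects, but only from a
# specific source IP range and only within a set date window.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/example-user"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-example-bucket-1234/*",
        "Condition": {
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
            "DateGreaterThan": {"aws:CurrentTime": "2018-01-01T00:00:00Z"},
            "DateLessThan": {"aws:CurrentTime": "2018-12-31T23:59:59Z"}
        }
    }]
}

s3.put_bucket_policy(
    Bucket='my-example-bucket-1234',
    Policy=json.dumps(bucket_policy)
)
```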
Access control lists. Access control lists, or ACLs, are another method of controlling who has access to your bucket. However, they only control access for users outside of your own AWS account, such as access from other AWS accounts, or public access. ACLs are not as granular as bucket policies can be, and so the permissions are broad in scope, for example, list objects and write objects. You may be familiar with a number of recent security incidents in which huge amounts of data were unnecessarily exposed on Amazon S3 because the owners of that data failed to restrict public access to their buckets, which may have contained personally identifiable information. Understanding who has access to your buckets and data is essential when using Amazon S3, due to the potential of it being accessible across the internet.
Data encryption. S3 offers a number of different encryption mechanisms to allow you to encrypt your data. These methods cover both server-side and client-side encryption options, which include: server-side encryption with S3 managed keys, known as SSE-S3; server-side encryption with KMS managed keys, known as SSE-KMS, where KMS is the Key Management Service (more information on KMS can be found here within our existing course); server-side encryption with customer-managed keys, known as SSE-C; client-side encryption with KMS managed keys, known as CSE-KMS; and finally, client-side encryption with customer-managed keys, known as CSE-C. The main difference between client-side and server-side encryption is the location at which the encryption takes place: server-side encryption takes place within Amazon S3, while client-side encryption occurs on your client prior to uploading your objects. S3 also fully supports encryption in transit via SSL (Secure Sockets Layer). For a graphical view of the process of encryption and decryption for each of these, please view the following infographic.
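As a hedged sketch of what the server-side options look like in practice, assuming a hypothetical bucket and KMS key ID:

```python
import boto3

s3 = boto3.client('s3')

# SSE-S3: S3 encrypts the object at rest using keys it manages itself.
s3.put_object(
    Bucket='my-example-bucket-1234',
    Key='confidential/report.pdf',
    Body=b'example document contents',
    ServerSideEncryption='AES256'
)

# SSE-KMS: S3 encrypts the object using a KMS key; the key ID here is hypothetical.
s3.put_object(
    Bucket='my-example-bucket-1234',
    Key='confidential/report-kms.pdf',
    Body=b'example document contents',
    ServerSideEncryption='aws:kms',
    SSEKMSKeyId='1234abcd-12ab-34cd-56ef-1234567890ab'
)
```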
I now want to discuss a couple of features relating to data management. Versioning: when you enable versioning on a bucket, it allows multiple versions of the same object to exist. This is useful to allow you to retrieve previous versions of a file, or to recover from accidental, or indeed intentionally malicious, deletion of an object. Versions are created automatically by the bucket when you overwrite the same object. For easier management, S3 will only display the latest version of the object within the console, but it does provide a way of viewing all versions as and when you need to see them. Versioning is not enabled by default; however, once you have enabled it, you need to be aware of two main points. Firstly, you can't disable versioning. You can suspend it on the bucket, which will prevent any further versions of your objects from being created, but you can't disable it altogether. Secondly, versioning will be an added cost to you, as you are storing multiple versions of the same object and the Amazon S3 cost model is based on actual usage of data.
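A minimal sketch of enabling versioning and inspecting the versions it creates, again with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client('s3')

# Enable versioning on the bucket (it cannot be disabled later, only suspended).
s3.put_bucket_versioning(
    Bucket='my-example-bucket-1234',
    VersioningConfiguration={'Status': 'Enabled'}
)

# Overwriting the same key now creates a new version rather than replacing the object.
s3.put_object(Bucket='my-example-bucket-1234', Key='notes.txt', Body=b'first draft')
s3.put_object(Bucket='my-example-bucket-1234', Key='notes.txt', Body=b'second draft')

# List every version of the object; the console shows only the latest by default.
versions = s3.list_object_versions(Bucket='my-example-bucket-1234', Prefix='notes.txt')
for v in versions.get('Versions', []):
    print(v['VersionId'], v['IsLatest'])
```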
Lifecycle rules. Lifecycle rules in AWS provide an automatic method of managing the life of your data while it is being stored on Amazon S3. By adding a lifecycle rule to a bucket, you are able to configure and set specific criteria that can automatically move your data from one class to another, move it to Amazon Glacier, or delete it from Amazon S3 altogether. You may want to do this as a cost-saving exercise, by moving data to a cheaper storage class after a set period of time, for example, 30 days. Once those 30 days are up, Amazon S3 will then automatically change the storage class of that data as per the lifecycle rule. Another example would be that you may only be required to keep the data for a set period of time before it can be deleted, again saving you money on storage. In this scenario you can set the bucket's lifecycle policy to automatically delete anything older than, say, 90 days. The time frames are up to you and your own requirements.
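The 30-day and 90-day examples above could be expressed as a lifecycle configuration along these lines; the bucket name and the 'logs/' prefix are hypothetical.

```python
import boto3

s3 = boto3.client('s3')

# A hypothetical lifecycle rule: move objects under 'logs/' to Glacier after
# 30 days, then delete them entirely after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket='my-example-bucket-1234',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'archive-then-expire-logs',
            'Filter': {'Prefix': 'logs/'},
            'Status': 'Enabled',
            'Transitions': [{'Days': 30, 'StorageClass': 'GLACIER'}],
            'Expiration': {'Days': 90}
        }]
    }
)
```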
Let me now talk about some of the common use cases of Amazon S3 that many organizations rely on. S3 is commonly used in a number of different scenarios, because the features I've already discussed make it widely accessible and usable for different data types. So let me now cover a few different scenarios where Amazon S3 would be a good solution for your storage requirement, starting with data backup. Many people find the highly scalable and reliable nature of Amazon S3 an ideal choice for storing data backups, either for existing AWS resources that you are using or for your own on-premises data. AWS also offers solutions to help you manage the transfer of your on-premises production data to Amazon S3 as a backup to your primary storage on site. Later in this course I will be discussing methods of how this can be achieved. With its ability to scale enormously and the flexibility of being able to retrieve your data with ease, it's easy to see why S3 makes a great data backup solution. When your data is stored on S3 it can be, permissions allowing, accessed from anywhere you have an internet connection. This is another reason S3 makes a great service for data backup solutions, allowing you to retrieve your data from anywhere should you need to access it.
Static content and websites. S3 is perfect for storing static data such as images and video, which are used on almost every website. Every object can also be referenced directly via a unique URL or by a content delivery network such as Amazon CloudFront, which interacts closely with S3. If your website is entirely static, then your whole website can in fact be hosted on Amazon S3, providing a highly scalable and cost-effective method of running your website. I will cover Amazon CloudFront later in the course; however, if you need additional information on Amazon CloudFront or on running a static website within Amazon S3, please see the following links to our existing courses and labs on this topic.
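If you do host a static site directly on S3, the website configuration itself is small; here is a hedged sketch, assuming a hypothetical bucket whose objects have been made publicly readable and already contain the index and error pages.

```python
import boto3

s3 = boto3.client('s3')

# Turn a bucket into a static website endpoint; index.html and error.html are
# hypothetical objects you would upload to the bucket yourself.
s3.put_bucket_website(
    Bucket='my-example-bucket-1234',
    WebsiteConfiguration={
        'IndexDocument': {'Suffix': 'index.html'},
        'ErrorDocument': {'Key': 'error.html'}
    }
)
```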
Large data sets. S3 is also great for storing computational, scientific, and statistical data, allowing you to perform big data analytics. This kind of data can be made accessible to multiple parties at once for analysis without impacting performance, thanks to the horizontal scaling abilities of S3. Due to the size of some of this data, it's also a very cost-effective method of storing large amounts of data that can be easily accessed and shared by a number of people.
Integration with other AWS services. Amazon S3 is widely used by a number of other AWS services to help those services perform their own functionality and features behind the scenes. For example, the Elastic Block Store service, known as EBS, which I'll be discussing in more detail in an upcoming lecture, is able to create a backup of itself and store this backup on Amazon S3 as a snapshot. However, unlike the buckets that you create and the data that you store on S3, your EBS snapshots are not visible in any S3 buckets that you own. The snapshots are managed by AWS and hidden from the S3 console, as these snapshots are only backed by S3 and do not offer you the ability to manage their storage requirements, since there is no need for you to do so. Using S3 for this purpose makes your EBS snapshots highly available and highly resilient. Another example would be logging. Many services use Amazon S3 to store their logs, such as AWS CloudTrail. AWS CloudTrail is a service that records and tracks all API calls made within your AWS account. These API calls are recorded as events and then stored within a log file, and this log file is then stored on S3. Again, due to the highly scalable and reliable nature of this service, it makes sense for other services to use S3 for purposes such as this. In this instance, you are able to view the CloudTrail logs in one of your own buckets, specified during the CloudTrail configuration. Amazon S3 can also be used as an origin for an Amazon CloudFront distribution. During the configuration of your CloudFront distribution, which I'll explain in further detail in a later lecture, you are able to specify a bucket that stores the files to be distributed out to AWS edge locations, to help reduce web access latency for your end users.
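As one hedged illustration of this kind of integration, a CloudTrail trail can be pointed at a bucket you own; the trail and bucket names here are hypothetical, and the bucket must already have a policy granting CloudTrail permission to write to it.

```python
import boto3

cloudtrail = boto3.client('cloudtrail')

# Create a trail that delivers its log files to a hypothetical S3 bucket,
# then start recording API activity into it.
cloudtrail.create_trail(
    Name='example-account-trail',
    S3BucketName='my-example-cloudtrail-logs',
    IsMultiRegionTrail=True
)
cloudtrail.start_logging(Name='example-account-trail')
```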
As with many services, the cost of S3 storage varies depending on the region you select. Let me look at the example of the London region, across the three different storage classes. The first thing that I want to point out is that the Reduced Redundancy storage class is actually more expensive than Standard storage. Originally, Reduced Redundancy storage was introduced to reduce the cost over that of the Standard class as a trade-off against lower durability. However, for many regions this is no longer the case, as you can see. It is now more cost effective to use the Standard class, which provides a greater level of resilience for a cheaper price. If you want to optimize your cost, then the preferable option would be to use the Infrequent Access storage class. However, the availability of this class drops to 99.9% instead of 99.99%. The durability remains the same, reaching eleven nines. The more storage you use on S3, the more the cost of each gigabyte reduces as certain thresholds are reached.
When looking at your S3 costs, you might think you are only charged for the storage itself per gigabyte; however, there are a number of other cost elements to S3, and it's worth being aware of a couple of these, including request costs and data transfer costs. Your request costs are based on actions such as PUT, COPY, POST, and GET requests and are charged on a per-10,000-requests basis. Again, the cost of these requests will depend on your storage class. As an example, when using the Standard storage class in the London region, it will cost you just over five cents per 10,000 PUT, COPY, POST, or LIST requests. For GET requests and all other types of request, it will cost you just over four cents per 10,000 requests. Data being transferred into S3 is free; however, transferring data out to another region costs two cents per gigabyte. For the latest information on costs, please refer to the Amazon S3 pricing page shown here.
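Purely as an illustration of how those elements add up, here is a back-of-the-envelope sketch using the approximate figures quoted above; the request and transfer volumes are hypothetical, and the prices will change, so always check the pricing page.

```python
# Illustrative only: rates below are the approximate London-region figures quoted
# in this lecture, not current pricing.
put_requests = 250_000      # PUT/COPY/POST/LIST requests in a month
get_requests = 1_000_000    # GET requests in a month
transfer_out_gb = 50        # data transferred out to another region, in GB

request_cost = (put_requests / 10_000) * 0.05 + (get_requests / 10_000) * 0.04
transfer_cost = transfer_out_gb * 0.02

print(f"Requests: ${request_cost:.2f}, transfer out: ${transfer_cost:.2f}")
```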
Although I have covered a number of reasons why S3 is great for different storage solutions, it's not a catch-all storage service. For example, it's not ideal for the following scenarios: archiving data for long-term use, perhaps for compliance; data that is dynamic and changing very fast; data that requires a file system; and structured data that needs to be queried. There are other storage services that I will discuss that are far more suited to these requirements.
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.