This course introduces you to arguably the most widely used storage service within AWS, the Amazon Simple Storage Service, more commonly known as Amazon S3. You will learn about the characteristics of Amazon S3 and get a general overview of the service before moving on to the various storage classes it offers. Each storage class has different characteristics, and this course explains how each of them can be used.
For any feedback, queries, or comments relating to this course, please contact us at support@cloudacademy.com.
Learning Objectives
The objective of this course is to provide an overview of Amazon S3, including what the service is, the basics of the Amazon S3 console, and its associated storage classes.
Intended Audience
This course is designed as an introduction to the Amazon S3 storage service and is suitable for those who are:
- Starting their AWS journey to understand the various services that exist and their use cases
- Storage engineers responsible for maintaining and storing data within an enterprise
- Anyone looking to begin their certification journey with either the AWS Cloud Practitioner or one of the three Associate-level certifications
Prerequisites
This is an entry-level course on Amazon S3, so no prior knowledge of the service is required. However, to get the most from this course, a basic understanding of cloud computing and an awareness of AWS regional infrastructure would be beneficial, but not essential.
Hello and welcome to this lecture where I will introduce the Amazon Simple Storage Service, commonly known as S3. Amazon S3 is probably the most heavily used storage service provided by AWS, simply because it is a great fit for many different use cases and integrates with many other AWS services. Amazon S3 is a fully managed, object-based storage service that is highly available, highly durable, very cost-effective, and widely accessible.
The service itself is promoted as having unlimited storage capabilities, making Amazon S3 extremely scalable, far more scalable than your own on-premises storage solution could ever be. There are, however, limits on the size of a single file that it can support: the smallest supported file size is zero bytes and the largest is five terabytes. Although there is a maximum file size, it's unlikely to be a limiting factor in the majority of use cases.
The service operates as an object storage service, which means each object uploaded does not conform to a data structure hierarchy like a file system would; instead, objects exist across a flat address space and each is referenced by a unique URL. If you compare this to file storage, where your data is stored as separate files within a series of directories forming a hierarchy, much like the files on your own laptop or computer, then S3 is very different. S3 is a regional service, and so when uploading data you, as the customer, are required to specify the regional location for that data to be placed in.
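To make that flat address space a little more concrete, here is a minimal sketch of how an object's unique URL is formed from its bucket, region, and key. The bucket name, region, and key shown are purely hypothetical examples.

```python
# A minimal sketch of how S3's flat address space maps an object to a URL.
# The bucket, region, and key below are hypothetical examples.
bucket = "example-bucket"
region = "eu-west-2"
key = "reports/2021/summary.png"

# Virtual-hosted-style URL format: https://<bucket>.s3.<region>.amazonaws.com/<key>
url = f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
print(url)
```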
By specifying a region for your data, Amazon S3 will store and duplicate your uploaded data multiple times across multiple Availability Zones within that region to increase both its durability and availability. For more information on regions, Availability Zones, and other AWS global infrastructure components, please see the following blog here. Objects stored in S3 have a durability of 99.999999999%, known as eleven nines of durability, so the likelihood of losing data is extremely rare; this is because S3 stores multiple copies of the same data across different Availability Zones. The availability of S3 data objects is dependent on the storage class used and ranges from 99.5% to 99.99%.
The difference between availability and durability is this: availability refers to the uptime of Amazon S3, which AWS ensures is between 99.5% and 99.99% depending on the storage class, enabling you to access your stored data. Durability refers to the probability of maintaining your data without it being lost through corruption, degradation, or other potentially damaging effects. When uploading objects to Amazon S3, a specific structure is used to locate your data in the flat address space.
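As a rough worked example of what those availability percentages mean in practice, the sketch below converts them into approximate downtime per year. The figures are simple arithmetic, not an AWS statement of actual downtime.

```python
# Rough downtime-per-year arithmetic for the availability range quoted above.
minutes_per_year = 365 * 24 * 60  # 525,600 minutes

for availability in (0.995, 0.9999):
    downtime_minutes = (1 - availability) * minutes_per_year
    print(f"{availability:.2%} availability ~ {downtime_minutes:,.0f} minutes of downtime per year")

# 99.50% availability ~ 2,628 minutes (about 43.8 hours) per year
# 99.99% availability ~ 53 minutes per year
```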
To store objects in S3, you first need to define and create a bucket. You can think of a bucket as a container for your data. This bucket name must be completely unique, not just within the region you specify, but globally against all other S3 buckets that exist, of which there are many millions. This is because of the flat address space: you simply can't have a duplicate name. Once you have created your bucket, you can then begin to upload your data to it. By default, your account can have up to a hundred buckets, but this is a soft limit and a request to increase it can be made with AWS. Any object uploaded to your buckets is given a unique object key to identify it.
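If you prefer to work programmatically, here is a minimal sketch of creating a bucket with the AWS SDK for Python (boto3). It assumes valid AWS credentials are configured, and the bucket name and region shown are hypothetical examples.

```python
import boto3

# A minimal sketch, assuming boto3 is installed and AWS credentials are configured.
# The bucket name below is a hypothetical example and must be globally unique.
s3 = boto3.client("s3", region_name="eu-west-2")

s3.create_bucket(
    Bucket="example-unique-bucket-name",
    # Outside us-east-1, the bucket's region must be stated explicitly.
    CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
)
```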
In addition to your bucket, you can, if required, create folders within the bucket to aid with the categorization of your objects for easier data management. Although folders can provide additional management from a data organization point of view, I want to reiterate that Amazon S3 is not a file system, and many features of Amazon S3 work at the bucket level rather than at a specific folder level. As a result, the unique object key for every object contains the bucket, any folders that are present, and the name of the file itself.

Let me now provide a quick overview via a demonstration of the Amazon S3 console. I'll show you how to create a bucket within the service, upload an object to that bucket, and then show you the unique object key of that object. Okay, so I'm currently logged into my AWS Management Console, and I can find Amazon S3 under the Storage category, which is down here. So if I select S3, it takes me to the S3 dashboard.
Now, up here we have our buckets, and this list here shows the buckets that I have already created in my account. This is the bucket name, which is the unique bucket identifier. Over here we have the region in which each bucket exists, so we have some in London, some in Ireland, and some over in the US as well, and also the date created. Over here we have Access, which shows whether these buckets can be accessed by the public or not. I'm not going to dive too deep into access and security of buckets at this stage, as this is just more of an introduction to give you an overview of the console, but we do have other courses that focus on security.
Now, if I go into one of these buckets, for example, this one here, cloudacademyaudio, we can see that I have created a folder in here called Stuart. There are no other objects, it's just a folder, and we can see that by this little icon here. So there are no actual objects in here; this is just a folder that I created to help me manage and categorize any objects that I do upload. If I select that folder, I can see I have two more folders here. If I go into this one, for example, I can see that I have an object, a PNG file. So this is an object that I have uploaded to S3, and we can see that it's in the courses folder, under Stuart, under the cloudacademyaudio bucket.
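As an aside, the folder structure you see in the console is really just a prefix on the object key. The sketch below uploads a file with a prefixed key using boto3; the file name is a hypothetical example and the bucket and folder names follow the demonstration.

```python
import boto3

# A minimal sketch, assuming boto3 and AWS credentials; the file name is a
# hypothetical example.
s3 = boto3.client("s3")

# "Folders" in the console are simply prefixes on the object key.
s3.upload_file(
    Filename="diagram.png",
    Bucket="cloudacademyaudio",
    Key="Stuart/courses/diagram.png",  # bucket + folders + file name form the key
)
```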
Now, if I select this object, I can get some information about it. I can open it, download it, and so on. We can see the last time it was modified, the storage class that it belongs to (and I'll be talking more about storage classes in the next lecture), whether any encryption at rest is activated, the file size, and here is the unique object key for this object. So we can see that the key comprises any folders within the bucket and then the object name at the end. And this here is the unique identifier of this object on S3, which gives it a URL.
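For reference, the same metadata shown in the console can also be retrieved programmatically. Here is a minimal sketch using boto3's head_object call, with a hypothetical key following the demonstration's folder structure.

```python
import boto3

# A minimal sketch, assuming boto3 and AWS credentials; the key is a
# hypothetical example.
s3 = boto3.client("s3")

response = s3.head_object(Bucket="cloudacademyaudio", Key="Stuart/courses/diagram.png")

print(response["LastModified"])                      # last modified timestamp
print(response["ContentLength"])                     # file size in bytes
print(response.get("ServerSideEncryption", "None"))  # encryption at rest, if any
print(response.get("StorageClass", "STANDARD"))      # storage class (omitted for STANDARD)
```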
Now if I want to open this, I can simply click on Open, and as we can see, it's just an image file, or I can download the object if I want to. So I just wanted to show you there how you can use folders within your bucket and also what the object key looks like. So now if I go back to the console, to the main dashboard where all my buckets are, I want to show you how to create a bucket quickly. It's very simple: click on Create bucket, and then we need to give it a unique bucket name.
Now remember, this has to be a globally unique name, so I'll type in stubucketdemo, and then I can select a region that I want this bucket to be in; I'm just going to select the London region. If I want to, I can copy settings from an existing bucket, but I'm going to go through the different screens to quickly show you the options you have when creating a bucket. Click on Next. Here we have some management options, such as versioning and server access logging. Versioning keeps all versions of an object in the same bucket, and server access logging logs requests for access to your bucket. You can also use key-value pair tags, you can activate object-level logging, which will record any API activity associated with your objects using CloudTrail, and you can also encrypt your objects as well. I'm just going to leave all those options as default for this demonstration. Click on Next.
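The same management options can also be applied after the bucket exists using the SDK. Below is a minimal sketch that enables versioning and default encryption on a hypothetical bucket, assuming boto3 and appropriate permissions.

```python
import boto3

# A minimal sketch, assuming boto3, AWS credentials, and a hypothetical bucket name.
s3 = boto3.client("s3")
bucket = "stubucketdemo"

# Keep all versions of an object in the same bucket.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Encrypt new objects at rest by default (SSE-S3 in this example).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```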
Here we can set different permissions. I'm just going to leave the default of blocking all public access, so this will prevent anyone from publicly accessing any data within my bucket. Click on Next. We then have a review of the settings that we selected, and all you need to do is click on Create bucket. So if I scroll down to the bucket that I just created, which was stubucketdemo, and select it, we can see that I've got no folders and no objects. So if you want to create a folder, you simply click on Create folder and give it a name. If you want to add any encryption, you can do so here; I'm just going to use None as the default. Now, I can either add an object directly under this bucket or I can add it into that folder. Let me just add it directly under the bucket name of stubucketdemo.
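Incidentally, the block public access setting chosen in the wizard can also be applied with the SDK. Here is a minimal sketch for the demonstration bucket, assuming boto3 and suitable permissions.

```python
import boto3

# A minimal sketch, assuming boto3 and AWS credentials.
s3 = boto3.client("s3")

# Equivalent of selecting "Block all public access" in the console.
s3.put_public_access_block(
    Bucket="stubucketdemo",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```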
So to upload an object, you simply click on Upload, then Add files, and select the object or objects that you'd like to upload. Click on Next. We have some permissions here as to who can read or write to the object, and as we can see, we have the block public access setting turned on for this bucket. Click on Next. Now here we have our storage classes, and the storage class we select will affect the durability, the availability, and also the cost of the object being stored. I'm going to go deeper into the different storage classes in the next lecture, so I won't cover them in too much depth now. For the sake of this demonstration, I'm just going to select the Standard storage class.
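When uploading programmatically, the storage class is simply a parameter on the upload call. The sketch below mirrors the demonstration by selecting Standard, assuming boto3; the file name is a hypothetical example.

```python
import boto3

# A minimal sketch, assuming boto3 and AWS credentials; the file name is a
# hypothetical example.
s3 = boto3.client("s3")

s3.upload_file(
    Filename="diagram.png",
    Bucket="stubucketdemo",
    Key="diagram.png",
    # The storage class is chosen per object; STANDARD is the default, and other
    # classes (e.g. STANDARD_IA, ONEZONE_IA, GLACIER) use the same parameter.
    ExtraArgs={"StorageClass": "STANDARD"},
)
```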
Then we have a review page, and then simply click Upload. And there we have my object, uploaded to my bucket. Again, if I select it, I can see the object key and also the unique object URL as well. So that's just a very quick demonstration to show you what the S3 console looks like, how to create buckets, how to create folders, and how to upload objects, just so you can hopefully piece things together a little more easily if you've not used Amazon S3 before.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.