
Introduction to S3

Contents

The AWS Solutions Architect Associate Level Certification
Overview about AWS
AWS Elastic Compute Cloud
AWS Simple Storage Service
AWS Identity and Access Management
Overview
Difficulty: Advanced
Duration: 52m
Students: 3513
Description

AWS certifications are among the cloud industry's most valuable, and the Solutions Architect Associate Level exam is the jumping-off point for the whole series. Once you've mastered associate-level material, you'll be comfortable designing and deploying fault-tolerant and highly available systems on AWS, estimating costs and building cost control mechanisms, and choosing just the right blend of services for any project you're asked to provision.

The first course of this three-part certification exam preparation series, besides offering a good general overview of AWS, focuses on the core AWS services: EC2, EBS, S3 and IAM. The second course will discuss networking and include a practical design and deployment of a real-world application infrastructure. The third and final course will explore data management and application and services deployment.

Who should take this course?

This is an advanced course that's aimed at people who already have some experience with AWS and a familiarity with the general principles of architecting cloud solutions.

Where will you go from here?

The self-testing quizzes of the AWS Solutions Architect Associate Level preparation material are a great follow-up to this series and a pretty good indicator of your readiness to take the AWS exam. Also, since you're studying for the AWS certification, check out the AWS Certifications Study Guide on our blog.

Transcript

Besides the instance and volume management tools of EC2, there's probably no service that figures as critically in AWS-based projects as the Simple Storage Service, S3. Not only can you use it as a quick and cost-effective place to store and access your data, but because it's so tightly integrated with the entire range of AWS services, S3 is a perfect host for application-critical data elements, whether as small as a simple script used to run dynamic remote configuration processes or as large as a massive data set. At its heart, though, S3 is just a place where you can safely keep data objects, by which we mean files. S3 is built around buckets. You can think of a bucket as a container, like a directory, that holds your files and other data. You can create a new bucket by simply clicking on Create Bucket and entering a name that's unique across the entire S3 system.

The easiest way to do that is to choose a word and then add some numbers. You then select the region where you'd like your bucket to live. Placing your data in the region that's closest to the clients and devices that will use it can reduce latency and generally improve performance.
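For reference, the same bucket can be created from the AWS CLI. The bucket name below is the one used later in this lesson, and the region is just an example, so adjust both to your own setup.

    # Create a bucket with a globally unique name in a chosen region
    aws s3 mb s3://experiment7393 --region us-east-1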

Clicking on a folder in the All Buckets view will take you to that bucket's page; however, I have to admit that I don't find the current AWS interface that clear on this. Clicking on the page-and-magnifying-glass icon next to a bucket name will allow you to activate the Actions menu. In any case, once we've moved to our new bucket, we can upload files to it: click on Actions and then Upload, click Add Files, select a file or two from your local system, and then click Start Upload.
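If you'd rather skip the console, the same upload can be done from the AWS CLI; the second filename here is purely illustrative.

    # Upload local files into the bucket
    aws s3 cp index.html s3://experiment7393/
    aws s3 cp page2.html s3://experiment7393/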

Perhaps we'd like to make these files available to anyone on the internet. This, too, is simple: select both files, click on Actions, and select Make Public.

Now click on the Properties tab to the right and note the Link address, which contains the URL users can use to access each file.
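The CLI equivalent is a one-line ACL change; note that the exact URL format can vary by region, so the address shown is just the common virtual-hosted style.

    # Make an existing object publicly readable
    aws s3api put-object-acl --bucket experiment7393 --key index.html --acl public-read
    # The object is then reachable at a URL of the form:
    #   https://experiment7393.s3.amazonaws.com/index.html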

You can control access to S3 files for all users using permissions. The owner of this file is currently allowed to open and download it and to view and edit its permissions.
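To inspect those same permissions from the command line, you can dump the object's ACL; the bucket and key names are the ones from this lesson.

    # Show the current owner and grants for an object
    aws s3api get-object-acl --bucket experiment7393 --key index.html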

We could create a second permissions rule by clicking Add More Permissions and selecting a user from the Grantee dropdown box. Let's select Everyone and then check Open/Download. This means that anyone with the address will be able to access the file, but not view or edit its permissions.
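The console's Everyone grantee corresponds to the AllUsers group in the S3 API, so a rough CLI sketch of this rule looks like the following; the Log Delivery grantee mentioned next uses the groups/s3/LogDelivery URI instead.

    # Grant read-only (Open/Download) access to everyone
    # Note: put-object-acl replaces the existing ACL, so in practice you'd
    # also re-grant the owner full control in the same command
    aws s3api put-object-acl --bucket experiment7393 --key index.html \
        --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers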

We could create a new rule that grants Log Delivery similar rights. You can remove a permission rule by clicking on the X to its right. S3 can also be a very cheap place to host static websites; it turns out that the two HTML files we've uploaded are just enough to make a simple website.

Back on the All Buckets page, select our bucket by clicking on the magnifying glass icon and then click on the Properties tab to the right. Now expand Static Website Hosting and select Enable Website Hosting. Enter the name of our index file, which is index.html, as the index document and click Save. Now note the endpoint: until we reroute traffic from a custom domain to our site, this will be the way users access it. Let's visit the site. This is more than just a plain text document; we can click on links just like on any other webpage.
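From the CLI, the same configuration is a single command; the website endpoint format shown is typical, but the exact hostname depends on the bucket's region.

    # Enable static website hosting with index.html as the index document
    aws s3 website s3://experiment7393/ --index-document index.html
    # The site is then served from an endpoint of the form:
    #   http://experiment7393.s3-website-us-east-1.amazonaws.com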

Back in our browser dashboard, if we click on the Details tab of a file's properties, we see that we're able to choose between Standard and Reduced Redundancy Storage, or RRS. Standard storage, since its data is replicated over so many facilities, promises durability of 99.999999999%, that is, 99 followed by nine more nines. RRS, on the other hand, is designed to provide only 99.99% durability, which in fact isn't too bad. Spread over large volumes of data, RRS can be somewhere around 15% to 20% cheaper. However, RRS isn't for every project: if you're storing data that could be recreated if it were lost, but you'd rather avoid the trouble, then you might be a good candidate for RRS.
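The storage class can also be chosen at upload time from the CLI; the filename here is only an example.

    # Upload an object using Reduced Redundancy Storage instead of Standard
    aws s3 cp rebuildable-data.tar.gz s3://experiment7393/ --storage-class REDUCED_REDUNDANCY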

Naturally Amazon also offers full terminal console access to S3 objects and buckets through the AWS command line interface.

From a shell session on which the AWS CLI has been installed and that's already authenticated into the AWS system, we can try to upload an object to our bucket.

Let's use aws s3 cp, for copy, followed by experiment, which is the name of the file on our system, and s3://experiment7393, which is the name of our bucket. We can also use S3 as a synchronized backup system using sudo aws s3 sync, followed by /var/log, which is the directory we'd like to be synchronized and uploaded, s3://experiment7393, which is the name of the bucket we'd like this directory and its contents to be copied to, and then --delete, which we'll explain in a minute. All the files in the /var/log directory will be uploaded and synchronized. That means the next time this command is run, only those files that have been added or changed in the meantime will be uploaded, greatly reducing time and bandwidth costs. The --delete argument tells S3 to remove any archived files in the bucket whose sources on my computer have been deleted in the meantime. We used sudo because some files in /var/log are read-protected.
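Written out as they would be typed at the shell, those two commands look roughly like this:

    # Copy a single file into the bucket
    aws s3 cp experiment s3://experiment7393
    # Keep /var/log mirrored in the bucket; --delete removes objects whose
    # local source files have since been deleted (sudo handles read-protected logs)
    sudo aws s3 sync /var/log s3://experiment7393 --delete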

About the Author
David Clinton
Linux SysAdmin
Students: 11905
Courses: 12
Learning Paths: 4

David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.

Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.

Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.

His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.