
CLI for S3 storage

Contents

Introduction to the AWS Command Line Interface
Overview
Difficulty: Intermediate
Duration: 50m
Students: 1558
Rating: 5/5

Description

Although most AWS services can be managed through the console in Amazon's browser interface or via the APIs commonly used for programmatic access, there is a third way that, in many cases, can be very useful: the Command Line Interface (CLI). AWS has made software packages available for Linux, macOS, and Windows that allow you to manage the main AWS services from a local terminal session's command line.

In this course, cloud expert and Linux system administrator David Clinton will tell you everything you need to know to get started with the AWS Command Line Interface and to use it proficiently in your daily operations. He will also provide many examples that clearly explain how command line connectivity really works.

Who should take this course

This is an intermediate course, so you should already be familiar with basic AWS concepts, and in particular with the services described in this course. Some experience with the Linux command line is not strictly necessary, but it is quite useful.

If you want to boost your knowledge of AWS, EC2, S3, and RDS, we strongly suggest you take our other AWS courses. Self-test questions are also available if you'd like to test and increase your knowledge.

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Hi, and welcome to CloudAcademy.com's video series on the AWS CLI, the Amazon Web Services command line interface. In this video, we're going to explore S3: how the storage containers, the buckets, that are available through Amazon's S3 service can be accessed and manipulated through the command line.

First, let's list the buckets that may already exist on our account. There are two: one in Asia Pacific Northeast 1, and one in US East 1. Bear in mind that any buckets we create from the command line in this environment will automatically be created in whichever region our configuration is set to use. Now let's create a new bucket: aws s3 mb, which means make bucket, followed by s3://. Now we have to come up with a name for our bucket. We could just call it google and hope for the best. Let's see what happens.
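As a rough shell sketch of the steps so far (the listing you see will of course differ on your own account):

    # list the buckets that already exist on this account
    aws s3 ls

    # try to create a bucket with a name that is almost certainly taken;
    # bucket names are global across all of S3, so this should fail
    aws s3 mb s3://google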

We're not able to do that, because this s3:// address is going to be accessible to anyone on the internet, anyone with the proper authority, at any rate. It therefore has to be a globally unique address, and I guess somebody else got google before we did. So we'll try something a little more distinctive.

aws s3 mb, make a bucket, then s3://. Let's call it myCloudAcademyBqt, with a q. We've done it. We now have a bucket called myCloudAcademyBqt.
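In the terminal, that would look something like this (the name from the video is shown lowercased here, since S3 bucket names must be lowercase; substitute a unique name of your own):

    # create a bucket with a (hopefully) unique name
    aws s3 mb s3://mycloudacademybqt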

Let's now copy some files up to the bucket that we've just created, with aws s3 cp. Just as cp means copy in Unix and Linux environments, AWS uses the same convention. We're going to copy all the contents of a directory called stuff, which happens to exist in the directory we're currently working in. We're going to copy it to s3://myCloudAcademyBqt, and we're going to add --recursive. That's because this command should copy not just stuff, which exists in the file system as a directory, but all the files within it, and in addition anything that might be in a subdirectory called morestuff inside stuff. Copying all the way down the subdirectory and sub-subdirectory chain requires --recursive.
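The copy command being described would look roughly like this (the local directory and bucket names are just the ones used in this demonstration):

    # copy everything inside the local directory "stuff", including
    # subdirectories such as "morestuff", up to the bucket
    aws s3 cp stuff s3://mycloudacademybqt --recursive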

We're done. It didn't take long, especially since the files really contain no data. Bear in mind that whenever you copy files up to an S3 bucket, it's going to cost you money; storage on S3 isn't free. So check in advance how much it costs, so you know whether it's worth it for the use you're putting it to. Now, let's list all the objects in one of the directories in the bucket we just created: s3://myCloudAcademyBqt/morestuff, because when we copied stuff up to the bucket, the morestuff subdirectory must have gone up, too. Let's see if there's anything in that subdirectory. Let's try this again with a trailing slash, to tell AWS what we didn't tell it before: that we're looking for the contents of morestuff, and not just morestuff itself.
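A sketch of those two listing attempts (the trailing slash is what makes the second one show the contents under morestuff):

    # without the trailing slash this only matches the "morestuff" prefix itself
    aws s3 ls s3://mycloudacademybqt/morestuff

    # with the trailing slash we list the objects stored under morestuff/
    aws s3 ls s3://mycloudacademybqt/morestuff/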

There is a file called hello.text. That's exactly what we expected to be there. Now, when we copy content up to an Amazon S3 bucket, we obviously have some purpose for it being there. Sometimes it's just backup storage, a little expensive for backup storage, but that's one reason we may use it. Often, though, it's so that others can have access to it, and we may want some people to have more access than others. Let's look at this command: aws s3 cp, for copy.

We're going to copy one specific file, stuff/firstfile.text, to s3://myCloudAcademyBqt, but we are now going to add --grants. This argument establishes rights, permissions, for other people to make use of this file. So anybody covered by the following definition will be allowed to read this file, firstfile.
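Assembled as a single command, it would look something like this (the grantee email address is, as noted below, not associated with a real Amazon account, and the AllUsers URI is the standard S3 group meaning "everyone"):

    # grant read access to everyone, and full control to the owner of
    # someone@cloudacademy.com -- the second grant is expected to fail
    aws s3 cp stuff/firstfile.text s3://mycloudacademybqt \
        --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers \
                 full=emailaddress=someone@cloudacademy.com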

Who is that? If you look at the end of that string, it's AllUsers. All users, anybody on earth who knows about this file and its scintillating contents, it's actually empty, will be able to read it. But they will have only read access and not any other access: not write access and not execute access. Full access, however, will be given to the owner of the email address someone@cloudacademy.com. I don't believe there is anybody with such an email, but should he or she exist, they would have full access to this file. The error we see says that this email address is not associated with an Amazon account, and if it's not associated with an Amazon account, then Amazon really can't be expected to know what to do with it. So if we really wanted to make firstfile.text available to a particular user, we would find a legitimate, genuine email address that's associated with an Amazon account and try that again. Of course, we're just doing this for demonstration purposes. Now, we've created a bucket on S3, and we've copied data, both files and directories, up to S3.

And we've shown that they are actually there; we've seen them. Now there's a very important operation that's also available to us, and that is sync. Not only can we copy, but we can copy smart. Let me demonstrate: aws s3 sync. This means the system will copy everything in the current directory here on our computer, not in the Amazon S3 bucket, but on our computer, to s3://myCloudAcademyBqt/stuff, the directory in the bucket that we've already created, called stuff. It will sync it, but it will copy up only those files and directories that don't already exist in the bucket itself. This can obviously be extremely useful for doing cost-effective and time-effective backups. Let's say you have a few gigabytes of data in this directory, rather than the three or four empty files that I have. You have to upload those gigabytes at some point, but from then on you sync the directory, so the only files uploaded from your directory to the corresponding directory in the S3 bucket are those that have either changed or been created since the last run. Let's, however, add another argument: --delete. With --delete, not only will files that have changed or been created since the last backup be copied, but files on the home system, my computer at home, let's say, that have since been deleted will also be deleted from the corresponding directory in the Amazon S3 bucket. We're not going to actually run that right now.
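The two variants described would look something like this, run from inside the local directory you want to back up (bucket and prefix names are the ones from this demonstration):

    # upload only files that are new or changed since the last sync
    aws s3 sync . s3://mycloudacademybqt/stuff

    # the same, but also delete objects in the bucket whose local copies
    # have been removed -- not actually run in the video
    aws s3 sync . s3://mycloudacademybqt/stuff --delete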

Before we leave, we really should show how to undo what we've done: aws s3 rb, remove bucket. Let's remove the bucket we just created, s3://myCloudAcademyBqt. But it won't work. If you don't believe me, I'll show you. It failed because there are already files in the bucket, and rb, remove bucket, won't work while files are there. It's a safeguard to make sure you don't accidentally delete something of great value. However, if we add the argument --force, it will work despite the existence of files in the bucket.
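As a final sketch (be careful with --force, since it deletes every object in the bucket before removing the bucket itself):

    # fails while the bucket still contains objects
    aws s3 rb s3://mycloudacademybqt

    # empties the bucket and then removes it
    aws s3 rb s3://mycloudacademybqt --force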

About the Author

Students: 16284
Courses: 17
Learning paths: 2

David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.

Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.

Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.

His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.