Storage Gateway Demo
Difficulty: Advanced
Duration: 32m
Students: 48
Ratings: 5/5
Description

In this course, we explain how to set up a hybrid storage solution using AWS Storage Gateway. 

Learning Objectives

  • AWS Storage Gateway and its architecture
  • Which type of storage gateway to use for your use case
  • How to create a storage gateway using the EC2 platform

Intended Audience 

  • Those who are looking to implement a hybrid storage solution
  • Those who have an interest in AWS Storage Gateway and may need a bit more information on how it works

Prerequisites 

  • You should have a strong understanding of storage in AWS, including knowledge of Amazon S3, Amazon FSx, Amazon EBS, and Amazon Glacier
  • Familiarity with Amazon EC2 will help as well 
  • For more information on these services, check out our existing content covering them
Transcript

In this demo, I’ll configure an Amazon S3 File Gateway and deploy it on the Amazon EC2 host platform. I’ll then create an NFS file share and mount it on another EC2 instance. From there, I’ll create a file on the file share that will be backed up to Amazon S3. 

To do this, I’ll go to my AWS console, type Storage Gateway in the service search bar, and click on the service. From here, I’ll click create gateway. I’ll then type in my gateway name, which I will call sgw-demo, and choose the time zone - I’m in Seattle, so I’ll select Pacific Time (US & Canada). 

In this demo, I’ll be choosing the S3 File Gateway. Then, I’ll choose the platform to host my gateway on, which will be Amazon EC2. 

To set up the gateway on EC2, the console provides instructions on how to do so. It’s worth giving these instructions a quick read before setting it up for the first time. I’ll be using their recommendations here, so to get started, I’ll click “Launch instance”, which will redirect me to the EC2 console. 

The first thing I’ll do is name my gateway instance - I’ll call it sgw-demo-instance. I’ll leave the AMI alone, as AWS has prepopulated that information for me. 

Then I’ll select the instance type. If I go back to the launch instructions on the storage gateway console, it recommends I choose at least an m5.xlarge, so that’s what I’ll choose here. 

I’ll choose a key-pair, in case I want to SSH into this instance later and take a look under the hood of the gateway. 

And then I’ll choose my network settings. To do this, I’ll click edit next to Network settings. I’ll leave the default options selected for the VPC and subnet that my instance will run in, and then make sure that the auto-assign public IP setting is set to enabled. 

Then I’ll move on to the security group settings. In this section, I’ll select create security group, choose a name - we’ll call this NFS-sgw-sg - and then set the security group rules.

For my security group rules, I’ll leave port 22 open. Then, I’ll need to open up additional ports based on the protocol I’ll be using. If I go back to the EC2 platform instructions for storage gateway, it will tell me which ports I need to open up.

In this case, I’ll need to open TCP port 80 from anywhere, as the Storage Gateway service connects via HTTP to activate your gateway. However, after the gateway is activated, you can close this port as you won’t need it anymore. 

I’ll also be using NFS in this demo, so I’ll open up port 2049 for NFS, and ports 111 and 20048 for NFSv3 access. I’m going to connect to this file share from another EC2 instance, but we’ll just make it easy for demo purposes and open this up from anywhere. 
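
For reference, roughly the same security group and rules could be created from the AWS CLI. This is only a sketch: the VPC ID and Region are placeholders, and in a real deployment you’d restrict the source CIDR ranges instead of opening these ports to 0.0.0.0/0.

  # Create the security group; the VPC ID and Region here are placeholders
  SG_ID=$(aws ec2 create-security-group \
      --group-name NFS-sgw-sg \
      --description "Storage Gateway NFS demo" \
      --vpc-id vpc-0123456789abcdef0 \
      --region us-west-2 \
      --query GroupId --output text)

  # Open the ports used in this demo: SSH (22), HTTP for activation (80),
  # NFS (2049), and the NFSv3 support ports (111, 20048)
  for PORT in 22 80 2049 111 20048; do
      aws ec2 authorize-security-group-ingress \
          --group-id "$SG_ID" \
          --region us-west-2 \
          --protocol tcp \
          --port "$PORT" \
          --cidr 0.0.0.0/0
  done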

After that, I’ll configure the storage. In addition to the root volume, you’ll need to add at least one new volume with at least 150 GiB of space. You can add more than one of these if you’d like, but I’m going to keep it at one and then select launch instance. 
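
For reference, the same gateway instance could also be launched from the AWS CLI. This is just a sketch of the demo’s settings; the AMI ID (the Storage Gateway AMI for your Region), key pair name, and subnet ID are placeholders.

  # Launch the gateway instance with an extra 150 GiB volume for cache storage;
  # the AMI ID, key pair, and subnet ID are placeholders
  aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type m5.xlarge \
      --key-name my-key-pair \
      --subnet-id subnet-0123456789abcdef0 \
      --security-group-ids "$SG_ID" \
      --associate-public-ip-address \
      --block-device-mappings '[{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":150,"VolumeType":"gp3"}}]' \
      --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=sgw-demo-instance}]'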

From here, we can click “view all instances”. On this page, I can see my new instance is currently initializing, so I’m going to wait until it passes both status checks and then come back when that happens.

All right, welcome back. My instance has passed its status checks and I’ve launched my application server instance as well. Now to activate this gateway, we’ll need its public IP address later, so I’m going to copy the public IP address now. After that, I can go back to the storage gateway configuration tab. I’ve already launched the instance, so I’ll check the box saying I’ve done so according to AWS’ instructions, and then click next. 

On this page, I can choose between a publicly accessible gateway, which will communicate with AWS over the public internet, or a VPC-hosted gateway, which communicates with AWS through a private connection within your VPC. Additionally, if you need to be compliant with the Federal Information Processing Standards, or FIPS, you can check this box.

Now to activate the gateway, you can either use the IP address, or you can use an activation key, which I can get through a CLI command. To make it easy, I’ll use the IP address I copied earlier and paste that here. 
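
For reference, the CLI route I mentioned looks roughly like this sketch. The public IP, Region, account ID, and activation key are all placeholders; one documented approach is to request the key from the gateway’s IP over HTTP and then pass it to activate-gateway.

  # Request an activation key from the gateway VM (IP and Region are placeholders);
  # the key shows up in the activationKey parameter of the redirect URL
  curl -s -D - "http://203.0.113.10/?activationRegion=us-west-2" -o /dev/null | grep -i location

  # Activate the gateway as an S3 File Gateway using that key
  aws storagegateway activate-gateway \
      --activation-key ABCDE-12345-FGHIJ-67890-KLMNO \
      --gateway-name sgw-demo \
      --gateway-timezone "GMT-8:00" \
      --gateway-region us-west-2 \
      --gateway-type FILE_S3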

From there, I’ll click next and next again. On this page, I can choose to configure CloudWatch logs and alarms for my storage gateway. I’m going to deactivate logging and choose no alarm for this demo, simply for cost purposes. Now, before you click create gateway, you’ll want to wait for your cache storage to load at the top here to ensure it’s allocated. It can take a couple of minutes to load, but once it does, make sure it's allocated to the cache and then click configure. 
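
For reference, once the gateway is activated, the cache disk can also be inspected and allocated from the CLI. The gateway ARN and disk ID below are placeholders; the real disk ID comes from the list-local-disks output.

  # See the local disks the gateway can use (the gateway ARN is a placeholder)
  aws storagegateway list-local-disks \
      --gateway-arn arn:aws:storagegateway:us-west-2:111122223333:gateway/sgw-12345678

  # Allocate the 150 GiB volume as cache storage (disk ID copied from the output above)
  aws storagegateway add-cache \
      --gateway-arn arn:aws:storagegateway:us-west-2:111122223333:gateway/sgw-12345678 \
      --disk-ids "pci-0000:00:1f.0-scsi-0:0:1:0"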

Now that we have our gateway, we can create an NFS file share. To do this, I’ll click on file shares on the side panel and then create file share. 

The first thing I’ll do is select the gateway I just created and then I’ll choose which bucket I want to link this file share to. I’ve already created a bucket, so I’ll use the bucket name to specify it. I’ll type in the name, which is sgw-bucket-99. I’ll make sure I’m choosing the appropriate Region where the bucket lives. 

Then, I’ll make sure I’ve selected the right protocol. I’m using NFS in this demo so I can leave it as the default. 

If I wanted to, I could configure cache settings, such as setting the time to live or TTL for the cache so that it knows when to refresh objects. I’m going to leave everything as the defaults and click next.

On this page, I can choose which storage class I want objects to be stored in. I need immediate access to these objects so I’ll leave it as S3 Standard. I can also configure additional S3 permissions here to safely access my S3 bucket if needed. After I’m done, I’ll select next. 

On this page, I can select additional permissions for my file share. I’m again going to leave everything as the defaults and click next. From here, I can review and select create. 

So now I’ve officially created my file share. 
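
For reference, the same file share could also have been created from the CLI along these lines. This is a sketch: the gateway ARN and the IAM role (one that lets the gateway read and write the bucket) are placeholders, and the wide-open client list matches this demo’s easy-access setup.

  # Create an NFS file share backed by the demo bucket; the ARNs are placeholders
  aws storagegateway create-nfs-file-share \
      --client-token demo-nfs-share-1 \
      --gateway-arn arn:aws:storagegateway:us-west-2:111122223333:gateway/sgw-12345678 \
      --role arn:aws:iam::111122223333:role/StorageGatewayBucketAccess \
      --location-arn arn:aws:s3:::sgw-bucket-99 \
      --default-storage-class S3_STANDARD \
      --client-list 0.0.0.0/0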

The next step is to mount this file share. I can do this by clicking on my file share, scrolling down, and finding the appropriate command for my operating system to mount the share. I’ll be mounting it onto an EC2 instance, so I’ll use the Linux command. 

Then I’ll go into my EC2 dashboard again, and you can see I’ve already created a basic EC2 instance that will act as my application server and is separate from my gateway instance. I’m going to SSH into this instance by clicking connect and copying the example command.
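
The connect dialog gives me something like the following; the key file name and public DNS name here are placeholders.

  # Connect to the application server instance (key file and hostname are placeholders)
  chmod 400 my-key-pair.pem
  ssh -i my-key-pair.pem ec2-user@ec2-203-0-113-25.us-west-2.compute.amazonaws.com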

So I’ll open my terminal and run the SSH command to connect to the instance. Then, I’m going to make a directory for my mount point. To do this, I’ll run the mkdir command and call this new directory web-files. I’m also going to elevate my privileges here so I can mount the file share onto this directory, by typing in sudo su. Now that I have that, I can go back to the storage gateway console, copy the Linux mount command, and paste it in here.

But before I run this command, I’ll change the mount path to point to this directory web-files. Then press enter. 
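
Put together, the steps I just ran look roughly like this; the gateway’s IP address is a placeholder, and I’m calling sudo directly here rather than switching to a root shell with sudo su.

  # Create the mount point, then mount the file share with elevated privileges
  # (the gateway's IP address is a placeholder)
  mkdir web-files
  sudo mount -t nfs -o nolock,hard 203.0.113.10:/sgw-bucket-99 /home/ec2-user/web-files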

To double-check that we mounted the file share appropriately, I can run the df -hT command. This command lists out the file systems on the instance, along with the type of each file system and its mount point, in a human-readable format. And we can see the mount command worked, as it shows the link to my bucket sgw-bucket-99. 
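
The output looks roughly like this sketch; the sizes, device names, and gateway IP will differ on your instance, and the file share reports an effectively unlimited size.

  $ df -hT
  Filesystem                   Type  Size  Used Avail Use% Mounted on
  /dev/xvda1                   xfs   8.0G  1.6G  6.5G  20% /
  203.0.113.10:/sgw-bucket-99  nfs4  8.0E     0  8.0E   0% /home/ec2-user/web-files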

From here, let’s test this out by creating a file in my file system. I’ll cd into my web-files directory and I’ll create a file called index.html using vim index.html. Here, I’ll edit this file and include an h1 header that says “Hi from Alana”. Once I’m done, I’ll save the file and exit. All right, now that I’ve created a file in my directory, let’s go to my AWS console and find the S3 service. 
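
By the way, if you’d rather not use vim, an equivalent one-liner creates the same file; the content matches what I typed into the editor.

  cd web-files
  # Write a minimal page with a single h1 header
  echo "<h1>Hi from Alana</h1>" > index.html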

I’ll type S3 in the service search bar and click on the service. From there, I’ll find my bucket, sgw-bucket-99, and click on it. And it looks like I have a file in here called index.html. I’ll download this file to see its contents. Click on the file, and you can see the “Hi from Alana” header. So the 1:1 mapping between my NFS file share and my bucket works! 
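
You could also verify the object from the CLI instead of downloading it through the console; these commands just list the bucket and stream the object’s contents to the terminal.

  # List the bucket and print the object that the file share wrote
  aws s3 ls s3://sgw-bucket-99/
  aws s3 cp s3://sgw-bucket-99/index.html -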

That’s all for this one, see you next time. 

About the Author
Students: 3013
Courses: 27
Learning Paths: 5

Alana Layton is an experienced technical trainer, technical content developer, and cloud engineer living out of Seattle, Washington. Her career has included teaching about AWS all over the world, creating AWS content that is fun, and working in consulting. She currently holds six AWS certifications. Outside of Cloud Academy, you can find her testing her knowledge in bar trivia, reading, or training for a marathon.