
Building a Highly Available WordPress Instance - part two

1h 10m

AWS Solutions Architect Associate Level Certification Course - Part 3 of 3

Having completed parts one and two of our AWS certification series, you should now be familiar with basic AWS services and some of the workings of AWS networking. This final course in our three-part certification exam preparation series focuses on data management and application and services deployment.

Who should take this course?

This is an advanced course that's aimed at people who already have some experience with AWS and a familiarity with the general principles of architecting cloud solutions.

Where will you go from here?

The self-testing quizzes of the AWS Solutions Architect Associate Level prep material are a great follow-up to this series... and a pretty good indicator of your readiness to take the AWS exam. Also, since you're studying for the AWS certification, check out our AWS Certifications Study Guide on our blog.


Picking up where we left off in the last video: from the local machine that until now has been hosting WordPress, we'll create a dump, or copy, of the MySQL database, where ubuntu is our local username, wordpressdb is the name we gave the database, -p will prompt for a password, and blog.bak.sql.gz is the name of the compressed file this operation will create. There's no need to copy anything from the WordPress installation itself, since, besides a few configuration settings that will change in any case, the only custom elements of WordPress that really matter are inside the database. Now, with a copy of our key pair file within reach, we'll upload the compressed dump using scp and our EC2 instance's public IP address. scp is a utility for copying files across insecure networks using the SSH protocol.
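For reference, the dump-and-upload steps described above would look something like this from the command line (the key file name and IP address here are placeholders for your own values):

```shell
# On the local machine that has been hosting WordPress:
# dump the wordpressdb database as the ubuntu user (-p prompts
# for a password) and compress the output into blog.bak.sql.gz
mysqldump -u ubuntu -p wordpressdb | gzip > blog.bak.sql.gz

# Upload the compressed dump to the EC2 instance over SSH,
# authenticating with our key pair file
scp -i wordpress-key.pem blog.bak.sql.gz ubuntu@203.0.113.25:~
```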

Let's confirm that the transfer was successful. From an SSH session inside our EC2 instance, we use gzip -d to decompress the file.

Now let's log in to our RDS MySQL instance and create a database called wordpressdb. Then, using our endpoint as the hostname, the admin username we created when launching the RDS instance, and wordpressdb as our database name, we'll load the dump into RDS MySQL. Now we're ready to browse to our instance's IP address to confirm that our WordPress installation has made it to the internet, and there it is. Next, we'll create a security group to use for our WordPress instances.
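Taken together, the decompress-and-import steps might look like this in practice (the RDS endpoint and admin username are placeholders for the values from your own RDS instance):

```shell
# On the EC2 instance: decompress the uploaded dump,
# leaving a plain blog.bak.sql file
gzip -d blog.bak.sql.gz

# Create the empty wordpressdb database on the RDS instance
mysql -h mydb.example123.us-east-1.rds.amazonaws.com -u admin -p \
      -e "CREATE DATABASE wordpressdb;"

# Load the dump into the new database
mysql -h mydb.example123.us-east-1.rds.amazonaws.com -u admin -p \
      wordpressdb < blog.bak.sql
```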

From the VPC dashboard, click on security groups and then on create security group. We'll call our group WordPress, give it a description, and add two rules. The first will allow SSH traffic from my IP address only. The second, so our users can reach the site, will allow HTTP traffic from everywhere.
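If you prefer the AWS CLI to the console, the same group could be created roughly like this (the VPC ID, group ID, and source IP address are placeholders):

```shell
# Create the WordPress security group in our VPC
aws ec2 create-security-group --group-name WordPress \
    --description "WordPress web servers" --vpc-id vpc-11111111

# Rule 1: allow SSH, but only from my own IP address
aws ec2 authorize-security-group-ingress --group-id sg-22222222 \
    --protocol tcp --port 22 --cidr 203.0.113.10/32

# Rule 2: allow HTTP from everywhere so users can reach the site
aws ec2 authorize-security-group-ingress --group-id sg-22222222 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
```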

Now let's associate our MySQL security group with the WordPress group we just created and edit the inbound rules.

Although we didn't show this in the previous video, we originally created the MySQL group just like this from the VPC dashboard, but left it completely open to all MySQL traffic. Not a great idea. Now we'll restrict access to only instances using our WordPress group, rather than leaving the MySQL source at 0.0.0.0/0. Remove the zeros.
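The same tightening can be done from the CLI (group IDs are placeholders): revoke the wide-open rule, then add one whose source is the WordPress group itself:

```shell
# Remove the rule that allowed MySQL traffic from anywhere
aws ec2 revoke-security-group-ingress --group-id sg-33333333 \
    --protocol tcp --port 3306 --cidr 0.0.0.0/0

# Allow MySQL traffic only from members of the WordPress group
aws ec2 authorize-security-group-ingress --group-id sg-33333333 \
    --protocol tcp --port 3306 --source-group sg-22222222
```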

Click inside the empty box and select the WordPress group instead. Save the setting. Now that our instance is set up the way we like it, we should clone it into an AMI image template that can be used for all of our instances. From the EC2 instances dashboard, making sure that our WordPress instance is selected, click actions, image, and then create image.

We'll call our image WordPress and click create image. Once our AMI exists, we'll use it to create two instances in two availability zones within our VPC. Again from the EC2 dashboard, click launch instance, and this time click on the My AMIs tab and select our WordPress image.
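The console click-through above has a one-line CLI equivalent (the instance ID is a placeholder):

```shell
# Clone the running WordPress instance into a reusable AMI
aws ec2 create-image --instance-id i-0abc12345 --name WordPress \
    --description "WordPress template for our HA deployment"
```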

We'll go with t2.micro, and on the instance details page we'll select a specific subnet. For this instance it doesn't matter which one. We'll accept the defaults on the next two screens and then select an existing security group, the WordPress group. Click review and launch.

Now let's do it all again for a second instance, this time choosing a different subnet in a separate availability zone and, again, selecting our WordPress security group.
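The two launches could equally be scripted; a sketch with placeholder IDs:

```shell
# First instance, in the first subnet/availability zone
aws ec2 run-instances --image-id ami-44444444 --count 1 \
    --instance-type t2.micro --subnet-id subnet-aaaaaaaa \
    --security-group-ids sg-22222222 --key-name wordpress-key

# Second instance, in a different subnet and availability zone
aws ec2 run-instances --image-id ami-44444444 --count 1 \
    --instance-type t2.micro --subnet-id subnet-bbbbbbbb \
    --security-group-ids sg-22222222 --key-name wordpress-key
```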

Now from the EC2 menu, click on load balancers and then create load balancer. We'll give our balancer a name, assign it to our default VPC, and enable the advanced VPC configuration. Since all the incoming traffic we're expecting will arrive via browsers, we'll leave the HTTP listener settings at their defaults. We'll edit the default health check settings so that the balancer will look for an index.php file,

the WordPress front-end file, to make sure the EC2 instances it serves are actually working properly before sending them traffic, and click continue. We'll select the two subnets in which our running instances currently live and click continue. Once again, we'll select our WordPress security group, and then select our two instances. Leave cross-zone balancing toggled on and again click continue. After clicking through the key tag page, we can review the configuration and then click create. To provide for both high- and low-demand periods, making sure that high volumes of users always have access to the resources they're after while, on the other side, we aren't paying for unused and unnecessary instances, we'll configure autoscaling using Amazon's two-step process.
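The same (classic) load balancer setup, sketched with the AWS CLI and placeholder IDs:

```shell
# Create the load balancer with a default HTTP listener
aws elb create-load-balancer --load-balancer-name wordpress-elb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --subnets subnet-aaaaaaaa subnet-bbbbbbbb \
    --security-groups sg-22222222

# Health check: probe index.php before sending an instance traffic
aws elb configure-health-check --load-balancer-name wordpress-elb \
    --health-check Target=HTTP:80/index.php,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=10

# Register our two running instances with the balancer
aws elb register-instances-with-load-balancer \
    --load-balancer-name wordpress-elb \
    --instances i-0abc12345 i-0def67890
```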

From the EC2 dashboard, click on launch configurations in the left menu, then create autoscaling group, and create launch configuration. Select the AMI you'd like to use for all the instances this group will launch: we'll click on the My AMIs tab on the left and select our new WordPress AMI. For instance type, we'll click on t2.micro and then click next: configure details. We'll give our configuration a name. Click next to add storage. We'll accept the default and again click next to select the WordPress security group that we created for this purpose. We'll then review the configuration and create the launch configuration, confirming that we have access to our key pair.

Now that the launch configuration is complete, we'll create our autoscaling group. We'll give our group a name and, since high availability means spreading our deployment among multiple availability zones, we'll start with two instances rather than one. We'll use our default VPC and click inside the subnet box, selecting two subnets, the same two we're using for our current instances. Under advanced details, we'll associate this autoscaling group with our load balancer so the balancer can spread traffic among all the instances the autoscaler will create.
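Amazon's two-step process maps onto two CLI calls; a sketch with placeholder names and IDs:

```shell
# Step one: the launch configuration, built from our WordPress AMI
aws autoscaling create-launch-configuration \
    --launch-configuration-name wordpress-lc \
    --image-id ami-44444444 --instance-type t2.micro \
    --security-groups sg-22222222 --key-name wordpress-key

# Step two: the autoscaling group, spread across two subnets
# (and therefore two availability zones) and tied to our balancer
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name wordpress-asg \
    --launch-configuration-name wordpress-lc \
    --min-size 2 --max-size 4 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-aaaaaaaa,subnet-bbbbbbbb" \
    --load-balancer-names wordpress-elb
```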

Click configure scaling policies. Select use scaling policies to adjust the capacity of this group, and choose to scale between two and four instances. To tell the autoscaler when to increase the group size, we'll have to add a new alarm. We'll go with any time the average CPU utilization rises above 70 percent for one period of five minutes. We won't bother sending an SNS notification. Click create alarm.

The action we'd like to take every time the alarm is triggered is to add one instance. Similarly, we'll create a new alarm to trigger a decrease in instances whenever the average CPU utilization falls below 30 percent.
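A scaling policy plus its CloudWatch alarm, sketched for the scale-up side (the scale-down pair mirrors it with a threshold of 30 percent and an adjustment of -1; all names and the ARN are placeholders):

```shell
# Policy: add one instance each time the alarm fires
# (this call prints the PolicyARN to use as the alarm action)
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name wordpress-asg \
    --policy-name wordpress-scale-up \
    --adjustment-type ChangeInCapacity --scaling-adjustment 1

# Alarm: average CPU above 70% for one five-minute period
aws cloudwatch put-metric-alarm --alarm-name wordpress-high-cpu \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --statistic Average --period 300 --evaluation-periods 1 \
    --threshold 70 --comparison-operator GreaterThanThreshold \
    --dimensions Name=AutoScalingGroupName,Value=wordpress-asg \
    --alarm-actions <PolicyARN-from-the-call-above>
```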

Now click next: configure notifications. Since we don't need any SNS notifications or tags, we'll move on to review and then create our autoscaling group. We're just about done. We'll now head over to the load balancing dashboard in EC2 to find the endpoint for our balancer. If we were using a DNS domain name, we'd point the DNS record at this endpoint. For now, though, we can simply use it as our access address. We'll paste this address into our browser and see what comes up. So that's it. We've completed this rather tough project and, along with it, a full three-course Cloud Academy Solutions Architect certification preparation series. We hope this has been especially helpful for you, and we wish you great success with your exams and with your careers in cloud computing.

Good luck.

About the Author
David Clinton
Linux SysAdmin
Learning Paths

David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.

Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.

Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.

His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.