AWS VPC With High Availability And Scalability For Your CMS

Amazon has made the Virtual Private Cloud (VPC) the default for all compute resources in every AWS region. This gives customers a great opportunity to take advantage of VPC, design their own networks, and separate their workloads between public and private subnets, which gives them more control over their resources, networking, routing, and security.

Designing and deploying a web application in a VPC with all the architectural best practices is a challenge, because we need to properly separate the workloads between public and private zones and get the routing, subnetting, high availability, and scalability right.

Here I propose an architecture to deploy a CMS application in a highly available, scalable, and secure way within AWS.

[Architecture diagram: a highly available, scalable, and secure CMS deployment within AWS]
I will explain each component and how this architecture will help you deploy a scalable and secure CMS application such as Drupal or WordPress in AWS.

As per the above diagram, we have an Infrastructure Tier, a Web Tier, a Database Tier, a Cache Tier, and a Deployment & Management Tier.

Infrastructure Tier 

The Infrastructure Tier consists of NAT instances, load balancers, and Bastion/VPN instances. These instances live in public subnets because they need direct Internet connectivity via the Internet Gateway.

NAT instances are deployed in HA across two public subnets in different Availability Zones to avoid a single point of failure, as sketched below.
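
To make the subnet layout and per-zone NAT routing concrete, here is a minimal boto3 (Python) sketch. It assumes the two NAT instances already exist; the region, CIDR blocks, and all IDs are placeholders, not values from the diagram.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # assumed region

# VPC with two public and two private subnets across two Availability Zones
vpc_id = ec2.create_vpc(CidrBlock='10.0.0.0/16')['Vpc']['VpcId']
igw_id = ec2.create_internet_gateway()['InternetGateway']['InternetGatewayId']
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

zones = ['us-east-1a', 'us-east-1b']
public, private = [], []
for i, az in enumerate(zones):
    public.append(ec2.create_subnet(VpcId=vpc_id, AvailabilityZone=az,
                                    CidrBlock='10.0.%d.0/24' % i)['Subnet']['SubnetId'])
    private.append(ec2.create_subnet(VpcId=vpc_id, AvailabilityZone=az,
                                     CidrBlock='10.0.%d.0/24' % (i + 10))['Subnet']['SubnetId'])

# Public subnets route 0.0.0.0/0 through the Internet Gateway
public_rt = ec2.create_route_table(VpcId=vpc_id)['RouteTable']['RouteTableId']
ec2.create_route(RouteTableId=public_rt, DestinationCidrBlock='0.0.0.0/0', GatewayId=igw_id)
for subnet_id in public:
    ec2.associate_route_table(RouteTableId=public_rt, SubnetId=subnet_id)

# Each private subnet routes through the NAT instance in its own zone,
# so losing one zone's NAT does not take out the other zone.
nat_instance_ids = ['i-nat-zone-a', 'i-nat-zone-b']  # placeholder NAT instance IDs
for subnet_id, nat_id in zip(private, nat_instance_ids):
    ec2.modify_instance_attribute(InstanceId=nat_id, SourceDestCheck={'Value': False})
    rt = ec2.create_route_table(VpcId=vpc_id)['RouteTable']['RouteTableId']
    ec2.create_route(RouteTableId=rt, DestinationCidrBlock='0.0.0.0/0', InstanceId=nat_id)
    ec2.associate_route_table(RouteTableId=rt, SubnetId=subnet_id)

Giving each private subnet its own route table is what keeps the NAT failure domain per zone: traffic from zone A never depends on the NAT instance in zone B.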

The Bastion/VPN instance is used to reach the private resources from the Internet. It is a good idea to enable MFA on this instance for an additional level of security. If you have an in-house VPN solution, you can use a VPC Virtual Private Gateway to connect your network and remove the Bastion/VPN instance altogether.

Web Tier 

Generally, a lot of people deploy the web servers in a public subnet, thinking they are the first-level resources for accessing the application. But if you use an Elastic Load Balancer (ELB), you can place the ELB in public subnets spanning multiple Availability Zones and move the web servers entirely into private subnets. Make sure the subnets attached to the ELB have at least 20 free IP addresses in each zone; a sketch of this setup follows.
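
As a rough illustration, this boto3 sketch creates an internet-facing classic ELB attached only to the two public subnets, with a health check so unhealthy instances are pulled out of rotation. The load balancer name, subnet IDs, and security group ID are placeholders.

import boto3

elb = boto3.client('elb', region_name='us-east-1')  # assumed region

# Internet-facing ELB in the public subnets; the web servers behind it
# stay in the private subnets and are never exposed directly.
elb.create_load_balancer(
    LoadBalancerName='cms-web-elb',                      # placeholder name
    Listeners=[{'Protocol': 'HTTP', 'LoadBalancerPort': 80,
                'InstanceProtocol': 'HTTP', 'InstancePort': 80}],
    Subnets=['subnet-public-a', 'subnet-public-b'],      # placeholder subnet IDs
    SecurityGroups=['sg-elb'],                           # placeholder security group
    Scheme='internet-facing')

# Health check so failed web instances stop receiving traffic
elb.configure_health_check(
    LoadBalancerName='cms-web-elb',
    HealthCheck={'Target': 'HTTP:80/', 'Interval': 30, 'Timeout': 5,
                 'UnhealthyThreshold': 2, 'HealthyThreshold': 3})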

Implement Auto Scaling for the Web Tier spanning multiple Availability Zones; this brings high availability and scalability to the application.
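
A minimal boto3 sketch of that Auto Scaling setup, registered behind the ELB from the previous step. The AMI ID, instance type, subnet IDs, names, and capacity numbers are placeholders you would replace with your own.

import boto3

asg = boto3.client('autoscaling', region_name='us-east-1')  # assumed region

asg.create_launch_configuration(
    LaunchConfigurationName='cms-web-lc',     # placeholder name
    ImageId='ami-0123456789abcdef0',          # placeholder web server AMI
    InstanceType='t3.medium',                 # placeholder instance type
    SecurityGroups=['sg-web'])                # placeholder security group

# Web instances are spread across the two private subnets (one per AZ)
# and registered behind the public ELB created earlier.
asg.create_auto_scaling_group(
    AutoScalingGroupName='cms-web-asg',
    LaunchConfigurationName='cms-web-lc',
    MinSize=2, MaxSize=6, DesiredCapacity=2,
    VPCZoneIdentifier='subnet-private-a,subnet-private-b',  # placeholder subnet IDs
    LoadBalancerNames=['cms-web-elb'],
    HealthCheckType='ELB', HealthCheckGracePeriod=300)

# Scale on average CPU so traffic spikes add capacity automatically
asg.put_scaling_policy(
    AutoScalingGroupName='cms-web-asg',
    PolicyName='cms-web-cpu-target',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {'PredefinedMetricType': 'ASGAverageCPUUtilization'},
        'TargetValue': 60.0})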

Database Tier

Deploy the database servers in separate private subnets. Use the Amazon RDS service to launch the database in Multi-AZ mode for high availability, and use the Amazon RDS Read Replicas feature to split write and read requests between the master database and the read replicas. This brings great scalability to the database layer of the CMS. RDS is a managed service from AWS, so we do not need to manage the Multi-AZ failover or the read replication ourselves.
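
The following boto3 sketch shows one way to provision that layout: a DB subnet group over the private database subnets, a Multi-AZ MySQL master, and one read replica. The identifiers, instance class, subnet IDs, and credentials are placeholders.

import boto3

rds = boto3.client('rds', region_name='us-east-1')     # assumed region

# Subnet group keeps RDS inside the two private database subnets
rds.create_db_subnet_group(
    DBSubnetGroupName='cms-db-subnets',
    DBSubnetGroupDescription='Private subnets for the CMS database',
    SubnetIds=['subnet-db-a', 'subnet-db-b'])          # placeholder subnet IDs

# Multi-AZ master handles the writes and fails over automatically
rds.create_db_instance(
    DBInstanceIdentifier='cms-db',
    Engine='mysql',
    DBInstanceClass='db.m5.large',                     # placeholder instance class
    AllocatedStorage=100,
    MasterUsername='cmsadmin',
    MasterUserPassword='CHANGE_ME',                    # placeholder credential
    MultiAZ=True,
    DBSubnetGroupName='cms-db-subnets',
    VpcSecurityGroupIds=['sg-db'],                     # placeholder security group
    PubliclyAccessible=False)

# Read replica serves the CMS read traffic (e.g. anonymous page views)
rds.create_db_instance_read_replica(
    DBInstanceIdentifier='cms-db-replica-1',
    SourceDBInstanceIdentifier='cms-db')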

Cache Tier

Deploy the cache servers in a separate private subnet. Use the Amazon ElastiCache service if you are using Memcached or Redis for your CMS application. This cache service is helpful for session storage and page caching. It is good practice to move session storage to the Cache Tier rather than the database, to avoid load on the database servers. ElastiCache is a managed service from AWS, so we do not need to manage the cache instances ourselves.
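
A minimal boto3 sketch for the cache tier, assuming Redis; the cluster ID, node type, subnet ID, and security group ID are placeholders.

import boto3

ec = boto3.client('elasticache', region_name='us-east-1')  # assumed region

ec.create_cache_subnet_group(
    CacheSubnetGroupName='cms-cache-subnets',
    CacheSubnetGroupDescription='Private subnet for the CMS cache tier',
    SubnetIds=['subnet-cache-a'])                    # placeholder subnet ID

# Redis cluster used for session storage and page caching
ec.create_cache_cluster(
    CacheClusterId='cms-cache',
    Engine='redis',
    CacheNodeType='cache.m3.medium',                 # placeholder node type
    NumCacheNodes=1,
    CacheSubnetGroupName='cms-cache-subnets',
    SecurityGroupIds=['sg-cache'])                   # placeholder security group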

Deployment & Management Tier

Deploy the Deployment & Management servers in a separate private subnet.

The deployment server can be used to pull your version-controlled code and static content from Git or SVN repositories, deploy the code to the web instances, and copy/sync the static content to the S3 bucket. You can also use this instance to run cron jobs or automation scripts.
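
As an example of the copy/sync step, the Python snippet below (using boto3) could run on the deployment server, for instance from a cron job, to push a checked-out static directory to the bucket. The bucket name and local path are placeholders.

import os
import boto3

s3 = boto3.client('s3', region_name='us-east-1')       # assumed region
BUCKET = 'cms-static-content'                           # placeholder bucket name
STATIC_DIR = '/var/www/cms/static'                      # placeholder checkout path

# Walk the locally checked-out static content and push it to S3,
# preserving the relative paths as object keys.
for root, _dirs, files in os.walk(STATIC_DIR):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, STATIC_DIR)
        s3.upload_file(path, BUCKET, key)
        print('uploaded', key)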

The monitoring server is used to monitor all the resources deployed for this CMS application, using tools such as Nagios or Opsview. CloudWatch metrics and alarms alone cannot give you complete insight into your VM- and application-specific metrics, so a third-party monitoring solution should be deployed to cover both, with notifications sent via Amazon SES whenever a metric goes beyond its defined threshold. The monitoring portal can be accessed via the ELB.
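
As a sketch of that SES notification path, the hypothetical notify() helper below could be wired into a Nagios or Opsview alert handler on the monitoring server; the sender and recipient addresses are placeholders and must be verified in SES first.

import boto3

ses = boto3.client('ses', region_name='us-east-1')      # assumed region

def notify(metric, value, threshold):
    # Hypothetical helper a Nagios/Opsview handler could call when a
    # check crosses its threshold; both addresses are placeholders.
    ses.send_email(
        Source='alerts@example.com',                    # placeholder verified sender
        Destination={'ToAddresses': ['oncall@example.com']},
        Message={
            'Subject': {'Data': 'CMS alert: %s breached' % metric},
            'Body': {'Text': {'Data': '%s is %s (threshold %s)' % (metric, value, threshold)}}})

notify('web_response_time_ms', 950, 500)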

Miscellaneous Resources

Use a CloudFront distribution as the Content Delivery Network (CDN) to deliver static content from the edge location nearest to the user's geographic location. This helps the application load swiftly in the browser.

Use Amazon S3 as the origin from which CloudFront fetches the content, and copy/sync the static content from the deployment server to the S3 bucket. S3 is a managed storage service from AWS that brings great scalability and high availability.
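
A minimal boto3 sketch of such a distribution with the S3 bucket as its only origin; the bucket domain name is a placeholder, and a real setup would normally also add an origin access identity and tuned caching rules.

import time
import boto3

cf = boto3.client('cloudfront')

# Distribution that serves the static content straight out of the S3 bucket
cf.create_distribution(DistributionConfig={
    'CallerReference': str(time.time()),                # any unique string
    'Comment': 'CMS static content',
    'Enabled': True,
    'Origins': {'Quantity': 1, 'Items': [{
        'Id': 's3-static',
        'DomainName': 'cms-static-content.s3.amazonaws.com',  # placeholder bucket
        'S3OriginConfig': {'OriginAccessIdentity': ''}}]},
    'DefaultCacheBehavior': {
        'TargetOriginId': 's3-static',
        'ViewerProtocolPolicy': 'redirect-to-https',
        'MinTTL': 0,
        'ForwardedValues': {'QueryString': False, 'Cookies': {'Forward': 'none'}},
        'TrustedSigners': {'Enabled': False, 'Quantity': 0}}})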

Use Amazon Route 53 for low-latency DNS, and use its failover feature if you want to build a disaster recovery environment for your application in another region.
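
A sketch of a Route 53 failover record pair using boto3, assuming a primary ELB in the main region, a standby ELB in the DR region, and an existing health check on the primary. The hosted zone ID, DNS names, domain, and health check ID are placeholders.

import boto3

r53 = boto53 = boto3.client('route53')

HOSTED_ZONE_ID = 'Z1234567890'                                   # placeholder hosted zone
PRIMARY_ELB = 'cms-web-elb-123.us-east-1.elb.amazonaws.com'      # placeholder DNS names
DR_ELB = 'cms-dr-elb-456.us-west-2.elb.amazonaws.com'
HEALTH_CHECK_ID = 'abcd-1234'                                    # placeholder health check

# Failover pair: traffic goes to the DR region only when the primary
# health check reports unhealthy.
r53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={'Changes': [
        {'Action': 'UPSERT', 'ResourceRecordSet': {
            'Name': 'www.example.com', 'Type': 'CNAME', 'TTL': 60,
            'SetIdentifier': 'primary', 'Failover': 'PRIMARY',
            'HealthCheckId': HEALTH_CHECK_ID,
            'ResourceRecords': [{'Value': PRIMARY_ELB}]}},
        {'Action': 'UPSERT', 'ResourceRecordSet': {
            'Name': 'www.example.com', 'Type': 'CNAME', 'TTL': 60,
            'SetIdentifier': 'secondary', 'Failover': 'SECONDARY',
            'ResourceRecords': [{'Value': DR_ELB}]}}]})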

Use Amazon SES for your mail services; it is helpful for sending bulk emails to users, such as newsletters, breaking news, and important updates.

Use Amazon CloudWatch metrics and alarms to monitor resource metrics and instance reachability checks and to alert us when defined threshold levels are crossed.
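
For example, the boto3 calls below create a CPU alarm on the web Auto Scaling group and a reachability alarm on a single instance, both publishing to an SNS topic; the topic ARN, group name, instance ID, and thresholds are placeholders.

import boto3

cw = boto3.client('cloudwatch', region_name='us-east-1')     # assumed region
SNS_TOPIC = 'arn:aws:sns:us-east-1:123456789012:cms-alerts'  # placeholder topic ARN

# Alarm on average CPU across the web Auto Scaling group
cw.put_metric_alarm(
    AlarmName='cms-web-high-cpu',
    Namespace='AWS/EC2', MetricName='CPUUtilization',
    Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'cms-web-asg'}],
    Statistic='Average', Period=300, EvaluationPeriods=2,
    Threshold=80.0, ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[SNS_TOPIC])

# Reachability alarm on a single instance's status checks
cw.put_metric_alarm(
    AlarmName='cms-bastion-unreachable',
    Namespace='AWS/EC2', MetricName='StatusCheckFailed',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0bastion0000000000'}],  # placeholder instance
    Statistic='Maximum', Period=60, EvaluationPeriods=3,
    Threshold=1.0, ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=[SNS_TOPIC])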

With the above deployment architecture, we expose only three kinds of resources to the public (NAT, Bastion/VPN, and ELB), all with restricted firewall rules, while the rest of the resources are deployed and secured in the private zone with high availability and scalability in place.
