AWS Compute Fundamentals
AWS Storage and Database Fundamentals
Other Services Relevant to the SysOps Associate certification exam
The ‘Foundations for SysOps Administrator - Associate for AWS’ course is designed to walk you through the AWS compute, storage, and service offerings you need to be familiar with for the AWS SysOps Administrator–Associate exam. This course provides snapshots of each service, covering just what you need to know and giving you a good, high-level starting point for exam preparation. It includes coverage of:
Amazon Elastic Compute Cloud (EC2)
Amazon EC2 Container Service (ECS)
Storage and Database
Amazon Simple Storage Service (S3)
Amazon Elastic Block Store (EBS)
Amazon Relational Database Service (RDS)
Amazon Elastic MapReduce (EMR)
Amazon Simple Queue Service (SQS)
Amazon Simple Notification Service (SNS)
Amazon Simple Workflow Service (SWF)
Amazon Simple Email Service (SES)
Amazon API Gateway
AWS Data Pipeline
Review AWS services relevant to the SysOps Administrator–Associate exam
Illustrate how each service can be used in an AWS-based solution
This course is for anyone preparing for the SysOps Administrator–Associate for AWS certification exam. We assume you have some existing knowledge and familiarity with AWS, and are specifically looking to get ready to take the certification exam.
Basic knowledge of core AWS functionality. If you haven't already completed it, we recommend our Fundamentals of AWS Learning Path, although some of the course materials included there are also included here.
This Course Includes:
- 7 video lectures
- Snapshots of 24 key AWS services
What You'll Learn
|Lecture Group||What you'll learn|
|Compute Fundamentals||Amazon Elastic Compute Cloud (EC2), Amazon EC2 Container Service (ECS)|
|Storage and Database||Amazon Simple Storage Service (S3)|
|Services at a Glance||Amazon Simple Queue Service (SQS)|
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
Amazon Relational Database Service (RDS) is a managed database service that lets you focus on building your application by taking away the administrative components, such as backups, patching, and replication. It supports a variety of different relational database engines, and it offers a reliable infrastructure for running your database across multiple availability zones.
Now, CloudWatch metrics offer detailed monitoring for RDS. RDS also makes Read Replicas possible. Amazon RDS will keep your databases up to date with the latest patches, and you can exert optional control over when your instance is patched. Another benefit is database event notifications. RDS databases can notify you via email or SMS of database events through Amazon SNS, the Simple Notification Service. You can use the AWS Management Console or the Amazon RDS APIs to subscribe to over 40 different database events associated with your database instances. Another key benefit is the availability and durability that RDS provides. You get automated backups turned on by default. The automated backup feature of RDS enables point-in-time recovery for your database instances. Amazon RDS will back up your database and transaction logs and store both for a user-specified retention period. This allows you to restore your database instance to any second during your retention period, up to the last five minutes. Your automated backup retention period can be configured for up to 35 days.
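The restore window described above can be sketched as a pair of helper functions. This is a conceptual illustration only; the function names and the `REPLAY_LAG` constant are my own, not part of any AWS SDK.

```python
from datetime import datetime, timedelta

MAX_RETENTION_DAYS = 35            # RDS automated backup retention limit
REPLAY_LAG = timedelta(minutes=5)  # restores reach up to ~5 minutes behind "now"

def latest_restorable_time(now: datetime) -> datetime:
    """Most recent point in time you can restore to."""
    return now - REPLAY_LAG

def earliest_restorable_time(now: datetime, retention_days: int) -> datetime:
    """Oldest point in time still covered by automated backups."""
    if not 1 <= retention_days <= MAX_RETENTION_DAYS:
        raise ValueError("retention must be between 1 and 35 days")
    return now - timedelta(days=retention_days)

now = datetime(2024, 1, 31, 12, 0)
print(latest_restorable_time(now))        # 2024-01-31 11:55:00
print(earliest_restorable_time(now, 35))  # 2023-12-27 12:00:00
```

Anything between those two timestamps is a valid point-in-time recovery target.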
Database snapshots are another benefit. Snapshots are user-initiated backups of your instance, and they are kept until you explicitly delete them. You can create a new instance from a database snapshot whenever you need one. Although database snapshots serve operationally as full backups, you're billed only for the incremental storage used.
The other great benefit is Multi-AZ deployments. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for database instances. When you provision a Multi-AZ database instance, Amazon RDS synchronously replicates the data to a standby instance in a different availability zone. Another benefit is automatic host replacement: Amazon RDS will automatically replace the compute instance powering your deployment in the event of a hardware failure. All very useful for highly available, fault-tolerant solutions.
Amazon RDS allows you to encrypt your database using keys you manage through AWS Key Management Service, or KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas, and snapshots.
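As a sketch, these are the kinds of parameters you would pass to the RDS `CreateDBInstance` API (for example via boto3's `create_db_instance`) to enable encryption at rest. `StorageEncrypted` and `KmsKeyId` are real request parameters; the builder function and the key ARN below are illustrative assumptions.

```python
def encrypted_instance_params(identifier: str, kms_key_arn: str) -> dict:
    """Build a CreateDBInstance-style request that enables encryption at rest.
    (Hypothetical helper; parameter names mirror the RDS API.)"""
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": "mysql",
        "DBInstanceClass": "db.m5.large",
        "AllocatedStorage": 100,
        "StorageEncrypted": True,  # encrypts storage, backups, replicas, snapshots
        "KmsKeyId": kms_key_arn,   # omit to fall back to the default aws/rds key
    }

params = encrypted_instance_params(
    "example-db",
    "arn:aws:kms:us-east-1:123456789012:key/placeholder",  # placeholder ARN
)
```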
Another great benefit is resource-level permissions. RDS is integrated with AWS Identity and Access Management (IAM) and provides you with the ability to control the actions that your IAM users and groups can take on specific Amazon RDS resources, from database instances through snapshots, parameter groups, and even option groups. You can also tag your Amazon RDS resources and control the actions that your IAM users and groups can take on groups of resources that share the same tag. As for storage types with RDS, there are two SSD-backed options: General Purpose SSD and Provisioned IOPS SSD. General Purpose SSD storage delivers a consistent baseline of three IOPS per provisioned gigabyte and provides the ability to burst up to 3,000 IOPS, so it's suitable for a broad range of database workloads. Provisioned IOPS SSD storage is designed to deliver fast, predictable, and consistent I/O performance for larger database workloads.
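The General Purpose SSD baseline and burst figures above can be captured in a couple of lines. This is a simplified sketch that models only the numbers mentioned here (it ignores any minimum-IOPS floor, and the function names are my own):

```python
def gp2_baseline_iops(size_gb: int) -> int:
    """General Purpose SSD baseline: 3 IOPS per provisioned gigabyte."""
    return 3 * size_gb

def gp2_can_burst(size_gb: int) -> bool:
    """Volumes whose baseline sits below 3,000 IOPS can burst up to 3,000."""
    return gp2_baseline_iops(size_gb) < 3000

print(gp2_baseline_iops(100))  # 300
print(gp2_can_burst(100))      # True
print(gp2_can_burst(2000))     # False (baseline of 6,000 already exceeds the burst ceiling)
```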
Another key thing to remember about RDS is the maintenance window. RDS performs maintenance on RDS resources for you; it's a managed service. Required patching is automatically scheduled for patches related to security and instance reliability. So if a patch needs to be applied to an Oracle or SQL Server database and it affects security or instance reliability, AWS will apply it immediately. For other types of patches, if you don't specify a preferred weekly maintenance window when you create your DB instance, a 30-minute default window is assigned. Some maintenance items require that Amazon RDS take your DB instance offline for a short time. If you want to change the way maintenance is performed on your behalf, you can do so by modifying your DB instance in the Management Console, or by using the ModifyDBInstance API. Each of your DB instances can have a different preferred maintenance window.
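RDS expresses the weekly maintenance window as a UTC range in the form `ddd:hh24:mi-ddd:hh24:mi` (for example, `sun:23:45-mon:00:15`). Here's a small sketch that computes a window's length in minutes so you can confirm it meets the 30-minute minimum; the helper is illustrative, not an AWS API:

```python
import re

DAYS = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]
_WINDOW = re.compile(
    r"^(mon|tue|wed|thu|fri|sat|sun):(\d{2}):(\d{2})-"
    r"(mon|tue|wed|thu|fri|sat|sun):(\d{2}):(\d{2})$"
)

def window_minutes(window: str) -> int:
    """Length in minutes of an RDS weekly maintenance window (UTC)."""
    m = _WINDOW.match(window.lower())
    if not m:
        raise ValueError("expected ddd:hh24:mi-ddd:hh24:mi")
    start = (DAYS.index(m[1]) * 24 + int(m[2])) * 60 + int(m[3])
    end = (DAYS.index(m[4]) * 24 + int(m[5])) * 60 + int(m[6])
    week = 7 * 24 * 60
    return (end - start) % week  # windows may wrap past the end of the week

print(window_minutes("sun:23:45-mon:00:15"))  # 30
```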
Changes to a DB instance can occur when you manually modify a DB instance, such as when you upgrade a DB engine version, or when Amazon RDS performs maintenance on an instance. So how does that work in multi-AZ environments? When you're running a DB instance as a multi-AZ deployment, the impact of maintenance is reduced. RDS conducts maintenance using the following steps: perform maintenance on the standby first, promote the standby to primary, and then perform maintenance on the old primary, which becomes the new standby. For DB instance updates, you can choose to upgrade a DB instance when a new DB engine version is supported by Amazon RDS. Each DB engine has different criteria for upgrading an instance and for which DB engine versions are supported. When you modify the database engine for your DB instance in a multi-AZ deployment, RDS upgrades both the primary and secondary DB instances at the same time, so the database engine for the entire multi-AZ deployment is shut down during the upgrade.
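The three multi-AZ maintenance steps can be simulated with a toy function; the names and return shape are my own, purely to make the ordering concrete:

```python
def multi_az_maintenance(primary: str, standby: str):
    """Simulate the order of operations RDS follows when patching a
    multi-AZ deployment: standby first, then promote, then the old primary."""
    steps = []
    steps.append(f"patch standby {standby}")      # 1. maintain the standby first
    primary, standby = standby, primary           # 2. promote the standby to primary
    steps.append(f"promote {primary} to primary")
    steps.append(f"patch new standby {standby}")  # 3. maintain the old primary
    return steps, primary

steps, new_primary = multi_az_maintenance("db-a", "db-b")
print(new_primary)  # db-b
```

Note how at no point is the current primary taken down before a patched replacement is ready, which is why maintenance impact is reduced.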
Alright, so Amazon RDS best practices. Always monitor memory, CPU, and storage using CloudWatch alarms, which can notify you when usage patterns change or when you approach the capacity of your deployment, so that you can maintain system performance and availability. Enable automatic backups and set the backup window to occur during the daily low in WriteIOPS, if you have one. Scale up your DB instances when you're approaching storage capacity limits. You should aim to have some buffer in storage and memory to accommodate unforeseen increases in demand from your applications.
Now, on a MySQL DB instance, avoid creating more than 10,000 tables when using Provisioned IOPS storage, or 1,000 tables when using standard storage. Large numbers of tables will significantly increase database recovery time after a failover or database crash. Also on MySQL DB instances, avoid letting tables in your database grow too large: underlying file system constraints restrict the maximum size of a MySQL table to two terabytes. Instead of having one large table, partition your tables so that file sizes are well under the two-terabyte limit. If your database workload requires more I/O than you have provisioned, recovery after a failover or database failure will be slower.
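A quick sketch that flags the MySQL layouts warned about above; the limits come straight from the text, while the helper itself is hypothetical:

```python
TABLE_LIMITS = {"piops": 10_000, "standard": 1_000}  # recommended max table counts
MAX_TABLE_BYTES = 2 * 1024**4                        # ~2 TB file system limit per table

def check_mysql_layout(storage_type: str, table_count: int,
                       largest_table_bytes: int) -> list:
    """Return warnings for layouts that slow down crash/failover recovery."""
    warnings = []
    if table_count > TABLE_LIMITS[storage_type]:
        warnings.append("too many tables: expect slow recovery after failover")
    if largest_table_bytes >= MAX_TABLE_BYTES:
        warnings.append("table at the 2 TB limit: partition it into smaller tables")
    return warnings

print(check_mysql_layout("standard", 1_500, 0))
# ['too many tables: expect slow recovery after failover']
```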
So, how do you increase the I/O capacity of a DB instance? Here are a few options. First, you can migrate to a DB instance class with higher I/O capacity. You can convert from standard storage to Provisioned IOPS storage and use a DB instance class that is optimized for Provisioned IOPS. If you're already using Provisioned IOPS storage, you can provision additional throughput capacity. Also, if your client application is caching the DNS data of your DB instances, set a time-to-live value of less than 30 seconds. Caching the DNS data for an extended time can lead to connection failures if your application tries to connect to an IP address that is no longer in service after a failover.
So, a quick word on RDS security. It's different from EC2 security groups, right? Amazon RDS DB instance access is controlled by the customer via database security groups, which are like EC2 security groups but are not interchangeable with them. Database security groups default to a "deny all" access mode, and customers must specify authorized network ingress, so you're basically starting with no access. There are two easy ways of setting up a new rule: you can authorize a network IP range, or you can authorize an existing Amazon EC2 security group. Database security groups only allow access to the database server port; all other ports are blocked. They can be updated without restarting the Amazon RDS instance. So that gives you some control over database access. With IAM, you can further control access to RDS DB instances.
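The deny-all-until-authorized behaviour can be modelled with a toy class. This is purely illustrative; it is not the real `AuthorizeDBSecurityGroupIngress` API, and the source identifiers are placeholders:

```python
class DBSecurityGroup:
    """Toy model of an RDS DB security group: deny all until ingress is authorized."""

    def __init__(self):
        self.cidr_ranges = []  # authorized network IP ranges
        self.ec2_groups = []   # authorized EC2 security group IDs

    def authorize_cidr(self, cidr: str) -> None:
        self.cidr_ranges.append(cidr)

    def authorize_ec2_group(self, group_id: str) -> None:
        self.ec2_groups.append(group_id)

    def allows(self, source: str) -> bool:
        # Only the database port is reachable, and only from authorized sources.
        return source in self.cidr_ranges or source in self.ec2_groups

sg = DBSecurityGroup()
print(sg.allows("203.0.113.0/24"))  # False -- deny all by default
sg.authorize_cidr("203.0.113.0/24")
print(sg.allows("203.0.113.0/24"))  # True
```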
So AWS IAM enables you to control which RDS operations each individual IAM user has permission to call. RDS generates an SSL certificate for each DB instance, which allows you to encrypt connections to the instance for enhanced security. Once the Amazon RDS DB instance deletion API, DeleteDBInstance, is run, the DB instance is marked for deletion, and once the instance no longer shows a "deleting" status, it has been removed. At that point the instance is no longer accessible and, unless a final snapshot copy was requested, it cannot be restored and will not be listed by any of the tools or APIs. A few RDS security best practices: don't use the AWS root credentials to manage RDS resources. Use IAM accounts to control access to RDS API actions, especially actions that create, modify, or delete RDS resources. Assign an individual IAM account to each person who manages RDS resources. Grant each user the minimum set of permissions required to perform his or her duties, and use IAM groups to effectively manage permissions for multiple users. And remember to rotate your IAM credentials regularly. So let's just take a quick snapshot through the database services.
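As an illustration of least privilege, here's a sketch of an IAM policy that lets a user describe and reboot DB instances but not create, modify, or delete them. The `rds:DescribeDBInstances` and `rds:RebootDBInstance` actions are real IAM actions; the account ID and instance name in the ARN are placeholders:

```python
# Least-privilege IAM policy sketch, expressed as the Python dict you would
# serialize to JSON when creating the policy.
READ_AND_REBOOT_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBInstances",  # view instance details
                "rds:RebootDBInstance",     # reboot, but not create/modify/delete
            ],
            # Placeholder account ID and DB instance name:
            "Resource": "arn:aws:rds:us-east-1:123456789012:db:example-db",
        }
    ],
}
```

Attaching this to an IAM group (rather than individual users) keeps permissions manageable as the team changes.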
About the Author
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 80+ cloud-related courses reaching over 100,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.