Services at a glance
Difficulty: Beginner
Duration: 3h
Students: 3917
Ratings: 5/5
Description

In this course, you will learn to recognize and explain AWS compute and storage fundamentals, as well as the family of AWS services relevant to the Certified Developer exam. The course provides a snapshot of each service, covering just what you need to know, and gives you a good, high-level starting point for exam preparation. It includes coverage of:

Services

Amazon Simple Queue Service (SQS)
Amazon Simple Notification Service (SNS)
Amazon Simple Workflow Service (SWF)
Amazon Simple Email Service (SES)
Amazon CloudSearch
Amazon API Gateway
Amazon AppStream
Amazon WorkSpaces
AWS Data Pipeline
Amazon Kinesis
AWS OpsWorks
AWS Elastic Beanstalk
AWS CloudFormation

Storage and database
Amazon Simple Storage Service (S3)
Amazon Elastic Block Store (EBS)
Amazon Relational Database Service (RDS)
Other Database Services
Amazon Glacier

Compute
Amazon Elastic Compute Cloud (EC2)
Elastic Load Balancing (ELB)
Auto Scaling
Amazon ECS
AWS Lambda

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Let's get started with a high-level view of some of the AWS services that are relevant to the certification exam. Amazon Simple Queue Service, or SQS, is a fast, reliable, and scalable message queue that is fully managed. Messages are stored redundantly in Amazon SQS across multiple servers and multiple Availability Zones to ensure delivery. Amazon SQS can handle an unlimited number of messages at any given time. The order of delivery is not guaranteed with Amazon SQS, and a message can be retrieved more than once from a queue due to the distributed nature of the service. You should consider using Amazon SQS when you have separate components of a system, or different systems altogether, that need to interact with one another. You can also use this service to offload requests from your primary services, which in turn lets those primary services perform the job they were intended to perform. With Amazon CloudWatch, you can use Amazon SQS to trigger Auto Scaling based on the number of messages in your queues. You can also set a message visibility timeout, which can be up to 12 hours, to ensure messages are consumed. Messages can be stored for between one minute and two weeks; the default retention time is four days. You can delete all the messages in an Amazon SQS queue using the PurgeQueue action. When you purge a queue, all the messages previously sent to the queue are deleted as well.

Amazon SQS provides the ability to configure what we call a dead letter queue. A dead letter queue is a queue that you configure to receive messages from other queues, which are referred to as the source queues. Typically, you set up a dead letter queue to receive messages after a maximum number of processing attempts has been reached. The dead letter queue provides the ability to isolate messages that could not be processed so you can analyze them later. A dead letter queue is just like any other queue: messages can be sent to it and received from it, and you can create a dead letter queue from either the Amazon SQS API or the Amazon SQS console.
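To make the dead letter queue idea concrete, here's a minimal sketch using the AWS SDK for Python (boto3). The queue names and the maximum receive count of five are hypothetical values chosen for illustration.

```python
import json

import boto3

sqs = boto3.client("sqs")  # region and credentials come from your environment

# Create the dead letter queue first and look up its ARN.
dlq_url = sqs.create_queue(QueueName="orders-dead-letter")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# The redrive policy moves a message to the dead letter queue once it has
# been received, but not deleted, maxReceiveCount times.
sqs.create_queue(
    QueueName="orders",
    Attributes={
        "VisibilityTimeout": "60",           # seconds; the maximum is 12 hours
        "MessageRetentionPeriod": "345600",  # 4 days, the default
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": 5}
        ),
    },
)
```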
OK, next up we have Amazon Simple Notification Service. Amazon Simple Notification Service, or SNS, is a fully managed push messaging service. It supports a wide variety of endpoints, including email, HTTP/HTTPS, SMS, and SQS, as a few examples. SNS is integrated with AWS services, allowing us to develop loosely coupled systems through message publication. Topics are used for subscribing and publishing, and access to topics can be granted and/or restricted through IAM. A popular use case of SNS is to facilitate the development of loosely coupled systems. SNS can also be used for mobile app notifications, whether via SMS or device-specific notifications such as the iOS push notification service.

So here we are in the Simple Notification Service dashboard. SNS, as we said, works with topics, so we need to create a topic to start sending notifications. We only need to give it a simple name and create the topic. Within a topic, we can create subscriptions. Subscriptions are simply endpoint addresses to which SNS can send topic messages. We'll select the email endpoint, and we'll need to provide at least one email address to which the notifications published to this topic can be sent. We then need to confirm the subscription. Now every message published to this topic will be forwarded to that email address, and SNS will send the same message to all subscriptions associated with our topic.

We can define permissions for our SNS topic by editing the SNS topic policy. SNS is very useful when deploying highly available solutions, as it enables our monitoring services, such as CloudWatch and CloudTrail, to notify us of changes to our environment. So let's go to CloudWatch and deploy a new alarm to send notifications to the topic that we just created. Let's use an Amazon S3 bucket metric: every time we have fewer than, say, five files in this bucket, a new notification will be sent to the SNS topic we've created. SNS also integrates with CloudFormation, so during stack creation and updates we can specify topics to receive notifications about the events related to our CloudFormation stack.
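The same topic, subscription, and alarm can be set up programmatically. Here's a minimal boto3 sketch mirroring the console demo above; the topic name, email address, bucket name, and alarm name are hypothetical, and note that S3 storage metrics such as NumberOfObjects are reported once a day.

```python
import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# Create a topic and subscribe an email endpoint; the recipient must
# click the confirmation link before deliveries begin.
topic_arn = sns.create_topic(Name="bucket-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# Alarm when the bucket holds fewer than five objects, notifying the topic.
cloudwatch.put_metric_alarm(
    AlarmName="my-bucket-low-object-count",
    Namespace="AWS/S3",
    MetricName="NumberOfObjects",
    Dimensions=[
        {"Name": "BucketName", "Value": "my-bucket"},
        {"Name": "StorageType", "Value": "AllStorageTypes"},
    ],
    Statistic="Average",
    Period=86400,  # one day, matching the metric's granularity
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[topic_arn],
)
```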
OK, up next is Amazon Simple Workflow Service, which is a fully managed state tracker and task coordinator. Developers can build background jobs with parallel or sequential steps using Amazon Simple Workflow Service. It is built with high availability in mind, ensuring the reliable execution of workflow steps, and it assists developers in keeping state separate from the actual units of work. In addition to automating tasks, Amazon Simple Workflow Service supports human worker tasks. This service is perfect for any process in which there is a specific flow of events, such as an e-commerce order fulfillment system, for example. Any workflow that requires human involvement is a candidate for Amazon Simple Workflow Service, and fully automated processes like video encoding are also perfect use cases.

And up next is Amazon Simple Email Service, which is an outbound email sending service that takes advantage of Amazon's reputation to ensure reliable, high delivery rates. Real-time statistics are at your disposal with Amazon Simple Email Service, and the Amazon Simple Email Service API is very simple to use and integrates easily with your applications. Most AWS services are already integrated with Amazon SES through the Amazon Simple Notification Service. Some common use cases for Amazon SES include password reset notifications from your custom applications, confirmation emails, et cetera. If you are using a human worker task in Amazon Simple Workflow Service, you can use Amazon Simple Email Service to email a person to make them aware that a task is waiting for them. A message is defined as one communication to one recipient.
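As a quick illustration of that last use case, here's a minimal boto3 sketch of sending a notification email through Amazon SES. The addresses are hypothetical, and while your account is in the SES sandbox, both the sender and the recipient must be verified before this call will succeed.

```python
import boto3

ses = boto3.client("ses")

ses.send_email(
    Source="no-reply@example.com",
    Destination={"ToAddresses": ["worker@example.com"]},
    Message={
        "Subject": {"Data": "A task is waiting for you"},
        "Body": {"Text": {"Data": "A workflow task has been assigned to you."}},
    },
)
```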
OK, and up next is Amazon CloudSearch, another managed service, which supports 34 languages and includes features such as highlighting search terms, autocomplete, faceted search, and geospatial search. With Auto Scaling, it can scale out to handle elevated indexing needs and query loads, then scale back in as the load decreases. This service is ideal for applications and websites that need search functionality, whether it be for articles or for e-commerce inventory based on a category or a brand. And with geospatial search, you can use Amazon CloudSearch to help users find listings, jobs, restaurants, whatever, within, say, a 25-kilometer radius of wherever they are. Since Amazon CloudSearch runs on managed EC2 instances, you pay the per-hour rate based on the instance type used. Amazon CloudSearch requires that you send in the documents to be indexed.

Alright, next up is Amazon Elastic Transcoder, which manages media transcoding for you. You create a transcoding job, specifying the location of your source media file and how you want it transcoded. Amazon Elastic Transcoder also provides transcoding presets for popular output formats, which means that you don't need to guess which settings work best on particular devices. Amazon Elastic Transcoder can do streaming and progressive download: you can store the original versions of your media content in Amazon S3 and configure an Amazon CloudFront download distribution for progressive download of your video and audio files. Frequently accessed media files are cached at the edge to help you scale and give your viewers the best possible performance. Access to Elastic Transcoder is available via the service API, the AWS SDKs, and the AWS Management Console.

OK, up next on our list of amazing services is AWS Lambda, which is a service for running code without the need to provision and manage EC2 instances. AWS Lambda supports Java, Python, and Node.js applications. Functions can be triggered via events that come from Amazon S3, Amazon Kinesis, or Amazon DynamoDB, or they can be invoked directly from the AWS console or from the command line tools. With the pull model, items are plucked off a Kinesis stream or a DynamoDB update stream by a Lambda function; systems designed with a message bus architecture might be candidates for using Lambda in this way. With the push model, an Amazon S3 event occurs and a Lambda function is invoked in response to that event. A common use case is performing image resizing and/or conversion when a raw image is uploaded to an S3 bucket.
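Here's a minimal sketch of what a push-model handler for that image upload use case might look like in Python. The event structure is the one S3 passes to Lambda; the resizing itself is elided, since the point is simply how the function receives the bucket and object key.

```python
def lambda_handler(event, context):
    """Invoked by S3 whenever a new object lands in the bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print("New object uploaded: s3://%s/%s" % (bucket, key))
        # ...download the image, resize or convert it, upload the result...
    return {"processed": len(event["Records"])}
```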
OK, next up on our services list is Amazon AppStream, a service that makes it possible to deliver Windows applications from the cloud to end users without any code modifications, which makes updates easy and fast through a centralized management platform. And best of all, you can run your Windows applications on multiple platforms, including Android, Chrome, iOS, Windows, and Mac devices, and as your user base increases, AppStream can automatically scale to meet demand. A common use case is when you need to deliver applications to a distributed group of users, like a mobile workforce with varying devices and varying data connection speeds. It's perfect for graphics-intensive applications: you can use GPU servers to handle the complex visuals and then stream the output to those users.

OK, next up on our list is Amazon WorkSpaces, a desktop computing service that runs various flavors of Microsoft Windows, and you can also install any other software that you already own licenses for on your WorkSpaces at any time. Amazon WorkSpaces integrates with directory services such as Active Directory or AWS Directory Service. Amazon WorkSpaces is accessible from standard desktops, smartphones, and tablets; all it needs is for the device to have an internet connection. WorkSpaces is perfect for distributed workforces, especially workforces with a bring-your-own-device policy. Amazon WorkSpaces incorporates PC-over-IP, or PCoIP, technology from Teradici. The PCoIP remote display protocol is used between users' devices and their WorkSpaces: it compresses, encrypts, and encodes the user's desktop computing experience and transmits only pixels across any standard IP network to users' stateless PCs, laptops, mobile devices, and zero clients.

Amazon API Gateway helps you deliver mobile and web application back ends. It's a great service if you want to provide secure, reliable access to back-end APIs from mobile apps, web apps, or server apps, whether they're built internally or by third parties. Amazon API Gateway consists of two services: the Amazon API Gateway control service and the Amazon API Gateway execution service. The control service enables you to create a RESTful API for selected back-end services. The back end can be another AWS service, such as AWS Lambda or Amazon DynamoDB, or it can be an existing web application. The execution service lets an app call the API to access the exposed back-end features. The app can be integrated with the API using standard HTTP protocols or using a platform- or language-specific SDK generated by the API creator. The business logic behind the APIs can either be provided by a publicly accessible endpoint that Amazon API Gateway proxies calls to, or it can run entirely as a Lambda function.

OK, AWS Data Pipeline is a service for reliably processing and moving data between compute and storage services. You can use AWS Data Pipeline to move data between an on-premises data source and a cloud data source. It executes in a fault-tolerant way, and there are templates for executing common transformation tasks. AWS Data Pipeline is really useful, for example, when you need to move data from, say, your on-premises system to Amazon RDS, or when you want to move Amazon DynamoDB data through Amazon Elastic MapReduce in order to calculate, say, billing from complex calculation rules. It's a critical service for any scenario where you need to move large batches of data.

Next up on our list is Amazon EC2 Container Service, or Amazon ECS. With Amazon ECS, you can fully utilize the Amazon EC2 instances you pay for without wasting compute cycles. Applications that do not fully utilize an Amazon EC2 instance are really good candidates for moving to Amazon ECS. You can run different layers of the same application or different applications altogether. If your application must scale in a matter of seconds, you just can't beat the speed of scaling in Amazon ECS, especially if your application is under-utilizing its Amazon EC2 instances. And Amazon ECS is a managed service, making it ideal if you do not wish to run your own cluster infrastructure. Amazon ECS has no additional cost; you only pay for the Amazon EC2 instances that you're using in your Amazon ECS clusters.

Alright, up next is AWS CloudFormation. AWS CloudFormation is a building-block service that enables you to provision and manage almost any AWS resource via a JSON-based domain-specific language. You define templates and then use those to provision and manage AWS resources, operating systems, and application code. You can deploy and update a template and its associated collection of resources, which is called a stack, by using the AWS Management Console, the AWS Command Line Interface, or the APIs. AWS CloudFormation supports creating VPCs, subnets, internet gateways, route tables, and network ACLs, as well as creating resources such as Elastic IPs, Amazon EC2 instances, Amazon EC2 security groups, Auto Scaling groups, Elastic Load Balancers, Amazon RDS database instances, and Amazon RDS security groups in a VPC, as examples. AWS CloudFormation also supports Elastic Beanstalk application environments as one of the AWS resource types.

AWS CloudFormation templates are JSON-formatted text files that are comprised of five types of elements: one, a template file format version number; two, an optional list of template parameters, the input values supplied at stack creation time; three, an optional list of output values, e.g. the complete URL of your web application; four, an optional list of data tables used to look up static configuration values; and five, the list of AWS resources and their configuration values. Now, by default, the automatic rollback on error feature is enabled on CloudFormation stacks. This will cause all AWS resources that AWS CloudFormation created successfully for a stack, up to the point where an error occurred, to be deleted, so it rolls all the way back if something goes wrong. That's really useful when you strike a runtime error: you don't want half-finished stacks and resources being consumed that aren't being used. Automatic rollback enables you to rely on the fact that stacks are either fully created or not created at all, which simplifies system administration and layered solutions built on top of AWS CloudFormation.
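Here's a minimal, hypothetical template that touches four of those five element types (everything except the data tables), along with the boto3 call that creates the stack. The AMI ID and stack name are placeholders, and OnFailure="ROLLBACK" simply makes the default rollback behavior explicit.

```python
import boto3

template = """{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "InstanceType": {"Type": "String", "Default": "t2.micro"}
  },
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": {"Ref": "InstanceType"}
      }
    }
  },
  "Outputs": {
    "PublicDns": {"Value": {"Fn::GetAtt": ["WebServer", "PublicDnsName"]}}
  }
}"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="demo-stack",
    TemplateBody=template,
    OnFailure="ROLLBACK",  # delete everything created so far if a resource fails
)
```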
Alright, up next is AWS OpsWorks, which is a configuration management service that enables you to configure and operate applications of all shapes and sizes using Chef. Alright: OpsWorks, Chef. It replaces manual steps: you specify how to scale, maintain, and deploy your applications, and AWS OpsWorks performs the tasks for you using Chef. AWS OpsWorks can also manage machines in locally hosted data centers, as it supports any Linux machine that can install the OpsWorks agent and has a connection to AWS. AWS OpsWorks also supports Windows Server 2012 R2. Now, by default, you can create up to 40 stacks, and each stack can hold up to 40 layers, 40 instances, and 40 apps, so that's an easy one to remember, isn't it? OpsWorks, Chef, and 40.

Next up on our list is AWS CloudTrail. AWS CloudTrail is a web service that records API calls made on your account and delivers log files to your Amazon S3 bucket. An event contains information about the associated API call, the identity of the caller, and the time of the call, plus you get the source IP address, the request parameters, and the response elements returned by the AWS service. So that's a fantastic tool for your high availability environments, in that it gives you really good auditing and monitoring above what you get with CloudWatch. AWS CloudTrail typically delivers an event within 15 minutes of the API call being made, and it delivers log files to your S3 bucket approximately every five minutes. AWS CloudTrail does not deliver log files if no API calls are made on your account, so you'll only get a log file if something has actually been requested. We'll look at a sketch of reading one of those log files at the end of this section.

Alright, another great management tool: AWS CodeDeploy automates code deployments to Amazon EC2 instances with reduced downtime. You reduce downtime by doing rolling updates, managed through a central management tool that manages and monitors deployment health. AWS CodeDeploy is language agnostic and integrates easily with existing software release pipelines. This service is ideal for situations where, say, you want to test deployments in a staging environment before pushing to production. And if you're concerned about rolling back in the event of an issue, whether deployment or application based, CodeDeploy is built to handle that scenario. It's also meant to work with little or no human involvement, which makes it less prone to human error. Just like the other deployment services offered by AWS, there's no additional charge to use CodeDeploy; you pay only for the other services and resources used by it, such as EC2 and S3.

It's really great having so many management and deployment services available in AWS, and as an architect you need to be able to recognize and identify when to use one service over another. So let's look at the differences between AWS OpsWorks, AWS CloudFormation, and AWS Elastic Beanstalk. AWS OpsWorks is a configuration management service for IT administrators and DevOps engineers. It uses a configuration management model based on concepts such as stacks and layers, and it provides an integrated experience for key activities like deployment, monitoring, auto scaling, and automation. AWS CloudFormation is a full building-block service that enables you to provision and manage almost any AWS resource via a JSON-based domain-specific language. So, compared to AWS CloudFormation, AWS OpsWorks supports a narrower range of application-oriented AWS resource types. Now, AWS Elastic Beanstalk is an application management service designed for deploying and scaling web applications and services, which can be developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker: you upload your code and AWS Elastic Beanstalk automatically does the rest. So if the development team has limited knowledge, or perhaps limited authorization, to build cloud infrastructure, then AWS Elastic Beanstalk is a great service for this type of use case.
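As promised, here's a minimal sketch of reading one of those CloudTrail log files from S3. The bucket name and key are hypothetical; CloudTrail delivers gzipped JSON files under an AWSLogs/<account-id>/CloudTrail/<region>/... prefix in your bucket.

```python
import gzip
import json

import boto3

s3 = boto3.client("s3")

obj = s3.get_object(
    Bucket="my-cloudtrail-bucket",
    Key="AWSLogs/123456789012/CloudTrail/us-east-1/2016/01/01/example.json.gz",
)
records = json.loads(gzip.decompress(obj["Body"].read()))["Records"]

# Each record carries the fields described above: who called what, when,
# and from where.
for event in records:
    print(
        event["eventTime"],
        event["eventName"],
        event["sourceIPAddress"],
        event["userIdentity"].get("arn", "unknown"),
    )
```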
Great, okay, so that wraps up our Services at a Glance. I hope that wasn't too much to take in in one go, but try to think about each of those services and what makes each of them unique and how they can be used, especially in relation to developing and designing highly available, fault-tolerant, cost-efficient, scalable solutions.

About the Author
Students: 236970
Labs: 1
Courses: 232
Learning Paths: 187

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to the cloud, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program in recognition of his contributions to the AWS community.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016, Stuart was awarded the 'Expert of the Year Award 2015' from Experts Exchange for his knowledge sharing within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.