EXAM PREP - Compute


AWS Compute Fundamentals
  • What is Compute?
  • Amazon EC2 (28m 26s)
  • AWS Batch
  • EC2 Auto Scaling
  • ELB & Auto Scaling Summary (7m 37s)

SAA-C02 Exam Prep (2h 55m)

Please note that this course has been replaced with a new version that can be found here: https://cloudacademy.com/course/compute-saa-c03/compute-saa-c03-introduction/


This section of the Solutions Architect Associate learning path introduces you to the core compute concepts and services relevant to the SAA-C02 exam. We start with an introduction to the AWS compute services, review the options available, and learn how to select and apply AWS compute services to meet specific requirements.


Learning Objectives

  • Learn the fundamentals of AWS compute services such as EC2, ECS, EKS, and AWS Batch
  • Understand how load balancing and auto scaling can be used to optimize your workloads
  • Learn about the AWS serverless compute services and capabilities

So you've reached the end of the compute section, and that was a big section to complete, so congratulations on getting through. In that last section, we covered services and features such as Amazon EC2, Auto Scaling, Elastic Load Balancing, and serverless compute, which focused on AWS Lambda. To help you with your studies for the exam, I want to call out some key points that you should keep at the forefront of your mind, as my one core focus is to ensure that you are prepared and have the knowledge you need when you're sitting in that exam chair. So let's run through when you might select certain services or make specific configuration changes to meet the requirements of different questions.

Starting with EC2, as this is the most frequently mentioned compute service on the exam. We start off by looking at AMIs, Amazon Machine Images. These are used as the baseline template of your EC2 instances, and they're the first element you need to select when creating your instance. For the exam, you should be aware of the different options they offer, such as the operating system that you'll be running, in addition to any other additional software. You will be expected to know what comes with the AMI and what doesn't. So let's look at an example question where this might be the case.

The question is: you are setting up your company's Virtual Private Cloud (VPC). It is time to select the virtual hardware and the software to be provisioned for the instances you will launch within the VPC. You will be doing this by selecting the instance types and the Amazon Machine Images (AMIs). Which item is not defined by the AMI? So here you really need to have experience of working with AMIs and selecting them to understand exactly what's involved. We have four options: the operating system, virtual CPUs, application or system software, or the initial state of patches. The question here is: which item is not defined by the AMI?
So really, all the text before this question we can kind of ignore. When answering your exam questions, try to eliminate all the text that doesn't really help you answer the question and is irrelevant; that will really simplify the question for you. In most questions, it's the last sentence that you need to be interested in, and it's the last sentence that's really asking what you need. So let's go back to it. Is the operating system defined by the AMI? Yes, it is, because we're able to select between Linux OSes, Red Hat, Microsoft, et cetera. So that is one of the selections you make when choosing your AMI, and we know the operating system is included. Virtual CPUs: now, this really relates to the actual instance itself and the underlying host. You select this configuration when defining what size instance you want: how much storage, how much network throughput, how many vCPUs. So that isn't actually included in the AMI itself. Let's keep going and read through all of the answers.
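As an aside, the software-versus-hardware split we're working through can be condensed into a small lookup. This is purely a study aid, not an AWS API, and the attribute names are illustrative:

```python
# Which launch attributes are baked into the AMI (software) and which
# come from the instance type (hardware)? The groupings follow the
# question above; they are illustrative, not an exhaustive AWS reference.
AMI_DEFINED = {"operating system", "application software", "initial state of patches"}
INSTANCE_TYPE_DEFINED = {"virtual CPUs", "memory", "network throughput"}

def defined_by_ami(attribute):
    """Return True if the attribute comes from the AMI rather than the hardware spec."""
    if attribute in AMI_DEFINED:
        return True
    if attribute in INSTANCE_TYPE_DEFINED:
        return False
    raise ValueError("unknown attribute: " + attribute)
```

For the question above, `defined_by_ami("virtual CPUs")` is False, which is why B turns out to be the answer.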

I always recommend doing that: even if you feel you know the answer, it's always worth looking at the remaining answers, 'cause there could be something slightly different there that would be a better answer. C, application or system software. Again, we know that when selecting an AMI we're also selecting the type of software on it, especially if we're using an AMI from the Marketplace; it could be particular vendor software that's pre-baked into the AMI image. So we know that is an option as well. And then finally, the initial state of patches. Again, depending on which AMI you select, it will have an initial state of patches; this might be patches to do with the software or the operating system that you selected. So the answer to this question, which item is not defined by the AMI, is B, virtual CPUs, as the CPU component is hardware-related, whereas the AMI is all software-related.

When it comes to understanding a service, it always helps to get some hands-on experience with it, and EC2 is no exception. You can use our labs for this, and it will really help you to establish familiarity with the different steps involved in creating an instance, which will help you answer a lot of questions. You need to be aware of the different instance types that are available, and how the compute power and performance values fluctuate with instance size. As we discussed, some instances provide better performance depending on whether your workloads are memory-intensive or perhaps require accelerated computing performance to help with data pattern matching. Having an insight into these will help you answer any questions relating to EC2 workload efficiency. Now, more than likely, you'll be asked at some point to determine the best instance purchase option to help you optimize the cost of your environment. I think I've had at least one or two questions on this each and every time I've sat the exam.
So it's imperative that you know the difference between On-Demand, Spot, and Reserved Instances. Depending on the scenario, you will have to demonstrate your understanding of these different purchase options to help you determine under which circumstance you should use each of them. If the question talks about how your workload is predictable and will be required for perhaps one or three years, and you need to optimize costs, then Reserved Instances should come to mind and would likely be the answer. If the question highlights how the workload can be interrupted, and you're again looking to build a cost-efficient solution,

Then this would be a good use case for Spot Instances. So review the key differences between the purchase options and understand their specific use cases to help you optimize costs. Let's take another look at an example question on this point. A company wants to implement a low-cost CI/CD solution for its development team. They will use Jenkins as the CI/CD software and an EC2 instance to run it. The company's CI/CD pipeline is configured to tolerate intermittent outages while processing Jenkins jobs. Which EC2 instance purchase option will be the most appropriate and cost-effective choice? So again, let's read that last point in the question: which EC2 instance purchase option will be the most appropriate and cost-effective choice? There are some key words here, and the ones that stand out for me are 'cost-effective'. So we're looking to save money. Now we know what we're looking for, let's reread the question and see if there are any other key words that jump out. A company wants to implement a low-cost CI/CD solution for its development team: okay, that's fine, but that doesn't really give us any information. They will use Jenkins as the CI/CD software and an EC2 instance to run it: again, that's just there for information purposes and doesn't really help us. The company's CI/CD pipeline is configured to tolerate intermittent outages while processing Jenkins jobs: that's key for us, the fact that we have intermittent outages. So we're looking for a cost-effective purchase option where the instance can tolerate intermittent outages. Now we have that information, let's read through the options we have. First, On-Demand Instances. We know that On-Demand Instances aren't the most cost-effective; they can tolerate intermittent outages, but cost is the problem. So let's keep reading. Next we have Spot Instances.
Now, these are very cost-effective, as we just discussed, and they can also be used for interruptible workloads, because remember, with Spot Instances you get a two-minute warning before you lose the instance if the Spot price rises above your maximum price. So that could be an option.
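As an aside, the purchase-option guidance above can be condensed into a small decision helper. This is purely a study aid built from the rules of thumb in this summary, not an AWS API:

```python
def choose_purchase_option(interruptible, committed_years=0):
    """Rule of thumb from the guidance above (a sketch, not official advice):
    - a predictable workload committed for 1 or 3 years -> Reserved
    - a workload that tolerates interruption -> Spot
    - everything else -> On-Demand
    """
    if committed_years in (1, 3):
        return "reserved"
    if interruptible:
        return "spot"
    return "on-demand"
```

For the Jenkins scenario, the pipeline tolerates intermittent outages and mentions no long-term commitment, so `choose_purchase_option(interruptible=True)` returns "spot".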

Let's keep going. Next we have Reserved Instances. Reserved Instances are very cost-effective, but the question doesn't actually mention that the software will be running for long periods of time, either one year or three years. So I'm reluctant to select Reserved Instances, because unless it's going to be running for a long time, it's not going to be the most cost-effective. And then finally we have Scheduled Instances, but nowhere in this question does it mention that you'll be running this job on a set schedule. So for this answer, I would select Spot Instances, because they can tolerate intermittent outages and they are also very cost-effective. The answer here is B, Spot Instances.

From a security point of view, the tenancy options of your instances can come into play. By default, your instances run on shared tenancy, whereby you share the underlying host with other customers. However, you might receive questions explaining that you need to secure your infrastructure to maintain compliance and ensure that your EC2 instances do not share an underlying host with any other customer. So how could you do that? Well, the answer falls under your tenancy options: either Dedicated Instances or Dedicated Hosts would resolve this issue. Dedicated Hosts provide additional control over the placement of your EC2 instances on those hosts. So ensure you have a good understanding of your options here. Let's now take a quick look at an example question covering tenancy. You are leading the design of your company's new AWS cloud environment. Your team is focusing on the options for the instances within a VPC. You present the concept of tenancy options for instances to the team. It is important to understand the tenancy options and the use cases for each option. What tenancy options are provided for Amazon EC2 instances? Choose three answers. So again, it's that last sentence that we really need to be aware of.
The question simply is: what tenancy options are provided for Amazon EC2 instances? Everything before that is kind of irrelevant. So let's take a look at the answers we have: shared tenancy, dual tenancy, dedicated instances, dedicated hosts, and multi-tenancy. If you're familiar with the tenancy options, then this should be fairly easy for you. Shared tenancy is the default tenancy, so that's definitely one of the options. Then we have dual tenancy, and I've not really heard of this before, so I'd be very wary of that one. Then we have C, dedicated instances. We know this is a type of tenancy, because we use dedicated instances when we need additional compliance and governance controls around having isolated hosts purely for our own account. So dedicated instances is definitely an option as well. Then we have dedicated hosts.

Now again, this is also available, because dedicated hosts are very similar to dedicated instances but offer a bit more flexibility and control over the underlying host. So this is also an option. So far we have A, C, and D. But again, remember to always look at all of the answers, because you could get caught out by thinking you know the answer, and the answer you missed could in fact be the correct one. So let's take a look. The last one is multi-tenancy. Again, I've not really heard of this as a tenancy option, and I know that shared tenancy, dedicated instances, and dedicated hosts are all viable tenancy answers. So the answers here are A, C, and D.

Another common question scenario that comes up tests your knowledge and understanding of how to automatically run commands on the first boot cycle of your instance. For example, you might need to perform operating system updates or install additional software from a repository when your instance first boots up. So how would you achieve this? Well, the answer lies in the user data section of your instance during its configuration, which allows you to enter commands to do exactly that. Also on this point, you can use the metadata of the instance to see the user data configuration for that instance, which can be retrieved through the instance metadata service.

Okay, so let's move on to some thoughts on storage. Here you'll definitely be assessed on your level of understanding of the differences between EC2 instance storage, also known as ephemeral storage, and EBS, the Elastic Block Store. You'll be given a scenario that usually relates to the persistence of data, and sometimes the encryption capabilities too. However the question might be phrased, you must know the key differences between these storage types. Remember: EBS provides persistent storage, and EC2 instance storage provides temporary storage. Remembering this is key.
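These persistence rules can be modelled in a few lines. The sketch below encodes only the behaviour described in this summary (instance store survives a reboot but not a stop, termination, or disk failure; EBS persists); the event names are illustrative:

```python
def data_survives(storage, event):
    """True if data survives the event, per the rules summarized above.
    'storage' is 'ebs' or 'instance-store'; event names are illustrative."""
    if storage == "ebs":
        return True  # EBS is persistent (barring failure of the volume itself)
    if storage == "instance-store":
        # Ephemeral storage survives a reboot or a network failure,
        # but not a stop, termination, or underlying disk failure.
        return event in ("reboot", "network-failure")
    raise ValueError("unknown storage type: " + storage)
```

Running the example question's scenarios through this helper reproduces the answer: stop and disk failure lose the data; reboot and network failure do not.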

Now, the question might explain that your EC2 instance was stopped and restarted and your data was lost, and ask what caused the loss of this data. Or, which storage option would be better suited to store and protect sensitive data? So here's an example question covering these details. The question reads: in which of the following scenarios will data be lost from an EC2 instance store? Choose two answers. So this is looking at ephemeral storage, which, as we know, is temporary; it's not persistent like EBS. So let's take a look. Again, the question is: in which of the following scenarios will data be lost from an EC2 instance store? If the instance stops: yes, certainly. With temporary storage, if the instance stops, then you will lose data, so A is one of the answers. If the instance reboots: well, if the instance just reboots, then it doesn't lose its data; it retains it. So not B. C, a disk drive failure. If your hard disk drive failed, and you don't have any backups of the ephemeral instance store volume, then yes, you would lose access to the data. And then D, a network failure. If we just lost connectivity to the instance, then that is absolutely fine; the data will remain, because it's not impacting the actual drive. So it's only if your disk drive fails, your instance stops, or it terminates that you would lose data from an EC2 instance store. The answer here is A and C.

As you may or may not know, security will always be a part of every AWS certification, and the Solutions Architect is no different. So what are the types of security questions that may appear from an EC2 point of view? Well, key pairs could be one topic to come up; these are used to encrypt the credentials to your instances, allowing you to connect to them. Ensure you are familiar with how to connect to both Windows- and Linux-based instances. Windows uses RDP on port 3389, and Linux uses SSH on port 22.
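Those protocols and ports are worth memorizing. As a quick reference, here they are as a small lookup; the helper name is mine, not an AWS API:

```python
# Default remote-access protocol and port per OS family, as stated above.
CONNECTION = {
    "linux": ("SSH", 22),      # e.g. ssh -i my-key.pem ec2-user@<public-ip>
    "windows": ("RDP", 3389),  # decrypt the admin password with your key pair, then RDP in
}

def connection_for(os_family):
    """Look up (protocol, port) for an OS family, case-insensitively."""
    return CONNECTION[os_family.lower()]
```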
You'll need to know the basic fundamental concepts of key pairs, so you might get a question like this: with regards to Amazon EC2 instances, what is the function of key pairs? Let's take a look at our answers. Is the function of a key pair to encrypt data held on EBS volumes using AES-256 cryptography and then decrypt the data to be read again? No: we know that key pairs relate to connecting and logging on to your EC2 instance once it's up and running, so they're not actually to do with the encryption of data on EBS volumes at all. It's not A. Is the function of a key pair to encrypt and decrypt passwords for AWS IAM accounts on EC2 resources? Again, nothing to do with IAM permissions or accounts; it's purely to do with connecting to your EC2 instance. So it's not B. Is the function of a key pair to encrypt the login information for Linux and Windows EC2 instances and then decrypt the same information, allowing you to authenticate to the instance? That sounds pretty accurate to me; that's exactly what we would expect a key pair to do. And then finally, is the function of a key pair to safely make programmatic API calls over an encrypted channel? Again, no; it relates to connecting to an instance. So the answer here is C.

Okay, so we've covered a lot about EC2, and knowing these elements should have you well-prepared for any questions that come up relating to this service. Let's now take a look at Auto Scaling and how this might present itself in the exam. When questions on Auto Scaling come up, you'll be expected to know the main function of the service and the benefits it brings, such as the ability to automatically increase or decrease your EC2 resources to meet the demands of your applications. For example, you might be asked to implement an efficient way to enhance the performance of an application after users complained of poor response.
This might be caused by a bottleneck: your EC2 resources not being able to handle and process the amount of traffic. By implementing Auto Scaling, you could automatically increase your EC2 fleet size; thereby you would increase the amount of resources and remove the bottleneck. You might also be assessed on your ability to optimize the cost of your EC2 fleet. One way would be to remove unused resources: by implementing Auto Scaling, you can scale in your EC2 fleet by terminating unused capacity based on set thresholds. So Auto Scaling is all about optimizing performance and cost; look out for this as an option whenever you receive a question covering this topic. You will likely see questions with Auto Scaling interlinked with Elastic Load Balancers as well, and they work very well together. Elastic Load Balancers allow you to manage loads across your target groups, whereas EC2 Auto Scaling allows you to elastically scale those target groups based upon demand. So from an exam perspective, ensure you can differentiate between Auto Scaling and ELBs. Also, ensure you are familiar with the different ELBs that exist, as you'll be assessed on when to use one ELB over another in a particular situation. For example, you might be presented with a network scenario where you need to determine where your ELB should be placed: should it be an internal or external ELB? And will it be used to serve encrypted traffic? In which case, what do you need to configure? Well, if using HTTPS, you'll need a server certificate, perhaps issued by AWS Certificate Manager. Another scenario that I've come across assesses your understanding of how your ELBs react to targets in your target group that are marked as unhealthy following a health check. Does the ELB restart the instance? Does it launch another instance? Or does it just ignore it? Well, the ELB just ignores it and continues to send requests to healthy instances.
It's the job of Auto Scaling to launch replacement instances, not the ELB. You might also be asked to select the most appropriate ELB type: Application, Network, or Classic. Let's take a look at an example question where this is the case. A telecommunications company is developing an AWS cloud data bridge solution to process large amounts of data in real time from millions of IoT devices. The IoT devices communicate with the data bridge using UDP, the User Datagram Protocol. The company has deployed a fleet of EC2 instances to handle the incoming traffic, but needs to choose the right Elastic Load Balancer to distribute traffic between the EC2 instances. Which Amazon Elastic Load Balancer is the appropriate choice in this scenario? Okay, so we're looking for the right ELB. Now we know that, let's read back over the question to see if there are any clues that can help us select the right one. A telecommunications company is developing an AWS cloud data bridge solution to process large amounts of data in real time from millions of IoT devices: so it needs something that's hugely scalable. The IoT devices communicate with the data bridge using UDP: so we need a load balancer that is massively scalable, can handle millions of connections, and also supports UDP.
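Requirements like these lend themselves to a simple elimination table. The helper below is a sketch of that reasoning, not an AWS API, and the mapping reflects only the rules of thumb discussed in this course:

```python
def choose_load_balancer(protocol, targets="instances"):
    """Rough elimination logic (illustrative only):
    - third-party virtual appliances -> Gateway Load Balancer
    - TCP/UDP at layer 4, very high throughput -> Network Load Balancer
    - HTTP/HTTPS at layer 7 -> Application Load Balancer
    """
    if targets == "appliances":
        return "gateway"
    p = protocol.lower()
    if p in ("tcp", "udp"):
        return "network"
    if p in ("http", "https"):
        return "application"
    raise ValueError("unsupported protocol: " + protocol)
```

For millions of UDP connections to EC2 instances, this lands on the Network Load Balancer.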

This already gives me a clue as to which one I think we should use, so let's go through and have a look at our options. First, we have the Network Load Balancer. We know the Network Load Balancer balances requests based on the TCP and UDP protocols; this is exactly what we're looking for, and it offers ultra-high performance as well. So I would say this is the Network Load Balancer, but let's just take a quick look at the other options. We have the Application Load Balancer, which operates at the application layer. We also have the Classic Load Balancer, which we know has a minimal feature set and isn't really recommended unless you're running the EC2-Classic network. And then finally, we have the Gateway Load Balancer. The Gateway Load Balancer is used to easily scale and manage any third-party virtual appliances that you might have; it effectively acts as a gateway to any tools or appliances that you're already familiar with as you're moving to the cloud. So it's certainly not the most appropriate choice in this scenario, as we are trying to load balance to EC2 instances, not third-party appliances. Now, I wouldn't worry too much about the Gateway Load Balancer, because in our experience it doesn't appear in the AWS Solutions Architect Associate exam, which focuses more on the Network Load Balancer and the Application Load Balancer. So the answer here is the Network Load Balancer, because it supports UDP and is massively scalable to millions of connections.

Okay, so the last area I want to cover is AWS Lambda. This service isn't covered extensively on the exam, but you certainly need to be aware of it and when it would be used. The key is knowing that it is a serverless compute service designed to run application code in event-based environments without you having to manage and provision your own EC2 instances. It's really cost-effective, as you only pay for compute power when Lambda functions are invoked.
In addition, you're charged based on the number of times your function runs, known as invocations. So you might be presented with a question where you have an application that allows you to share photos that are uploaded to S3, but every time a new object is created, you want to run code to create a thumbnail of that object. What service would you use to do this with the least administrative effort? This is a perfect example of when Lambda would be used, as its code is triggered by an event, and in this case, the upload of a new object is that event. And there are no resources to provision or administer, as it's serverless. So let's take a quick look at another example relating to AWS Lambda. Which of the following best describes the pricing model for AWS Lambda? A: you are charged only for the number of requests. Yes, you definitely are charged for the number of requests, but not just the requests alone, so it's not A. B: you are charged only for the duration of the Lambda function when invoked. You are charged for the duration of the Lambda function, but not purely that. C: you are charged a flat fee based on the memory allocation selected. No, that is incorrect. So it has to be D: you are charged for both the number of requests and the duration of the Lambda function when invoked. That is correct. So the answer here is D.

Okay, so that now brings me to the end of this summary. We've highlighted some of the key points from the previous course, and we've looked at how to approach a number of different questions that might come up relating to compute. So hopefully you should feel ready and prepared to tackle any questions in this area. Let's now move on to the next section.
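As a final aside, the request-plus-duration pricing model from that last question can be sketched as a small calculator. The default rates below are illustrative approximations of published us-east-1 pricing and will drift over time, so always check the current AWS Lambda pricing page; the sketch also ignores the free tier and billing granularity:

```python
def lambda_monthly_cost(invocations, avg_duration_ms, memory_mb,
                        price_per_million_requests=0.20,
                        price_per_gb_second=0.0000166667):
    """Cost = request charge + duration charge (in GB-seconds), per the model above.
    Default rates are illustrative assumptions, not a pricing reference."""
    request_cost = invocations / 1_000_000 * price_per_million_requests
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * price_per_gb_second
```

For example, one million invocations averaging 100 ms on 128 MB works out to roughly $0.41 a month at these rates, which shows how both components of answer D contribute.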

About the Author

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.