
Review of Sample Questions

Overview
Difficulty: Intermediate
Duration: 26m
Students: 1168

Description

This course enables you to identify and implement best practices for monitoring and debugging in AWS, and to understand the core AWS services, uses, and basic architecture best practices for deploying apps on AWS. 

The first course shows you how to use Amazon CloudWatch to monitor and troubleshoot environments and applications.

In the second course we review some of the AWS sample questions, working through each scenario to help prepare for sitting the Certified Developer exam.

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

As part of preparing ourselves for the Certified Developer exam, let's go through some of the sample questions that are provided by AWS. The first one reads: which of the following statements about SQS is true? Option a, messages will be delivered exactly once and messages will be delivered in First In, First Out order. Now if we cast our mind back to the session we did on Simple Queue Service, we remember that messages can be delivered more than once due to the distributed nature of the SQS service, i.e. it's able to handle a virtually unlimited number of messages at any one given time, and as a result of that distributed nature, the order of messages can't be guaranteed. So the first option doesn't look great. Option b is that messages will be delivered exactly once and message delivery order is indeterminate. The second half of that option is correct but unfortunately the first half is not, so option b doesn't look that good either. Option c is messages will be delivered one or more times, correct, and messages will be delivered in First In, First Out order. Well, again the first half of the option is correct but the second half unfortunately is not. Option d is messages will be delivered one or more times and the message delivery order is indeterminate. Out of those four options, option d looks the best. Just thinking through some other things that could crop up in this type of question, remember that the default message retention period is four days, and you can set the message retention period to a value between 60 seconds and 14 days.

Let's have a look at the next one. EC2 instances are launched from Amazon Machine Images. A given public AMI, and keep looking for keywords in the descriptions of these questions. Option a, the AMI can be used to launch EC2 instances in any AWS region. Well, if we cast our mind back to our session on regions and availability zones, we recall that AMIs are tied to the region in which they're stored. You can copy Amazon Machine Images within or across AWS regions using the AWS Management Console, the command line, or the EC2 API for that matter, all of which support the copy image action. Both EBS-backed AMIs and instance-store backed AMIs can be copied, and copying a source AMI results in an identical but distinct target AMI with its own unique identifier when it's copied from one region to another. So the first option is incorrect. Option b, it can only be used to launch EC2 instances in the same country as the AMI is stored. Well, straight away we can discount this option 'cause we don't talk about countries with AWS, we talk about regions and availability zones. Option c, it can only be used to launch EC2 instances in the same AWS region as the AMI is stored. Now by default that is looking like the most correct option so far, remembering that we can copy an AMI between regions, but that's something we need to do first before we can launch an EC2 instance from it. Option d, an AMI can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored. Yeah, with an AMI we get a little bit more flexibility than that, so for my dollar I'm going for option c on this one.
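As a side note, copying an AMI into another region before launching from it is a single API call. Here's a minimal sketch using boto3; the AMI ID, AMI name, and region names are placeholders.

```python
import boto3

# Call CopyImage from the *destination* region (us-west-2 here) to pull in
# an AMI that currently lives in us-east-1. IDs and names are placeholders.
ec2_west = boto3.client("ec2", region_name="us-west-2")

response = ec2_west.copy_image(
    Name="my-app-ami-copy",                 # name for the new AMI in us-west-2
    SourceImageId="ami-0123456789abcdef0",  # AMI in the source region
    SourceRegion="us-east-1",
)

# The copy is a distinct AMI with its own identifier in the destination region,
# and only then can it be used to launch EC2 instances there.
print("New AMI ID:", response["ImageId"])
```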
Let's look at the next question. Company B provides an online image recognition service and utilizes Simple Queue Service to decouple system components for scalability, very good. The SQS consumers poll the imaging queues as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles, a good keyword popping up here, and increasing costs, another good keyword, with empty responses. Now, how can Company B reduce the number of empty responses? If you think back to our Simple Queue Service session, hopefully some lights are going off in your head right now about long polling and short polling, so let's run through these options. Option a is set the imaging queue VisibilityTimeout attribute to 20 seconds. Now that's a useful feature, but I don't think it's going to help us reduce the number of empty responses we're getting. Remember, the VisibilityTimeout window literally stops that message from being retrieved again for a particular length of time, so I'm going to pass on option a. Option b, set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds. Now that's a good way of changing the polling time on our queue, so we're essentially looking at a long poll here with a 20-second wait set. I like that idea. Our next option is set the imaging queue MessageRetentionPeriod attribute to 20 seconds. I can't see that really changing what we see as the problem here. Option d, set the DelaySeconds parameter of a message to 20 seconds. What is a delay queue, I hear you ask? A delay queue allows you to postpone the delivery of a message in a queue by a number of seconds. Delay queues are similar to visibility timeouts in that both features make messages unavailable to consumers for a specific period of time. The difference is that for delay queues a message is hidden when it is first added to the queue, whereas for visibility timeouts a message is hidden only after it has been retrieved from the queue. If we send a message with the DelaySeconds parameter set to 30, the message will not be visible to consumers for the first 30 seconds that it resides in the queue. The default value for DelaySeconds is zero. Setting that in this instance isn't going to reduce the number of empty responses we get back.

I'm thinking about long polling, which allows SQS to wait until a message is available in the queue before sending a response. That's going to reduce the number of times we might get an empty response if we're polling on a regular basis. One benefit of long polling with SQS is that it reduces the number of empty responses you get: when there are no messages available to return in reply to a ReceiveMessage request, long polling allows the queue to wait until a message is available before sending a response. Unless the connection times out, the response to the ReceiveMessage request will contain at least one of the available messages, if any, up to the maximum number of messages specified in the ReceiveMessage call. Another benefit is that it helps to eliminate false empty responses, where messages are available in the queue but are not included in the response. That can happen with SQS when we're using short or standard polling, which is the default behavior, i.e. a wait time of zero, where only a subset of the servers, based on a weighted random distribution, are queried to see if any messages are available to be included in the response. On the other hand, when long polling is enabled, SQS queries all of the servers. That reduces the number of empty responses and also helps reduce costs.
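For reference, here's a rough sketch of what option b looks like in practice with boto3; the queue URL and region are placeholders.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/imaging-queue"  # placeholder

# Enable long polling on the queue itself: any ReceiveMessage call will now
# wait up to 20 seconds for a message before returning an empty response.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# Long polling can also be requested per call with WaitTimeSeconds.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
```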
Setting that ReceiveMessageWaitTimeSeconds attribute is most likely going to reduce the number of empty responses we get when we poll the queue, so I'm going to go with option b on that one.

Next question: you have reached your account limit for the number of CloudFormation stacks in a region. How do you increase your limit? If you need to remind or refresh yourselves about limits, there is a good page that lists all of the limits that apply under the AWS default settings, called awsservicelimits.html. Let's think this through: the question isn't really asking us what the limit is, it's asking us how we would go about changing it. Option a, make an API call. No, that's unlikely to be something we would do to change a limit. A limit is there to ensure that we don't abuse a service, or to give us the best possible experience, so changing it by an API call is not really going to be practical, is it? Option b, contact AWS. Yep, straight away that rings out as the most practical way of changing a limit on our account. Option c, use the console. I can't think of any limits that can be set or changed from the console, so I'm going to discount that one. Option d, you cannot increase your limit. Now if you ever get really stuck on these types of questions and think, "I have no idea," then try to think through the modus operandi of AWS, which is a customer-obsessed organization that tries to answer customers' requirements and ensure customers get the very best value from its services. That's what Amazon is all about, so you can almost always pick whatever is the most practical and sensible way of making that happen. If you get really desperate and stuck, then contacting AWS stands out as the most helpful and likely way that AWS can help you extend your limits. For this particular use case with CloudFormation stacks, if you remember the stack limits, and we probably don't need to remember this at all for the exam, the maximum number of stacks you can create is 200. So the way I would go about increasing that limit, if I had exceeded it, would be to contact AWS by logging a support call and asking for it to be increased.

Let's have a look at our next question. Which statements about DynamoDB are true? All right, so this is a multi-choice question, which are always difficult 'cause you have to get them all right. My advice for approaching these is to use an elimination process: try to eliminate the ones that you think are least likely to be true, and then we can start working with a smaller set. Let's look at the first one. DynamoDB uses a pessimistic locking model. A lot of this one is in the wording. There are two mechanisms for locking data in a database: pessimistic locking and optimistic locking. In pessimistic locking, a record is locked immediately when the lock is requested, while with optimistic locking, the record is only locked when the change is actually applied to the record. With optimistic locking, a resource is not actually locked when you request it, and the downside of pessimistic locking is that a resource is locked from the time it is first accessed in the transaction until the transaction is finished. If we did have to choose one option over the other, we'd say DynamoDB supports optimistic locking over pessimistic locking.
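To make that distinction concrete, here's a minimal sketch of optimistic locking implemented as a conditional write with boto3; the table name, key, attributes, and version scheme are all hypothetical.

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical table with a numeric "version" attribute on each item.
table = boto3.resource("dynamodb").Table("Orders")

# Optimistic locking: nothing is locked up front. Read the item, remember
# its version, and only write if the version hasn't changed in the meantime.
item = table.get_item(Key={"order_id": "1234"})["Item"]
current_version = item["version"]

try:
    table.update_item(
        Key={"order_id": "1234"},
        UpdateExpression="SET #st = :s, #v = :new",
        ConditionExpression="#v = :expected",
        ExpressionAttributeNames={"#st": "order_status", "#v": "version"},
        ExpressionAttributeValues={
            ":s": "SHIPPED",
            ":new": current_version + 1,
            ":expected": current_version,
        },
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        # Someone else updated the item first; re-read and retry if needed.
        pass
    else:
        raise
```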
You can disable optimistic locking in DynamoDB by changing the DynamoDBMapperConfig.SaveBehavior value to CLOBBER instead of UPDATE. When we're looking at read consistency, we can set whether we want reads to be eventually consistent or strongly consistent. Anyway, let's move on: we'll eliminate option a. Option b, DynamoDB uses an optimistic concurrency model. An optimistic concurrency model is certainly possible. The DynamoDB transaction library provides a way to perform atomic reads and writes across multiple DynamoDB items and tables, and it actually does all of the nuanced item locking, commits, applies, and rollbacks for you, so you don't have to worry about building your own state machines or other schemes to make sure that writes eventually happen across multiple items. Let's look at option c, DynamoDB uses conditional writes for consistency. We'll come back to that one. Option d, DynamoDB restricts item access during reads. Remember, we're looking to eliminate the ones that we think are not likely to be correct. We talked about locking, and I think we can eliminate d and e straight away. Let's just go back to conditional writes. With a conditional write, when you write an item you can use operations such as PutItem, UpdateItem, or DeleteItem. By using conditional expressions with these operations, you can control how and under what conditions an item will be modified; you can prevent an update from occurring if an item doesn't meet conditions that you specify beforehand. So yes, we use conditional writes for consistency with DynamoDB, and I think we use optimistic concurrency control, so options b and c are the most likely to be correct here.

Next question: what is the one key difference between an Amazon EBS-backed and an instance-store backed instance? We should straight away be thinking, "I know this." Cast your mind back to our session on EBS: Elastic Block Store is persistent storage, so an EBS volume that's attached to an instance is persistent, and if we stop that instance, the EBS volume and its data will continue to exist until we do something else with it. With an instance-store backed instance, that's not the case. The storage in an instance store is ephemeral: once an instance-store backed instance is terminated, that data is lost. All right, so let's just think through that quickly and look at the question options. Option a, instance-store backed instances can be stopped and restarted. No, instance-store backed instances can't be stopped at all, they can only be rebooted or terminated, and anything in the instance store is lost when they're terminated. Option b, Auto Scaling requires using Amazon EBS-backed instances. Auto Scaling is made up of two components, one of which is the launch configuration, and the launch configuration specifies what the machine is, what AMI it will use, and what user data will be presented to it when it starts. Nothing in a launch configuration requires an instance to be EBS-backed rather than instance-store backed, so that option is not correct. Option c, Amazon EBS-backed instances can be stopped and restarted. Well, thinking it through, EBS storage is persistent storage, which means that an EBS-backed instance can be stopped and restarted, so that's looking like a fairly good option.
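As a quick sketch of why that matters, stopping and restarting an EBS-backed instance is one API call each way with boto3; the instance ID and region are placeholders, and the same stop call would be rejected for an instance-store backed instance.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # placeholder, assumed to be EBS-backed

# Stop the instance: the attached EBS volumes persist, so the instance can
# be started again later with its data intact.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Start it back up. An instance-store backed instance can't be stopped at
# all; it only supports reboot or terminate.
ec2.start_instances(InstanceIds=[instance_id])
```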
Option d is that Virtual Private Cloud requires EBS-backed instances. Well, if you think through what a Virtual Private Cloud does, a VPC provides the networking and security infrastructure for our environment; it doesn't require us to use EBS-backed instances. So we can quickly eliminate d, having already eliminated a and b, and in this instance go with option c.

Let's look at our next question. Your application is trying to upload a six gigabyte file to Simple Storage Service, which is S3, and you receive a "Your proposed upload exceeds the maximum allowed object size." error message. What is a possible solution for this? Straight away you should be thinking back to our session on S3, and remembering that an S3 object can be between zero and five terabytes in size, and that the largest object we can upload in a single PUT is five gigabytes. Let's look at these options and think about what that might mean. Option a, there is none, Simple Storage Service objects are limited to five gigabytes. No, S3 objects are limited to five terabytes; it's a single PUT that's limited to a five gigabyte file, so that's not correct. Option b, use the multi-part upload API for this object. Yep, that's looking like a pretty good option, because it lets us upload large files in parts. Option c, use the large object upload API for this object. Well, actually, I haven't heard of a large object upload API. You'll often get things like this thrown into questions to try and catch you out; if it doesn't sound right, it probably isn't. Option d, contact support to increase your object size limit. Well, this is one thing that isn't actually changeable, unfortunately; with S3 that's a fixed limit, and it's not something we can request from Amazon support. Option e, upload to a different region. That's not going to change the object size limit, so we can discount that one straight away. Just going back to our core facts for S3: objects can be between zero and five terabytes in size, and while we can store large files, the largest object we can upload in a single PUT is five gigabytes. In this question our best option would be option b, to use the multi-part upload API for this object. That's our last question.
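For reference, here's a minimal sketch of how a large upload like that can be handled with boto3's transfer manager, which performs a multipart upload automatically; the bucket name, key, file path, and thresholds are placeholders.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Force multipart for anything over 100 MB and upload parts in parallel.
# A 6 GB file is sent in chunks, so it never hits the 5 GB single-PUT limit.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=8,
)

s3.upload_file(
    Filename="/tmp/large-image-archive.bin",  # placeholder path
    Bucket="my-image-bucket",                 # placeholder bucket
    Key="uploads/large-image-archive.bin",
    Config=config,
)
```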

About the Author

Students: 51665
Courses: 77
Learning paths: 28

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession" as everything AWS starts with the customer. Away from work, his passions are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.