The course is part of these learning paths
FaaS & IaaS
By the end of this course, you will
- Understand and be able to distinguish between the pros and cons of serverless security
- Understand where to focus additional security controls in a FaaS solution
- Have a general overview of how security differs from that of a typical IaaS solution
The content in this course would be beneficial to:
- Engineers who are focused on delivering secure serverless solutions within an enterprise environment
- Security architects looking to enhance their knowledge of FaaS solutions
- Developers deploying applications within a serverless environment
As a prerequisite for this course, you should have a basic knowledge and awareness of the following:
- A general understanding of what Serverless means
- An understanding of what FaaS and IaaS relate to
- A basic awareness of different attack vectors, such as DoS
- AWS Lambda
- Amazon Cognito
- Amazon API Gateway
- Security controls within IAM
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
About the Author
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data centre and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 50+ courses relating to cloud, most within the AWS category with a heavy focus on security and compliance.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.
Hello and welcome to this lecture, where I want to highlight some of the security benefits of serverless architecture, benefits that alleviate security risks by the very nature of its design.
With Infrastructure as a Service, for example Amazon EC2 instances, you have complete control of that instance from the operating system and upwards. You have administrative privilege to the instance, and as a result, you have a responsibility to maintain security of that instance. You must harden the OS with various security measures and install the latest security patches to prevent any vulnerabilities. Should essential updates be required to fix wide-scale vulnerabilities, such as the Meltdown and Spectre vulnerabilities, you may have to restart your instances and ensure that the patch is successful before using the instance any further. However, there was no impact to customers using AWS Lambda during the Meltdown and Spectre fixes. No customers had to perform restarts or patching of their infrastructure; this was all performed by AWS.
If we remind ourselves of the Infrastructure as a Service Shared Responsibility Model issued by AWS, shown below, we can see that the customer maintains security of the operating system. It is not the responsibility of AWS, and this places a huge amount of responsibility in the customer's hands to ensure that they are applying the latest updates. Even if you are using other security services, such as Amazon Inspector or Amazon GuardDuty, to help you identify potential threats, the fact remains that an organization with a heavy patch administration and management burden can end up with exposures and vulnerabilities across its fleet of instances. In fact, this is the same as if you were maintaining your own infrastructure and servers on premises, outside of the public cloud.
Patch management is an important step in reducing and minimizing the attack surface across your infrastructure, meaning it's a crucial aspect of your security process and strategy. However, with a serverless architecture, this responsibility is passed to AWS. How can you possibly manage and maintain patches and security updates if you don't have access to the server or instance? With this in mind, the underlying infrastructure, operating system, and platform are secured by AWS. The effect of this is that we, as the customer, must assume a level of trust in AWS. We have to trust that AWS is performing an exceptional level of patch management and instance-level security, as these servers generate your ephemeral compute power as and when you need it. It's no secret that AWS treats security as its number one priority, and so you can be fairly certain that the level of patch management taking place on the infrastructure supporting services such as Lambda far exceeds the level most customers would perform on their own Infrastructure as a Service solution. In fact, AWS Lambda is PCI compliant, and as part of this compliance, requirement 6.2 states that you must protect all system components and software from known vulnerabilities by installing applicable vendor-supplied security patches, and install critical security patches within one month of release.
Trying to maintain this kind of patch management on premises would prove difficult for many organizations. Maintaining patch management doesn't just apply to serverless compute services such as Lambda, though; it also applies to Amazon DynamoDB, Amazon SQS, Amazon S3, and Glacier, all of which provide a multi-tenancy platform in which the underlying infrastructure is shared and managed by AWS. This difference of not having to maintain the security of the operating system is the biggest security benefit that serverless solutions provide. Not having to implement instance-level maintenance, assign engineering resource time to perform it, or schedule that maintenance saves you a huge headache and significantly reduces the risk to your applications and environment. Not having to manage at-risk vulnerabilities might be the biggest benefit of a serverless architecture, but it's certainly not the only benefit.
For example, if we take a look at a Denial of Service (DoS) attack, these have a very different effect on serverless compute resources compared to IaaS resources. As you may know, in a DoS attack, compute resources are flooded with requests that aim to overload them, slowing them down to the point where the service becomes unusable and causing a temporary outage. In a serverless environment, we don't have long-lived servers for a DoS attack to target. Instead, when a DoS attack occurs against your serverless infrastructure, the resources are automatically scaled out to handle the additional load, so it's much harder to disrupt the service: there is no single server or cluster of servers to flood with requests. The compute resources needed to handle the flood of requests are scaled out quickly and easily, reducing the risk of your application and service being impacted. Also, if you are using AWS WAF in conjunction with Amazon CloudFront and API Gateway, you can add additional layers of mitigation against these DoS attacks.
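To make the rate-limiting idea concrete, here is a minimal token-bucket sketch of the kind of throttling behaviour that layers such as API Gateway's stage-level limits approximate. The class, rates, and numbers are purely illustrative assumptions, not API Gateway's actual implementation:

```python
import time

class TokenBucket:
    """Illustrative token-bucket throttle: a steady refill rate plus a
    burst capacity, similar in spirit to API Gateway's rate/burst limits."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec       # requests allowed per second at steady state
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True    # request passes through to the backend
        return False       # request would be rejected (e.g. HTTP 429)

# A flood of 100 requests arriving almost instantaneously: only roughly
# the first `burst` are admitted; the rest are shed before reaching compute.
bucket = TokenBucket(rate_per_sec=10, burst=20)
results = [bucket.allow() for _ in range(100)]
print(sum(results))  # approximately 20
```

The point of the sketch is that a throttling layer in front of Lambda sheds excess load before it becomes billable compute, which is exactly the mitigation role API Gateway and WAF play here.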
Although it sounds great that your resources are scaled out during an attack, significantly reducing the effect of the DoS attack itself, there is an adverse side effect. Should your serverless environment be the target of a DoS attack, there will unfortunately be a financial impact, which is why this kind of attack is often referred to as a Denial of Wallet attack. Consider that your serverless solution is being attacked by a flood of requests. On AWS, Lambda would handle this by executing those requests across whatever compute is required, which could mean hundreds of concurrent executions; AWS Lambda has a default soft limit of a thousand concurrent executions. You would still need to pay for the compute resource being consumed. These requests may also go through Amazon API Gateway, which can provide a caching and throttling layer to help minimize the effect. So although your service itself may not be affected as much as it would be in an Infrastructure as a Service solution, your cost would spike due to the way AWS Lambda handles the additional load. Additional compute equals additional cost.
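To get a feel for the Denial of Wallet effect, here is a rough back-of-the-envelope sketch. The pricing constants are assumptions for illustration only and should be checked against current AWS Lambda pricing, and `lambda_flood_cost` is a hypothetical helper, not an AWS API:

```python
# Assumed, illustrative pricing figures -- verify against current AWS pricing.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD per 1M requests (assumed)
PRICE_PER_GB_SECOND = 0.0000166667  # USD per GB-second of compute (assumed)

def lambda_flood_cost(requests, avg_duration_ms, memory_mb):
    """Rough cost estimate for a flood of Lambda invocations."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    # GB-seconds = invocations x duration (s) x allocated memory (GB).
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return round(request_cost + compute_cost, 2)

# 50 million malicious requests at 200 ms each on a 512 MB function:
cost = lambda_flood_cost(50_000_000, 200, 512)
print(cost)  # on the order of tens of dollars for this one burst
```

Even with illustrative figures, the shape of the problem is clear: the service stays up, but every flooded request is billed, so cost rather than availability becomes the attack surface.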
Security is not always about preventing an attacker on the outside network from coming in and causing chaos or stealing and manipulating your data. Security can simply relate to the durability of the data itself: if your data becomes corrupted or is deleted unintentionally, that is also a data security risk. When using EC2 instances, people often use the local ephemeral storage to store data. Whether or not this is best practice, people do it, and it can lead to data security issues. By design, data stored on ephemeral volumes can be lost should an issue occur with the host and/or instance.
Data management is hugely important when it comes to protecting data. Critical data should always be stored using the correct media and service, ones that provide a multitude of data security features, such as Elastic Block Store volumes, S3, and DynamoDB. But sometimes shortcuts are taken and data is stored locally on the instance itself, which poses a risk of data loss. When using a serverless architecture, storing data on an instance is no longer possible, as there simply isn't an instance to do so. It's serverless, right? Well, that's not strictly true. For each invocation of your function, AWS Lambda allocates 512 MB of temporary ephemeral disk space. This should only be used for non-sensitive data that you are okay to lose at any point, much like the local instance store on an EC2 instance; the ephemeral storage can be lost at any time. It is often used as a cache if your Lambda function's RAM is not enough, and you can temporarily store anything you want in this temp location. Without fixed, durable storage associated with a compute resource, developers and architects are forced to use the most appropriate service for data storage, depending on the solution's requirements. The risk of storing data on an ephemeral drive is then eliminated, and in almost all cases this is a very good thing. For more information on AWS Lambda invocation limits, please see the link on screen.
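As a sketch of the pattern just described, a handler might use the ephemeral space as a best-effort cache. The `CACHE_DIR` fallback, the `fetch_reference_data` stand-in, and the handler shape are all illustrative assumptions, not a prescribed AWS pattern:

```python
import json
import os

# Lambda functions get ephemeral scratch space at /tmp; anything written
# there must be treated as disposable and non-sensitive.
CACHE_DIR = os.environ.get("LAMBDA_TMP", "/tmp")

def fetch_reference_data():
    # Stand-in for an expensive call (S3 download, database query, etc.).
    return {"rates": [1, 2, 3]}

def load_with_cache(name):
    path = os.path.join(CACHE_DIR, f"{name}.json")
    if os.path.exists(path):
        # Warm invocation on the same execution environment: reuse the file.
        with open(path) as f:
            return json.load(f)
    data = fetch_reference_data()
    with open(path, "w") as f:
        json.dump(data, f)  # may vanish at any time -- that is acceptable
    return data

def handler(event, context):
    data = load_with_cache("demo_reference_cache")
    return {"statusCode": 200, "body": json.dumps(data)}
```

Because the cache can disappear between invocations, the code always falls back to refetching; the durable copy of the data lives in a proper storage service, never in /tmp.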
The final point I want to mention is an inherent difference between Infrastructure as a Service and Function as a Service. In an Infrastructure as a Service environment, you are likely to have a large number of instances that have been up and running for months or even years on end, processing information continuously. Over this time period, you may have changed software on the instances or altered security policies for access control as your business and staff roles evolved. Maintaining security controls on your instances is an ongoing process, as is the upkeep and maintenance of software, and this can, and often does, lead to permission loopholes on the instance. The longer a server is operational, the more likely it is to be compromised in some way, whether intentionally or unintentionally, through loose permission sets or erroneous software. Again, operating serverless architectures negates this issue by removing long-running servers from your solutions. This also has an added security benefit: without a long-lived server presence for attackers to probe over a period of time, the chances of being compromised within your environment are significantly reduced.
That now brings me to the end of this lecture. Coming up next, I want to flip the topic on its head and look at how serverless can also bring greater risks and threats to your environment, risks that are easier to mitigate when using Infrastructure as a Service rather than Function as a Service.