AWS Compute Fundamentals
EC2 Auto Scaling
Elastic Load Balancing
ELB & Auto Scaling Summary
Gateway Load Balancer
AWS Outposts and VMware Cloud
This section of the Solution Architect Associate learning path introduces you to the core computing concepts and services relevant to the SAA-C03 exam. We start with an introduction to the AWS compute services, understand the options available and learn how to select and apply AWS compute services to meet specific requirements.
- Learn the fundamentals of AWS compute services such as EC2, ECS, EKS, and AWS Batch
- Understand how load balancing and autoscaling can be used to optimize your workloads
- Learn about the AWS serverless compute services and capabilities
Hello, and welcome to this lecture, where I shall explain what VMware Cloud on AWS is, along with an overview of its underlying architecture.
VMware Cloud on AWS is sold as a service by VMware that allows you to run your applications across VMware's vSphere suite of products within a software-defined data center (SDDC) hosted on top of the AWS public cloud. While utilizing VMware's underlying Cloud Foundation, it gives you access to many native AWS services and features. Coupled with the ability to continue managing your infrastructure with vSphere, vSAN, NSX, and vCenter Server, this enables you to create a secure, flexible, and scalable hybrid cloud infrastructure model for your organization.
If you are currently using VMware on-premises in your data center, and you're looking for a way to migrate your workloads to the Cloud to take advantage of some of the Cloud technology's key characteristics, such as on-demand resourcing, scalability, flexibility, high availability, security, utility-based metering, and regional expansion, then using VMware Cloud on AWS could be a great solution for you.
One thing to bear in mind, however, is that at the moment the service is only available in the US West (Oregon) region, although there are plans to roll the service out to other AWS regions throughout 2018. Now that we know what the service is, let me talk a little about the underlying architecture it runs on.
The AWS architecture used for VMware Cloud on AWS differs from standard AWS compute services such as EC2, which run on top of a Xen hypervisor installed on the host; VMware Cloud on AWS instead runs on bare-metal AWS infrastructure. This primarily means two things. Firstly, the host itself belongs to a single customer.
And secondly, the host is not running any virtualization software, such as the Xen hypervisor that AWS normally uses. Typically, within a normal AWS environment, many customers can share the same underlying host to run their EC2 instances by selecting the shared-tenancy option.
Although it is possible to request a dedicated host if required, this is not the same as the bare-metal infrastructure used with VMware Cloud on AWS. The difference is that an EC2 dedicated host still uses the Xen hypervisor to manage the underlying virtualization, whereas bare-metal servers are stripped of the virtualization software that is normally included with AWS hosts.
This allows VMware to optimize the AWS host with its own ESXi bare-metal type-1 hypervisor, removing any nested virtualization. In fact, VMware Cloud on AWS does not support nested ESXi virtual machines at all.
From a compute perspective, and during initial availability, there is a minimum and maximum compute cluster size for your SDDC. VMware clusters are comprised of a number of physical hosts, where the memory and CPU power from those hosts is aggregated into a pool of resources for all virtual machines in the cluster to consume. The minimum cluster size that you can provision for VMware Cloud on AWS is comprised of four ESXi hosts.
An ESXi host is simply an AWS bare-metal host with VMware's ESXi hypervisor installed on top of the hardware. Each of these four hosts contains the following hardware: 512 GB of memory and dual CPU sockets containing Intel Xeon processors, with each socket containing 18 cores running at 2.3 GHz. As a result, the minimum cluster-size configuration contains 2 terabytes of memory and 144 CPU cores.
The maximum cluster size, should you need to scale your compute resources out, currently stands at 16 ESXi hosts, meaning that a maximum cluster configuration contains 8 terabytes of memory and 576 CPU cores. These cluster boundaries are likely to change over time, so it's always good practice to check the VMware site for the latest information. Now that we know the compute capacity, let's take a look at the storage.
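The cluster capacity figures above follow directly from the per-host specification. As an illustrative sanity check (the per-host figures are those quoted in this lecture for initial availability), a short Python sketch:

```python
# Per-host specification quoted in this lecture (initial availability).
MEMORY_GB_PER_HOST = 512
SOCKETS_PER_HOST = 2
CORES_PER_SOCKET = 18

def cluster_capacity(hosts: int) -> tuple[int, int]:
    """Return (total memory in GB, total CPU cores) for a cluster of ESXi hosts."""
    memory_gb = hosts * MEMORY_GB_PER_HOST
    cores = hosts * SOCKETS_PER_HOST * CORES_PER_SOCKET
    return memory_gb, cores

# Minimum cluster: 4 hosts -> 2 TB of memory, 144 cores.
print(cluster_capacity(4))   # (2048, 144)
# Maximum cluster: 16 hosts -> 8 TB of memory, 576 cores.
print(cluster_capacity(16))  # (8192, 576)
```

These are raw hardware totals; the usable pool available to your VMs will be somewhat lower once management overhead is accounted for.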
vSAN storage clusters also draw their resources from the hosts within the cluster, each of which contains an all-flash array. Each host in the cluster contains eight NVMe (non-volatile memory express) devices, which allow low-latency flash storage, in this case solid-state drives, to be connected directly to the host. This provides a total of 10 terabytes of raw storage capacity per host. As a result, the minimum cluster size provides a 40-terabyte vSAN datastore backed by 32 NVMe devices.
At the other end of the scale, if we maximize the cluster size to 16 ESXi hosts, the datastore would grow to 160 terabytes across 128 NVMe devices.
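The storage totals scale linearly with host count in the same way. A minimal sketch, using the per-host figures from this lecture:

```python
# Per-host vSAN storage quoted in this lecture (initial availability).
NVME_DEVICES_PER_HOST = 8
RAW_TB_PER_HOST = 10  # total raw flash capacity per host

def vsan_capacity(hosts: int) -> tuple[int, int]:
    """Return (raw datastore capacity in TB, NVMe device count) for a cluster."""
    return hosts * RAW_TB_PER_HOST, hosts * NVME_DEVICES_PER_HOST

print(vsan_capacity(4))   # (40, 32)   -> minimum cluster
print(vsan_capacity(16))  # (160, 128) -> maximum cluster
```

Note that these are raw capacities; usable capacity depends on the vSAN storage policy (replication and erasure-coding settings) applied to your workloads.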
It's worth noting that during the initial availability of VMware Cloud on AWS, it's not possible to encrypt data at the datastore level or VM level. To ensure your data remains secure, AWS performs encryption at the firmware level for all NVMe devices. The encryption keys are then managed and controlled by AWS and are not shared with VMware.
There is a restriction regarding clusters when it comes to location, in that the clusters created cannot span multiple availability zones or regions. They are restricted to a single AWS AZ within a single region.
Finally, let me take a look at the networking element of VMware Cloud on AWS, which utilizes VMware NSX. Understand that the networking component on VMware Cloud on AWS is probably the most complicated part. It's important to understand how you can connect to the service from on-premises and also how to integrate the service with your existing AWS account and infrastructure.
VMware NSX is a fundamental component of VMware Cloud on AWS, as it's used for all network connectivity and provides a bridge between three environments: your own on-premises datacenter, the SDDC running on AWS, and a virtual private cloud (VPC) in your AWS account. A VPC is an isolated segment of the AWS Cloud which allows you to provision AWS resources in a virtual network.
Each host within the SDDC cluster contains an Elastic Network Adapter (ENA); these are network interfaces that provide high networking performance with throughput of up to 25 Gbps. To allow connectivity between the vSphere environment in your on-premises datacenter, your SDDC, and your AWS VPC, two gateways are required for two different networks: one for management traffic, such as administration of vCenter Server, and another for compute and application traffic, such as the workload traffic of your virtual machines.
These two gateways are the Management Gateway (MGW) and the Compute Gateway (CGW). The Management Gateway handles management network traffic and works in conjunction with NSX Edge, which provides network edge security and allows users to connect through to your SDDC vCenter Server via the Internet.
From the management network, it's then possible to carry out a number of network-administration tasks, such as creating IPsec VPNs back to your on-premises datacenter or configuring firewalls. The IPsec VPN can allow communications between your on-premises vCenter Server instance and components running in the SDDC.
In addition to this, a second VPN connection can be created to allow VM workload traffic to flow between your on-premises environment and the SDDC via the compute gateway. Once your management gateway is configured, you can then use a feature called Hybrid Linked Mode to connect the vCenter Server in your SDDC with your on-premises vCenter Server.
The compute gateway is used for compute and VM workload traffic, and uses an NSX Edge instance along with a Distributed Logical Router, a DLR, which allows inbound and outbound traffic from your VMs over the second IPsec VPN and to an AWS VPC. Before we move on, I just want to give a little bit more information around the Hybrid Linked Mode.
As I just said, this allows connectivity between your on-premises vCenter Server and the one running in your SDDC. With this active, it allows you to perform a number of management activities, such as performing cold migrations of your workloads between your on-premises environment and your VMware SDDC.
You can use the same credentials that you use for your on-premises vCenter Server with your Cloud SDDC vCenter Server. And using a single vSphere client interface, you can monitor and manage the inventories of both on-premises and in-Cloud SDDC environments. NSX is a complex and essential component, as it manages all networking infrastructure and security across the network, which remains decoupled from the AWS VPC and networking components.
During the creation of your SDDC setup and configuration, you must associate it with an existing VPC within your AWS account; having your own AWS account is a prerequisite of running VMware Cloud on AWS. During this process, an Elastic Network Interface (ENI) will be created within your own AWS account.
This ENI will then link back to the compute gateway within your VMware SDDC. And this connectivity allows your VMs running in your SDDC to take advantage of and to communicate with other AWS resources, such as S3, EC2, etc. The ENI, essentially, acts as an endpoint for your VMware SDDC to gain access to native AWS services.
Your VMs will use the compute gateway as a bridge between your VMware Cloud on AWS SDDC and the ENI running in your VPC. This traffic between your SDDC and the ENI is completely private and uses AWS's own internal network to provide the connection; it does not use an Internet gateway or any public channel. It's an internal private link between your SDDC and your AWS VPC.
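One practical consequence of linking an SDDC to an existing VPC is that the SDDC's networks and the VPC's CIDR range must not overlap, or the private connectivity described above cannot route correctly. A minimal pre-flight check can be sketched with Python's standard library; the CIDR values below are hypothetical examples, not from the lecture:

```python
import ipaddress

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two IPv4 CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Hypothetical example values: an SDDC management network and a VPC CIDR.
sddc_mgmt = "10.2.0.0/16"
vpc_cidr = "10.0.0.0/16"

print(overlaps(sddc_mgmt, vpc_cidr))        # False -> safe to associate
print(overlaps("10.0.32.0/20", vpc_cidr))   # True  -> addressing conflict
```

Running a check like this before provisioning avoids having to tear down and recreate the SDDC with a different management CIDR later.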
The creation of the network configuration of your SDDC can be seen as a split between two roles: Cloud network administrators and Cloud administrators. The Cloud network administrator uses the VMware Cloud on AWS web portal to configure the following components: the initial network setup, VPN connectivity, firewall access rules for VM workloads, and administrator access to vCenter Server.
The Cloud administrator, on the other hand, can then use the vSphere Web client to utilize the infrastructure and configuration made by the Cloud network administrator. The vSphere Web-based client allows you to connect to the SDDC vCenter Server to manage your vSphere environment. Once connected, the Cloud administrator can attach VMs to networks, create new logical networks, and control IP addressing for VMs.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created over 150 cloud-related courses, reaching more than 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.