
AKS Security

Overview
Difficulty: Intermediate
Duration: 1h
Students: 458
Rating: 4.3/5

Description

This course focuses on implementing security controls, maintaining the security posture of an Azure environment, and protecting data, applications, and networks, showing you how to configure security for your containers and virtual machines.

The content of this course is ideally suited to those looking to become certified Azure security engineers.

For any feedback, queries, or suggestions relating to this course, please contact us at support@cloudacademy.com.

Learning Objectives

  • Understand how to configure VM security including VM endpoints and system updates
  • Configure baselines
  • Understand key Azure networking components
  • Configure AKS security
  • Obtain a basic understanding of Azure Container Registry and how to create registries in Azure
  • Manage vulnerabilities in Azure

Intended Audience

This course is intended for people who want to become Microsoft certified Azure security engineers, or those who are tasked with implementing security controls, maintaining the security posture of an Azure environment, or protecting data, applications, and networks.

Prerequisites

To get the most from this course, you should have a moderate understanding of Microsoft Azure and of basic security principles.

Transcript

Hi there, welcome to AKS security. In this lecture, we're going to take a look at various concepts that you need to be familiar with in order to configure AKS security. We'll look at master components security, node security, network security, and Kubernetes secrets.

In AKS, Kubernetes master components are provided by Microsoft as part of the managed service. That being the case, each AKS cluster has its own single-tenanted, dedicated Kubernetes master. This dedicated Kubernetes master, which is managed and maintained by Microsoft, provides the API server, scheduler, and other functionality.

The Kubernetes API server uses a public IP address and is accessible via a fully qualified domain name. Access to the API server can be controlled via Kubernetes RBAC controls and Azure AD.
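To make the RBAC side of this concrete, here is a minimal sketch of a Kubernetes Role and RoleBinding (expressed as Python dicts in the shape kubectl accepts as YAML) that grant an Azure AD group read-only access to pods in one namespace. The role name, namespace, and group object ID are placeholders, not values from the course.

```python
def make_pod_reader_role(namespace: str) -> dict:
    """Role allowing read-only pod access within a single namespace."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "pod-reader", "namespace": namespace},
        "rules": [{
            "apiGroups": [""],           # "" = the core API group
            "resources": ["pods"],
            "verbs": ["get", "list", "watch"],
        }],
    }

def bind_role_to_aad_group(namespace: str, group_object_id: str) -> dict:
    """RoleBinding tying the Role to an Azure AD group, referenced by object ID."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "pod-reader-binding", "namespace": namespace},
        "subjects": [{
            "kind": "Group",
            "name": group_object_id,     # placeholder AAD group object ID
            "apiGroup": "rbac.authorization.k8s.io",
        }],
        "roleRef": {
            "kind": "Role",
            "name": "pod-reader",
            "apiGroup": "rbac.authorization.k8s.io",
        },
    }
```

With Azure AD integration enabled on the cluster, members of the bound group can read pods in that namespace but nothing else.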

We already know that AKS nodes are actually Azure VMs that we need to manage and maintain. Linux nodes run Ubuntu using the Moby container runtime. Windows Server nodes run Windows Server 2019. They, too, use the Moby container runtime. Whenever a new AKS cluster is created or when an existing cluster is scaled up, the nodes of that cluster are automatically deployed with the latest OS security updates and configurations.

Then, in the case of Linux nodes, Azure automatically applies the latest OS security patches on a nightly basis. However, it's important to note that if a particular Linux OS update requires a host reboot, that reboot is not automatically performed. Instead, you can manually reboot the node when it's convenient. You can also use Kured, which is an open-source reboot daemon for Kubernetes.
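The signal Kured watches for is simple: on Ubuntu nodes, a patch that needs a reboot leaves a sentinel file at /var/run/reboot-required, and Kured then cordons, drains, and reboots the node. A minimal sketch of that check, with the path parameterised so it can be exercised without a real node:

```python
from pathlib import Path

def reboot_required(sentinel: str = "/var/run/reboot-required") -> bool:
    """Return True if the node has a pending reboot after OS patching.

    On Ubuntu, package updates that need a host reboot create this
    sentinel file; Kured polls for it before draining the node.
    """
    return Path(sentinel).exists()
```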

Windows Server nodes are updated via Windows Update. However, Windows Update does not automatically run nor apply the latest updates. Instead, what you should do on a regular basis is perform an upgrade on your Windows Server node pool, or pools, in your AKS cluster. What this upgrade process does is create nodes that run the latest Windows Server image and patches. Once you have the new nodes created, you can remove the older ones.
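The upgrade step above maps to a single Azure CLI call. The sketch below builds that invocation as an argv list; the flag names follow the `az aks nodepool upgrade` command, but treat the exact parameters as an assumption to verify against current Azure CLI documentation, and the resource names are placeholders.

```python
def nodepool_upgrade_cmd(resource_group: str, cluster: str,
                         nodepool: str, k8s_version: str) -> list:
    """Build the argv for upgrading one AKS node pool.

    The upgrade replaces the pool's nodes with ones built from the
    latest Windows Server image and patches.
    """
    return [
        "az", "aks", "nodepool", "upgrade",
        "--resource-group", resource_group,
        "--cluster-name", cluster,
        "--name", nodepool,
        "--kubernetes-version", k8s_version,
    ]
```

You would hand this list to subprocess.run (or run the equivalent command directly in a shell) on a regular schedule for each Windows node pool.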

I should point out that, when nodes are deployed, they're deployed into a private virtual network subnet, with no public IP addresses assigned. SSH is, however, enabled by default, and you can use this SSH access, via a node's internal IP address, to manage your nodes.

AKS nodes use Azure Managed Disks for storage. The data that gets stored on these managed disks is automatically encrypted at rest by Azure.

It's important to understand that Kubernetes environments, in general, aren't considered to be completely safe for hostile multi-tenant usage. While security features like Pod Security Policies and role-based access controls make exploits a little more difficult, the best way to achieve true security when running hostile multi-tenant workloads is to leverage a hypervisor.

For more information on cluster isolation in AKS, visit the URL that you see on your screen (https://docs.microsoft.com/en-gb/azure/aks/operator-best-practices-cluster-isolation)

To facilitate security and connectivity with on-prem networks, AKS clusters can be deployed into existing Azure virtual networks that have Azure Site-to-Site VPN connections or Express Route connections back to your on-prem network. You can define Kubernetes ingress controllers with private IP addresses, so those services are only accessible via an internal network connection.
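For a service that should only be reachable over that internal connection, AKS supports an annotation that requests a private, VNet-internal load balancer instead of a public one. Here is a sketch of such a Service manifest, built as a Python dict in the shape kubectl accepts as YAML; the service name, ports, and selector are placeholders.

```python
def internal_lb_service(name: str, namespace: str, port: int,
                        target_port: int, app_label: str) -> dict:
    """Service of type LoadBalancer provisioned on the VNet, not a public IP."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
            "name": name,
            "namespace": namespace,
            "annotations": {
                # AKS-specific: place the load balancer on the internal network
                "service.beta.kubernetes.io/azure-load-balancer-internal": "true",
            },
        },
        "spec": {
            "type": "LoadBalancer",
            "ports": [{"port": port, "targetPort": target_port}],
            "selector": {"app": app_label},
        },
    }
```

The resulting service gets a private IP from the cluster's subnet, so it is only reachable from the virtual network or from on-prem via the VPN or ExpressRoute connection.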

I should mention here that as you create services with load balancers, port mappings, or ingress routes, the Azure Kubernetes Service will automatically modify the applicable network security groups to ensure traffic flows as necessary.

The last security component that I want to touch on is the concept of Kubernetes secrets. So what is a Kubernetes secret and what does it do? Well, a Kubernetes secret injects sensitive data, like access credentials or keys, into pods. After you've created a secret using the Kubernetes API, you can request that secret when you define your pod or deployment.

A Kubernetes Secret is only supplied to a node that contains a scheduled pod that requires it, and instead of being written to disk, the secret is stored in tmpfs.

It's important to note that when the last pod on a node that requires a secret is deleted, the secret itself is also deleted from that node's tmpfs. I should also point out that secrets are stored within a specific namespace. That being the case, they can only be accessed by pods within that namespace.

Using secrets reduces the amount of sensitive information that needs to be defined in YAML manifests for pods and services. Instead of defining sensitive data in the manifest, you request the secret that's stored in the Kubernetes API server. Taking this approach, you only provide the specific pod with access to the secret.
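The flow described above can be sketched in two small helpers: one builds the Secret manifest (the API stores data values base64-encoded), and one builds the pod container env entry that pulls its value from the secret rather than inlining it. The secret name, namespace, and keys are placeholders.

```python
import base64

def make_secret(name: str, namespace: str, data: dict) -> dict:
    """Build a Secret manifest; values are base64-encoded as the API requires."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name, "namespace": namespace},
        "type": "Opaque",
        "data": {k: base64.b64encode(v.encode()).decode()
                 for k, v in data.items()},
    }

def env_from_secret(var_name: str, secret_name: str, key: str) -> dict:
    """Container env entry that sources its value from the Secret at runtime."""
    return {
        "name": var_name,
        "valueFrom": {"secretKeyRef": {"name": secret_name, "key": key}},
    }
```

Note that base64 is an encoding, not encryption; the confidentiality guarantees come from RBAC, namespacing, and the tmpfs-only delivery to nodes described above.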

Lectures

  • Introduction
  • Configuring Endpoint Security within VMs
  • Configuring and Monitoring Antimalware for VMs
  • Configuring Virtual Machine Security
  • Hardening Virtual Machines
  • Configuring System Updates for Virtual Machines
  • Starting a Runbook from the Azure Portal
  • Configuring Baselines
  • Azure Networking
  • Configuring Authentication
  • Container Isolation
  • Azure Container Registry
  • Creating a Container Registry
  • Implementing Vulnerability Management
  • Conclusion

About the Author
Students: 19,877
Courses: 36
Learning paths: 8

Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skillset that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.

In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.

In his spare time, Tom enjoys camping, fishing, and playing poker.