
Kinesis Data Streams Security

Contents

Course Introduction
  - Introduction (Preview, 2m 26s)
Utilizing Managed Services and Serverless Architectures to Minimize Cost
Decoupled Architecture
Amazon API Gateway
  - Advanced API Gateway (Preview, 11m 29s)
Amazon Elastic MapReduce
  - Introduction to EMR (Preview, 1m 46s)
Amazon EventBridge
  - EventBridge (Preview, 7m 58s)
Design Considerations

Difficulty: Intermediate
Duration: 4h 43m
Students: 80
Rating: 3/5
Description

This section of the AWS Certified Solutions Architect - Professional learning path introduces common AWS solution architectures relevant to the AWS Certified Solutions Architect - Professional exam and the services that support them. These services form a core component of running resilient and performant architectures. 


Learning Objectives

  • Learn how to utilize managed services and serverless architectures to minimize cost
  • Understand how to use AWS services to process streaming data
  • Discover AWS services that support mobile app development
  • Understand when to utilize serverless services within your AWS solutions
  • Learn which AWS services to use when building a decoupled architecture
Transcript

Hello!  I'm Stephen Cole, a trainer here at Cloud Academy and I'd like to welcome you to this lecture presenting a discussion on the topic of Amazon Kinesis Data Streams security.

As a quick review, I'm going to start with a discussion of security principles before moving into the topic of how to secure a Kinesis Data Stream.

In a production account, it's important to follow the Principle of Least Privilege.  

There are a number of reasons to do this, and they generally fall under three categories: organizational, technical, and personal.

In general, we are stewards of the data in our care.  From an organizational standpoint, giving people the least amount of access required to do their job ensures that the risk for unauthorized access and use of data is limited.

From a technical standpoint, building a set of controls for data stewardship that is modular makes compliance within an organization easier to manage, obey, and enforce.

Considering personal reasons and human nature being what it is, it is always easier to give people more privileges over time than it is to take access away.  Try taking away permissions for anything and see what happens.  Put another way, can you remember a time that someone took access privileges away from you?  Even if you don't need them--or having them adds unwanted stress to your life--it has an emotional impact.

Save yourself a world of trouble and use the Principle of Least Privilege to protect the data in your care.

I'm going to do a quick review of how Identity and Access Management works inside AWS.  

Cloud Academy has courseware on AWS IAM and, if you're going to spend any amount of time architecting solutions in the AWS cloud, it's an important service to understand.  

For now, if you're new to how AWS IAM works or just need a little reminder, I'm going to take a moment to talk about it.

Once you've spent enough time working inside the AWS cloud, you will become painfully aware that the default permission for all services is DENY.  That is, inside the AWS cloud, in order to be able to do anything, permission has to be explicitly granted.

To do this, a policy document is created and attached to the user, group, or role that needs access to a service or resource.  Inside an IAM policy, there are three basic elements: Effect, Action, and Resource.

Effect is either Allow or Deny.  An explicit Deny cannot be overridden by an Allow.  

The Action is like the verb of a sentence.  It describes what can be done.  

For example, this statement will allow the actions  DescribeStreamSummary and ListStreams.  

Using an asterisk after the word Get means that any action that begins with the word Get is allowed.  This includes actions like GetShardIterator and GetRecords.
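As a sketch, the statement being described might look like this--the region, account number, and stream name here are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStreamSummary",
        "kinesis:ListStreams",
        "kinesis:Get*"
      ],
      "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/stream1"
    }
  ]
}
```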

The Resource is the Amazon Resource Name of the stream.

If you're new to ARNs, the acronym stands for Amazon Resource Name, and ARNs uniquely identify resources inside AWS.  Individual fields are separated by colons.

I'll walk through each field starting with arn.

arn stands for--as I've said already--Amazon Resource Name. It's basically self-identification.

The second field is the partition; in standard AWS regions, it is aws.

The third field is the service, in this case, Kinesis.

Fourth is the region.  

The fifth field is the account number.

Last is the stream.  It has a prefix of stream and a forward slash that is followed by a specific stream name.
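Putting the fields together, a stream ARN might look like this--again, the region, account number, and stream name are placeholders:

```
arn:aws:kinesis:us-east-1:111122223333:stream/stream1
```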

Asterisks are allowed in ARNs and they act as a wildcard character.  

Here is an IAM Policy that allows users to add data to any stream in an account.

The asterisk after PutRecord in this IAM policy means that the API calls PutRecord() and PutRecords() will work.
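A sketch of that policy, with a placeholder account number and a wildcard for the stream name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:PutRecord*",
      "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/*"
    }
  ]
}
```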

Here's a longer policy document.  It allows a user or group to perform the DescribeStreamSummary, GetShardIterator, and GetRecords operations on a specific stream--in this case it is stream1-- and ListStreams on any stream.

Notice that, in the ListStreams statement of this policy, the Resource is an asterisk.  This means the action applies to any stream in the account where this policy is used.
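A policy along those lines might look like this--the account number and stream name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStreamSummary",
        "kinesis:GetShardIterator",
        "kinesis:GetRecords"
      ],
      "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/stream1"
    },
    {
      "Effect": "Allow",
      "Action": "kinesis:ListStreams",
      "Resource": "*"
    }
  ]
}
```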

An identity-based policy in my account cannot, on its own, grant access to streams in an account other than my own; cross-account access takes more than that.

To reiterate, it's important to use the principle of least privilege in production accounts.  We do not own data, we're stewards of the data in our care.  It is important to protect information that has been entrusted to us.

That said, for my demos and experiments, it is much easier to use either an account that has full administrative privileges or a policy that gives full privileges for Amazon Kinesis Data Streams.

I'm not lazy, I'm efficient.  In spite of how it looks.

This policy gives full access to all streams in a chosen account.  

I could go a step further and use this policy.

This will allow all actions on all streams in an account.
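As a sketch, the version scoped to Kinesis might look like this; replacing the Action value with a bare asterisk is the even broader variant:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:*",
      "Resource": "*"
    }
  ]
}
```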

This policy is dangerous and should not be anywhere near a production account.

Using it in production could lead to an RBE, a Resume Building Event.  As in, "Welcome to your last day of work."

You have been warned.

On the topic of security, a best practice for IAM policies that work with Kinesis Data Streams is to create four different policies based on roles.

There should be a policy for Administrators, one for stream resharding, one for Producers to do writes, and one for Consumers to do reads.

To illustrate--and this is not an exhaustive list--consider this example.

Administrators need to have IAM Actions such as CreateStream, DeleteStream, AddTagsToStream, and RemoveTagsFromStream.

For those that need to reshard a stream the IAM actions include MergeShards and SplitShard.

Producers need the IAM actions DescribeStream, PutRecord, and PutRecords to be able to write Data Records.

Consumers, similarly, need the IAM actions GetRecords and GetShardIterator to be able to read from a Kinesis Data Stream.
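As a sketch, a Producer policy along these lines might look like this--the stream ARN is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStream",
        "kinesis:PutRecord",
        "kinesis:PutRecords"
      ],
      "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/stream1"
    }
  ]
}
```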

There will be other actions required for each of these roles.  

For example, as I think about it, the policy for Consumers could probably also use DescribeStream. 

There's no single best policy for any job description.  It depends on what other services and actions are needed in your organization.  Be sure to consult the documentation to see what is possible and match the appropriate actions as needed.

It's possible that you will discover a feature of streaming data that you weren't aware of or learn a better way to manage an existing process.

Also, I should mention that it is a general AWS best practice to use temporary security credentials in the form of IAM roles whenever appropriate.  Assume a role when needed and let it go when done. 
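For reference, what makes a role assumable is its trust policy.  A minimal sketch that lets principals in the same account--the account number here is a placeholder--assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```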

In addition to using IAM policies to control access, Kinesis Data Streams can encrypt data in flight by using HTTPS endpoints.

Using the AWS Key Management Service, KMS, encryption is available for data at rest.  

Using KMS, Data Records are encrypted before they are written to the Kinesis stream storage layer and decrypted after they are retrieved. As a result, Data Records are encrypted at rest within the Kinesis Data Streams service to meet regulatory requirements and enhance data security.
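When server-side encryption is enabled, producers and consumers also need permissions on the KMS key itself: producers need kms:GenerateDataKey and consumers need kms:Decrypt.  A producer-side sketch, with a placeholder key ARN:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kms:GenerateDataKey",
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/12345678-1234-1234-1234-123456789012"
    }
  ]
}
```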

It is possible to encrypt data on the client side.  The data then has to be decrypted on the client side as well.  This is a manual process and much more challenging than using KMS because Kinesis Data Streams has no built-in support for client-side encryption; you have to manage the keys and the encryption logic yourself.

If you require FIPS 140-2 encryption, FIPS endpoints are available. If you don't know what FIPS is, you probably don't need to use a FIPS endpoint.  

However, for reference, FIPS stands for Federal Information Processing Standard.  It is a set of standards that describes document processing, encryption algorithms, and other information technology standards for use within non-military government agencies and by government contractors & vendors who work with those agencies.

If Kinesis Applications are in a VPC, use VPC Endpoints to access Kinesis Data Streams.  All network traffic between Kinesis and the application will stay within the VPC.

Essentially, VPC Endpoints allow you to connect to AWS services from within a VPC without traversing the public Internet.  They remove the need to route traffic from a private subnet through an Internet Gateway or a NAT Gateway.

Endpoints are managed by AWS, scale horizontally automatically, and are highly available.

What happens is that an ENI--an Elastic Network Interface--is provisioned using a private IP address inside a subnet in the VPC.  A security group is used to restrict access to this ENI and it is used to send traffic to the desired service such as Kinesis Data Streams.
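Interface endpoints also support endpoint policies, which further restrict what can be reached through the endpoint.  A sketch that limits the endpoint to a single stream--the ARN is a placeholder:

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "kinesis:*",
      "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/stream1"
    }
  ]
}
```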

That's it for this lecture.

As a review, I covered issues relating to security using Kinesis Data Streams.

Remember to use the Principle of Least Privilege when provisioning resources inside AWS.  

This might be less important in a development environment however, in a production environment, one of your personal and professional goals should always be to avoid creating an RBE, a Resume Building Event.

The Best Practices for Kinesis Data Streams include having separate security policies for Administrators, for stream resharding, for Producers to do writes, and for Consumers to do reads.

For streaming data that is in-flight, HTTPS endpoints are available.

Inside a Kinesis Data Stream, Data Records can be encrypted and decrypted using KMS.  

When working with Kinesis Data Streams and VPCs, use a VPC Endpoint to keep network traffic inside the VPC and away from the Public Internet.

That's it for now.  I hope you found this informative and engaging. I'm Stephen Cole with Cloud Academy, thanks for watching!

About the Author
Students: 37823
Courses: 26
Learning Paths: 20

Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.