AWS re:Invent 2015: Designing for SaaS

Next-Generation Software Delivery Models on AWS

Software delivery has been evolving. Not too many years ago, most software lived on-premises. Then came the web-hosted app, and then robust cloud solutions like those built from various combinations of AWS services.

At this week’s re:Invent, Sajee Mathew – Solutions Architect at AWS – talked about the best approaches to building Software as a Service (SaaS) solutions. There are basically three possible approaches:

  • Isolated customer stacks, which offer independent resources for each customer.
  • Containerization on shared platforms, which uses Amazon EC2 Container Service (ECS) and Docker to provide “slices” of AWS.
  • Pure SaaS shared architecture by means of on-demand resources.

The isolated customer stacks model means that, for every new customer, you simply replicate the stack. It makes billing and provisioning quite simple, but it comes with a catch: thousands of customers translate into an unmanageable number of stacks.
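In practice, replicating the stack usually means launching the same CloudFormation template once per customer. Here is a minimal sketch in Python of how the per-tenant request might be built; the stack naming convention, template URL, and `TenantId` parameter are illustrative assumptions, not details from the talk. The resulting dict matches the real keyword arguments of boto3's `create_stack` call.

```python
# Sketch: provision one CloudFormation stack per customer (isolated-stack model).
# The naming convention and TenantId parameter are illustrative assumptions.
import re

def tenant_stack_request(tenant_id: str, template_url: str) -> dict:
    """Build the kwargs for cloudformation.create_stack() for one tenant."""
    # Stack names only allow letters, digits, and hyphens.
    safe_id = re.sub(r"[^A-Za-z0-9-]", "-", tenant_id)
    return {
        "StackName": f"saas-tenant-{safe_id}",
        "TemplateURL": template_url,
        "Parameters": [{"ParameterKey": "TenantId", "ParameterValue": tenant_id}],
    }

# In production you would pass this dict straight to boto3:
#   boto3.client("cloudformation").create_stack(**tenant_stack_request(...))
request = tenant_stack_request("acme.corp", "https://example.com/stack.yaml")
print(request["StackName"])  # saas-tenant-acme-corp
```

The catch mentioned above shows up here: every onboarded customer adds one more full stack to update, monitor, and pay for.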

Containerization requires fewer resources, as a considerable part of your infrastructure can be shared among all your customers. Assuming it won’t require too many code changes, it’s a good solution for new apps.

Pure SaaS shared architecture is definitely the best approach for a brand new app. Despite the need for every part of the application to be multi-tenant-aware, you can benefit from economies of scale: new customers are onboarded automatically onto a single, auto-scaling infrastructure.
Slide showing SaaS design

SaaS architecture

As part of his presentation, Sajee defined each component of a pure SaaS shared architecture (along with AWS services built to deliver it):

  • SaaS Ordering: the entry point for purchasing access to SaaS apps; Amazon Simple Workflow (SWF) can be useful for orchestrating the ordering process.
  • SaaS Provisioning: this component manages the fully automated deployment of resources and represents the cornerstone of elasticity and scalability. CloudFormation is used to define the stack, while OpsWorks, Elastic Beanstalk, and ECS are used to deploy components.
  • Application Lifecycle Management: the biggest challenge in traditional architectures. Operations need to be transparent to customers and achieve zero downtime, with automation layers supporting continuous integration. CodePipeline, CodeCommit, and CodeDeploy are all powerful management tools here.
  • SaaS Billing: this component aggregates per-customer metering and rate information. You can use DynamoDB to store bills and aggregated data, and EMR to process usage info and generate bills.
  • SaaS Analytics: the aggregation point for all data sources in the development of a data warehouse. Analytics can provide useful analysis of app performance and usage that drives decisions.
  • SaaS Authentication and Authorization: a single store for all user data, third-party SSO, and corporate directories. You could use IAM for policy-based access, KMS for key management, Cognito for mobile and web authentication, Directory Service, and RDS.
  • SaaS Monitoring: real-time monitoring and awareness of application health require the highest scale and availability. There are plenty of off-the-shelf solutions if you don’t feel like using Amazon Kinesis and CloudWatch.
  • SaaS Metering: this component gives your system the ability to understand and track usage and activity, and to support audit requirements for billing. You might use AWS Lambda to feed a metering queue on SQS.
Slide showing SaaS Components Analytics
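To make the metering and billing components above concrete, here is a minimal sketch in plain Python (no AWS calls) of the kind of records a Lambda-fed SQS metering queue might carry, and how a billing job could aggregate them into per-customer charges. The record shape and the rate table are assumptions for illustration; the talk names the services, not the data model.

```python
# Sketch: aggregate per-tenant metering records into bills.
# The record shape and rate table are illustrative assumptions; in the
# architecture above, records would arrive via an SQS metering queue
# (fed by Lambda) and heavy aggregation would run on EMR.
from collections import defaultdict

RATES = {"api_call": 0.0001, "gb_stored": 0.023}  # assumed USD rates

def aggregate_bills(records):
    """Sum metered usage per tenant, apply per-metric rates, round to cents."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["tenant_id"]] += rec["quantity"] * RATES[rec["metric"]]
    return {tenant: round(amount, 2) for tenant, amount in totals.items()}

records = [
    {"tenant_id": "t1", "metric": "api_call", "quantity": 10_000},
    {"tenant_id": "t1", "metric": "gb_stored", "quantity": 50},
    {"tenant_id": "t2", "metric": "api_call", "quantity": 2_000},
]
print(aggregate_bills(records))  # {'t1': 2.15, 't2': 0.2}
```

Keeping metering (raw usage events) separate from billing (rates and aggregation) is what lets each component scale and evolve independently, as the component list suggests.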

SaaS best practices

Sajee also provided some best practices for building SaaS solutions:

  • Separate the platform from the program. Avoid tight coupling. Applications will change a lot over time, but core services should remain reusable so they can support a whole fleet of SaaS applications.
  • Optimize for cost and performance. Go for horizontal scalability at every level and create small parallel resource units that scale more efficiently. Also, use scalable services such as DynamoDB and Aurora.
  • Design for multi-tenancy: every layer of the shared architecture should be tenant-aware.
  • Know your data lifecycle. Value and usage change over time, so leverage storage options accordingly.
  • Collect everything and learn from it. Reliably collect as many metrics as possible and store them long term. The goal is to know your customers in order to learn and profit through analytics.

Written by

Alex Casalboni

Alex is a Software Engineer with a great passion for music and web technologies. He's experienced in web development and software design, with a particular focus on frontend and UX.
