Scaling massive content with Alfresco and Amazon Aurora

How Alfresco scaled to billions of documents on AWS

John Newton – founder of Alfresco and its CTO since 2005 – used his AWS re:Invent presentation to explain how Alfresco has been scaling to billions of documents and building apps capable of accessing that huge amount of content, all while moving from large data centers to cost-effective management in the cloud.
Alfresco fully embraced the open-source model and built a collaborative environment that currently serves more than 1,800 customers, eleven million users, and seven billion documents – with fewer than 400 employees.

Why is content at scale important?

The initial challenge was to store one billion documents, which was quite an impressive amount of data ten years ago – definitely over the petabyte scale. Today, of course, a Google search for the word “Amazon” returns that many pages, but things were different in 2005. Apparently someone tried configuring one million SharePoint servers back then, which of course didn’t work well.
The motivation behind this challenge lies in the sweeping digital transformation – cloud, mobile, social networks, big data – that is driving huge flows of content and creating a whole new range of digital businesses. Enterprise Content Management (ECM), for instance, is a six-billion-dollar market.
So what are the main use cases for content at scale?

  • Enterprise document libraries
  • Medical records
  • Transaction and logistics records
  • Government archives
  • Claims processing
  • Research and analysis
  • Real-time video
  • Discovery and litigation
  • Loans and policies
  • IoT (Internet of Things)

Given this wide range of use cases, you can see why the numbers have grown so high: users need to search and retrieve documents, sync and share files, and manage and archive all kinds of content – records, images, and media. That’s why we have witnessed a conceptual transition from Content to Data, Files, and then EFSS (Enterprise File Sync and Share). And that’s why John Newton admitted that working with such content architectures is a significant big data problem.
Since the main use case driving Alfresco’s innovation came from insurance companies, the team also jumped onto the new Amazon Aurora database as soon as they could.

What is content at scale?

Content at scale is not just a matter of billions of documents. It also means dealing with large numbers of geographically distributed users who demand a certain level of read/write throughput. Concurrency and volume are serious, constant concerns, and large repositories in particular require both scaling up (clustered servers, databases, indexes, read replicas, etc.) and scaling out (sharding, federation, replication, shared-nothing architectures, etc.).
In the face of these issues, traditional approaches are limited in what they can provide for redundancy, elasticity, agility, geographic distribution, provisioning and administration.
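
To make the “scaling out” side above concrete: sharding, in its simplest form, just maps each document to one of N independent repositories or database shards. Here is a toy Java sketch of the routing idea – the shard names and count are made up for illustration, not taken from Alfresco’s architecture:

    import java.util.List;

    public class ShardRouter {
        // Hypothetical shard endpoints; a real deployment would hold
        // connection pools or repository clients instead of plain names.
        private final List<String> shards;

        public ShardRouter(List<String> shards) {
            this.shards = shards;
        }

        // Stable routing: the same document ID always lands on the same shard.
        public String shardFor(String documentId) {
            int bucket = Math.floorMod(documentId.hashCode(), shards.size());
            return shards.get(bucket);
        }

        public static void main(String[] args) {
            ShardRouter router = new ShardRouter(List.of("shard-0", "shard-1", "shard-2"));
            System.out.println(router.shardFor("doc-42")); // deterministic placement
        }
    }

Real systems tend to use consistent hashing or a directory service so shards can be added without remapping everything; this toy version remaps whenever the shard count changes.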

Why Amazon Aurora?

Alfresco’s solution is based on Amazon’s RDS, EBS, S3, and Glacier services. The whole system is open source and developed in Java.
John decided to move to Amazon Aurora for three main reasons:

  1. Aurora is highly available (sync/async replication).
  2. Aurora offers a significantly more efficient use of network I/O.
  3. Aurora is self-healing and fault-tolerant, with instant crash recovery.
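
One practical consequence of the replication model in reasons 1 and 2 is cheap read scaling: an Aurora cluster exposes a writer endpoint for all writes and a load-balanced reader endpoint in front of the replica fleet. Below is a minimal Java sketch of that read/write-splitting pattern – it is not Alfresco code; the endpoints, credentials, and the alf_node query (alf_node being the Alfresco repository’s node table) are placeholders:

    // Minimal read/write-splitting sketch against an Aurora MySQL cluster.
    // Writes go to the single writer endpoint; SELECTs go to the reader
    // endpoint, which load-balances across replicas. Requires the MySQL
    // JDBC driver on the classpath; endpoints and credentials are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class AuroraRouting {
        private static final String WRITER_URL =
            "jdbc:mysql://demo.cluster-abc123.us-east-1.rds.amazonaws.com:3306/alfresco";
        private static final String READER_URL =
            "jdbc:mysql://demo.cluster-ro-abc123.us-east-1.rds.amazonaws.com:3306/alfresco";

        static Connection connect(boolean readOnly) throws SQLException {
            // Route reads to the replica fleet and writes to the writer instance.
            String url = readOnly ? READER_URL : WRITER_URL;
            return DriverManager.getConnection(url, "alfresco", "secret");
        }

        public static void main(String[] args) throws SQLException {
            try (Connection read = connect(true);
                 Statement st = read.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM alf_node")) {
                if (rs.next()) {
                    System.out.println("node count: " + rs.getLong(1));
                }
            }
        }
    }

The trade-off is the usual one: replica reads may be slightly stale, so write-then-read flows should stick to the writer endpoint.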

To illustrate the kind of modifications required to move the system to Aurora, John showed us a blank slide: beyond a simple configuration switch, no modification was required.
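Alfresco’s database connection lives in alfresco-global.properties, so the switch plausibly amounts to pointing the JDBC URL at the Aurora cluster endpoint. A hedged sketch – the endpoint, database name, and credentials below are placeholders, not values from the talk:

    # alfresco-global.properties – point the repository at an Aurora
    # (MySQL-compatible) cluster instead of a self-managed database.
    # Endpoint and credentials are placeholders.
    db.driver=com.mysql.jdbc.Driver
    db.url=jdbc:mysql://my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com:3306/alfresco
    db.username=alfresco
    db.password=secret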
The Alfresco team also ran large-scale benchmarks for concurrent load and access (BM4), involving 1.2 billion documents and 500 simulated concurrent users (driven with Selenium) over one hour of constant load.
The system completed more than 15 million transactions at a load rate of 1,200/s, with 80% database CPU utilization during bulk load, and Aurora’s indexes remained efficient at 3.2 TB. There were no size-related bottlenecks, and John assured his audience that the very same infrastructure could sustain up to 20 billion documents.
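
For a sense of how such a test is driven, here is a minimal, hypothetical load driver in Java. The real benchmark used Alfresco’s own benchmarking framework with Selenium, so the thread pool and the empty doTransaction() below are stand-ins for actual repository operations:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class LoadDriver {
        static final int USERS = 500;              // simulated concurrent users
        static final long DURATION_MS = 3_600_000; // one hour of constant load

        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(USERS);
            AtomicLong completed = new AtomicLong();
            long deadline = System.currentTimeMillis() + DURATION_MS;

            // Each simulated user issues transactions back-to-back until time is up.
            for (int i = 0; i < USERS; i++) {
                pool.execute(() -> {
                    while (System.currentTimeMillis() < deadline) {
                        doTransaction();
                        completed.incrementAndGet();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(DURATION_MS + 60_000, TimeUnit.MILLISECONDS);
            System.out.printf("completed %d transactions (%.0f/s)%n",
                    completed.get(), completed.get() / (DURATION_MS / 1000.0));
        }

        static void doTransaction() {
            // Placeholder for one repository operation, e.g. a document
            // upload or metadata query against the Alfresco API.
        }
    }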

Written by

Alex is a Software Engineer with a great passion for music and web technologies. He's experienced in web development and software design, with a particular focus on frontend and UX.
