Oracle High Availability

Contents

Introduction & Overview
  1. Introduction (1m 27s)
  2. Overview (3m 42s)
Course Summary
  12. Summary (2m 56s)
Difficulty: Intermediate
Duration: 53m
Students: 137
Ratings: 5/5
Description

High availability and disaster recovery are key to ensuring reliable business continuity. While SAP workloads are mainly confined to Azure's infrastructure layer, it is still possible to utilize many Azure functions and features to enhance system reliability with relatively little effort. This course looks at when, where, and how to use Azure's built-in infrastructure redundancy to improve system resiliency and how various database high availability options are supported.

Learning Objectives

  • Understand the key aspects of high availability and disaster recovery
  • Learn about availability sets and availability zones
  • Learn about Azure Site Recovery and how to implement it through the Azure portal
  • Learn how to set up an internal load balancer in the context of SAP workloads
  • Understand the Azure support options for Pacemaker and STONITH
  • Learn how to implement Data Guard mirroring via the Azure CLI
  • Set up Windows Failover Cluster and SQL Server Always On through the Azure portal

Intended Audience

This course is intended for anyone who wants to use Azure's built-in infrastructure redundancy to enhance the reliability and resiliency of their SAP workloads.

Prerequisites

To get the most out of this course, you should be familiar with Azure, Azure CLI, SAP, SQL Server, and STONITH.

Transcript

Oracle has several high availability and disaster recovery features that are supported on Azure virtual machines, such as GoldenGate. I want to look specifically at the database-oriented products, Data Guard and Far Sync. Together, these products offer synchronized local replication and asynchronous replication at a distance, covering scenarios of localized hardware or data center failure as well as region-wide failure.

Data Guard provides high availability by using a synchronized replica of the primary database. It is the synchronization that ensures zero data loss. Synchronous replication is acceptable within a data center, or even within an Availability Zone, but isn't practical at any distance. The Far Sync functionality introduced in Oracle 12c, as the name suggests, takes care of keeping a distant standby copy of the primary database up to date. Oracle synchronizes databases by shipping and applying redo logs between them, which is analogous to log shipping in SQL Server terms. A minimal configuration would be a primary database with a closely located Far Sync instance and a standby or replica database at a distance. The Far Sync instance contains no data files and exists purely to coordinate the asynchronous delivery of redo logs to the standby database. Oracle recommends using Availability Sets and premium or ultra SSD disks to provide as much localized protection as possible. As good as the promise of zero data loss at a distance is, the reality still runs into the same issues as any synchronous data replication solution.
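
As a rough sketch of that recommendation, the Azure CLI commands below place a primary VM and a Far Sync VM in the same Availability Set on premium SSD storage. The resource group, VM names, sizes, and the Oracle image URN are assumptions chosen purely for illustration; verify the image URN and sizing against your own subscription before running anything like this.

# Resource group and Availability Set for the primary and Far Sync tier (names are illustrative)
az group create --name rg-oracle-dg --location eastus

az vm availability-set create \
  --resource-group rg-oracle-dg \
  --name avset-oracle-dg

# Primary database VM on premium SSD storage. The image URN is a placeholder;
# list Oracle-published images with: az vm image list --publisher Oracle --all -o table
az vm create \
  --resource-group rg-oracle-dg \
  --name vm-ora-primary \
  --availability-set avset-oracle-dg \
  --image Oracle:oracle-database-19-3:oracle-database-19-0904:latest \
  --size Standard_D4s_v5 \
  --storage-sku Premium_LRS \
  --admin-username azureuser \
  --generate-ssh-keys

# Closely located Far Sync instance: it holds no data files, only redo,
# so a smaller VM size in the same Availability Set is usually enough
az vm create \
  --resource-group rg-oracle-dg \
  --name vm-ora-farsync \
  --availability-set avset-oracle-dg \
  --image Oracle:oracle-database-19-3:oracle-database-19-0904:latest \
  --size Standard_D2s_v5 \
  --storage-sku Premium_LRS \
  --admin-username azureuser \
  --generate-ssh-keys

# Premium data disk for the primary's data files and redo logs
az vm disk attach \
  --resource-group rg-oracle-dg \
  --vm-name vm-ora-primary \
  --name disk-ora-primary-data \
  --new \
  --size-gb 512 \
  --sku Premium_LRS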

If the Far Sync instance is located in the same data center as the primary, it will fall victim to any data center-wide failure. If it is located in another data center within the same region, synchronous replication latency may become an issue. Unless database responsiveness is an absolutely critical priority, placing the primary database and the Far Sync instance in different Availability Zones, that is, in different data centers within the same region, is the safest option. In theory, this architecture provides a very low to zero RPO, but it says nothing about the RTO: the time it takes to get the distant standby database into write mode and serving requests. An alternative, albeit more expensive, solution would be Data Guard replication between primary and standby databases within an Availability Zone and Far Sync replication between the "local" standby and a standby located in a different region.
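
To make that placement concrete, here is a minimal Azure CLI sketch of the zone-separated layout: the primary in one Availability Zone, the Far Sync instance in a second zone of the same region, and the distant standby in a different region. The regions, zone numbers, resource names, image URN, and sizes are again assumptions for illustration only, and because a VM cannot belong to both an Availability Set and an Availability Zone, this is an alternative to the Availability Set layout sketched earlier.

# Primary and Far Sync instance in different Availability Zones of the same region
az group create --name rg-oracle-dg-zonal --location eastus

az vm create \
  --resource-group rg-oracle-dg-zonal \
  --name vm-ora-primary \
  --zone 1 \
  --image Oracle:oracle-database-19-3:oracle-database-19-0904:latest \
  --size Standard_D4s_v5 \
  --storage-sku Premium_LRS \
  --admin-username azureuser \
  --generate-ssh-keys

az vm create \
  --resource-group rg-oracle-dg-zonal \
  --name vm-ora-farsync \
  --zone 2 \
  --image Oracle:oracle-database-19-3:oracle-database-19-0904:latest \
  --size Standard_D2s_v5 \
  --storage-sku Premium_LRS \
  --admin-username azureuser \
  --generate-ssh-keys

# Distant standby in a second region, fed asynchronously by the Far Sync instance
az group create --name rg-oracle-dg-dr --location westus2

az vm create \
  --resource-group rg-oracle-dg-dr \
  --name vm-ora-standby \
  --image Oracle:oracle-database-19-3:oracle-database-19-0904:latest \
  --size Standard_D4s_v5 \
  --storage-sku Premium_LRS \
  --admin-username azureuser \
  --generate-ssh-keys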

About the Author
Students: 19499
Courses: 65
Learning Paths: 12

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good-quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.