The past few years have witnessed a rapid growth in the popularity of main-memory OLTP engines. Unlike their traditional disk-based counterparts, these modern OLTP engines adopt radically different architectures that offer very high throughput and near-linear scalability on modern multisocket multicore servers. Rivalling the growth of main-memory OLTP is the rate at which enterprises are migrating to private and public cloud settings. As on-premise database installations move to the cloud, modern OLTP engines face a new set of challenges.

First, energy efficiency has emerged as a new optimization target for cloud-hosted OLTP engines. Disk arrays were the dominant source of power consumption in traditional OLTP engines. Modern OLTP engines, in contrast, use DRAM as the primary data store. With disks and disk controllers out of the picture, the CPU dominates power consumption in main-memory OLTP engines. Unfortunately, attempts at reducing power consumption via CPU power/frequency scaling have not yielded proportionate improvements in energy efficiency because of the associated performance impact.
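The tension between power savings and energy efficiency can be sketched with a simple back-of-envelope model. All constants below are illustrative assumptions, not measurements: dynamic power is taken to scale as f·V² with voltage roughly proportional to frequency (hence the cubic term), static/uncore power is taken as fixed, and a CPU-bound in-memory workload is assumed to lose throughput linearly with frequency.

```python
def energy_per_txn(freq_ghz, base_freq_ghz=2.0, p_dyn_watts=60.0,
                   p_static_watts=40.0, txn_per_sec=1_000_000):
    """Joules per transaction under a toy DVFS model.

    Dynamic power scales as f * V^2 with V roughly proportional to f
    (hence the cubic term); static power does not scale with frequency;
    throughput of a CPU-bound workload drops linearly with frequency.
    All parameter values are hypothetical.
    """
    scale = freq_ghz / base_freq_ghz
    power_watts = p_dyn_watts * scale ** 3 + p_static_watts
    throughput = txn_per_sec * scale
    return power_watts / throughput

# Halving the frequency cuts total power from 100 W to 47.5 W,
# but throughput also halves, so the static component is amortized
# over half as many transactions: energy per transaction improves
# by only about 5% despite a >50% drop in power.
print(energy_per_txn(2.0))  # 1e-4 J/txn at full frequency
print(energy_per_txn(1.0))  # 9.5e-5 J/txn at half frequency
```

Under these assumptions, a >50% reduction in power translates into only a ~5% reduction in energy per transaction, which is the disproportionality the paragraph above refers to.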

Second, while offering very high throughput under workloads with low contention, the concurrency control (CC) protocols used by modern main-memory OLTP engines buckle under high-contention workloads. Unfortunately, such workloads are increasingly common in today’s cloud-hosted e-commerce settings, where database engines serve as backing stores for popular cloud services. Recent research has shown that as cloud providers migrate from multicore to manycore servers, OLTP engines (concurrency control protocols in particular) will require hardware support to scale to hundreds, or even thousands, of cores.
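Why contention gets worse as core counts grow can be seen with a simple birthday-paradox-style estimate. The sketch below is a hypothetical model, not a description of any particular CC protocol: it assumes each of n concurrent transactions writes a small number of keys drawn uniformly from a hot set, and approximates the probability that a transaction conflicts with at least one concurrent peer.

```python
def conflict_probability(n_concurrent, hot_set_size, writes_per_txn):
    """Approximate probability that a transaction write-conflicts with at
    least one of the other n-1 concurrent transactions, assuming each
    transaction writes `writes_per_txn` keys drawn uniformly from a hot
    set of `hot_set_size` keys (valid when writes_per_txn << hot_set_size).
    """
    # Probability that two independent transactions touch disjoint keys.
    p_pair_disjoint = (
        (hot_set_size - writes_per_txn) / hot_set_size
    ) ** writes_per_txn
    # Probability of escaping conflict with all n-1 concurrent peers.
    return 1 - p_pair_disjoint ** (n_concurrent - 1)

# With a hot set of 1000 keys and 10 writes per transaction:
print(conflict_probability(4, 1000, 10))    # modest at 4 cores
print(conflict_probability(100, 1000, 10))  # near-certain at 100 cores
```

Under these assumptions, conflicts are rare at a handful of cores but become near-certain at a hundred concurrent transactions, which is one intuition for why CC protocols that abort or block on conflict degrade sharply on manycore hardware.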

The ARMADA project is a joint venture between the EPFL DIAS lab and Huawei that focuses on designing cloud-hosted, scalable, energy-efficient main-memory OLTP engines for emerging manycore ARM processors. ARM processors dominate the mobile market thanks to their low power consumption compared to Intel x86 processors, and over the past few years they have started making inroads into the server market. With the new ARMv8-A architecture, modern ARM CPUs like the Cortex-A57 and Cortex-A72 have started to close the gap with x86 processors by supporting several advanced features of the high-performance server domain, such as a 64-bit address space, hardware virtualization, and SIMD extensions. Beyond providing server-grade performance at a low power envelope, ARM's licensing model also allows server hardware vendors to extend ARM IP blocks with custom logic to accelerate applications. Thus, the ARMADA project focuses on investigating hardware-software co-design opportunities for building a vertically integrated OLTP appliance that can scale well under all workloads without compromising on energy efficiency.


Analyzing the Impact of System Architecture on the Scalability of OLTP Engines for High-Contention Workloads

R. Appuswamy; A. C. Anadiotis; D. Porobic; M. Iman; A. Ailamaki 

Proceedings of the VLDB Endowment. 2017. Vol. 11, num. 2, p. 121-134. DOI: 10.14778/3149193.3149194.

The Five-Minute Rule Thirty Years Later and its Impact on the Storage Hierarchy

R. Appuswamy; R. Borovica; G. Graefe; A. Ailamaki 

2017. Proceedings of the 7th International Workshop on Accelerating Analytics and Data Management Systems Using Modern Processor and Storage Architectures, Munich, Germany.

OLTP on a Server-Grade ARM: Power, Throughput and Latency Comparison

U. Sirin; R. Appuswamy; A. Ailamaki 

2016. DaMoN, San Francisco, California, June 26 – July 1, 2016. p. 1-7. DOI: 10.1145/2933349.2933359.