
A Db2 journey: The hybrid database evolution

Program Director, Core Database Technical Sales, IBM
IBM Distinguished Engineer & Chief Db2 LUW Architect, IBM

The word "hybrid" gets used a lot these days and can refer to many things from a database perspective. For system administrators, a hybrid database can mean provisioning the database in the cloud, on premises, or in an appliance, with the location being transparent to applications and end users. For application developers, a hybrid database may refer to the database's ability to handle non-relational data like JSON, along with SQL facilities to query and manage those types of records. And for a line of business, a hybrid database could refer to the operational aspects of a database, where a single database engine simultaneously runs both online transaction processing (OLTP) and online analytical processing (OLAP) workloads.

The Db2® family of products already encompasses many hybrid configurations. Db2 is available in several on-premises configurations, including support for Linux on Intel®, Power®, and zLinux® platforms, as well as Windows® and AIX®. In addition, virtualization is supported across these platforms, including VMware®, KVM, PowerVM®, and Docker containers. Finally, cloud versions of Db2 allow customers to run their workloads in a hosted or managed cloud environment.

From an application development perspective, Db2 has always supported the major development platforms and languages. Traditional programming languages like C, C++, and Java, as well as web development languages like Ruby, Python, Perl, PHP, and Node.js, have been supported for years. Development IDEs, including Eclipse and Microsoft Visual Studio, have native plug-ins for Db2 support. Finally, newer APIs like IBM Data Server Gateway for OData enable users to quickly create OData RESTful services to query and update data in Db2.

With this increased focus on supporting development languages came the requirement to support new data types (JSON, graph) and increased compatibility with the SQL syntax extensions of other database vendors. Db2 has added significant extensions to its SQL syntax to support much of the Oracle® PL/SQL syntax, along with extensions for PostgreSQL, Netezza® (PDA), and SQL Server® SQL. In some respects, Db2 has added hybrid capabilities in the form of support for other SQL dialects.
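
As a rough illustration of that compatibility, the sketch below shows a simple PL/SQL-style procedure that Db2 can compile natively when Oracle compatibility mode is enabled (via the DB2_COMPATIBILITY_VECTOR=ORA registry setting, which must be in place before the database is created). The table and column names here are hypothetical:

    -- Minimal sketch: PL/SQL source compiled directly by Db2 in Oracle
    -- compatibility mode. EMPLOYEE, EMPNO, and SALARY are hypothetical.
    CREATE OR REPLACE PROCEDURE raise_salary (
        p_empno IN VARCHAR2,
        p_pct   IN NUMBER
    )
    IS
    BEGIN
        UPDATE employee
           SET salary = salary * (1 + p_pct / 100)
         WHERE empno = p_empno;
    END;
    /
    -- (the trailing slash assumes a client session that treats '/' as
    --  the statement terminator)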

Applications also needed the ability to consolidate information and build reports based on disparate data sources. Often, this requirement resulted in the extraction and merging of data into a single repository. To support this requirement, Db2 developed federation capabilities that allow standard SQL to be used across a diverse set of data repositories. Support for traditional relational data sources like Oracle, SQL Server, and Teradata® has evolved to include non-relational data sources like Hadoop®, Spark®, Hive®, MongoDB®, HAWQ®, Impala®, and many more. Developers and users can now create queries against non-relational data sources using familiar SQL syntax without having to deal with the underlying complexities of those sources.
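
To make this concrete, here is a minimal sketch of federation DDL for a remote Db2-family source. The server, schema, table, and credential names are hypothetical, and the exact wrapper and server options vary by data source:

    -- Register the wrapper for Db2-family sources (other sources use
    -- different wrappers) and define the remote server.
    CREATE WRAPPER drda;
    CREATE SERVER sales_src TYPE db2/udb VERSION '11.1' WRAPPER drda
        OPTIONS (DBNAME 'SALESDB');

    -- Map a local user to credentials on the remote source.
    CREATE USER MAPPING FOR local_user SERVER sales_src
        OPTIONS (REMOTE_AUTHID 'fed_user', REMOTE_PASSWORD '********');

    -- A nickname makes the remote table queryable like any local table.
    CREATE NICKNAME fed.remote_orders FOR sales_src.appschema.orders;

    -- Join local and federated data with ordinary SQL.
    SELECT c.name, SUM(o.amount) AS total
    FROM   customers c
    JOIN   fed.remote_orders o ON o.cust_id = c.id
    GROUP  BY c.name;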

For a line of business, a hybrid database that runs OLTP and OLAP transactions at the same time would have many benefits. Gartner coined the term HTAP, for Hybrid Transactional/Analytical Processing, to describe a single database that can perform both OLTP and OLAP for real-time operational intelligence. Although not part of this definition, some will add the requirement that data be held "in memory" in a mixed workload environment so that the analytic (OLAP) portion can run substantially faster.

A key characteristic of an HTAP system is that it combines a system of record (OLTP) with the ability to run OLAP queries against the current data. There is zero or minimal latency between the data being reported on and the real transactional data. This doesn't seem like an unreasonable thing to do with production data, but for years customers have used a variety of reporting techniques precisely to avoid running reports on the same server as the operational database.

Why is that? Partly, it was the need to keep the two types of processing separate from one another. OLTP transactions are typically short, require few resources, and need very high performance (think banking). OLAP is used for analytics and reporting and involves long-running transactions that scan thousands or millions of rows, consuming large amounts of processing power. OLAP queries also need extensive tuning and optimization to run efficiently. In the past, mixing these two workloads was a recipe for conflict: how do you guarantee the service level agreements (response time and throughput) for your OLTP workloads while adding extra indexes and optimization objects for your analytical workloads?

Customers needed to separate these two workloads. Data would be moved from one system to the other using ETL (Extract, Transform, Load) processes. System administrators would create indexes, views, materialized query tables, and other optimization objects to improve query performance. In many cases this approach worked, but there were costs involved: the increased complexity of running two systems, additional licensing and maintenance expenses, and the latency introduced between the system of record and the analytic system.

Db2 customers have, for many years, been able to achieve a level of HTAP capability on an individual system with a variety of techniques, including partitioning the database (the Database Partitioning Feature), materialized query tables (MQTs), read-only access to tables on an HADR (high availability disaster recovery) standby, and other optimization strategies. If latency was not critical, change data capture (CDC) or queue replication (Q Replication) could be used. But every one of these strategies required extra administration, tuning, and resources. A different approach would be needed to minimize latency, improve reporting performance, and, more importantly, simplify the tuning and administration of an HTAP system.
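
For example, a refresh-immediate MQT, one of the techniques above, might look like the following minimal sketch. The table and column names are hypothetical, and the aggregate query assumes AMOUNT is declared NOT NULL:

    -- Minimal sketch of an MQT that pre-aggregates data for reporting.
    CREATE TABLE sales_by_region AS
      (SELECT region,
              SUM(amount) AS total_amount,
              COUNT(*)    AS order_count   -- required for REFRESH IMMEDIATE
       FROM   sales
       GROUP  BY region)
      DATA INITIALLY DEFERRED
      REFRESH IMMEDIATE
      ENABLE QUERY OPTIMIZATION;

    REFRESH TABLE sales_by_region;   -- initial population

    -- With optimization enabled, Db2 can transparently rewrite a matching
    -- aggregate query on SALES to read the much smaller MQT instead.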

In 2013, IBM introduced BLU technology as an integral part of the Db2 product. BLU is the revolutionary in-memory columnar technology that was introduced in Db2 Version 10.5. This technology gives customers incredible speed for their analytic workloads: jobs that used to run in hours can now be completed in seconds! Even more important is the "load and go" philosophy of BLU, which removes almost all of the tuning effort required to run queries efficiently.
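
A minimal sketch of what "load and go" looks like in practice, with hypothetical database, table, and file names: setting the analytics workload registry variable before creating the database makes column organization and analytic memory settings the default, after which tables can simply be created and loaded.

    -- Set before creating the database so analytic defaults apply.
    db2set DB2_WORKLOAD=ANALYTICS
    db2 "CREATE DATABASE salesdb"
    db2 "CONNECT TO salesdb"

    -- ORGANIZE BY COLUMN is shown explicitly here, though it is the
    -- default once DB2_WORKLOAD=ANALYTICS is in effect.
    db2 "CREATE TABLE sales (order_id BIGINT NOT NULL, region VARCHAR(20), ship_date DATE, amount DECIMAL(12,2)) ORGANIZE BY COLUMN"
    db2 "LOAD FROM sales.csv OF DEL INSERT INTO sales"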

Db2 BLU has several capabilities that give it remarkable performance. A BLU table is stored in columnar format, which typically lets Db2 compress the data about 10 to 1. The compressed data is further packed into chunks that match the size of the registers on the hardware the database is running on. This, combined with SIMD (Single Instruction, Multiple Data) processing, allows Db2 to take advantage of the parallelism inherent in the hardware. As data is stored in the database, Db2 automatically tracks the range of values it sees for each chunk of input data, so that during query processing only the data blocks that could contain the values needed are read, while the others are skipped. Data pages are stored in memory in compressed format and analyzed in compressed format, eliminating the need to decompress every value. Db2 also uses whatever memory is allocated to it, with no requirement to fit all the data in memory. An intelligent prefetching system is designed to keep the system running at in-memory speeds even if only a fraction of the database fits in memory.
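
For instance, an ordinary analytic query like the hypothetical one below benefits automatically: the range predicate lets Db2 consult the per-block minimum and maximum values it has tracked for SHIP_DATE and skip every block that cannot contain qualifying rows, with no index or tuning required.

    -- Hypothetical query against the column-organized SALES table.
    -- Blocks whose recorded SHIP_DATE range falls outside Q1 2016 are
    -- skipped entirely; qualifying data is processed in compressed form.
    SELECT region, SUM(amount) AS q1_revenue
    FROM   sales
    WHERE  ship_date BETWEEN '2016-01-01' AND '2016-03-31'
    GROUP  BY region;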

Initially, Db2 BLU was targeted at analytic workloads, since that was where most of the performance requirements were. In 2015, product enhancements were introduced to further evolve BLU's HTAP capabilities. Shadow tables allow a customer to continue running their OLTP and reporting workloads on existing row-format tables, but with a set of added side ("shadow") tables in columnar BLU format. The shadow tables contain a copy of the columns of interest to the analytic workload and deliver all of BLU's performance benefits to the OLAP and reporting queries that are transparently routed to them. They are refreshed every five seconds (by default), so any OLAP query routed to them gets reasonably current data. The user can always override the latency requirement and avoid the use of shadow tables if desired.
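
Under the covers, a shadow table is defined as a replication-maintained MQT in columnar format, roughly as in the sketch below (table and column names hypothetical). The change replication that keeps it current is configured separately, and routing honors the latency the session tolerates:

    -- Minimal sketch of a shadow table over a row-organized SALES table.
    CREATE TABLE sales_shadow AS
      (SELECT order_id, region, ship_date, amount FROM sales)
      DATA INITIALLY DEFERRED
      REFRESH DEFERRED
      ENABLE QUERY OPTIMIZATION
      MAINTAINED BY REPLICATION
      ORGANIZE BY COLUMN;

    -- Analytic queries can be routed to REFRESH DEFERRED tables like this
    -- one only when the session permits it, for example:
    SET CURRENT REFRESH AGE ANY;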

In 2016, Db2 V11.1 introduced additional advances in BLU, including the ability to partition (or "shard") tables across several hundred servers. This feature extends BLU's in-memory processing speeds to petabyte scale and thousands of cores. Also included in that release is the first in a series of updates to the internals of Db2 BLU to improve its native OLTP performance. For example, the synopsis tables that track the range of data on a page (for data skipping) were optimized for small OLTP insert jobs, leading to performance improvements of upwards of 50 percent in some cases. Further optimizations, such as advanced indexing for highly selective OLTP workloads, deep enhancements for small OLTP-style transactions, and other functional improvements, are on the way.
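
Scaling a BLU table across a partitioned cluster reuses the familiar distribution DDL; a minimal sketch, with hypothetical names, looks like this:

    -- Column-organized table hash-distributed across the database
    -- partitions of a DPF cluster; each partition holds a shard.
    CREATE TABLE sales_fact (
        order_id  BIGINT NOT NULL,
        region    VARCHAR(20),
        ship_date DATE,
        amount    DECIMAL(12,2)
    )
    ORGANIZE BY COLUMN
    DISTRIBUTE BY HASH (order_id);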

Adding more OLTP capabilities to BLU gives customers several significant benefits. First, it removes the need to decide whether their tables should be in row or columnar format. Second, the additional copy of the data (in shadow format) is eliminated, along with the administration and resource consumption associated with keeping that copy up to date. Third, it removes data latency! OLAP queries and reports, and the decisions and transactions they affect, are always based on the latest committed data.

In 2017, further advancements to Db2 BLU will incorporate more HTAP performance features. Customers continue to adopt the technology at a rapid pace, and incorporating OLTP features into BLU will allow simpler deployment and faster performance for customer workloads.
