Next-generation DB2 release highlights BLU Acceleration
Let's continue our discussion of DB2 V11.1. In a recent post, I covered some of the highlights in the areas of availability, security and other core technologies. In this blog, I'll focus on some exciting new developments in advanced SQL and our in-memory BLU Acceleration technology.
MPP Scale for BLU Acceleration
Many of you are familiar with BLU, IBM's in-memory columnar technology that can speed up analytic workloads by factors of 10x and beyond, significantly improve data compression rates, and eliminate complex management tasks such as maintaining indexes and materialized query tables. Well, V11.1 brings MPP scale to BLU. In other words, BLU tables are now supported with the DB2 Database Partitioning Feature (DPF), so you can now get those same benefits at a scale that extends well beyond the PB range.
We took great care when we built this feature to ensure it was engineered to provide the linear scaling you'd expect from DPF. For example:
- Data exchange during distributed joins and aggregation processing occurs entirely in native columnar format, avoiding the overhead associated with conversion to row format (also known as 'compensation').
- Columnar data exchange across members is highly optimized for parallel, stall-free communications. Micro-batches of columnar data from different source members can be combined and processed at a target member, avoiding communication stalls that might otherwise occur. The implementation fully exploits Fast Communication Manager (FCM) parallelism.
- Each table uses a common data compression dictionary across all database partitions. This allows data to remain in optimized compressed format during communications, and significantly reduces network bandwidth and CPU requirements.
- We extended the automatic "out-of-the-box" parameter setup in BLU to also optimize for DPF configurations. This means the single DB2_WORKLOAD=ANALYTICS setting will also produce intelligent settings for DPF parameters such as FCM_NUM_BUFFERS and FCM_PARALLELISM.
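Putting this together, enabling BLU in a DPF environment can be as simple as the sketch below. This is a hedged illustration, not a deployment guide: the table and column names are hypothetical, and the exact defaults depend on your configuration.

```sql
-- The registry variable is set from the command line before database
-- creation: db2set DB2_WORKLOAD=ANALYTICS
-- With that in place, tables default to column organization; the
-- ORGANIZE BY COLUMN clause below simply makes the intent explicit.

-- A column-organized fact table spread across DPF database partitions.
CREATE TABLE sales_fact (
  sale_id   BIGINT        NOT NULL,
  store_id  INTEGER       NOT NULL,
  sale_date DATE          NOT NULL,
  amount    DECIMAL(12,2)
)
ORGANIZE BY COLUMN
DISTRIBUTE BY HASH (sale_id);
```

As with row-organized DPF tables, the choice of distribution key still matters: keys that collocate commonly joined tables reduce the cross-member data exchange described above.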
So how well does it scale? Here are a couple of initial proof points we've recently measured in the lab. The workload was an internal analytical workload based on the TPC-DS schema. The graph on the left shows the throughput increase when we doubled the number of nodes while keeping the data size constant. The graph on the right shows the throughput variation as both the data size and the number of nodes are doubled. Both results demonstrate virtually linear scaling with BLU on DPF: throughput nearly doubles (1.92x) when compute resources double at constant data size, and throughput remains very stable when compute resources and data size increase together.
BLU reloaded: New developments in the core of BLU Acceleration
Aside from MPP scaling, V11 delivers a large number of advances in the core of BLU Acceleration. For example, we've integrated a new sorting technology into the BLU engine. This new sort is based on research from the IBM T.J. Watson Research Center and provides what we believe is industry-leading multi-core scalability. In addition, unlike the previous sort, which executed after conversion to row format, the new sort executes natively in columnar format. This means fewer conversions between row and columnar format (less 'compensation') and commensurate improvements in performance. For instance, the diagrams below show the query access plan for a particular example query in both V10.5 and V11. All parts below the "CTQ" evaluator execute natively within columnar processing. Here we can see that V11, by pushing down the SORT evaluator, keeps data processing entirely in columnar format, which contributes significantly to the 4x speed-up we observed for this query in V11 relative to V10.5.
Some of the other core BLU improvements in V11 include new function support that allows more workloads to exploit BLU Acceleration, for example...
- Columnar tables can now participate in Row and Column Access Control (RCAC)
- Columnar tables can now store data with code page 819
- Columnar tables can now use IDENTITY and GENERATED columns
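For instance, IDENTITY and GENERATED columns, previously limited to row-organized tables, can now appear in columnar table definitions. A minimal sketch (table and column names are hypothetical):

```sql
-- Column-organized table combining an IDENTITY column and a
-- generated expression column, both newly supported in V11.1.
CREATE TABLE orders (
  order_id   INTEGER GENERATED ALWAYS AS IDENTITY
               (START WITH 1, INCREMENT BY 1),
  quantity   INTEGER       NOT NULL,
  unit_price DECIMAL(9,2)  NOT NULL,
  total      DECIMAL(11,2) GENERATED ALWAYS AS (quantity * unit_price)
)
ORGANIZE BY COLUMN;
```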
...as well as several performance features that will improve the overall performance of many workloads. For example:
- Faster SQL MERGE processing
- A BLU nested-loop join method to support inequality joins within columnar processing
- Improved SORTHEAP memory management
- Query rewrite improvements
- Parallel insert support for Declared Global Temporary Tables
- Push-down of a number of SQL and OLAP functions into columnar processing
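To make the first item concrete, SQL MERGE is the standard way to express an "upsert" against a target table, and that pattern is now processed faster. A minimal hedged example (table names and values are hypothetical):

```sql
-- Upsert: update customers that already exist, insert the rest.
MERGE INTO customers AS t
USING (VALUES (101, 'Ada'), (102, 'Grace')) AS s (cust_id, name)
ON t.cust_id = s.cust_id
WHEN MATCHED THEN
  UPDATE SET name = s.name
WHEN NOT MATCHED THEN
  INSERT (cust_id, name) VALUES (s.cust_id, s.name);
```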
In addition, there are several internal memory management and other core improvements that will significantly improve overall system behavior—particularly in workloads with high degrees of concurrency and/or significant demand on memory for sorting, aggregation or other purposes.
Netezza compatibility and other SQL advances
Another key focus in the release is SQL itself. You'll find lots of new goodies in V11 to help you drive more analytical smarts into your applications. One of my favorite examples is a new way of implementing advanced User Defined Functions (UDFs)—in particular, aggregate UDFs. An aggregate UDF is a function that returns a single value computed over a set of like values, such as the values of a column across a set of rows. Unlike other UDFs, when you define an aggregate UDF, you include four 'helper' functions: an 'initialize' function, an 'accumulate' function, a 'merge' function, and a 'finalize' function—each of which is invoked by DB2 at a specific point during query execution. These aggregate UDFs enable you to develop advanced analytical functions that operate on the data while it remains in the DB2 engine's address space, as opposed to pulling that data out of the engine and operating on it within the client address space.
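To see why the four helpers are needed, consider how a built-in aggregate like AVG decomposes when rows are processed in parallel: each stream starts from an initialized state, accumulates a partial result over its rows, the partials are merged, and a finalize step reduces the merged state to the answer. The plain SQL below is only a conceptual illustration of that lifecycle, not the aggregate UDF DDL itself (table and column names are hypothetical):

```sql
-- initialize: each parallel stream starts with (sum = 0, count = 0)
-- accumulate: each stream builds a partial (sum, count) over its rows
-- merge:      partials from all streams are combined
-- finalize:   the merged state is reduced to the final value
SELECT SUM(part_sum) / SUM(part_cnt) AS avg_salary      -- merge + finalize
FROM (
  SELECT SUM(salary) AS part_sum, COUNT(*) AS part_cnt  -- accumulate
  FROM employees
  GROUP BY DBPARTITIONNUM(emp_id)                       -- one partial per partition
) AS partials;
```

An aggregate UDF generalizes exactly this contract: you supply the state, the accumulate step, the merge step, and the finalize step, and DB2 drives them at the right points during parallel execution.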
And, in case you're wondering, this aggregate UDF infrastructure was adapted from IBM's Netezza database technology. In fact, it is just one of several advanced SQL additions we've delivered in V11 that were adapted from Netezza. Indeed, another of the key goals in the release was to increase the level of SQL and application compatibility across Netezza and DB2. Just set this session variable—SET SQL_COMPAT='NPS'—and you'll get support for several Netezza SQL features including:
- Routines written in NZPLSQL
- Double-dot notation to specify a database object
- TRANSLATE(char-string-expr, from-string-expr, to-string-expr)
- Grouping by SELECT clause column aliases or ordinal position
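A short hedged sketch of what NPS mode looks like in practice (object names are hypothetical, and TRANSLATE follows the Netezza argument order shown above):

```sql
SET SQL_COMPAT = 'NPS';

-- Double-dot notation: database..object
SELECT COUNT(*) FROM mydb..sales;

-- Netezza-style TRANSLATE: characters in the from-string are replaced
-- by the corresponding characters in the to-string.
VALUES TRANSLATE('2016-06-15', '-', '/');

-- Grouping by a SELECT-list alias instead of repeating the expression.
SELECT YEAR(sale_date) AS yr, SUM(amount)
FROM mydb..sales
GROUP BY yr;
```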
Other new SQL features that increase Netezza compatibility are on by default and include:
- JOIN USING support
- UPDATE FROM support
- CREATE TEMP TABLE as an alternative syntax for declared global temporary tables (DGTTs)
- LIMIT OFFSET
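Because these are on by default, no session variable is needed. Two quick hedged examples (table names are hypothetical):

```sql
-- JOIN ... USING: join on identically named columns without
-- spelling out the equality predicate.
SELECT o.order_id, c.name
FROM orders o
JOIN customers c USING (cust_id);

-- LIMIT ... OFFSET as an alternative to FETCH FIRST n ROWS ONLY:
-- skip 20 rows, then return the next 10.
SELECT order_id, amount
FROM orders
ORDER BY amount DESC
LIMIT 10 OFFSET 20;
```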
In addition, there are brand new BINARY and VARBINARY datatypes, especially designed for fixed and varying length strings of binary data. There are also a large variety of new SQL functions including:
- Date and Time Functions (31)
- e.g. OVERLAPS(), DAYOFMONTH(), AGE(), …
- Statistical Functions (9)
- e.g. COVARIANCE_SAMP(), MEDIAN(), WIDTH_BUCKET(), …
- Regular Expression Functions (7)
- e.g. REGEXP_LIKE(), REGEXP_SUBSTR(), REGEXP_COUNT(), …
- Bitwise Functions (12)
- e.g. INT4AND(), INT4OR(), INT4XOR(), INT4NOT(), …
- Misc. Functions (7)
- e.g. BTRIM(), RAWTOHEX(), …
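A few of these in action, as a hedged sketch (signatures shown as I understand them; consult the reference for the full forms):

```sql
-- Date/time: day-of-month component of a date.
VALUES DAYOFMONTH(DATE '2016-06-15');

-- Statistical: assign a value to one of 10 equal-width buckets over [0, 100).
VALUES WIDTH_BUCKET(57, 0, 100, 10);

-- Regular expressions: count the digit matches in a string.
VALUES REGEXP_COUNT('ab1cd2ef3', '[0-9]');

-- Misc: trim blanks from both ends of a string.
VALUES BTRIM('  hello  ');
```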
In summary, V11.1 advances the core database engine in several ways, including revamped internal concurrency controls that improve performance by more than 100 percent for some highly concurrent workloads, and comprehensive enterprise-ready key management. It extends our pureScale technology for continuous availability and scale. It brings MPP scale to our BLU Acceleration technology, allowing this in-memory columnar technology to span well beyond the PB range. It also significantly advances the core of our BLU and SQL engines, providing performance and functionality that enable smarter and more advanced application development. This is a highly significant release for almost every workload, from departmental systems to mission-critical OLTP and analytics.
Thanks for taking the time to read! Stay tuned for upcoming posts for more information about the release of DB2 Linux, UNIX and Windows Version 11.1. Also, watch a DB2 technology review webinar, and continue your research on the next-generation database software by downloading a 90-day trial.