In a recent blog, Greg Rahn of Oracle responded to Phil’s “Oracle Exadata and Netezza TwinFin Compared” eBook; before commenting on an Oracle engineer’s views, I’ll restate the eBook’s larger themes.
Exadata connects Oracle’s RAC database, an architecture designed for online transaction processing (OLTP), via a fast network to a massively parallel processing storage tier. Because it pairs an OLTP database with a specialized storage subsystem, tuning Exadata to function as a data warehouse is complicated and demands skilled, highly trained, experienced technical staff. Mitigating the shortcomings of an OLTP database pressed into service as an analytic database with an expensive network and storage tier makes Exadata costly: to acquire; to design, tune and maintain as an optimally configured data warehouse; and to run in the data center.
Netezza TwinFin, designed as an analytic database, brings the power of massively parallel processing to managing and exploiting data at terabyte-to-petabyte scale. TwinFin is an appliance: easy to install, easy to operate and easy to manage. TwinFin offers value: fast performance for advanced analytics at an affordable price.
Now I’ll discuss the details of Greg’s blog and respond from a Netezza perspective.
Claim: Exadata Smart Scan does not work with index-organized tables or clustered tables.
Greg responds that “IOTs and clustered tables are both structures optimized for fast primary key access, like the type of access in OLTP workloads, not data warehousing” and suggests our intent was to mislead by quoting from an old Oracle datasheet. It wasn’t. Oracle 11g Release 2 documentation reads “Index-organized tables are suitable for modeling application-specific index structures. For example, content-based information retrieval applications containing text, image and audio data require inverted indexes that can be effectively modeled using index-organized tables.” Elsewhere the documentation states “Index-organized tables are useful when related pieces of data must be stored together or data must be physically stored in a specific order. This type of table is often used for information retrieval, spatial and OLAP applications.” In the eBook Phil discusses first and second generation data warehouses; many of the applications described by Oracle as candidates for IOTs are typical of those our customers run on TwinFin – these are second generation data warehouse applications. Greg believes Exadata smart scan not working with index-organized tables has zero impact on Exadata customers. Is it reasonable to conclude that Exadata is not being used for second generation data warehousing?
Claim: Exadata Smart Scan does not work with the TIMESTAMP datatype.
Since we published the first edition of the eBook, Christian Antognini, the original source of this information, has gone to the heart of the matter in his blog: “The essential thing to understand is that this limitation is due to bug 9682721. The fix is expected to be part of 11.2.0.2. According to my test cases (that Greg Rahn was so kind to execute against an early release of 11.2.0.2), offloading works correctly for all datetime functions but for the following three predicates.
- months_between(d,sysdate) = 0
- months_between(d,current_date) = 0
- months_between(d,to_date('01-01-2010','DD-MM-YYYY')) = 0”
Note that the MONTHS_BETWEEN function can, in general, be offloaded. The problem in these cases is that offloading does not work when, for example, SYSDATE is used as a parameter.
While happy to let this one pass, I have a question. Do organizations accrue value or cost from a technology that requires its administrators to understand all combinations of functions, their predicates and their parameters before they are capable of designing queries that will be processed in parallel?
Claim: When transactions (insert, update, delete) are operating against the data warehouse concurrent with query activity, smart scans are disabled. Dirty buffers turn off smart scan.
In my opening comments I compared TwinFin’s simplicity to the complexity of Exadata. All queries submitted to TwinFin are processed in its massively parallel grid; no tuning, no special database design. This is appliance simplicity. In Exadata whether a query benefits from smart scans (massively parallel processing) can depend on the state of the data being read. Exadata requires developers to understand at great depth the physical path a query takes to access data. This is complexity.
While Greg concedes Exadata’s MPP processing is disabled for those blocks containing an active transaction he is confident that “Not having Smart Scan for small number of blocks will have a negligible impact on performance”. My experience with Netezza’s customers and their applications prompts me to take a more circumspect view. I’ll explain why in the next section.
Claim: Using [a shared-disk] architecture for a data warehouse platform raises concern that contention for the shared resource imposes limits on the amount of data the database can process and the number of queries it can run concurrently.
Greg argues contention for shared disk is not a problem for Exadata and cites Daniel Abadi’s blog in his defense. Let’s take a look at what Daniel says on this subject: “If you are going to make an argument that shared-disk causes scalability problems, you have to make the argument that contention for the one shared resource in a shared-disk system is high enough to cause a performance bottleneck in the system - namely, you have to argue that the network connection between the servers and the shared-disk is a bottleneck.” This is the argument Phil makes in our eBook. Consider a query analyzing correlations between equity trades in a sector of a stock market. The algorithm calculates Spearman’s rank correlation coefficient (Spearman’s rho), measuring statistical dependence between two variables by assessing how well the relationship between them can be described by a monotonic function. This analysis creates valuable insight into whether specific equities influence behavior of other equities in the same market sector within a window of one to ten minutes.
The customer loads a massive volume of trading data into TwinFin and constantly trickle feeds data from live markets into the warehouse. The query is run and re-run constantly to assess behavior of different equities in dynamic markets. Each time, TwinFin completes a Cartesian join between all the equities in the sector while at the same time calculating a Volume-Weighted Average Price and a Return From Previous Close value for the equity under investigation. The results pass to Spearman’s rank correlation coefficient function to calculate the Population Covariance and the standard deviation of every equity combination for the time period. Netezza executes every step of the query in parallel, utilizing all TwinFin’s hardware and software resources. Netezza’s intelligent storage selects only the rows needed for that market sector and projects only the columns needed for assessment. The join result is streamed directly to the code implementing the statistical analysis, which TwinFin downloads to every processor in its MPP grid, running the complex calculations in parallel. Results from each node in the MPP grid are returned via the network to the host for final assembly and rendering back to the requesting application. TwinFin completes the analysis in a few minutes, and then runs it again, and again, for as long as the market is open.
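For readers unfamiliar with the statistics themselves, the calculations TwinFin parallelizes can be sketched in a few lines. The following is an illustrative, stand-alone Python sketch of Spearman’s rho (rank the two series, then take the Pearson correlation of the ranks) and of a Volume-Weighted Average Price; it is not Netezza or customer code, and the data in the example is hypothetical.

```python
def ranks(values):
    """Assign 1-based ranks to values, averaging the ranks of ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def vwap(prices, volumes):
    """Volume-Weighted Average Price over one time window."""
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

# Hypothetical minute-by-minute returns for two equities in one sector.
equity_a = [0.4, -0.1, 0.3, 0.7, -0.2]
equity_b = [0.5, -0.3, 0.2, 0.9, -0.1]
print(spearman_rho(equity_a, equity_b))   # close to 1.0 for a monotonic relationship
print(vwap([10.0, 11.0], [100, 300]))     # 10.75
```

In the TwinFin scenario described above, each node in the MPP grid would run this kind of calculation over its own slice of the equity pairs; the sketch shows only the arithmetic, not the distribution.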
After several hours Oracle 10g was still attempting to complete its first round of analysis. What difference will a new version of the Oracle database, paired with an MPP storage system and a fast network, make? Exadata’s MPP storage grid is unable to process Cartesian joins, the first step in this analytic process, so it brings no performance gain: it must put all records on the network and send them across to Oracle RAC. Even if it were able to process the join, Exadata cannot push user-defined functions, used to implement the calculations, down to its MPP storage tier - in Oracle, functions always execute on the RAC servers. In processing the algorithms Oracle must create and manage temporary data sets and write these out of memory to storage. Exadata’s flash cache may play some role here, but the size of the data sets and the complexity of the algorithms will force database processes to write to disk. This flow from Oracle RAC travels back across a network still clogged with data coming from the MPP storage tier, queued and unprocessed, waiting for attention from a fully-consumed Oracle RAC. I contend that Exadata’s network connection between the servers and the shared disk is a bottleneck - and not Exadata’s only bottleneck. TwinFin demonstrates how a true MPP architecture excels in calculating Spearman’s rank correlation coefficient - a real workload on a real dataset. Oracle’s OLTP database, simply not designed to process large-scale analytics, is overwhelmed. Exadata suffers contention on its network and in its database system’s shared-disk architecture.
Back to the previous point about Exadata’s MPP processing being disabled for blocks containing an active transaction: the customer is constantly loading new market data and analyzing it against a massive volume of historic data. While entirely appropriate for transaction processing, Exadata’s approach of excluding an entire block from parallel processing when a single record in that block is being updated can only hinder, never help, in the data warehouse. The very point of a data warehouse is that all data should be available to the business as quickly as extract-transform-load processing allows. By pressing an OLTP database into service as an analytical database, Oracle unnecessarily burdens customers with creating database designs to work around this complexity and with developing a thorough understanding of how each query accesses the data model. Whether or not the loss of Smart Scan for a small number of blocks impacts performance, as an unnecessary complexity demanding the attention of database specialists it costs customers real money.
Claim: Analytical queries, such as “find all shopping baskets sold last month in Washington State, Oregon and California containing product X with product Y and with a total value more than $35” must retrieve much larger data sets, all of which must be moved from storage to database.
Greg shows some nice SQL to demonstrate how Exadata processes the beer and pizza query. Give the business an answer and they always come back with a new question: “Greg, what was the total value of ‘Brand #42 beer’ sold in each basket?” Greg can now update his SQL with the clause:
sum(case when p.product_description in ('Brand #42 beer') then td.sales_dollar_amt else 0 end) sum_productX,
and re-run the query. Business users love IT when we give them a fast-performing system, but are less forgiving when a query that ran blazingly fast yesterday slows to a snail’s pace today. Exadata cannot push the newly introduced sum down for parallel processing by its storage nodes: the join must be processed first, and the storage nodes cannot process joins. Any function or calculation that uses columns from two or more tables must be evaluated on the RAC database servers. Query performance will degrade significantly, sending the database expert back to the Oracle documentation in an attempt to find a new way to resolve the amended query so it completes in a time acceptable to the business.
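To make concrete what the amended query now asks of the database, here is a small stand-alone Python sketch of the conditional aggregation the sum(case … end) clause expresses: per-basket dollars for one product description. The rows, column names and basket IDs are invented for illustration; this is not Greg’s SQL translated line for line.

```python
# Hypothetical line items: each row is one product in one shopping basket.
baskets = [
    {"basket_id": 1, "product_description": "Brand #42 beer", "sales_dollar_amt": 12.0},
    {"basket_id": 1, "product_description": "pizza",          "sales_dollar_amt": 9.0},
    {"basket_id": 2, "product_description": "Brand #42 beer", "sales_dollar_amt": 6.0},
    {"basket_id": 2, "product_description": "pizza",          "sales_dollar_amt": 7.5},
]

# Equivalent of: sum(case when product_description in ('Brand #42 beer')
#                         then sales_dollar_amt else 0 end), grouped by basket.
totals = {}
for row in baskets:
    amt = row["sales_dollar_amt"] if row["product_description"] == "Brand #42 beer" else 0.0
    totals[row["basket_id"]] = totals.get(row["basket_id"], 0.0) + amt

print(totals)  # {1: 12.0, 2: 6.0}
```

The aggregation itself is trivial; the point in the text is that, because the clause references columns produced by a join, Exadata must evaluate it on the RAC servers rather than in the storage tier.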
Claim: To evenly distribute data across Exadata’s grid of storage servers requires administrators trained and experienced in designing, managing and maintaining complex partitions, files, tablespaces, indices, tables and block/extent sizes.
While we concede that Oracle Automatic Storage Management (ASM) automates the task of striping partitions across all available disks, the ASM administration team must still create partitions, configure and manage disk groups for shared storage across instances, choose and implement either two-way or three-way mirroring, and configure Allocation Unit sizes. Additionally, Exadata configuration requires administrators to create and manage tablespaces, index spaces, temp spaces, logs and extents.
In conclusion, Netezza entered the data warehouse market convinced that the products offered by the dominant vendors, in particular Oracle, were ill-suited to meet the challenges of Big Data and of such complexity as to make them exorbitantly expensive to acquire and use. Exadata only increases the complexity and expense of an Oracle warehouse. Greg draws his readers’ attention to the excellent blog at http://dbmsmusings.blogspot.com/ where Daniel Abadi muses “Both Oracle and Teradata are too expensive for large parts of the analytical database market.”
Greg’s blog reveals one path available to organizations wishing to generate greater value from their data: CIOs can build, train, and permanently assign a team of technical experts, continuously employed choosing just the right combination from a myriad of settings to coerce a database designed for OLTP into functioning as a data warehouse. I’ll close this blog with a manager’s perspective, from someone who focuses an organization’s limited resources on its highest priorities. Peter Drucker, who introduced us to the concept of the knowledge worker, gave us a pragmatic measure to evaluate our own and our team members’ activity: am I merely efficient (doing things right) or truly effective (doing the right thing)? All the workarounds and clever tuning demanded by Exadata simply don’t exist in TwinFin; Netezza has proven them unnecessary.