Controlling Your Data Footprint

August 14, 2013

As your data grows, how’s your data footprint? Have you outgrown the widest shoes you can find?

Compression offers plenty of methods to help rein in your data, controlling your data footprint and all the associated costs. But compression techniques have evolved. Have you kept up?
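To make the footprint savings concrete, here’s a minimal sketch – plain Python with zlib, not DB2-specific, on made-up sample data – of how much a repetitive dataset can shrink under even a general-purpose compressor:

```python
import zlib

# Synthetic sample: 10,000 repetitive rows, the kind of data that compresses well.
rows = b"".join(b"2013-08-14,STORE_%03d,19.99\n" % (i % 50) for i in range(10000))

compressed = zlib.compress(rows, 9)  # level 9 = maximum compression
print("raw:        %8d bytes" % len(rows))
print("compressed: %8d bytes" % len(compressed))
print("ratio:      %.1fx" % (len(rows) / len(compressed)))
```

Real tables won’t be this repetitive, but the principle is the same: the more redundancy in your data, the more footprint there is to reclaim.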

Craig Mullins recently looked at why and how compression is becoming more important in the age of big data, explaining some of the new alternatives and options, including BLU Acceleration, which offers extended compression that eliminates the need for indexes. As Craig notes, “So compression is becoming cool… who’d have thought that back in the 1980s when compression was something we only did when we absolutely had to?”
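For a flavor of the BLU side Craig mentions: below is a hypothetical sketch of creating a column-organized table with the ibm_db Python driver. The connection string, table, and columns are all placeholders; it assumes DB2 10.5 with BLU Acceleration.

```python
import ibm_db

# Placeholder connection details – substitute your own database, host, and credentials.
conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=localhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=passw0rd", "", "")

# ORGANIZE BY COLUMN stores the table column-wise, so BLU's extended
# compression applies and queries can run against compressed data –
# no secondary indexes to create or maintain.
ibm_db.exec_immediate(conn, """
    CREATE TABLE sales_fact (
        sale_date DATE,
        store_id  INTEGER,
        amount    DECIMAL(12,2)
    ) ORGANIZE BY COLUMN
""")

ibm_db.close(conn)
```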

We discussed Storing Big Data last spring, and Susan Visser compiled a great list of resources on how the storage market is changing.

Join us for the #ibmblu chat on Wednesday, August 21 at 1 PM ET to discuss how to control – and even shrink – your data footprint in the big data era. Some sample discussion questions are below – feel free to add yours in the comments!

Q1 How is big data changing our need for data compression? What’s your data growth rate?  

Q2 Are you archiving or purging data more frequently now?  

Q3 If you can’t manage the incoming volume of data, how can you compress it or minimize the cost of storing it?

Q4 How much can secondary tuning (indexes, MQTs) inflate the footprint to support big data performance goals?  

Q5 Can compression replace some of the tuning objects?  

Q6 How does a DBA manage a warehouse full of secondary tuning objects that may or may not be needed for current data access?

Q7 How can compression improve performance? At what levels (memory usage, disk usage, query performance, etc.)?

Q8 Is there a need to control compression? (e.g., pick candidate columns)