Traditionally, the amount of available memory determined whether businesses could run complex queries or analytics at all. Disk space was a constant challenge, and architects and DBAs performed a continual balancing act, trying to keep the most valuable information in the fastest, most accessible storage possible.
As data sets sprawl ever larger, many organizations find they have far more data than memory. In-memory processing, however, lets you load terabytes of data into random access memory rather than reading from hard disks, streamlining processing and queries even when your data exceeds available memory.
Still, DBAs and business analysts alike have work to do: finding the right balance between disk and memory that enables fast, efficient queries while keeping costs under control.
Join us as we discuss this balancing act during our next #ibmblu twitterchat, Wednesday, October 2 at 1 PM ET. We’ll talk about the relative benefits of each type of storage, how workloads are changing, and how to keep costs in check.
Start thinking about the questions below, or add your own in the comments! And read Jessica Rockwood’s recap of last week’s chat for a taste of the conversation.
- What are the benefits of each storage type: disk, flash, and memory?
- Do you see all workloads migrating to in-memory systems, or is there a place for disk in future workloads?
- Do you have to choose between affordability and performance? Have we reached the price tipping point for SSD vs. HDD?
- What does it mean to be 'storage-optimized'?
- Why are storage and memory issues such a time-consuming area for administrators?
- What limitations of SSD/flash must be overcome for broader enterprise deployment?
- What are some approaches to reduce management costs?