Quick: how fast do you need to understand your data?
There’s no single, universal answer. As big data shifts from simply collecting data to actually generating insights and understanding, we’ve begun talking about “speed of thought” analytics.
BLU Acceleration helps make this possible by integrating in-memory processing that can speed up both transactions and analytics. But how does it work? And how can you apply speed of thought analytics to data warehouses or other big data infrastructures?
We’re hosting an #ibmblu Twitter chat on Wednesday, September 4 to discuss these questions, and we hope you’ll join us.
To get started, read James Kobielus’ blog post, In-Memory: The Lightning in the Big Data Bottle; listen to the podcast with IBM Champion David Birmingham and IBM’s Jessica Rockwood; and check out Chris Eaton’s post, Is Your Database Ready for Big Data?
Here are some of the questions we’ll discuss during the session. Feel free to add your own in the comments below. We hope to chat with you on Wednesday, September 4 at 1 PM US-ET. Just go to Twitter and search for the #ibmblu hashtag to follow along, and add #ibmblu to your own tweets to join in.
- What is the “speed of thought”? What is its value to businesses?
- How does in-memory technology support speed-of-thought analytics?
- What are in-memory’s applications in transactional computing?
- What are the economics of in-memory technology?
- How does in-memory support or supplement data warehousing?
- Can in-memory techniques be applied to non-relational databases and/or Hadoop?
- What is the principal application of in-memory in big data infrastructures?
- Is there ever a “fast enough” point for most businesses, or will they always push for faster?