Will data management's trends collide or converge in 2018?

VP, Offering Management, Analytics, IBM

This year, the uptick in the volume and variety of data being created will set the stage for important organizational decisions.

While the proliferation of data will be readily apparent, deciding what to do in response will be less straightforward. The majority of workloads currently sit in traditional, on-premises environments, but many of them will move to private and public clouds over the next five years. In turn, the trend toward greater architectural flexibility and the demand for greater simplicity and speed will reach a crossroads where one of two things will happen. Either these trends will collide, each attempting to win at the expense of the other, or they will converge and work together in harmony toward a common goal. The strides made by data management as a whole in 2018 will ultimately determine the result.

With this undercurrent of uncertainty, it’s no surprise that flexibility will be a top priority. Companies continue to progress in their exploration of hybrid architectures, choosing deployments that suit their desired outcomes, such as agility and privacy, and situational constraints, including regulations and budget.

Yet even as people build their experience with cloud platforms and cloud providers quickly evolve to suit new needs, organizations are discovering that hidden costs can lead to surprises in the long run. Moreover, accommodating changing workloads and consumption patterns is becoming increasingly important as businesses work to tailor environments more precisely.

To help mitigate the risk of hidden costs and unforeseen circumstances, businesses will want to seek solutions that prevent multiple kinds of lock-in, including lock-in related to vendors, data and workload locations, and rates of consumption. The ability to move workloads from one cloud to another or from on-premises deployments to cloud should be viewed as a top priority. The more easily this can be done, the less likely a company will be locked into a single vendor or data/workload location.

Similarly, data virtualization capabilities should be a key consideration when building out a data management architecture. Finally, for even greater flexibility, organizations should make sure their cloud architecture can scale up and down elastically to meet the actual rates of consumption more closely.
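To make the elasticity point concrete, here is a minimal sketch of the kind of capacity logic an elastic architecture applies: size the deployment to the load actually observed, rather than provisioning for peak. All names and numbers (per-node capacity, headroom, node bounds) are illustrative assumptions, not any particular vendor's API.

```python
import math

# Hypothetical autoscaling decision: track actual consumption,
# keep a safety margin, and stay within configured bounds.
def target_nodes(requests_per_sec: float,
                 capacity_per_node: float = 1000.0,
                 min_nodes: int = 1,
                 max_nodes: int = 20,
                 headroom: float = 0.2) -> int:
    """Return the node count needed to serve the observed load."""
    needed = math.ceil(requests_per_sec * (1 + headroom) / capacity_per_node)
    return max(min_nodes, min(max_nodes, needed))

print(target_nodes(300))      # quiet period: shrink toward the floor -> 1
print(target_nodes(15000))    # busy period: grow with demand -> 18
print(target_nodes(100000))   # spike: clamped at the ceiling -> 20
```

Scaling down during quiet periods is what lets billing follow the actual rate of consumption instead of peak capacity.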

Of course, there is a tendency to look at that level of flexibility and conclude it must be done at the expense of simplicity. Many companies are still struggling with data that is isolated and spread out across multiple repositories. It’s reasonable to think that further increasing the amount of data and workload locations would only exacerbate the problem. Yet that doesn’t necessarily need to be the case.

A well-thought-out hybrid data management architecture provides the nuance required to enable a flexible solution while still delivering an uncomplicated experience for both enterprise architects and users. For enterprise architects, the secret lies in architecture convergence. Individual pieces of the hybrid data management architecture should not only integrate with one another, but maintain a likeness to one another so that moving between them is as easy as possible.

On the other hand, data virtualization will benefit users by distancing them and applications from the underlying intricacies and topology. When paired together, architecture convergence and data virtualization have the power to turn a would-be clash into a beneficial merger of flexibility and simplicity.
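The virtualization idea above can be sketched in a few lines: applications query one logical catalog, and a routing layer resolves each request to whichever physical store holds the data. The class and backend names here are hypothetical stand-ins, not a real product's interface.

```python
# Hypothetical data virtualization layer: callers see one logical
# catalog; the layer hides the underlying topology entirely.
class VirtualCatalog:
    def __init__(self):
        self._routes = {}  # logical table name -> physical backend

    def register(self, table: str, backend) -> None:
        self._routes[table] = backend

    def query(self, table: str, key):
        # Identical caller code whether the table lives on premises,
        # in a warehouse, or in a cloud object store.
        return self._routes[table].get(table, key)


class InMemoryBackend:
    """Stand-in for any physical store (RDBMS, cloud warehouse, ...)."""
    def __init__(self, data):
        self._data = data

    def get(self, table, key):
        return self._data[table][key]


catalog = VirtualCatalog()
catalog.register("orders", InMemoryBackend({"orders": {42: "shipped"}}))
catalog.register("customers", InMemoryBackend({"customers": {7: "Acme"}}))

print(catalog.query("orders", 42))    # "shipped"
print(catalog.query("customers", 7))  # "Acme"
```

Because routing lives in one place, moving a table from one backend to another is a single `register` call rather than a change to every application.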

As the underlying processes become simpler, insights will be delivered faster, a crucial advantage as every industry seeks to access and act upon data in a far more immediate way. Further simplicity efforts capable of increasing speed of insight will also become more of a focus. Self-service solutions, for example, will garner more attention as a method to empower users and scale their access to data.

With self-service options in place, developers will be able to spin up a development environment quickly, line-of-business users will gain access to trusted data, and IT will more easily avoid becoming a bottleneck. All involved will be able to get started on important projects with even greater speed.

Still, hybrid data management strategies will need to take into account the fact that data is coming in much more rapidly too. Internet of Things (IoT) devices and digital apps are supplying large amounts of event data at rates that can reach hundreds of thousands of data points per second. Those most capable of capturing this data and putting it to use instantaneously will be in a better position than competitors who cannot.

Traditional solutions cannot handle those data volumes at that velocity due to physical or economic constraints. Therefore, in 2018, more businesses will look to in-memory databases designed for massive structured data volumes, which can both perform real-time analytics and provide the option to capture and store that data for later use.
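The dual role described above, answering real-time questions from memory while retaining the full stream for later analysis, can be sketched as follows. The class, the window size, and the archive list are hypothetical illustrations of the pattern, not any specific database's behavior.

```python
from collections import deque

# Hypothetical event ingest: a bounded in-memory window serves
# real-time analytics; every event is also kept for later use.
class EventIngest:
    def __init__(self, window: int = 1000):
        self._window = deque(maxlen=window)  # hot, in-memory working set
        self._archive = []                   # stand-in for durable storage

    def ingest(self, value: float) -> None:
        self._window.append(value)   # oldest entry evicted automatically
        self._archive.append(value)  # captured for later analysis

    def rolling_average(self) -> float:
        # Real-time analytics over only the most recent events.
        return sum(self._window) / len(self._window)


stream = EventIngest(window=3)
for reading in [10.0, 20.0, 30.0, 40.0]:
    stream.ingest(reading)

print(stream.rolling_average())  # average of the last 3 events: 30.0
print(len(stream._archive))      # all 4 events retained: 4
```

Keeping the analytical window in memory is what makes per-event queries cheap even at hundreds of thousands of data points per second, while the archive preserves the option to revisit the data later.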

In general, the year seems primed to deliver on the promise of flexibility, simplicity and speed. However, this will only happen if smart hybrid data management strategies are put in place which help those trends converge rather than collide.

To learn more about adapting your current data management to changing business needs, visit our interactive point of view on the topic.