Imagine a day in the life of Sarah, a hypothetical Chief Data Officer at a major bank in South Africa. Many expectations rest on her shoulders: she struggles to deliver business-ready data to fuel her organization and support the decision makers within the bank. It is her job to put in place a
High-quality data is the core requirement for any successful, business-critical analytics project. It is the key to unlocking business value and delivering insights in a timely fashion. However, stakeholders across the board are responsible for data delivery, quickly evolving requirements,
The expectation of faster results continues to rise. Businesses everywhere are looking for ways to improve their operational efficiency and effectiveness to enable better decision-making. The need to optimize typically runs up against the reality that there are many silos within any
Most businesses collect data but are unable to use it to generate business value or deliver insights in a timely fashion. Data volume and data types continue to grow, as do the different types of data citizens—ranging from business users to data scientists. As a result, data management and delivery
Global data privacy regulations like the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and Brazil’s LGPD have created scrutiny around personal, customer, and employee data. This data is growing at a rapid pace, and so are the mandates requiring
Haruto Sakamoto, the Chief Information Officer at a Japanese multinational imaging company, had a few challenges to contend with. His business units had a presence in 180 countries, with geographically dispersed data warehouses and business intelligence applications.
The number of business segments requiring data to drive contextual insights is increasing. Leaders are seeking new ways to manage the pressures of delivering high-quality data faster across their businesses. To date, many of these projects have focused solely on ingesting data into a data lake
Reading news headlines about yet another retail chain closing its stores, one can easily be left with the impression that we’re in a retail apocalypse. But in reality, the overall retail industry is strong and healthy, especially online.
What we’re witnessing, however, is a transformation
The conversation around data preparation has been evolving. What started as a push for self-service access for specific use cases has now expanded to operationalizing a data pipeline across the enterprise. The goal is to create efficiencies and eliminate workflow silos to propel data strategy
Let’s say you’re the Chief Technology Officer of a bank or retailer struggling to infuse AI that improves customer experiences. You likely face three main challenges:
Data sprawl: Your customer data is spread across multiple clouds and on-premises systems, including cloud data lake storage
In my last blog post, I explained why businesses need product information management (PIM). I will now dive deeper into the key factors an organization must consider when evaluating a PIM solution. Note that I am not going to cover anything about catalog, hierarchy, category
The best data catalogs can automate the process of collecting, classifying, and profiling data to ensure the highest standards of quality. Here are three popular use cases detailing why companies are moving to IBM’s Watson Knowledge Catalog.
How much time do your data scientists and business analysts spend looking for the right data? How much time do they spend preparing it? And how much time is wasted because they don’t know how trustworthy the data they find is, or because several people have unknowingly spent time looking for the