The Under- and Overestimation of Data
What? How can data be both underestimated and overestimated?
Allow me to explain: quite a few companies underestimate the power and value of data, while at the same time overestimating the shape their data is actually in. Recent news about Citibank and H&M shows that data governance, data quality, and adherence to privacy legislation are still challenges for a lot of companies.
First, let's look at some definitions. Data are pieces of information created within processes to further facilitate those processes. Here you can distinguish master data, the more static, basic data that enables further processes, from the transactional data those processes generate, which keeps them running and provides insights. For example, master data such as a customer record and a product record are combined via a sales order into a sale, which in turn triggers fulfillment, invoicing, and payment. Data quality can be defined as data being complete, correct, and available in time. That does not mean there is a single standard: the processes using the data determine what complete, correct, and timely mean. In B2C, for instance, a VAT registration is not expected on a customer record, while in B2B it is. Data governance ensures the data meets the requirements set for quality and compliance: making sure every B2B customer has a VAT registration or a clearly recorded reason why not, making sure privacy-sensitive information is only stored where needed and can be removed when requested, and so on.
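To make the B2B example above concrete, such a governance rule can be expressed as a simple automated check. This is only a minimal sketch: the record layout and field names (`segment`, `vat_number`, `vat_exemption_reason`) are made up for illustration, not taken from any particular system.

```python
def check_vat_rule(customer: dict) -> list:
    """Return data quality issues for one customer record.

    Rule: a B2B customer must have a VAT registration, or a
    clearly recorded reason why it is missing.
    """
    issues = []
    if customer.get("segment") == "B2B":
        has_vat = bool(customer.get("vat_number"))
        has_reason = bool(customer.get("vat_exemption_reason"))
        if not has_vat and not has_reason:
            issues.append(
                f"{customer.get('id', '?')}: B2B customer without "
                "VAT registration and without an exemption reason"
            )
    return issues


customers = [
    {"id": "C001", "segment": "B2B", "vat_number": "NL123456789B01"},
    {"id": "C002", "segment": "B2B"},   # violates the rule
    {"id": "C003", "segment": "B2C"},   # rule does not apply
]

# Collect all issues across the customer master data.
report = [issue for c in customers for issue in check_vat_rule(c)]
```

Running a rule like this periodically over the customer master is one way governance moves from an opinion to something measurable.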
So why claim that companies underestimate the power and value of data? I see it in how companies position master data management: it is regarded as data entry, not always staffed with specialized or dedicated people, and there is a growing drive to leave it to self-service, with data supplied via web forms and portals. But that works, right? Does it? There are numerous examples where outdated and incomplete information limits or even prevents clear, critical insights and further automation. When data quality fails, the universal truth of "garbage in = garbage out" still applies. Meanwhile, large investments are made in data science, machine learning, and artificial intelligence, all of which rely heavily on the underlying master data and data itself, including its accessibility and interpretation.
And that leads to my next statement: a lot of companies overestimate the shape their data is in, resulting in a wave of data legacy clean-up initiatives, data lakes, data warehouses, and so on, all trying to make the data easily accessible, convert it into something useful for generating the required insights, and get it into the right shape to further facilitate automation.
Both come down to the fact that everyone seems to have an opinion about data and data quality. Just ask: it's no surprise that if you ask 10 people for their definition of data and data quality, you get 10 different answers. Fortunately, you do see more and more data professionals at different levels within organizations and consulting companies. If you want your car in good shape, you let a professional deal with it; so it should be with data!
What can or should be done? Invest in data! Have it go hand in hand with all initiatives on digital transformation, data science, automation, machine learning, and artificial intelligence. Invest in a solid basis: get it right, set it straight, and keep it straight. That means a clear definition of purpose for data, your own vision and mission with a clear strategy suited to what your company wants to achieve, and the right people, processes, and systems in place to get it done.