Some of these figures are quite stark: of the CFOs polled, 78% fear their company may submit inaccurate data during financial reporting; 82% of respondents are not completely confident in the quality of their company’s data. This industry has been talking about this issue for many years – why is it still such a challenge?
I think that for many of the companies that SAS works with, these are challenges that go back some way and are yet to be addressed. It goes all the way back to the proliferation of client-server systems and increased flexibility at the application tier. It’s also been compounded by spreadsheet sprawl. You’ve got data silos popping up all over the place, and often no common data governance, or even the realisation that data is a key corporate asset.
You’ve got all this processing power, network bandwidth like we’ve never seen, and a proliferation of mobile devices too. It’s all led to a proliferation of the data itself, often with little or no governance in place. Organisations haven’t had the courage or the opportunity to address this.
One of the challenges is that finance professionals are often among the worst offenders when it comes to creating spreadsheet sprawl! Yet it is often in finance where these initiatives are born, or at least funded.
So despite all the articles and reports from the analysts and vendors, there’s been very little improvement?
I would say there are pockets of improvement. Some are able to elevate the issue into the boardroom, and that helps to get the problem addressed. But for others it’s still very difficult organisationally and culturally. The tooling and technology that supports this is maturing but there’s a big gap between those organisations that are doing something about it and those that are not. And I think that gap is, if anything, going to get wider.
Once you start to manage and govern data properly you can see a measurable benefit, both from a cost perspective and from a risk perspective.
So where is a good place to start – to take a more proactive approach to data quality and governance?
I’d say you need to take a step back and ask how the IT department has changed in the past 20 years, because it will change again in the next ten years. Maintaining a quality of service, keeping the lights on, that’s just table stakes. What’s needed now is a collaboration [between IT and the business]. It’s got to the stage where there really needs to be someone with a foot in both camps – data stewards, who can look at the issue from an IT perspective as well as a business perspective.
Oh no, that old problem of whether IT should or can understand the business, and whether business people understand the language of IT…
It’s a valid point. I think the key thing is collaboration. The business is IT and IT is the business, that’s how to think about it. You then need the tooling and technology to support that collaboration.
Haven’t recent regulatory changes – the way the FSA is trying to get tougher, or the increased sizes of fines being handed down by the Information Commissioner’s Office – sharpened the corporate world’s mind regarding these data governance issues?
It’s a good question. There are certainly areas where it has. Even though the Solvency II legislation has effectively been parked for the moment, it’s an excellent example of how recent compliance legislation means that a business needs to prove the accuracy, completeness and appropriateness of its data. Those are three measures from a quality perspective. They can be measured, and organisations will have to measure them for Solvency II. Once you measure it you can make moves to improve it further.
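To make those three measures a little more concrete, here is a minimal sketch – not from the interview, and not SAS tooling – of how completeness, accuracy and appropriateness might be scored for a hypothetical policy dataset. The column names, reference list and thresholds are invented for illustration.

```python
# Illustrative only: scoring completeness, accuracy and appropriateness
# for a hypothetical insurance-policy dataset. All fields and rules are
# invented for the example, not taken from Solvency II or SAS software.
import pandas as pd

policies = pd.DataFrame({
    "policy_id":   ["P001", "P002", "P003", None],
    "premium":     [1200.0, -50.0, 980.0, 1500.0],   # a negative premium is suspect
    "country":     ["GB", "GB", "XX", "FR"],          # "XX" is not a valid code
    "last_update": pd.to_datetime(
        ["2012-01-10", "2009-06-01", "2012-03-02", "2011-12-20"]),
})

valid_countries = {"GB", "FR", "DE", "IE"}   # hypothetical reference data

# Completeness: share of mandatory fields that are actually populated.
completeness = policies[["policy_id", "premium", "country"]].notna().mean().mean()

# Accuracy: share of records whose values pass simple validity rules.
accurate = (policies["premium"] > 0) & (policies["country"].isin(valid_countries))
accuracy = accurate.mean()

# Appropriateness (crudely): is the data recent enough for the reporting period?
appropriateness = (policies["last_update"] >= "2011-01-01").mean()

print(f"completeness={completeness:.0%} accuracy={accuracy:.0%} "
      f"appropriateness={appropriateness:.0%}")
```

Once scores like these are produced on a regular basis, they can be tracked and targeted for improvement, which is the point being made above.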
Your research also found that one in five companies quantify the financial value of data on the balance sheet, like other more traditional assets…
They do. You can put a value on it, but just starting to report on it has benefits. Reporting on it helps to give people the confidence that the data is correct, processed appropriately and accurately. That’s where you should start: is the data complete, appropriate and accurate? Then the next stage is making the data governance into a series of repeatable processes. Have you noticed how organisations who get this ask you if your details are still valid each time they interact with you?
True. What are the implications of data governance for the most-hyped topic of the moment, Big Data? The analysts have talked about the three ‘v’s: the volume, variety and velocity of data.
There’s also a fourth ‘v’, which is value. What is the value of data? To some degree Big Data could just be like the exhaust fumes from a car: some companies will have a huge amount of data, a lot of which may actually be of very limited value. I wouldn’t go so far as to say all Big Data projects will have a data governance element to them. But it’s likely Big Data projects will involve some management of metadata, and that is likely to have data governance repercussions.
You talked earlier about data stewards. How common are they becoming?
In pharma and financial services, where they are regulated to do this, they are becoming quite common. But a lot of organisations don’t have the skills in-house to do this, so I am seeing the emergence of third party, professional data managers – people that understand the policies and processes, because it’s not just about the technology.
There is a set of transferable skills emerging, and forward-thinking companies are aiming to educate some of their own people in this area, or to bring in outside skills in the hope that some of those skills will transfer.
You say it’s not just about technology but also people and process. But what do you think are the main benefits of underpinning data governance with technology from a company like SAS?
It helps alleviate the challenge of metadata management – to take one example – across an organisation. It’s about having the tooling and technology to automate many of these processes at a low cost in people’s time. Data quality and governance go hand-in-hand. SAS software is capable of improving confidence by letting you create the right rules and the right policies.
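As a rough illustration of that idea – encoding quality policies as named, reusable rules that can be applied automatically and reported on over time – here is a short generic sketch. It is not SAS code; the rule names and record layout are hypothetical.

```python
# Illustrative sketch: data quality rules expressed as reusable, named checks
# that can be run automatically across datasets and reported as pass rates.
# Not SAS software; all rules and fields are invented for the example.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]   # returns True if the record passes

rules = [
    Rule("email_present",    lambda r: bool(r.get("email"))),
    Rule("premium_positive", lambda r: r.get("premium", 0) > 0),
]

def audit(records: List[dict], rules: List[Rule]) -> Dict[str, float]:
    """Return the pass rate per rule, so quality can be tracked over time."""
    return {
        rule.name: sum(rule.check(r) for r in records) / len(records)
        for rule in rules
    }

sample = [
    {"email": "a@example.com", "premium": 120.0},
    {"email": "",              "premium": -5.0},
]
print(audit(sample, rules))   # e.g. {'email_present': 0.5, 'premium_positive': 0.5}
```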