What is data enablement software?


For years, a common narrative has held that 'in order to understand and make use of our data, we need to put it in one place so that people can use it for their business processes.' Some describe these processes as their "use cases". This may be a slight improvement on the status quo, because businesses normally store their data in numerous technology silos and vendor lock-in is common. The theory, therefore, makes sense: move the data to a new silo, or system, and everything will be accessible and usable.

As I mentioned, that's how the narrative goes. Some organizations might even decide to use a data lake. Great! Now we're raring to go! But not so fast. Do we actually understand the data any better? No. Have we added value? Unlikely. Can we access the data simply? Perhaps, but it's not guaranteed. 'Ah,' they will say, 'but let's use this super new fancy tool {insert flashy new data platform provider here} to store and access our data.' The new company will talk up its revolutionary cleverness, and NASDAQ 100 execs will race to adopt the new tech to ensure they are part of the zeitgeist.

Markets will react and, before we blink, said new tech company is worth a billion dollars! But, unfortunately, we still haven't solved the problems we started with. We still don't understand our data, and we can't meaningfully access it unless we use this new technology's agreed methods. And now, instead of paying "old cranky tech company" a large annual license fee, we pay "fancy new tech company" a sky-high monthly subscription fee. Clever fancy new tech company.

Solving your data problems

So, what's the answer if you want to understand your organization's data, find it, meaningfully use it as and when required, and ensure it's of high enough quality to be useful? These are common questions that very few company execs can reasonably and authoritatively answer. In many cases, across all industries, no one really knows all the answers. In fact, around 95% of the time, the question will be met with blank looks and a shrug of the shoulders.

The solution?

'Think differently.' Store your data in whatever storage technology serves your needs. Don't adopt a new platform if it locks you in. Feel free to run on-premises if you want to own the servers, or in the cloud if you don't. If you want hyper-scale, use grids; if you want fast access, use in-memory, low-latency technology. Or use an open-source, scalable database like PostgreSQL if that's what you're comfortable with.

However, storage isn't the real problem to solve – at least not for the points outlined above. The real problem is understanding the data you have, how it connects, where it is created, and who can or should be allowed to access it. Focus on technology that maps your data estate and shows you where to find your actual data. Use software that creates linkages between the data, so you have a "roadmap" of your data and how it links from the front of your data-creation journey to the back.
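The idea of a data "roadmap" can be sketched as a small directed graph: register where each dataset is created, record which datasets derive from it, and walk the graph to see everything downstream. The dataset and system names below are hypothetical, and real cataloging tools track far richer metadata; this is only a minimal illustration:

```python
# A minimal sketch of a data lineage "roadmap": a directed graph that
# records where each dataset is created and which datasets derive from it.
# All dataset and system names are invented examples.
from collections import defaultdict

class LineageMap:
    def __init__(self):
        self.edges = defaultdict(set)   # source dataset -> derived datasets
        self.owners = {}                # dataset -> owning system

    def register(self, dataset, owner):
        self.owners[dataset] = owner

    def derive(self, source, target):
        self.edges[source].add(target)

    def downstream(self, dataset):
        """Return every dataset that ultimately depends on `dataset`."""
        seen, stack = set(), [dataset]
        while stack:
            for child in self.edges[stack.pop()]:
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return seen

lineage = LineageMap()
lineage.register("crm.customers", owner="CRM")
lineage.derive("crm.customers", "warehouse.customer_dim")
lineage.derive("warehouse.customer_dim", "reports.churn")
```

Walking `downstream("crm.customers")` then reveals that both the warehouse dimension table and the churn report depend on the CRM source – exactly the front-to-back linkage the roadmap is meant to expose.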

Most importantly, create a standardized glossary of terms, or ontology – the words your organization uses to describe a data attribute when you're talking to each other. For instance: "What's the PCID, please?" PCID might stand for "Primary Customer ID", so spend time labeling your data sources with these terms.
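As a sketch of what such a glossary might look like in practice, here is a minimal Python mapping from a business term to its definition and the physical columns that hold it. The source column names are invented for the example:

```python
# A minimal sketch of a standardized glossary: each business term maps to a
# definition and the physical columns that hold it. The source column names
# are hypothetical examples, not a real catalog.
glossary = {
    "PCID": {
        "definition": "Primary Customer ID",
        "sources": ["crm.customers.customer_id", "billing.accounts.cust_id"],
    },
}

def lookup(term):
    """Resolve a business term, case-insensitively, to its glossary entry."""
    entry = glossary.get(term.upper())
    if entry is None:
        raise KeyError(f"{term!r} is not in the glossary")
    return entry

print(lookup("pcid")["definition"])  # prints: Primary Customer ID
```

Even a flat structure like this means that when someone asks "What's the PCID?", both the definition and the places it physically lives are one lookup away.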

Invest in building "data assets". A data asset is an authoritative logical view, gathered from many underlying sources, that represents a domain. Simply put, when someone asks, "Can I have our list of suppliers?", the answer is to hand them the "Suppliers" data asset.
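One way to picture a data asset is as a merge over its underlying sources. The sketch below, with invented source systems and records, assembles a de-duplicated "Suppliers" view keyed by supplier ID:

```python
# A minimal sketch of a "Suppliers" data asset: one logical, de-duplicated
# view assembled from several underlying sources. Systems, records, and
# field names are hypothetical examples.
def suppliers_asset(*sources):
    """Merge supplier records from many systems, keyed by supplier ID."""
    merged = {}
    for source in sources:
        for record in source:
            # First source listed wins on conflicts; real assets would
            # apply proper survivorship rules here.
            merged.setdefault(record["supplier_id"], record)
    return sorted(merged.values(), key=lambda r: r["supplier_id"])

erp = [{"supplier_id": "S1", "name": "Acme Ltd"}]
procurement = [{"supplier_id": "S1", "name": "Acme Limited"},
               {"supplier_id": "S2", "name": "Globex"}]

suppliers = suppliers_asset(erp, procurement)
```

The point is not the merge logic itself but the contract: anyone asking for "our list of suppliers" gets one authoritative answer rather than three conflicting extracts.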

Find, govern, access

Crucially, once you build this data fabric of the future, ensure that you are only letting the right people see the right data. Data security and data sharing is an article in its own right for another day, but this is key. Find a way to enforce data security and data sharing policies across your data domain, in as automated a way as possible. All organizations have rules they abide by, so ensure your data meets those rules, so people can have confidence that the data is being used in the correct fashion and by the correct people.
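As an illustration of automated policy enforcement, the sketch below checks every access request against a small set of declarative rules and denies by default. The roles, datasets, and regions are invented for the example; real systems layer in jurisdictions, purposes, and audit logging:

```python
# A minimal sketch of automated access enforcement: every request is
# checked against declarative rules before any data is returned.
# Roles, datasets, and regions are hypothetical examples.
POLICIES = [
    {"dataset": "suppliers", "allowed_roles": {"procurement", "finance"}},
    {"dataset": "customers", "allowed_roles": {"sales"}, "regions": {"EU"}},
]

def can_access(user_role, region, dataset):
    """Return True only if an explicit rule permits this request."""
    for rule in POLICIES:
        if rule["dataset"] == dataset:
            if user_role not in rule["allowed_roles"]:
                return False
            if "regions" in rule and region not in rule["regions"]:
                return False
            return True
    return False  # deny by default when no rule matches

assert can_access("procurement", "US", "suppliers")
assert not can_access("sales", "US", "customers")  # customers restricted to EU
```

Encoding the rules as data rather than scattering them through application code is what makes enforcement automatable and auditable across the whole estate.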

Last, but by no means least, provide a discovery and access mechanism so that users can explore the data you've spent so long finding and curating. Ensure access is seamless, without your teams spending hours, days or weeks trying to get system access to some underlying datastore while someone recites the 100 reasons they're not allowed. It's all about three key terms: find, govern and access. And it's what data enablement start-up WCKD RZR was founded to solve.

WCKD RZR's pioneering software, Data Watchdog, eliminates a myriad of international data privacy, management and cost issues for multi-national businesses. The business was founded because, as HSBC's Head of Transformation for Global Banking and Markets, I discovered there was no automation tool that allowed a multi-national bank, for instance, to use machine learning to catalog its cloud-based data while ensuring all regulatory requirements and access controls were met.

Businesses often face a range of problems caused by conflicting data governance policies and authorization controls in different locations and jurisdictions. If a business wants to fix its messy data estate by moving to a single cloud or database management system, it usually costs millions of dollars to do so. Even then, the data is often unlabeled, hard to find, and subject to different regulations. WCKD RZR's mission is to enable large global organizations to access all their data around the world, wherever and whenever they want, fully compliant with the mountains of conflicting global rules. So, for true data enablement: find and map your data, apply suitable access policies, and make sure you have a seamless method to access the data. Simple.

By: Chuck Teixeira, co-founder and CEO of WCKD RZR.