From Shadow IT to Shadow AI

The EU's AI Act – the first comprehensive law regulating AI – was recently approved and gives manufacturers of AI applications between six months and three years to adapt to the new rules. Anyone who wants to use AI, especially in sensitive areas, will have to strictly control AI data and its quality and create transparency – classic core disciplines of data management.

With the AI Act, the EU has done pioneering work and put a legal framework around what is currently the most dynamic and important branch of the data industry, just as it did with the GDPR in April 2016 and the Digital Operational Resilience Act (DORA) in January 2025. Many of the new obligations in the AI Act will be familiar to data protection officers and to every compliance officer who has dealt with the GDPR and DORA.

The law defines AI and establishes four risk levels: minimal, limited, high and unacceptable. AI applications that companies want to use in areas such as healthcare, education and critical infrastructure fall into the highest permitted category, "high risk". Those in the "unacceptable" category will be banned – for example, if they are considered a clear threat to people's safety, livelihoods and rights.

Under the Act, AI systems must be trustworthy, secure, transparent, accurate and accountable. Operators must carry out risk assessments, use high-quality data and document their technical and ethical decisions. They must also record how their systems are performing and inform users about the nature and purpose of those systems. In addition, AI systems should be supervised by humans to minimise risk and enable intervention, and they must be highly robust and achieve a high level of cybersecurity.

The potential of generative AI has also triggered a real gold rush that no one wants to miss. This is highlighted in a study conducted by Censuswide on behalf of Cohesity, a global provider of AI-supported data management and security: 86 percent of the 903 companies surveyed are already using generative AI technologies.

Mark Molyneux, EMEA CTO at Cohesity, explains the challenges this development brings and why, despite all the enthusiasm, companies should not repeat old mistakes from the early cloud era.

The path to AI is very short for users; entry is gentle, easy and often free – and that has big consequences that should be familiar to companies from the early phase of the cloud. That is why it is particularly important to pay attention to the following aspects right now:

Avoid loss of control

In the past, public cloud services sparked a similar gold rush, with employees uploading company data to external services with just a few clicks. IT temporarily lost control of company data and had to accept risks in terms of protection and compliance. Shadow IT was born.

Respondents now expect something similar with AI, as the survey shows. Compliance and data protection risks are cited as the biggest concerns, by 34 and 31 percent respectively, while 30 percent of company representatives fear that AI could produce inaccurate or false results. After all, most users do not yet know how to interact optimally with AI engines – and many generative AI solutions are still new and not yet fully mature.

The media often reports on companies that have learned this the hard way. In April 2023, engineers at Samsung uploaded confidential company data to ChatGPT, turning it into training material for a global AI – the worst case from a compliance and intellectual property perspective.

Since innovation cycles in AI are extremely short, the range of new approaches, concepts and solutions is exploding. Security and IT teams find it extremely difficult to keep up with this pace and to vet each new offering thoroughly. Often they are not even involved because, as with the cloud, a business unit has long since adopted a service. After shadow IT, shadow AI is now emerging – and with it an enormous loss of control.

Make people aware of the dangers

At the same time, new forms of possible misuse of AI are coming to light. Researchers at Cornell University in the USA and the Technion in Israel have developed Morris II, a computer worm that spreads autonomously in the ecosystem of public AI assistants. The researchers managed to make the worm bypass the security measures of three prominent AI models: Google's Gemini Pro, OpenAI's GPT-4 and LLaVA. The worm also managed to extract useful data such as names, phone numbers and credit card details.

The researchers shared their results with the operators so that the gaps can be closed and security measures improved. But a new open flank is clearly emerging on the cyber battlefield, where hackers and providers have been fighting each other with malware, spam and ransomware for decades.

Speed without haste

IT teams will not be able to turn back the clock and keep AI out of corporate networks, so bans are usually not an appropriate approach. At the same time, IT should not be tempted into rushed decisions; instead, it should regain control over its data and govern AI responsibly.

Running AI in a self-contained environment allows IT teams to assess the risk accurately, rule out external data sharing and introduce AI in a controlled manner. They can also be very selective about which internal systems and data sources the AI modules actively examine, starting with a small cluster and expanding step by step.

AI models that have already been introduced by third parties can be tamed by specifying exactly which data they are allowed to access. This is a decisive advantage in slowing the uncontrolled spread of AI: data flows can be precisely controlled, sensitive information protected and legal requirements met.
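The allowlist approach described above can be sketched in a few lines of Python. Everything here – the source names, the `SOURCE_ALLOWLIST` structure and the `request_access` helper – is a hypothetical illustration of a default-deny pattern, not the API of any specific product:

```python
# Hypothetical sketch: a default-deny policy gate deciding which internal
# data sources an external AI connector may read.

# Only explicitly approved sources appear here; each entry records whether
# the source contains personal data (PII).
SOURCE_ALLOWLIST = {
    "eng-docs": {"pii": False},   # approved: engineering documentation
    "hr-wiki": {"pii": True},     # listed, but contains personal data
}

def request_access(model_id: str, source: str) -> bool:
    """Grant access only if the source is listed and free of personal data."""
    policy = SOURCE_ALLOWLIST.get(source)
    if policy is None:
        return False  # default-deny: unknown sources stay off-limits
    if policy["pii"]:
        return False  # sources containing personal data are blocked
    return True

print(request_access("third-party-llm", "eng-docs"))    # allowed
print(request_access("third-party-llm", "hr-wiki"))     # blocked: PII
print(request_access("third-party-llm", "crm-export"))  # blocked: not listed
```

The key design choice is that access is denied unless a source has been explicitly approved – mirroring the article's point that IT should decide up front which data an AI model may examine, rather than restricting it after the fact.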

Mark Molyneux

Mark Molyneux is CTO for EMEA at Cohesity
