The lingering effect of blind spots in the cloud

Jesse Stockall, Chief Architect at Snow Software, explores the lingering effect of blind spots in the cloud during a time of increased reliance on these technologies.

For many organizations, 2020 was the year of maintaining business continuity. Whatever that experience looked like, most also learned a great deal about resilience. But if last year was about keeping the lights on, 2021 must be about operationalizing what is now our new normal.

Without a doubt, the pandemic has proven that work from anywhere is feasible. It could be said, though, that Covid-19 only accelerated what was inevitable. For the sake of improved productivity and efficiency, organizations were already diversifying their technology stacks by adopting SaaS models and migrating mission-critical business to the cloud (or multiple clouds), often while maintaining valuable on-premises legacy solutions.

Add to that increasingly complex technology mix a sudden shift to remote work, in which users could access cloud services and freely download applications directly onto laptops at home, outside their company’s network. As a result, IT now faces a mountain of technology sprawl, not to mention real shadow IT challenges.

And if that weren’t enough, a growing reliance on cloud instances brings emerging security risks with it. In late 2020, the SolarWinds breach became public knowledge, and the details are worrisome. The company’s Orion software is a popular network management system that monitors and manages the various components of an organization’s network. A long list of organizations use Orion to sort through the full scope of their networks, including multi-cloud services. Malicious code was inserted into the Orion development process and pushed out as a routine software update, thereby infecting any organization that installed it.

This breach taught us two important lessons: first, failure to defend your technology supply chain can give attackers the one weak link they need to enter your network; second, while complexity is inevitable in modern technology stacks, unnecessary complexity is risky.

The complexity conundrum 

Today’s IT teams are challenged to operationalize their changing technology mix and manage the risks that come with it, especially when it comes to cloud environments.

For example, do you know how many cloud environments your organization uses today? What workloads are running on them, and who is using them? Do you have more licenses than you need? Are you re-harvesting unused subscriptions rather than simply buying more? These and other questions certainly have financial implications.
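A first step toward answering those questions is simply to enumerate what is actually running. Below is a minimal inventory sketch, assuming an AWS estate and boto3 credentials already configured; the "Owner" tag convention is a hypothetical example of how usage might be attributed.

```python
# Minimal cloud-inventory sketch: enumerate EC2 instances in every region
# and report who owns them via a hypothetical "Owner" tag.
# Assumes AWS credentials are already configured for boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    client = boto3.client("ec2", region_name=region)
    for page in client.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                print(region, instance["InstanceId"], instance["State"]["Name"],
                      tags.get("Owner", "<untagged: a potential blind spot>"))
```

Untagged instances are exactly the kind of blind spot this article describes: workloads that are running, and being paid for, with no clear owner.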

Without visibility into what you have and how you use it, you’re likely overspending and underutilizing. Additionally, the lack of visibility and control over your cloud computing resources creates a tangled web of complexity that presents a significant security risk and potential compliance failures.

Take application development as one example. Much of today’s work has shifted from a completely built-from-scratch model to one where you assemble products from a vast collection of open-source components and cloud services. This enables fast, easy development, but it also creates blind spots when open-source projects receive updates and fixes that are never propagated to your product. This increases supply chain risk, as the SolarWinds breach demonstrated. If your developers aren’t properly sourcing open-source code, you are not only at risk of noncompliance fines or requirements to divulge source code but also susceptible to security vulnerabilities.
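One practical way to surface that blind spot is to check your declared dependencies against the latest published releases. Below is a minimal sketch, assuming a Python project with exact version pins in a requirements.txt; it queries PyPI’s public JSON API. A dedicated scanner such as pip-audit, or an SBOM tool, would go further by matching versions against known vulnerability databases.

```python
# Flag pinned open-source dependencies that have fallen behind the latest
# release on PyPI -- one small slice of supply chain visibility.
# Assumes a requirements.txt with exact "package==version" pins.
import json
import urllib.request

with open("requirements.txt") as f:
    pins = [line.strip().split("==") for line in f
            if "==" in line and not line.startswith("#")]

for name, pinned in pins:
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url) as resp:
        latest = json.load(resp)["info"]["version"]
    if latest != pinned:
        print(f"{name}: pinned {pinned}, latest is {latest} -- review changelog")
```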

When considered on a larger scale, complexity-driven security and compliance risk can be even more costly. If you’re a hybrid or multi-cloud customer who also relies on certain on-premises solutions, a co-location centre and public cloud services, your legacy security stack probably doesn’t support that mix as well as it should. And your security team may not understand cloud containers, on-premises legacy systems, mobile devices and endpoints in any real depth. Your choice then becomes sub-standard security or far too many cooks in the kitchen, each with their own technology agenda, which only compounds your complexity.

The need for a better mousetrap

Further complicating the issue of security in the cloud today is the Shared Responsibility model. When you rely on a third-party cloud service like Amazon AWS, Microsoft Azure or Google Cloud, the provider delivers only a baseline level of security for its platform. This fact is too often forgotten, and the tendency is to think ‘Amazon is protecting our data’ when, in reality, you have an interconnected spiderweb of applications and permissions, each impacting all the other systems.

When something goes wrong, you can’t simply call up Amazon and ask them to fix it. Instead, who can help you address the problem? Your internal staff? The cloud provider? A software vendor? Your networking provider? Figuring out where the issue arose becomes a game of Clue, where you’re searching for who did it and with what. It’s a vicious cycle that can result in no real progress.

The best defence against this complexity is to understand what your third-party cloud provider (or providers) are responsible for, as written in their Shared Responsibility policy, and communicate that clearly to your IT and security teams. With that baseline in place, you can build out incident response plans.
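Knowing where that line sits also tells you what you must audit yourself. As one illustration, the sketch below, assuming AWS and configured boto3 credentials, flags S3 buckets whose public access block is missing or incomplete: a configuration task that falls squarely on the customer’s side of the Shared Responsibility model.

```python
# Customer-side check under the Shared Responsibility model:
# AWS secures the S3 service itself, but bucket configuration is yours.
# Assumes boto3 credentials are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)
        settings = config["PublicAccessBlockConfiguration"]
        if not all(settings.values()):
            print(f"{name}: public access block only partially enabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            # No configuration at all -- the bucket relies on account defaults.
            print(f"{name}: no public access block configured")
        else:
            raise
```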

The second step you can take to shore up your security and compliance posture lies in the power of automation. To continue the application development example, your team may have hundreds of source code repositories with dozens to hundreds of components each, all pieced together into a portfolio of products. It isn’t humanly possible to stay on top of everything being built with a manual process. Automation quickens the pace and drastically improves accuracy, so details aren’t missed.
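As one illustration of what that automation can look like, the sketch below sweeps a directory of locally cloned repositories and gathers every dependency manifest into a single inventory; the root path and manifest names here are assumptions for the example.

```python
# Automation sketch: sweep every cloned repository under a root directory
# and collect dependency manifests into one inventory, so no component
# is tracked by hand. The root path and manifest names are hypothetical.
from pathlib import Path

MANIFESTS = {"requirements.txt", "package.json", "go.mod", "pom.xml"}
ROOT = Path("/srv/repos")  # hypothetical checkout root

inventory = {}
for repo in sorted(p for p in ROOT.iterdir() if p.is_dir()):
    found = [f.relative_to(repo) for f in repo.rglob("*") if f.name in MANIFESTS]
    inventory[repo.name] = found
    print(f"{repo.name}: {len(found)} manifest(s) found")

# From here, each manifest can be fed to a scanner or SBOM generator on
# every build, rather than being reviewed manually.
```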

Again, looking at this problem on a larger scale, maintaining visibility over the sprawling menu of cloud services, applications, on-premises legacy systems, mobile endpoints and whatever else is mission-critical to your organization is a daunting task. Still, that visibility is essential to get a handle on your security and compliance risk, not to mention to perform the necessary due diligence on your IT budget.


Shine some light on the blind spots with visibility

Having a heterogeneous IT environment has its benefits: it allows you to choose best-of-breed tools, maximize your budget and build a resilient technology backbone. But one chink in the armour, and everything is suddenly precarious. Sorting out how to fix the problem is no easy task. However, with visibility into your network, cloud services, product development and users, you can make significant gains across your security and compliance risks and your budget. Without it, you’re left floundering in the dark.
