Challenges of moving to a multi-cloud strategy

Hybrid and multi-cloud strategies are used by businesses for a variety of reasons, from legal requirements to cost savings and application agility. Often businesses want the flexibility to take advantage of the best prices or capabilities from different public cloud vendors. Avoiding vendor lock-in is another reason a business may pursue a multi-cloud strategy.

While a business may not want to be tied to any one cloud provider indefinitely, moving to a multi-cloud strategy is not right for everyone. In fact, being tied to one cloud provider is not necessarily the risk it’s often perceived to be.

This article sheds light on why moving to a multi-cloud strategy is not for everyone, and on the barriers and challenges to overcome if this option is considered.

Not all clouds are the same

In theory, an application can be stretched across multiple cloud providers’ environments. In practice, however, “true application portability” involves barriers that must be overcome before the app becomes portable, or truly multi-cloud enabled.

First, virtual machines (VMs) must be eliminated from the application architecture. Not all clouds are the same under the covers. Each cloud has its own underlying abstraction layer, often called the cloud service fabric, where network, compute and storage are presented to the VMs that usually host applications. For example, AWS networking is significantly different from Microsoft Azure networking, and both in turn are set up very differently from GCP networking. The same is true of virtual machines: a VM running on Azure cannot be automatically moved to AWS, and an AWS VM cannot be directly moved to GCP.
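To give a rough feel for how differently the clouds expose the same concept, the sketch below launches a small Linux VM with the AWS and Azure Python SDKs (boto3 and azure-mgmt-compute). This is a hedged illustration, not production code: every name, image reference and ID is a placeholder, and the point is simply that neither request shape would be accepted by the other provider.

```python
# Illustrative sketch only: "launch one small Linux VM" takes a completely
# provider-specific shape on each cloud. Assumes boto3, azure-identity and
# azure-mgmt-compute are installed and credentials are configured; all names,
# IDs and image references below are placeholders.
import boto3
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# AWS: an AMI ID and an instance type, launched into the account's default VPC.
ec2 = boto3.client("ec2", region_name="eu-west-2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

# Azure: a resource group, a marketplace image reference and a pre-created
# network interface are all required before the VM can exist. (A real request
# would also need an admin password or SSH key, omitted here for brevity.)
compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
compute.virtual_machines.begin_create_or_update(
    "my-resource-group",
    "my-vm",
    {
        "location": "uksouth",
        "hardware_profile": {"vm_size": "Standard_B1s"},
        "storage_profile": {
            "image_reference": {
                "publisher": "Canonical",
                "offer": "0001-com-ubuntu-server-jammy",
                "sku": "22_04-lts-gen2",
                "version": "latest",
            }
        },
        "os_profile": {"computer_name": "my-vm", "admin_username": "azureuser"},
        "network_profile": {
            "network_interfaces": [{"id": "<resource ID of a pre-created NIC>"}]
        },
    },
)
```

Neither snippet can be pointed at the other cloud by changing a region string; the image formats, sizing names and network prerequisites are all fabric-specific.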

If a VM is to be moved to a new cloud provider, the VM image will need to be modified to match the cloud fabric of the target platform. Compute and storage offerings also differ between cloud providers and often present their own application-portability challenges. These issues are compounded if the application uses Platform-as-a-Service (PaaS) offerings, for example Database-as-a-Service (DBaaS), web apps, or message broker components, where the underlying data storage, website delivery mechanisms and application logic are proprietary to that particular cloud.

If the application uses Functions-as-a-Service (FaaS) – cloud services that let customers develop, run, and manage application functionality without the complexity of building and maintaining the infrastructure themselves – then the task of portability becomes even more complex, as each cloud provider’s FaaS has its own specific nuances.
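To make that concrete, here is roughly what the same trivial HTTP-style function looks like as an entry point on three FaaS platforms. These are simplified sketches of the documented Python handler shapes for AWS Lambda, Azure Functions and Google Cloud Functions; none of the three would run unmodified on either of the others.

```python
# Simplified sketches of the entry-point shape each FaaS platform expects.
# Each snippet targets a different platform and would normally live in its own
# project; packaging, configuration and trigger models diverge even further
# than the signatures shown here.

# AWS Lambda: a handler that receives an event dict and a context object.
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "hello from Lambda"}

# Azure Functions: a handler built around the azure.functions request/response types.
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    return func.HttpResponse("hello from Azure Functions", status_code=200)

# Google Cloud Functions: an HTTP handler that receives a Flask-style request object.
import functions_framework

@functions_framework.http
def hello(request):
    return "hello from Cloud Functions", 200
```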

But if VMs, PaaS and (serverless) FaaS architectures won’t port cleanly between clouds, what can a developer do?

Breaking down the multi-cloud challenge

This interoperability issue between clouds can be viewed as the problem an international traveller has when they want to charge their laptop in several different countries. You either carry a plug for each power system (so one for the UK, one for Europe and one for the USA) or you carry an adapter that works anywhere.

For application architecture, that adapter would be a product like Docker, a container framework that is the same no matter what cloud it sits upon. However, containerizing an existing application is not necessarily an easy or cost-effective thing to do. Containers are a lightweight hosting architecture designed to spin up and shut down rapidly, scaling resources to match demand, so you pay the least while getting the most from your cloud. The heavier a container, the less portable it is.
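As a small illustration of the adapter idea, the sketch below uses the Docker SDK for Python to run a public image. The image and command are arbitrary examples; the point is that the same call works identically on any host with a Docker engine, whichever cloud (or laptop) that host sits in.

```python
# Minimal sketch using the Docker SDK for Python (pip install docker).
# Nothing here is specific to AWS, Azure or GCP.
import docker

client = docker.from_env()              # connect to the local Docker engine
logs = client.containers.run(
    "python:3.12-slim",                 # a public image from Docker Hub
    ["python", "-c", "print('same container, any cloud')"],
    remove=True,                        # remove the container when it exits
)
print(logs.decode().strip())
```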

Challenge one: Breaking an application down into microservices

To be able to run containers at scale across any cloud, the first thing you need to do is break the application up into small component parts called microservices, each running in its own container. Microservices are discrete capabilities within an application which, when put together, present a larger, more complex application to the user. For example, a simple e-commerce application with a website, a database that holds customer preferences, a catalogue of goods for sale and a payment capability would require a minimum of four microservices within its architecture. If you then added payment notification and fulfilment messaging to a warehouse to deliver the goods, you would need at least six microservices. And so it goes on: each new application function requires its own service components.
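As a hypothetical illustration of what one of those component parts might look like, here is a minimal catalogue service written with Flask. The other services (payments, preferences, and so on) would be similarly small, independently deployable processes, each packaged in its own container.

```python
# A deliberately tiny sketch of one hypothetical microservice – the product
# catalogue – exposing a single HTTP endpoint. Requires Flask (pip install flask).
from flask import Flask, jsonify

app = Flask(__name__)

# In a real service this data would live in the service's own datastore.
PRODUCTS = [
    {"id": 1, "name": "Kettle", "price_gbp": 24.99},
    {"id": 2, "name": "Toaster", "price_gbp": 32.50},
]

@app.route("/products")
def list_products():
    """Return the catalogue so the website service can render it."""
    return jsonify(PRODUCTS)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Multiply that by every capability in the application and the scale of the decomposition effort becomes clear.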

Taking an existing application and breaking it down into a microservices-based architecture is expensive and time-consuming. But it is a prerequisite to application portability, because the container is the universal adapter.

Challenge two: Applying an orchestrator to run containerized applications

Unfortunately, the story doesn’t stop there. To run a complex containerized application, you need one essential component – an orchestrator – that will monitor and manage your containers. The orchestrator understands the overall microservices architecture including which containers, and how many of them, make up a specific application. It can tell each container where to find the other containers used to deliver the application. The most common orchestrator for containerized application delivery is Kubernetes, which is quite complex to operate.
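For a feel of what “telling the orchestrator which containers, and how many, make up an application” means in practice, the sketch below uses the official Kubernetes Python client to declare three replicas of the hypothetical catalogue container from earlier. The image name and cluster configuration are placeholder assumptions.

```python
# Illustrative sketch using the official Kubernetes Python client
# (pip install kubernetes). It declares a Deployment of three replicas of a
# placeholder catalogue image; Kubernetes then keeps that many copies running.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at some cluster

container = client.V1Container(
    name="catalogue",
    image="registry.example.com/shop/catalogue:1.0",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="catalogue"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the orchestrator keeps three copies alive
        selector=client.V1LabelSelector(match_labels={"app": "catalogue"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "catalogue"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

A companion Service object would then give those replicas a stable, discoverable name that the other microservices can use to find them.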

Breaking the application down into microservices is the greatest blocker to becoming multi-cloud enabled. You might avoid vendor lock-in with a single cloud provider, but you will still find yourself “locked in” to Docker and to whoever provides the Kubernetes capabilities you leverage to orchestrate your microservices-based application. Vendor lock-in at some level is, unfortunately, a fact of life; you just need to decide at what level you want that lock-in to occur.

Challenge three: It’s extremely difficult to avoid cloud vendor lock-in

If cost is your concern, then you need to leverage PaaS or FaaS capabilities within the cloud. However, whilst both these services are cheaper to run, they will increase your level of lock-in to your chosen cloud provider. If you rely on FaaS or PaaS and the vendor decides to retire or alter those services, you in turn will be forced to rework the services that rely upon them. You are locked into the provider’s upgrade cycle as well as their technology!

Unless your application is already microservices-based, the redevelopment effort will massively outweigh any cost savings made by using a single cloud provider. Additionally, if it is a commercial, off-the-shelf software application, you are completely at the mercy of the software vendor as to when, or if, they intend to convert the software into a microservices-based architecture suitable for containerisation and Kubernetes delivery.

Using a cloud that’s best for each workload

Most multi-cloud strategies in use today do not involve stretching a single application over multiple clouds (see above). It is much better to leverage the strengths and cost advantages of each cloud by running the workloads best aligned to it. For example, if you build your own software by knitting together open-source capabilities on a DevOps base, you will probably run that workload on AWS. If you need huge amounts of compute to run scientific or statistical modelling, you’ll likely do that on GCP, and if you have information security assurance concerns, you may choose to run those workloads in Microsoft Azure. (Disclaimer: each cloud provider mentioned will happily expand on why their corner of the cloud is super-wonderful for each of the workloads mentioned above – the stereotypical labels are blurring as capabilities mature – although…)

There are three important considerations to weigh before moving your cloud services.

1. Avoiding downtime

Many vendor-specific services are delivered from single regions within a cloud but are presented to multiple regions as if they were native to them. Some recent high-profile AWS and Azure outages have been tracked down to specific failures within one or two US-based datacentres, so be careful which services and regions you rely upon. Centralized DNS and authentication are the lifeblood of applications, so you might want to build in resilience for these critical services. One thing to note is that hyperscaler outages are infrequent and, when they do happen, they are often resolved much faster than traditional IT outages in an on-premises datacentre. If an outage affects multiple customers, you can guarantee the cloud provider will be analyzing how the outage started and how to stop it from occurring again, which often means investing to make the service more resilient.

2. Make architectures and applications geographically resilient

Distributed consumption of services should be delivered from at least two regional datacentres, keeping data close to where it is consumed and reducing latency for users. All the hyperscale cloud providers have a good regional spread of datacentres, so this isn’t necessarily a reason for a multi-vendor cloud strategy; it’s simply best practice to avoid a failure in a single datacentre affecting all of the users in that region.

3. Securing multiple clouds is always far more complex than securing a single-vendor cloud architecture

Leverage CIS (Center for Internet Security) benchmarks for cyber resilience, use multi-factor authentication, and if you do move to the cloud, have one security information and event management (SIEM) system that monitors what is happening in every environment from a single console. If you don’t have one pane of real-time visibility into your environment, it will be difficult to look for suspect activity, and you will never spot an attack whilst it is happening, which means the damage done will always be significantly higher than if you were monitoring things in real time. Ultimately, public and multi-cloud environments grow beyond human scale to monitor and manage. This is where cloud-agnostic application resource management (ARM), advanced observability, and extended detection and response (XDR) systems, fuelled by artificial intelligence, allow the automated optimization, remediation, and protection that manual human effort will never achieve.

In summary

I would always recommend that an organization starts with a single cloud provider and becomes proficient in using that provider’s native tooling and services. Then, as the organization matures its use of cloud and has mastered operating and utilizing that provider’s capabilities, it can begin to assess what real benefits can be achieved by moving to a multi-cloud strategy. Don’t try to boil the ocean – it is better to be an expert on each cloud than a generalist on multiple clouds. Third-party toolsets will be required for multi-cloud strategies, but vendor-native toolsets usually have a more complete feature set for that vendor’s cloud capabilities.

Nick Westall, CTO, CSI Ltd

Nick has over 29 years of progressive international leadership, management & business development within the Technology, Data & Media B2B industries, direct & channel.
