Cloud portability: the regulator’s call for operational resilience answered

Dan Holt, Director, Sales Engineering at Cockroach Labs
When systems fail, the primary tool in the engineer’s toolkit has always been recovery: first get systems back online, then restore them to their pre-outage state. An evolution in application architecture, however, replaces recovery tactics with zero-downtime survival, particularly for digital banking.
Concerns about the risk to financial services infrastructure saw British and European Commission officials introduce rules on digital resilience, with service levels and penalties now hot topics of conversation between regulators and industry.
Multi-cloud – as Lloyds Banking Group’s adoption illustrates – has become table stakes for regulators signing off on resilience, yet many institutions rely on architectures and practices that introduce complexity, delay and risk into their data recovery.
Data portability – the ability to move, copy and transfer data rapidly to minimise disruption – is key to multi-cloud zero-time recovery. Achieving it takes an architecture that works without special engineering or human intervention; in other words, one that is operationally efficient.
Zeroing in on recovery
Data availability and reliability are the cornerstones of zero-time recovery and, therefore, the key measures of resilience. Multi-cloud can deliver both – lose one cloud provider but remain active using the others in your IT infrastructure.

Several factors, however, undermine availability and reliability. One is the distance data must travel between a backup database and applications. The size and complexity of applications are another: a microservice with a simple architecture and a few gigabytes of data can be recovered in a few hours, but an application with 400 interlinked services and terabytes of data will take far longer.
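To see why scale dominates recovery time, consider the raw transfer alone. A back-of-envelope sketch in Python (all figures illustrative, not drawn from any particular deployment):

```python
# Rough transfer time for a large restore, before any re-indexing,
# consistency checks, or re-pointing of interlinked services.
data_tb = 5        # illustrative backup size in terabytes
link_gbps = 10     # illustrative effective network throughput
seconds = data_tb * 1e12 * 8 / (link_gbps * 1e9)
print(f"{seconds / 3600:.1f} hours just to move the bytes")  # ~1.1 hours
```

Multiply that by hundreds of services, each with its own restore order and dependencies, and the gap between a simple microservice and a 400-service estate becomes clear.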
Pathway to portability 
Exploiting multi-cloud means building an infrastructure primed for data portability, without ensnaring IT teams in bespoke database-replication projects just to achieve recovery.
A three-step process will get you there.
First, embrace cloud-agnostic database software. It’s important to have the ability to run on-prem, which offers an availability zone in scenarios where geography or data-sovereignty rules prohibit the use of particular cloud providers. It’s also important to ensure your multi-cloud architecture won’t sacrifice transaction reliability and data correctness for a cross-domain footprint.
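As one illustration, a cloud-agnostic distributed database can treat each cloud – or an on-prem site – as just another region. A minimal Python sketch using CockroachDB’s multi-region SQL, assuming a cluster whose nodes were started with locality flags naming a region per provider; the connection details and region names are placeholders:

```python
import psycopg2  # CockroachDB speaks the Postgres wire protocol

# Hypothetical load-balancer endpoint for the cluster.
conn = psycopg2.connect(
    "postgresql://app@lb.example.internal:26257/bank?sslmode=verify-full"
)
conn.autocommit = True
with conn.cursor() as cur:
    # Pin the database to regions that live on different providers.
    cur.execute('ALTER DATABASE bank SET PRIMARY REGION "gcp-us-east1"')
    cur.execute('ALTER DATABASE bank ADD REGION "aws-eu-west-1"')
    cur.execute('ALTER DATABASE bank ADD REGION "onprem-ldn"')
    # Keep serving consistent reads and writes even if an entire
    # region -- here, an entire cloud -- goes away.
    cur.execute('ALTER DATABASE bank SURVIVE REGION FAILURE')
conn.close()
```

With the survival goal set, the database itself spreads a quorum of replicas across providers; losing one cloud leaves transactions consistent and available on the other two.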
Next, craft an operational resilience plan around data portability. Most CXOs push back on the idea that their big cloud-service provider could fail, and many argue it’s sufficient to deploy across different regions of the same provider. This, however, won’t protect you from a full service outage. Proof that you can redeploy applications within a defined timeframe, and that you have planned for all scenarios, will help demonstrate operational resilience.
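That proof is most convincing when it comes from a timed drill. A minimal sketch, with hypothetical endpoints: sever the primary, then measure how long the standby on a second cloud takes to accept a committed write.

```python
import time
import psycopg2

STANDBY = "postgresql://app@aws-lb.example.internal:26257/bank"  # hypothetical

def accepts_writes(dsn: str) -> bool:
    """True if the endpoint can commit a trivial write."""
    try:
        conn = psycopg2.connect(dsn, connect_timeout=3)
    except psycopg2.Error:
        return False
    try:
        with conn, conn.cursor() as cur:  # commits on clean exit
            cur.execute("CREATE TABLE IF NOT EXISTS drill_probe (ts TIMESTAMPTZ)")
            cur.execute("INSERT INTO drill_probe VALUES (now())")
        return True
    except psycopg2.Error:
        return False
    finally:
        conn.close()

start = time.monotonic()
# ... chaos tooling or an operator severs the primary cloud here ...
while not accepts_writes(STANDBY):
    time.sleep(1)
print(f"Standby accepted a committed write after {time.monotonic() - start:.0f}s")
```

Recording the elapsed time against your recovery-time objective, for each failure scenario in the plan, gives regulators exactly the evidence they ask for.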
Finally, there’s application modernisation. As your business expands, your data estate will grow and your applications will increase in complexity. Multi-cloud data portability needs data replication that’s both synchronous and automatic to guarantee the availability and reliability of applications. This is possible in an environment where application infrastructure is consistent and predictable – with no silos or coding gymnastics.
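What “no coding gymnastics” can look like in practice: when replication is synchronous, every surviving node holds a consistent copy, so failover can be as simple as a multi-host connection string. A sketch with hypothetical hostnames, relying on standard Postgres-compatible driver behaviour (libpq 10+) of trying hosts in order:

```python
import psycopg2

# One node per cloud (and one on-prem); the driver connects to the
# first reachable host that accepts read-write sessions.
dsn = (
    "postgresql://app@"
    "gcp-node.example.internal:26257,"
    "aws-node.example.internal:26257,"
    "onprem-node.example.internal:26257"
    "/bank?target_session_attrs=read-write&sslmode=verify-full"
)
conn = psycopg2.connect(dsn)
with conn, conn.cursor() as cur:
    cur.execute("SELECT balance FROM accounts WHERE id = %s", (42,))
    print(cur.fetchone())
conn.close()
```

No application-level replication logic and no per-cloud code paths: the driver reconnects, and the database guarantees the data it sees is current.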
Conclusion
Resilience is the new king of digital financial services, but achieving it and navigating new rules doesn’t mean surrendering principles of reliability and availability. It takes a database architecture that swaps the replication practices of the past for a distributed future of consistency by design.
