Mike Walton, Founder and CEO at Opsview
It’s fair to say that digital transformation is a buzzword across business operations right now, with firms increasingly ploughing time and budget into projects. In fact, research from Deloitte found that the average digital transformation budget has increased by 25% over the last year, and 19% of respondents confirmed that they were planning to invest an incredible $20 million on transformation this year alone.
Financial Services is certainly a sector leading this overhaul. Banks and institutions are looking to adopt increasingly digital infrastructures to meet growing customer demand, following a rapid change of pace over the past two years. This is confirmed by PwC’s report, which revealed that 77% of financial institutions are increasing efforts to innovate – with a strong, reliable digital presence at the heart of this.
However, whilst digital transformation provides the opportunity for innovation, achieving the desired outcomes is not without its challenges. This is especially true when you consider that successful digital transformation relies on an efficient IT infrastructure – arguably the kryptonite of Financial Services over the past couple of years. Without it, any attempt at innovation risks failure, customer satisfaction is put on the line and business operations face certain disruption.
In essence, Financial Services needs to get its digital house in order, quickly.
The banking sector is certainly a standout example of complex and disjointed IT systems. According to the recent Which? report, which followed a Financial Conduct Authority survey in November 2018, UK banking has been in meltdown. The sector was hit by IT outages on a daily basis in the last nine months of 2018 – six of the major banks suffered at least one incident every two weeks. Perhaps the worst culprit was TSB, which lost 12,500 customers and £330 million in the wake of its failed IT systems migration.
Financial Services is not alone, however – IT failures are a worryingly common issue across some of the world’s biggest firms. Just look at British Airways’ IT failure which affected 75,000 passengers, or Delta Airlines’ IT woes which led to $100m in losses and thousands of cancelled flights. Let’s not forget O2’s outage, either, which resulted in 30 million customers not being able to access their data.
In today’s ‘always on’ world, customers expect to be able to use a firm’s services whenever they wish. Downtime is therefore not acceptable – especially in mission-critical industries like Financial Services, where people rely on apps and online systems to complete vital everyday tasks. Furthermore, it is not in the banks’ interests to continue suffering these frequent outages. Downtime is costly. Firstly, it affects brand reputation – customers don’t forgive easily – just ask WhatsApp. Just a few minutes of downtime can completely destroy the customer experience, and if organisations fail to deliver exceptional customer service in today’s fast-moving world, competitors will waste no time trying to steal customers and swallow market share. Telegram, a smaller rival to WhatsApp, gained three million new customers during the WhatsApp, Facebook and Instagram blip a few weeks ago.
IT outages are also damaging to the balance sheet. Gartner has previously estimated that IT downtime costs $300,000 per hour, rising to over $500,000 for the biggest brands (a four-hour outage famously cost the New York Stock Exchange $2.5 million per hour).
As a result of ongoing IT failures in banking, the regulator has since stepped in and called for a maximum outage time of two days. Whilst this is a step in the right direction, it’s still too long – customers won’t accept this now or in the future. If they want to stay competitive, businesses must adopt new processes and tools that leverage the very best systems available today, and seek to reduce the two-day maximum to a mere matter of minutes over the next two years, working towards a virtual zero-downtime model.
So what can the industry do to turn its fortunes around? For me, the biggest issue that needs overcoming is legacy IT, one of the primary causes of repeated IT failures. The reason challenger banks such as Monzo and Starling have not suffered like the more established players is that they have built digital into the heart of their operations, rather than bolting it onto established systems as an afterthought.
These sprawling IT systems are being continuously patched up. Behind a new breed of innovative customer and employee-facing digital services lies a hotchpotch of disparate and decentralised systems – virtual machines, hybrid cloud accounts, IoT endpoints, physical and virtual networks and much more. These disparate, decentralised systems don’t talk to each other, and they frequently fail. To make things worse, many of these systems are outside the control of IT, adding an extra layer of opacity and complexity. In fact, a recent report from Parliament’s Public Accounts Committee revealed that the Bank of England’s IT expenditure is being inflated by the use of legacy systems – the bank is reportedly spending 33.6% more on IT than other central government departments.
In a time of increased competition, with customers able to voice their dissatisfaction at the click of a button, financial institutions have too much at stake to risk continued IT outages. They therefore need to adopt best-practice operational activities and processes, such as running regular threat and vulnerability assessments, conducting configuration reviews and including operational process validation checkpoints. This significantly reduces the chances of suffering a systems failure: greater visibility into the entire IT network enables IT teams to anticipate problems and deal with them quickly, before they become outages.
Gaining this insight, however, can often be a challenge. Sometimes this is because the tools being used were designed to monitor only the static, on-premise infrastructure of the past, rather than today’s dynamic, cloud and virtual-based systems. More commonly, however, it is because firms are using multiple tools, producing varying versions of the truth for siloed IT teams. Research from analyst firm Enterprise Management Associates has indicated that it can take businesses between three and six hours to find the source of an IT performance issue, due to the volume of monitoring tools being used.
The only answer is to unify IT operations and monitoring under a single pane of glass. This not only provides a holistic view of what’s happening, but also a single version of the truth, thereby avoiding duplication of effort and uniting siloed teams. Outages can occur suddenly and without warning. In such cases, it’s vital to detect the failure quickly and know which systems are impacted. Once the problem is identified, organisations should have processes in place to rapidly mitigate it – reducing downtime, poor user experience and lost revenue.
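To make the idea concrete, here is a minimal, illustrative sketch of that unified view – not Opsview’s implementation or any specific product. It assumes each disparate system exposes some health probe (an HTTP ping, a database query, and so on; the system names and check functions below are hypothetical), and rolls every result into one list so a single team can see, in one place, which systems are failing.

```python
"""Illustrative sketch: aggregating health checks from disparate
systems into a single view (one 'version of the truth')."""

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class HealthStatus:
    system: str
    healthy: bool
    detail: str


def run_checks(checks: Dict[str, Callable[[], bool]]) -> List[HealthStatus]:
    """Run every registered probe and collect one unified result list."""
    results = []
    for system, check in checks.items():
        try:
            ok = check()
            results.append(HealthStatus(system, ok, "ok" if ok else "check failed"))
        except Exception as exc:  # a crashing probe is itself an outage signal
            results.append(HealthStatus(system, False, f"probe error: {exc}"))
    return results


def impacted_systems(results: List[HealthStatus]) -> List[str]:
    """The answer an incident team needs first: which systems are down?"""
    return [r.system for r in results if not r.healthy]


# Hypothetical probes standing in for real checks against real systems.
checks = {
    "payments-api": lambda: True,
    "mobile-banking": lambda: False,   # simulated failure
    "card-processing": lambda: True,
}

print(impacted_systems(run_checks(checks)))  # → ['mobile-banking']
```

The point of the sketch is the shape, not the probes: every system reports into one structure, so detection and impact assessment happen in one place rather than across siloed tools.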
Today’s 24/7 society means that any slip-up will be amplified far more than ever before, putting financial operators in the firing line. Add fickle customers into the mix, and companies working in Financial Services need to change their approach to IT outages or they will suffer the consequences. Whilst an outage may not always be the fault of IT, financial institutions need to invest heavily in managing their processes if and when outages do occur, or risk losing market share. Learn from the mistakes of others and prepare for failure – otherwise, prepare to fail.