Paul Mercina, Director of Product Management, Park Place Technologies.
The cost and impact of IT system downtime have never been greater, due to businesses’ increasing dependence on IT systems and infrastructure across all areas of their operations. Any system outage can have a catastrophic impact on an organisation in terms of costs, lost trade and reputation. Gartner estimates the average cost of network downtime at a staggering $5,600 per minute. This figure is even more startling when you consider that British businesses are reported to suffer at least three days of downtime per year.
Downtime is a major problem for any industry but is particularly damaging for the finance sector, where consumer trust is paramount. Many high street banks have made the headlines in the last year after suffering system outages that breached customer data and affected customers’ access to their accounts. Beyond these high-profile cases, the true scale of the problem is evidenced by a report from Which? Money revealing that, between 1 April and 31 December 2018, there were 302 reports of IT systems failure affecting customer transactions – equivalent to an incident each day. Which? Money said that six of the major banks had suffered at least one incident apiece every two weeks.
Banks are, therefore, under increasing pressure from politicians and regulators to improve their response to IT problems. In November last year the Financial Conduct Authority said it was “deeply concerned” after finding that technology outages had more than doubled over the preceding 12 months, while the Treasury Select Committee launched an inquiry into the issue. The Bank of England has also threatened banks with higher capital charges if they do not do enough to deal with technical problems.
Human error: a leading cause
There is a common misconception that IT outages are an unavoidable part of business operations. In reality, a large percentage of all downtime is not related to a failure in the technology itself but to how that technology is being used, configured and administered – and such failures usually stem from a combination of inadequate training and planning.
So how can financial organisations minimise the risk of IT failure causing them to become the next unwanted headline?
Prevention is better than cure
The best way to avoid losing revenue, reputation and customers is to prevent outages, especially the type of routine failures that can’t be blamed on a major disaster. Adopting best practice processes – such as running regular threat and vulnerability assessments, conducting configuration reviews and including operation process validation checkpoints – can significantly reduce your chances of suffering from a systems failure.
Testing of different systems requires time and resources that can sometimes be difficult to justify. However, it’s important to remember that thorough, targeted real-life testing can reveal incompatibilities, glitches and capacity issues unforeseen at the planning stage. It was reported that one of the key causes of the banking outage which left customers unable to access their online banking services was that various systems had not been as thoroughly tested as they should have been when accounts were migrated to the group’s new core banking platform.
Staff engagement and training
According to a report by the Ponemon Institute, human error is the second most common cause of system failure, accounting for 22% of all incidents. Employees must be regularly trained on how to avoid an outage as well as how to mitigate the damage and impact should one occur. Within financial organisations, staff will be using a myriad of complex systems and technologies, and it’s important to remember these technologies are only ever as good as the people using them. Clear, precise and regular usage guidance is imperative to minimise the chances of human error.
Remain vigilant at all times
Vigilance should be an essential part of any financial organisation’s IT strategy. Organisations should be working with an IT managed service provider to ensure that they are always following up-to-date best-practice guidelines and proactively questioning their IT set-up and the associated risks.
Well-rehearsed recovery plan
Although an IT outage is sometimes unavoidable, prolonged downtime does not have to be. Having a well-rehearsed business continuity plan in place can help to mitigate the impact of any system failures.
Any business continuity plan needs an executive owner or sponsor who has the experience and authority to get things done in a timely and orderly manner. All action plans should be regularly reviewed at board level and shared with stakeholders across the organisation, so that all the risks and organisational implications are planned for and implementation is not hampered by budget or knowledge constraints.