For the PRA and the Bank of England, Operational Resilience requires better impact tolerances and more sophisticated service mapping

Jon Bennett, Chief Growth Officer at CloudStratex

 

Many financial organisations have now taken vital steps towards operational resilience (OR). However, a speech recently delivered by the Bank of England’s Duncan Mackinnon rightly suggests that the process is far from complete.

With the passing of the March 2022 deadline, financial firms will have identified important services, set impact tolerances, and undertaken mapping and testing. However, the Bank of England has immediately turned its attention to the actions needed by 2025 – alongside some of the deficiencies identified in its findings thus far.

But what does this mean in a practical sense?

A key theme of the speech is that many organisations don’t yet have a detailed or consistent understanding of their own capacity for absorbing disruption – which means they need to embrace practices that promote visibility and understanding of their business and IT infrastructure.

Jon Bennett

Further work on setting tolerances

Perhaps the key takeaway offered by this speech is simply that operational resilience involves a high degree of complexity.

This has certainly been our experience in helping clients to improve their resilience. After all, in large financial or finance-adjacent organisations, risk takes many forms and can surface in any aspect of an enterprise’s operations.

Finance departments will think about disruption in terms of its impact on financial reporting or project funding, for example, whereas the security side of a given firm might be more concerned with infrastructure vulnerabilities.

As a result of this layered and challenging environment – what Duncan Mackinnon calls the “ever-more complex and interconnected” operational nature of finance organisations – the speech suggests that firms moving towards operational resilience will need to make sure their processes for setting impact tolerances are suitably sophisticated.

Duncan Mackinnon illustrates this need by pointing to the high degree of variance between organisations: firms offering the same services have set markedly different impact tolerances for them.

The safety and soundness tolerances for CHAPS payments, for example, varied from two days to two weeks depending on the firm in question.

For Duncan Mackinnon, this means that “firms will have to justify how they came to the conclusions they have”; in other words, firms will need a clear understanding of the underlying causes of disruption in order to validate their self-assessments.
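In practice, setting a tolerance and justifying it go hand in hand: each important business service needs a documented maximum tolerable disruption together with the evidence behind it. The sketch below is a minimal, purely illustrative way of capturing that pairing in Python; the field names, the two-day figure, and the rationale text are assumptions for illustration, not regulatory guidance or any firm’s actual figures.

```python
# Purely illustrative: recording an impact tolerance alongside the reasoning
# behind it, so a firm can later justify how it reached that conclusion.
# Field names and values are hypothetical, not regulatory guidance.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class ImpactTolerance:
    important_business_service: str
    max_tolerable_disruption: timedelta
    rationale: str  # the evidence used to justify the tolerance

tolerances = [
    ImpactTolerance(
        important_business_service="CHAPS payments",
        max_tolerable_disruption=timedelta(days=2),
        rationale="Scenario testing of settlement backlogs and the point at "
                  "which customer harm becomes intolerable.",
    ),
]

for t in tolerances:
    print(f"{t.important_business_service}: {t.max_tolerable_disruption}")
```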

Understanding through effective service mapping

In order to achieve this level of understanding – particularly in light of the high degree of complexity and interconnectedness in today’s IT infrastructure – service mapping is essential.

Service mapping is a means of discovering the application services in a given organisation, allowing a firm to build a map of its various devices, applications, and configuration profiles.
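One simple way to picture the output of service mapping is as a dependency graph linking business services to the applications and infrastructure they rely on. The sketch below is a minimal illustration in Python, assuming hypothetical service and host names rather than any particular discovery tool’s data model.

```python
# A minimal, illustrative service map: a directed graph in which an edge
# "A -> B" means "A depends on B". All names are hypothetical examples.
from collections import defaultdict

service_map: dict[str, set[str]] = defaultdict(set)

def add_dependency(consumer: str, dependency: str) -> None:
    """Record that `consumer` depends on `dependency`."""
    service_map[consumer].add(dependency)

# An important business service and the resources it relies on.
add_dependency("payments-service", "payments-app")
add_dependency("payments-app", "payments-db")
add_dependency("payments-app", "message-queue")
add_dependency("payments-db", "db-host-01")
add_dependency("message-queue", "mq-host-02")

for consumer, deps in service_map.items():
    print(f"{consumer} -> {sorted(deps)}")
```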

The value of mapping isn’t just implicit in the broader project of achieving OR; it is a primary focus of Duncan Mackinnon’s speech.

As he points out, “we expect firms’ mapping to include all critical resources and consider internal and external dependencies. Mapping should rapidly become more sophisticated, in line with firms’ potential impact. It should enable firms to identify vulnerabilities and inform the development of scenario testing.”

The firm message here is that service mapping processes are not yet reaching the level of sophistication that regulators require.

This isn’t surprising. Service maps are difficult and time-consuming to create manually, and a lack of business context – especially when combined with the dynamic nature of modern networks – often leaves IT teams struggling with limited, out-of-date service maps that aren’t equal to the task of providing a full view of possible outages and impacted services.

Upgrading mapping practices

Addressing the common flaws in current mapping processes isn’t straightforward.

With the right third party support, however, it’s possible to take mapping to the heights of sophistication required for full OR compliance.

A good advisory service will, for example, consider opting for a top-down discovery process rather than a horizontal one. This means that devices and applications won’t be treated as independent or standalone, but as deeply interconnected.

By extension, a top-down approach to mapping helps organisations to immediately identify the impact of a compromised or disrupted object on the rest of the application service operation.
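As a rough illustration of that idea (again assuming a hypothetical dependency graph rather than any specific discovery tooling), top-down impact analysis amounts to walking the map “upwards” from a disrupted component to every service that depends on it.

```python
# A rough sketch of top-down impact analysis: given a disrupted component,
# walk the dependency graph "upwards" to find every service that directly
# or indirectly relies on it. The graph below is hypothetical.
from collections import defaultdict

# Edge "A -> B" means "A depends on B".
dependencies = {
    "payments-service": {"payments-app"},
    "payments-app": {"payments-db", "message-queue"},
    "payments-db": {"db-host-01"},
    "message-queue": {"mq-host-02"},
}

# Invert the edges so we can ask "who depends on X?"
dependents: dict[str, set[str]] = defaultdict(set)
for consumer, deps in dependencies.items():
    for dep in deps:
        dependents[dep].add(consumer)

def impacted_by(component: str) -> set[str]:
    """Return every service that transitively depends on `component`."""
    impacted, stack = set(), [component]
    while stack:
        for consumer in dependents[stack.pop()]:
            if consumer not in impacted:
                impacted.add(consumer)
                stack.append(consumer)
    return impacted

# If db-host-01 is disrupted, which services feel the impact?
print(sorted(impacted_by("db-host-01")))
# -> ['payments-app', 'payments-db', 'payments-service']
```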

These changes will be increasingly essential for firms looking to shore up their OR to the regulatory standards suggested by Duncan Mackinnon.

Time is of the essence

Service mapping isn’t the be-all and end-all of operational resilience – but it represents a vital building block for identifying and correcting possible causes of disruption, and one well worth establishing as soon as possible.

As the speech notes, “the longer firms take to map to the required level of sophistication and to run robust scenario tests, the shorter the period they will have to address their vulnerabilities and build resilience.”

Operational resilience is a journey – and, like many journeys, it will be greatly facilitated by a reliable map. With regulatory compliance located at the end of the road, it’s a journey well worth taking properly.
