Time is fast running out for G-SIBs (and indeed D-SIBs) to demonstrate compliance with the principles of BCBS 239. Surveys by firms such as EY, McKinsey and Deloitte paint a far from pretty picture of readiness: most suggest that the majority of banks will be able to demonstrate compliance with only 25% to 60% of the listed principles by the January 2016 deadline. Why is this?
At a recent industry event discussing barriers to compliance, one of the top issues participants highlighted was the difficulty of understanding what data and metadata G-SIBs and D-SIBs hold within their systems, shared drives and documentation. For datasets that are relatively well understood, many banks have made great strides in addressing governance, architecture and sourcing issues. For datasets that are less well understood and defined through artefacts such as data models, dictionaries and other documentation, however, progress has been much more limited.
‘Risk data’ is by its very nature often opaque, typically featuring many aggregations, data flows, summaries, data inputs and different flavours of complex underlying calculation. So far, many G-SIBs have addressed this opacity using largely traditional methods such as workshops and stakeholder interviews, which often generate reams of documentation requiring numerous peer reviews. Unfortunately, however, the devil really is in the data, and too often this approach has analysed the underlying schemas, content and calculations buried within applications too late in the process. Clearly, if banks are to speed up progress in the time remaining, more automated techniques need to be leveraged.
Leveraging the latest Data Exploration & Discovery techniques
Fortunately, many automated techniques already exist to rapidly discover and understand the characteristics, lineage, relationships and linkages buried in disparate and complex datasets, and these can be applied directly to BCBS 239 requirements. In fact, Data Exploration and Discovery techniques are now considered an essential starting point for most Business Intelligence and Data Science projects.
Regulatory data, however, remains a relatively untapped opportunity for these techniques. Typically, the discovery process works by applying different algorithms to a dataset in order to harvest metadata characteristics such as formats, standards, uniqueness and completeness. Many tools also offer more advanced key and relationship discovery capabilities, enabling users to uncover the mappings, transformations and true lineage of individual data points: essential requirements for understanding where your risk data outputs originated. This valuable metadata can then be integrated and enriched with traditional sources such as data models, dictionaries, workshop notes and stakeholder interviews to create a detailed understanding of a bank’s risk architecture, data, processes and governance, all of which is essential for BCBS 239 compliance.
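To make the idea concrete, here is a minimal sketch (plain Python, using a hypothetical counterparty-ID column as sample data) of the kind of single-column profiling such discovery tools automate: computing completeness, uniqueness and the dominant format pattern of the values. Real discovery tools apply many more algorithms, including type inference and cross-dataset key detection.

```python
from collections import Counter

def profile_column(values):
    """Harvest basic metadata for one column: completeness, uniqueness,
    and the most common format pattern of its non-null values."""
    total = len(values)
    non_null = [v for v in values if v not in (None, "")]
    completeness = len(non_null) / total if total else 0.0
    uniqueness = len(set(non_null)) / len(non_null) if non_null else 0.0

    def pattern(v):
        # Map each character to a class symbol: 9 = digit, A = letter,
        # anything else (e.g. '-') is kept literally.
        return "".join(
            "9" if c.isdigit() else "A" if c.isalpha() else c
            for c in str(v)
        )

    formats = Counter(pattern(v) for v in non_null)
    dominant = formats.most_common(1)[0][0] if formats else None
    return {
        "completeness": completeness,
        "uniqueness": uniqueness,
        "dominant_format": dominant,
    }

# Hypothetical risk-data column: counterparty IDs with one gap and one duplicate
ids = ["CP-1001", "CP-1002", None, "CP-1003", "CP-1002"]
print(profile_column(ids))
# → {'completeness': 0.8, 'uniqueness': 0.75, 'dominant_format': 'AA-9999'}
```

A uniqueness score below 1.0 on a supposed key column, or a minority format pattern, is exactly the kind of automatically harvested signal that flags where lineage and data-quality investigation should start.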
So, for those lagging behind, perhaps it’s time to find out how these proven techniques can help your bank hit the 2016 deadline. Contact us for more information.
Data to Value are an innovative Data Consultancy that specialise in applying the latest tools and techniques to complex Data Management requirements.