Robert Binder, Daniel Kämmerer and Daniel Münch of BearingPoint RegTech explore the architectural principles crucial for successful supervisory technology – or SupTech – applications, and examine the future of regulatory reporting in this context, drawing on the flexibility and adaptability offered by the latest generation of BearingPoint RegTech’s Abacus Regulator.
The SupTech generations and Abacus Regulator
A recent paper by the Financial Stability Institute of the Bank for International Settlements (BIS) explores views on the different generations of supervisory technology – known as SupTech. Here, SupTech refers to the use of innovative technology by central banks and financial supervisory authorities – collectively described as regulators in this article – to support their tasks in supervision and statistics. According to the BIS paper, the technologies used by regulators can be grouped into four distinct generations. The first and second generations offer only limited automation, and data from different sources is typically distributed across disjointed data silos, impeding gainful insights. In contrast, the third and fourth SupTech generations are characterised by end-to-end automation and by the consolidation of data from different sources in a single ‘data lake’, facilitating access, analysis and informed decision‑making.
However, the capabilities envisioned for the third and fourth SupTech generations are impossible without a modern software architecture. Indeed, BearingPoint has been deeply influenced by the concept of SupTech generations when developing its new generation of Abacus Regulator, a standard software solution for central banks and financial supervisory authorities covering data collection, data management and supervisory workflows. To our knowledge, the new generation of Abacus Regulator is the first standard software solution of the third SupTech generation. With Abacus Regulator, BearingPoint RegTech was named Cloud-Native Solution Partner in the Central Banking FinTech RegTech Global Awards 2020. The following sections outline the architectural principles that are crucial prerequisites for successful SupTech applications and that form the technical foundation of our new generation of Abacus Regulator. The final section provides a brief outlook on how we perceive the future of regulatory reporting in the context of SupTech, utilising the flexibility and adaptability of our new generation of Abacus Regulator.
Architectural principles for SupTech
Distributed computing and cloud-native software architecture
The software solutions used by regulators that belong to the first and second generations are typically of a classical, monolithic architecture. This leads to significant restrictions regarding scalability and elasticity. Here, scalability refers to a software solution’s capability to cope with increasing workloads by utilising more computational resources. Elasticity means its ability to allocate resources only when they are needed and to free them automatically afterwards. Scalability and elasticity have long been overlooked as essential requirements for regulators, even though the amount of regulatory reporting data received by regulators has increased significantly in the past decade. The shift towards more granular data, as evidenced by the introduction of AnaCredit, has further accelerated this development.
In addition, the peak workloads of regulators are not evenly spaced out in time: most regulatory reports have quarterly or annual reporting frequencies, leading to peaks in workload during specific, narrow time periods. Software solutions belonging to the first and second generations have not kept up with these developments, so regulators often face processing backlogs at reporting dates that can span multiple working days. Furthermore, the infrastructure provided is typically occupied by these solutions in an inelastic fashion throughout the year, leading to a significant waste of computational resources. The solution to these issues is to move away from a classical, monolithic software architecture to a more modern, modular and containerised architecture built on the principles of distributed computing.
A modular software architecture founded on containers – sometimes dubbed a ‘microservice-driven’ architecture – allows software solutions to reach much higher degrees of horizontal scalability by spawning an additional, containerised processing instance for each submission or task received. As this upscaling happens dynamically and instances are terminated automatically once their task is completed, such an architecture intrinsically advances elasticity as well. Traditional, on-premise IT infrastructure restricts the achievable degree of scalability and elasticity: it is typically difficult to scale up physical hardware resources on short notice, and often equally onerous for regulators to utilise excess computing resources meaningfully during non-peak times. This software architecture, dubbed ‘cloud-native’, therefore lends itself to deployment in the cloud, which allows computing resources to be acquired and released on the fly. In recent years, the open-source projects Docker and Kubernetes have emerged as the de facto standards for software containers and container orchestration, respectively.
This software architecture not only advances scalability and elasticity, but also brings an additional perk in comparison with a monolithic architecture – it lends itself to extensions tailored to the evolving needs of regulators. Because containers are orchestrated in a well-defined fashion, additional functionality can be implemented in an encapsulated way as dedicated plug-in microservices, leading to a shorter time-to-market for new and improved solution capabilities.
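To make the job-per-submission pattern concrete, the following is a minimal sketch using the official Kubernetes Python client; the container image, namespace and resource figures are illustrative placeholders rather than part of any actual deployment.

```python
# Minimal sketch: spawn a short-lived, containerised processing job per
# incoming submission, assuming a Kubernetes cluster and the official
# Python client (pip install kubernetes). Image, namespace and submission
# ID are hypothetical placeholders.
from kubernetes import client, config

def process_submission(submission_id: str) -> None:
    config.load_incluster_config()  # assumes the caller runs inside the cluster

    container = client.V1Container(
        name="report-processor",
        image="registry.example.com/report-processor:latest",  # hypothetical image
        args=["--submission-id", submission_id],
        resources=client.V1ResourceRequirements(
            requests={"cpu": "1", "memory": "2Gi"},
        ),
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=f"process-{submission_id}"),
        spec=client.V1JobSpec(
            ttl_seconds_after_finished=300,  # free resources once done (elasticity)
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    containers=[container],
                    restart_policy="Never",
                ),
            ),
        ),
    )
    # One Job per submission: the cluster scheduler scales instances
    # horizontally, and finished Jobs are cleaned up automatically.
    client.BatchV1Api().create_namespaced_job(namespace="suptech", body=job)
```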
Big-data-oriented storage
Increasing data volumes require a paradigm shift not only for solution architectures as a whole, but also for data storage in particular. Software solutions belonging to the first and second generations typically rely on classical relational databases for storing regulatory report data. These databases – especially with the advent of granular data – have proven to be bottlenecks in many use cases. For the third and fourth generations, data storage must shift from classical relational databases to distributed, cloud-native storage formats – such as the open-source project Apache Parquet – which are commonly referred to as second-generation big data technologies. Instead of residing in a single monolithic database, regulatory report data is spread out across a data lake as a multitude of small, individual files, allowing for tremendously increased performance of parallel data access. With specialised structured query language (SQL) engines – such as the open-source project Presto – these distributed storage setups can still be queried like classical relational databases. Furthermore, most cloud-native distributed storage formats are column-oriented – in contrast to row-oriented relational databases – allowing significantly improved compression, as each column contains just a single data type and often redundant information.
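As a brief illustration, the sketch below writes report data as a partitioned Parquet dataset using the open-source pyarrow library; the paths, column names and values are illustrative.

```python
# Minimal sketch: store report data as a partitioned, column-oriented
# Parquet dataset instead of rows in a relational database. Assumes
# pyarrow (pip install pyarrow); paths and columns are illustrative.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "reporting_date": ["2020-12-31"] * 3,
    "entity_id":      ["BANK_A", "BANK_B", "BANK_C"],
    "exposure_eur":   [1_250_000.0, 980_000.0, 2_410_000.0],
})

# Each partition becomes a directory of small files in the data lake,
# enabling highly parallel reads; the columnar layout compresses well
# because each column holds a single data type with often redundant values.
pq.write_to_dataset(
    table,
    root_path="data-lake/reports",  # in production, typically object storage
    partition_cols=["reporting_date"],
)

# A SQL engine such as Presto can then query the lake like an ordinary
# relational table, e.g.:
#   SELECT entity_id, SUM(exposure_eur)
#   FROM reports
#   WHERE reporting_date = DATE '2020-12-31'
#   GROUP BY entity_id;
```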
Flexible workflow engine
The increasing complexity of regulatory reporting and supervision brings increasingly complex and evolving business workflows for regulators. Software solutions belonging to the second generation, however, typically feature hardcoded process steps with limited automation, which curtails adaptability and puts an undue burden on users. Advancing to the third and fourth SupTech generations, by contrast, requires a flexible workflow and decision automation engine that can cope with both regulatory change and the evolving business requirements of regulators. In addition, workflow engines should allow seamless integration of custom plug-in microservices – as previously described – into workflows. During the past decade, Business Process Model and Notation (BPMN) has become the established standard for adaptive business process modelling, allowing for both textual and graphical descriptions of workflows. Thus, BPMN-compliant workflow and decision automation engines, such as the open-source project Camunda, provide SupTech solutions with the desired flexibility – for example, allowing fully automated end-to-end processing of regulatory reports alongside outcome-dependent human decision points. This way, adapting a business workflow typically requires no changes to the software code base, but only to its BPMN representation, which can be ingested by the workflow engine on the fly.
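As an illustration, the sketch below deploys a revised BPMN model to a running Camunda 7 engine via its REST API and starts a process instance. The host name, process key, file name and variables are hypothetical; only the REST endpoints themselves are part of Camunda’s documented API.

```python
# Minimal sketch: hot-deploy an updated BPMN workflow to a Camunda 7
# engine via its REST API and start a process instance, without touching
# the solution's code base. Assumes the requests library; host, process
# key and file name are illustrative placeholders.
import requests

CAMUNDA = "http://camunda.example.com:8080/engine-rest"  # hypothetical host

# Deploy the new workflow definition: the engine ingests the BPMN file
# on the fly and versions the process definition automatically.
with open("report_processing.bpmn", "rb") as bpmn:
    requests.post(
        f"{CAMUNDA}/deployment/create",
        data={"deployment-name": "report-processing-v2"},
        files={"report_processing.bpmn": bpmn},
    ).raise_for_status()

# Kick off an end-to-end run for one submission; outcome-dependent human
# decision points are modelled in the BPMN itself, not in code.
requests.post(
    f"{CAMUNDA}/process-definition/key/report_processing/start",
    json={"variables": {"submissionId": {"value": "SUB-42", "type": "String"}}},
).raise_for_status()
```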
Application programming interface (API)-driven information exchange
Software solutions for regulators need to integrate with a diverse IT ecosystem. Common tasks include:
- Automated submission of regulatory reports by supervised entities, as well as automated remittance of these reports to other competent authorities, including feedback loops.
- Federating user accounts and roles as well as master data of supervised entities from external solutions.
- Importing reference data from external data warehouses – for example, to transform raw data from regulatory reports into special-purpose data marts.
- Exporting data to external data warehouses – for example, to distribute information across departments, to external stakeholders or to web services.
- Accessing data from external business intelligence solutions.
The monolithic architectures of second-generation software solutions have often significantly complicated such integration. In many cases, these integration tasks have been relegated to cumbersome and error-prone manual execution. Where actual integration has been attempted, it has typically necessitated time-consuming project work resulting in custom alterations to the core code base of the solution, which are often cumbersome and error-prone to maintain. To advance to the third and fourth SupTech generations, solutions must adopt a thoroughly API-driven approach to information exchange. By replacing tasks that require human interaction through graphical user interfaces, as well as proprietary application-to-application interfaces, with de facto standard API approaches – such as representational state transfer (REST) or GraphQL – integration is either streamlined or, in many cases, made possible in the first place. Furthermore, while solutions belonging to the first and second generations have often locked away their data in inaccessible databases or proprietary storage formats, software solutions belonging to the third and fourth generations should strive to make their data as accessible as possible to external solutions by employing standard technology – either via dedicated APIs or even via direct SQL queries to their data lakes. Finally, an API-driven architecture is indispensable for solution deployments in the cloud, for which standardised channels for information exchange with external solutions via the web are pivotal.
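As an illustration of what such standard channels can look like, the following is a minimal REST sketch of a submission endpoint with a feedback loop, built with the open-source FastAPI framework; the routes, fields and status values are hypothetical and do not represent the actual Abacus Regulator API.

```python
# Minimal sketch: a REST API through which supervised entities submit
# reports programmatically and poll validation feedback, replacing
# manual uploads via a GUI. All routes and fields are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Regulatory submission API (sketch)")

class Submission(BaseModel):
    entity_id: str
    report_type: str      # e.g. "ANACREDIT"
    reporting_date: str   # ISO date
    payload: dict         # report data in an agreed schema

@app.post("/v1/submissions", status_code=202)
def submit_report(submission: Submission) -> dict:
    # In a real deployment this would persist the payload to the data
    # lake and trigger the processing workflow (see previous sketches).
    submission_id = f"{submission.entity_id}-{submission.reporting_date}"
    return {"submission_id": submission_id, "status": "accepted"}

@app.get("/v1/submissions/{submission_id}/feedback")
def get_feedback(submission_id: str) -> dict:
    # Feedback loop: the submitting entity (or another competent
    # authority) retrieves validation results over the same channel.
    return {"submission_id": submission_id, "validation": "pending"}
```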
Outlook on the future of regulatory reporting
In its work with financial institutions and regulators worldwide, BearingPoint RegTech has identified two major deficiencies in regulatory reporting: a lack of system integration, and a lack of standardisation of data models and regulatory processing logic. These deficiencies greatly decrease data quality and operational resilience, while imposing very high costs on financial institutions.
To overcome these issues, we have proposed introducing a new system, RegOps, which gives regulators direct API access to the regulatory databases of financial institutions and relies on so-called functional units that contain the regulatory processing and allocation logic. We are confident that we could operate this model by jointly deploying content from our Abacus Banking and Abacus Regulator products in a novel setup, and that we could realise a prototype within a very short timeframe.
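Purely as an illustration of the functional-unit idea – not a description of the actual RegOps implementation – the following sketch shows how such a regulator-supplied, versioned processing component could be modelled; all names and the toy allocation logic are hypothetical.

```python
# Illustrative sketch only: a 'functional unit' as a versioned,
# exchangeable component that encapsulates regulatory processing and
# allocation logic, shipped by the regulator and executed against the
# institution's granular regulatory data. All names are hypothetical.
from typing import Protocol

class FunctionalUnit(Protocol):
    """Contract for regulator-supplied processing logic."""
    regulation: str   # e.g. "AnaCredit"
    version: str      # regulators ship new versions as rules change

    def process(self, granular_records: list[dict]) -> dict:
        """Transform granular records into the required report."""
        ...

class AnaCreditUnit:
    regulation = "AnaCredit"
    version = "2021-03"

    def process(self, granular_records: list[dict]) -> dict:
        # Toy allocation logic: aggregate exposures per counterparty.
        totals: dict[str, float] = {}
        for record in granular_records:
            cp = record["counterparty_id"]
            totals[cp] = totals.get(cp, 0.0) + record["exposure_eur"]
        return {
            "regulation": self.regulation,
            "version": self.version,
            "exposures": totals,
        }
```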
We have conceptualised a system that provides the following novel key features, which are currently lacking in the world of regulatory reporting:
- A common, standardised granular data dictionary, data model and processing logic.
- End-to-end decentralised integration of regulator and financial institutions via APIs.
- Integration of a big data-enabled tech backbone based on our Abacus Regulator product.
These novel key features allow for the following benefits:
- Improved efficiency, transparency and stability of financial markets:
  - High-quality, timely, granular regulatory data in a standardised data model for deep-data insights.
  - Standardised regulatory processing, allocation and validation logic, strongly improving reporting quality and comparability.
  - Automatic, fast data routing and processing via an API-enabled network.
- High efficiency and robustness:
  - All components of RegOps are generally available, and initial deployment can begin within a short timeframe.
  - New regulatory logic can be deployed rapidly by regulators.
  - The RegOps model is highly cost-efficient for financial institutions because it strongly reduces regulatory change costs.
  - Due to the high degree of automation, the system is operationally highly robust.
- Open for extension and innovative technology:
  - The RegOps framework is highly flexible and globally adaptable to all scenarios of regulatory reporting.
  - It is also compatible with modern and upcoming technologies such as cloud computing, artificial intelligence and distributed ledger technology.
The authors
Robert Binder
Associate product manager, BearingPoint RegTech
Daniel Kämmerer
Product manager, Abacus Regulator, BearingPoint RegTech
Daniel Münch
Business advisor, emerging technology, BearingPoint RegTech