1.1. Data as the Lifeblood of the Industry
The book gives an overview of the challenges in content management in the financial services industry. It is an updated and extended version of a book I wrote back in 2007, just before the onset of the financial crisis: Managing Financial Information in the Trade Lifecycle: A Concise Atlas of Financial Instruments and Processes. The current book differs in two important ways:
• Since the 2007-09 global financial crisis, the business models of financial services firms have undergone enormous change, and regulatory intervention and regulatory information requirements have increased significantly.
• The technological drivers for change have accelerated and, if a crisis and regulatory scrutiny were not enough, the financial services industry is also challenged by disruptive new entrants. Customer expectations on interaction with their financial services suppliers push firms to change.
In other words, an updated version is in order, a version that takes the notion of a Primer as a starting point: back to first principles when it comes to information management in financial services. What do regulatory intervention and common regulatory themes, such as solvency, liquidity, investor protection, and pre- and posttrade transparency in OTC markets, mean from a financial services information perspective? What do customer interaction expectations mean for the back-end infrastructure? What does the move to the cloud and mobile interaction mean for security and for the information supply chain? How can financial services firms innovate and capitalize on new technology?
These are some of the questions we will be exploring in this book. We will discuss best practices and recommendations on information management seen from the data perspective. A financial institution, and increasingly any kind of business, can be seen as a collection of data stores plus processes to manipulate that data, to bring new data in, and to push data out to regulators, investors, business counterparties, and customers. If we see the financial services industry as a network consisting of actors (clients, banks, investment management firms) and transactions (account opening, money transfers, securities transactions) between these actors, we can view business processes from the perspective of transaction life cycles (research, trades, and posttrade activities) as well as master data changes, such as product and customer lifecycle management.
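To make this actors-and-transactions view concrete, the sketch below models a tiny slice of such a network in Python. The class names, fields, and life-cycle stages (Actor, Transaction, LifecycleStage, and so on) are illustrative assumptions, not a reference model from this book.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    # Simplified transaction life cycle: research, the trade itself, posttrade processing
    RESEARCH = "research"
    TRADE = "trade"
    POSTTRADE = "posttrade"

@dataclass
class Actor:
    # An actor in the network: a client, a bank, or an investment management firm
    name: str
    actor_type: str  # e.g. "client", "bank", "investment manager"

@dataclass
class Transaction:
    # A transaction between two actors, e.g. an account opening or a securities trade
    initiator: Actor
    counterparty: Actor
    transaction_type: str  # e.g. "money transfer", "securities transaction"
    stage: LifecycleStage = LifecycleStage.RESEARCH

# Hypothetical example: a client trading through a bank
client = Actor("Acme Pension Fund", "client")
bank = Actor("Example Bank", "bank")
trade = Transaction(client, bank, "securities transaction", LifecycleStage.TRADE)
```

Master data changes, such as product and customer lifecycle management, would then correspond to updates of the Actor records rather than of the transactions flowing between them.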
No other industry is as information-hungry as financial services: all of its raw material is information itself. Capabilities in information management therefore matter more here than in other industries. The potential impact of the financial services industry (especially the adverse impact) on the real economy has been well documented (see, e.g., United Nations Environment Programme, 2015). The irony in financial services is that this is an industry where the need for information at the point of buying is largest, given the long lifetime of some of these products (life insurance, mortgages) and the far-reaching impact they can have. That impact of financial product buying decisions for consumers (insurance, investment/retirement plans, and mortgages) contrasts with the relative ease with which these products are marketed and bought.
Information and timing are critical in both wholesale and retail banking due to the speed of technological innovation. The large amounts of additional data generated and the different ways in which customers transact with their financial services provider have led to new demands on information technology, information availability, and security.
In this introductory chapter we will discuss some of the recent developments in data management. This will be followed by an overview of the supply chain perspective in information management, seeing it as a logistics problem. We will end this introduction by stating the various aspects of the data management problem to set the stage for the next chapters.
The reach of the book is broad, so necessarily some topics will be discussed at an introductory level while others will be explored in more depth. Focal areas are information management from a process perspective and how data management considerations differ by the type of information and its use cases.
1.2. Developments in Information Management
Data management has come onto the radar in recent years since its successful rebranding as "big data." Big data is nothing more than the application of today's information aggregation tooling and hardware processing capacities to business processes, ranging from upsell suggestions for call center staff, to credit scoring, to uncovering investment strategies. The main developments that have made data management more critical than ever in financial services are as follows:
• Growth in the volumes of information. Customers interact using mobile devices and leave an extensive digital trail.
• Faster transaction and settlement cycles, as shown by the advent of high-frequency trading and shrinking settlement windows.
• Speed of technological innovation and the competitive changes it introduces. Computing power has increased, and technologies created and brought to fruition by internet retail companies and social media firms are starting to be applied in financial services.
• Regulatory information and process demands. Regulators ask for much more detail, and since regulatory reporting is a central function, the onus is on connecting different internal information sets that are typically scattered across customer segments or product verticals. Simultaneously, regulators scrutinize the quality of internal processes and quality metrics.
• Less tolerance and more demands on interaction from customers. Financial services are no longer a "special" service. Used to other retail services provided over the internet, clients expect high standards when it comes to their account overview, order status, and response times. This puts pressure on the back-end infrastructure and information aggregation capacities of banks.
To start, let's look at the growing volumes of information. Traditionally, in data management the focus of volume growth had been in the wholesale markets. Rapid economic developments in certain areas of the world, a move to on-exchange trading and more trading venues, as well as growth in the number of hedge funds and the rise of high-frequency trading, all led to more transactions. To give some idea, large exchanges have a daily volume in the millions of trades (see https://www.nyse.com/data/transactions-statistics-data-library), central securities depositories clear in the hundreds of millions, and swap trades may be in the single millions (see https://www.euroclear.com/dam/PDFs/Corporate/Euroclear-Credentials.pdf for statistics; see http://www.swapclear.com/what/clearing-volumes.html).
Post-financial crisis, the growth in available information on retail customers and SMEs is perhaps more important. Due to mobile interaction and the online presence of consumers and companies, the amount of information available for credit scoring, prospecting, and upsell decisions has exploded. Customers, often inadvertently, leave behind a lot of information.
The lag between the moment of the transaction and the moment of settlement is shrinking. A lengthy settlement time brings operational risk into the process: the longer this lag, the larger the potential outstanding balance between counterparties and the higher the settlement risk. At the same time, regulations such as Dodd-Frank in the United States and EMIR in the European Union have pushed product types such as interest rate swaps, which used to be cleared bilaterally, toward central clearing. This means information needs to be available faster and the time available for error correction is shorter.
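As a rough illustration of why a shorter settlement window reduces the balance at risk, the sketch below accumulates roughly one settlement window's worth of unsettled trades. The daily traded value and the T+3/T+2/T+1 comparison are made-up numbers, used only to show the direction of the effect.

```python
# Hypothetical illustration: the outstanding (unsettled) balance between two
# counterparties grows with the length of the settlement window.
daily_traded_value = 100_000_000  # assumed: 100m traded per day with one counterparty

def outstanding_exposure(settlement_days: int, daily_value: float) -> float:
    # At any point in time, roughly settlement_days worth of trades
    # have not yet settled and are therefore still at risk.
    return settlement_days * daily_value

print(outstanding_exposure(3, daily_traded_value))  # T+3: 300,000,000 outstanding
print(outstanding_exposure(2, daily_traded_value))  # T+2: 200,000,000 outstanding
print(outstanding_exposure(1, daily_traded_value))  # T+1: 100,000,000 outstanding
```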
Hand in hand with the volume developments are the available technologies to act on these new information sets. Recent developments in hardware have lowered the cost of storage and of computing power. On the software side there are many more tools that access data, so the cost of manipulating data has become lower.
The advent of Web 2.0 and social media has pushed a revolution in data storage and access technologies. The introduction of NoSQL and other nontraditional database technologies provided cheap ways to achieve horizontal scaling, which offers ways of handling and processing much larger sets of information. Historically, data needed to undergo an elaborate curation process before it could be used to feed analytics. New ETL (Extract, Transform, and Load) and analysis tools will absorb whatever data they can and get cracking. This is potentially dangerous, as data may be misinterpreted or ignored without the user drawing on the resulting statistics being aware of it.
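The sketch below shows a minimal extract-transform-load pipeline with a basic curation step of the kind that newer tools often skip. The file names, column names, and the use of pandas are assumptions for illustration, not a specific tool or dataset referred to in the text.

```python
import pandas as pd

# Extract: read raw trade records from a (hypothetical) CSV export
raw = pd.read_csv("trades_raw.csv")

# Transform: a traditional curation step validates and cleans the data before analysis;
# tools that "absorb whatever data they can" would largely skip this stage
cleaned = (
    raw.dropna(subset=["trade_id", "notional"])              # drop incomplete records
       .assign(notional=lambda df: df["notional"].astype(float))
       .query("notional > 0")                                 # basic plausibility check
)

# Load: write the curated set to the analytics store (here simply another file)
cleaned.to_csv("trades_curated.csv", index=False)
```

The point of the curation stage is exactly the risk flagged above: without it, records that are incomplete or implausible flow silently into the resulting statistics.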