Financial Research: The Final Frontier for Fintech Innovation
Traders Magazine Online News, November 21, 2019
Despite significant technical advancement across the financial markets, the way that financial research is distributed and consumed has remained largely unchanged for more than a decade. This is limiting the value that high-quality research generates both for its consumers and producers. In this article, Rowland Park, CEO and co-founder of Limeglass, explains how technology can help to transform how financial research is handled by market participants, ushering in a new age of technological innovation in the space.
It has become almost a cliché in today’s financial markets to talk about the importance of data. The market for data and data analytics continues to break records year after year. The biggest institutions race one another to produce the most sophisticated technology to best leverage that data.
This is perhaps unsurprising. Having more data on which to base trading decisions is one of the most fundamental advantages a market participant can have.
Yet, when it comes to honing and modernising data consumption and processing, one area has conspicuously lagged behind the rest: the consumption of financial research.
On a daily basis, a bank produces and receives thousands of pages of research on everything from the global economy to Alphabet’s share price. Currently, however, the primary tools that sales/trading desks and portfolio managers have for extracting the relevant insights from this deluge of information are an inefficient and slow email inbox search, or traditional research portals that rely upon classic full-text search techniques which were not designed for this type of content.
In this article, we will show how a lack of digitisation and modernisation in the financial research space is leaving both research producers and market participants with sub-optimal outcomes, and suggest how a new approach to research consumption can rectify the situation.
The three-legged stool
This discussion is vital because, while it may have been neglected by financial technology innovation in the past, financial research is one of the fundamental data sources for decision making in the financial markets.
We think about financial decision making as resting on a three-legged stool.
The first leg is raw market data. This is the price data that most readily springs to mind when we think about financial market decision making. It was the first area to see technological innovation and has seen the most progress to date. This is because it is the most clearly ripe for automation and digitisation, given the structured nature of the data.
As a result, firms have invested billions in low-latency market data processing, which enables the sophisticated quantitative modelling and automated trading strategies that operate across almost every market.
More recently, we’ve seen innovation in the second leg of this stool - breaking news. Key events - be they political, economic, corporate or even natural disasters - have an impact on market activity, sometimes quite dramatically. It was therefore only a matter of time before institutions and news providers looked at ways to integrate breaking market news into their financial market models.
Of course, Refinitiv (formerly Thomson Reuters’ financial and risk business) and Bloomberg terminals have been a mainstay of trading desks for decades, but now firms are using algorithms to transform headlines into data that can be utilised by both human traders and algorithms to drive trading decisions.
The development of tools to better handle market data and breaking news has transformed, and continues to transform, the way activity in the financial markets is conducted. However, financial research and analysis - no less important a source of information - has seen comparatively little to no innovation in the last few decades.
Financial research plays a vital role in placing the insights generated from market data and breaking news into a wider context. It gives traders the perspective to understand the importance of the other information they are handling while also providing a framework for decision making. That’s why it functions as the third vital leg of the stool, without which effective decision-making will always be sub-optimal.
Imagine if financial market participants’ only source of breaking news was still picking up a copy of the Financial Times on the way to work; or if the only source of trading data anyone could access was a spreadsheet of prices at market close. Yet, with the majority of financial research consumed as multi-page PDF and HTML documents sent via email, this is effectively the world we live in when it comes to financial research.
Information overload
The sheer volume of financial research, combined with the lack of innovation in how it is distributed, means that it is very difficult for market participants to effectively handle and consume the information they are receiving, leading to poorer outcomes.
The problem is essentially one of information overload compounded by documents with a unique level of complexity. Research publications have a rich structure and cover a wide variety of topics with multiple articles where context shifts from paragraph to paragraph.
For example, market participants like to use their inbox for storing research, and somewhere in the mass of emails there will be a number of extremely useful insights that will add real value to their work. The problem is that they have no effective way to quickly locate that information. The best they can do is either search for a phrase within their inbox or look at the headlines of various emails and research documents to find likely candidates, then check whether those documents actually cover the topic.
Aggregators have tried to solve this issue but are still relying upon a document-centric approach which is not particularly suitable, as indexing this type of content at document level has its limitations. Full-text search struggles with the rich structure and the shifting context of these publications. Reliance on other techniques, such as document tagging, tends to lead to documents being either under- or over-tagged, regardless of whether they were classified by an analyst or an automated system.
Ultimately, whether using an inbox or a document aggregator, results are typically presented as a list of documents or emails, requiring the user to click into each one and read through its contents to find the relevant information.
This is an extremely inefficient workflow which is very likely to miss key analysis. Take for example a government bond trader looking at purchasing Bunds. They might well find useful information in a publication on the 10-year Bund but miss some very useful analysis on a potential shift in the ECB’s stance in an article within a document on the outlook for German GDP.
For research providers, this is a significant challenge as it means that only part of the value of their research is being utilised. There are hours of work and potentially thousands of valuable insights which are not finding their way to the right audiences because they are simply not being discovered.
This problem is compounded by the fact that the research producer has little way to understand which parts of their research library are most useful to their audiences, even their internal audiences.
Returning to our bond trader, let’s imagine they discovered the paragraph mentioning the ECB. An external research provider would know that the institution had paid for the research. An internal service might know that they had opened the document. However, neither of them would know what information in the document the user was interested in. This gives them little usable information about which of their research insights are most useful to their audiences.
Bringing granularity to financial research
Despite the vital importance of research to market participants, the research budgets at buy-side firms continue to be scrutinised and sell-side research teams find themselves having to fight harder to justify their roles.
One of the big drivers of this trend is the change in the business model of research teams within sell-side firms, which are struggling to adapt to these new business conditions. The current model of distribution does not enable them or their audiences to extract maximum value from their research.
The fundamental problem is that research is generally being distributed in an inherently non-digital, impersonalised and unresponsive manner.
We need to completely rethink the way we approach a research document. Instead of thinking about it as a whole, the challenge is to understand a document as a series of interrelated insights and details, some of which will intersect with other potential areas of interest and other articles.
Document Atomisation
Ideally each and every paragraph of every document would be tagged in context in real-time, transforming a body of research from a series of unstructured documents sitting in a digital library into a huge and complex web of granular pieces of tagged information. At Limeglass, we define this ability as ‘Document Atomisation’. Essentially this means unlocking the value buried deep within the research without requiring the analyst to change how they write or publish their articles.
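As a rough illustration of the concept - emphatically a simplified sketch rather than Limeglass’s actual implementation - paragraph-level tagging against a topic taxonomy might look something like the following, where the taxonomy entries, class and function names are all hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical, deliberately tiny taxonomy: topic -> trigger phrases.
# A production system would use rich, context-aware NLP rather than keyword matching.
TAXONOMY = {
    "ECB policy": ["ecb", "european central bank", "deposit rate"],
    "German rates": ["bund", "10-year bund", "schatz"],
    "German economy": ["german gdp", "ifo", "purchasing managers"],
}

@dataclass
class AtomisedParagraph:
    doc_id: str
    position: int
    text: str
    topics: list = field(default_factory=list)

def atomise(doc_id: str, document: str) -> list:
    """Split a document into paragraphs and tag each paragraph individually."""
    atoms = []
    paragraphs = [p for p in document.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs):
        lowered = para.lower()
        topics = [topic for topic, terms in TAXONOMY.items()
                  if any(term in lowered for term in terms)]
        atoms.append(AtomisedParagraph(doc_id, i, para.strip(), topics))
    return atoms
```

The essential point the sketch captures is that the unit of indexing becomes the tagged paragraph, not the document that happens to contain it.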
Once the insights in the research have been broken down in this manner, they can be reassembled in any number of different combinations to perfectly suit the needs of the individual market participant at any given moment.
Returning to our Bund trader, using research atomised in the way we’ve described, their search would bring up just the relevant paragraph within the German GDP document, as well as the useful sections of the 10-year Bund article and all manner of other useful insights on anything from the Euro to the latest German Purchasing Managers’ Index numbers.
Such a presentation of the information enables them to quickly read all the relevant paragraphs within their research library without having to sift through entire documents one by one to find those paragraphs, saving considerable amounts of time. Moreover, because readers only ever see the relevant paragraphs, the research provider gains far more granular and accurate metrics on which information within their output is most useful to their clients and on exactly what content they are producing.
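Continuing the same hypothetical sketch, paragraph-level retrieval and the provider-side consumption metrics it enables could be expressed as follows (reusing the atomise() helper and taxonomy assumed above; the document titles and text snippets are invented for illustration):

```python
from collections import Counter

def search(atoms, topic: str):
    """Return only the paragraphs tagged with the requested topic,
    regardless of which document they came from."""
    return [a for a in atoms if topic in a.topics]

def consumption_metrics(read_atoms):
    """Aggregate paragraph-level reads into per-topic counts, giving the
    research provider a view of which themes are actually being consumed."""
    counts = Counter()
    for atom in read_atoms:
        counts.update(atom.topics)
    return counts

# Illustrative usage: the Bund trader's query spans the GDP note and the Bund note alike.
gdp_note = atomise("german-gdp-outlook",
                   "The ECB may be preparing to shift its stance...\n\n"
                   "German GDP growth slowed in the third quarter...")
bund_note = atomise("10y-bund-weekly", "The 10-year Bund rallied sharply...")
hits = search(gdp_note + bund_note, "ECB policy")
print(consumption_metrics(hits))  # e.g. Counter({'ECB policy': 1})
```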
Just the beginning
This kind of technological innovation within the financial research market need only be the start. Once research is atomised in this manner regularly and at scale, it opens up all kinds of interesting and useful opportunities for further innovation.
The atomisation process we have outlined enables structured topic data to be gleaned from otherwise fully unstructured content. Facilitating access to the research paragraphs via APIs can enhance both a publisher’s research offering and the reading experience for research consumers.
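Purely as an illustration of what such an API might expose - not a description of any actual product endpoint - a minimal paragraph-level service could look like the sketch below, here using Flask, with the route, field names and sample data all invented for the example:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In practice this would be backed by the atomised research store;
# here a small in-memory list stands in for it.
PARAGRAPH_STORE = [
    {"doc_id": "german-gdp-outlook", "position": 4,
     "topics": ["ECB policy"], "text": "The ECB may be preparing to shift its stance..."},
    {"doc_id": "10y-bund-weekly", "position": 1,
     "topics": ["German rates"], "text": "The 10-year Bund rallied sharply..."},
]

@app.route("/paragraphs")
def paragraphs_by_topic():
    """Return individual tagged paragraphs matching ?topic=, not whole documents."""
    topic = request.args.get("topic", "")
    hits = [p for p in PARAGRAPH_STORE if topic in p["topics"]]
    return jsonify(hits)

if __name__ == "__main__":
    app.run(port=8000)
```

The design point is that the object returned to the caller is the tagged paragraph rather than the parent document, so portals, chat tools or analytics downstream can recombine insights however the reader needs.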
The heavy lifting behind our context-aware paragraph tagging and Research Atomisation technology involved developing solutions from the ground up over several years: leveraging proprietary rich Natural Language Processing (NLP), AI and machine learning, and building out a comprehensive cross-asset and macro taxonomy. As well as providing unparalleled access to research paragraphs, part of our mission is to deliver metrics that allow smarter analytics, empowering financial institutions to quantify and qualify exactly how market participants utilise and invest in research.
Research Atomisation is a fundamental building block in providing personalised research to users whilst delivering a trackable and traceable model for how research is generated and consumed.
The opportunity is now to use smart technology to transform the liability of information overload in financial research into the asset the analysis was designed to be in the first place.
-- Rowland Park is CEO and Co-founder of Limeglass