With Jon Williams, Head of Fixed Income, Refinitiv
Fixed income data can be a challenge, given disparate sources, a lack of liquidity, and opaque, fragmented markets. Is that changing, and how?
When we talk about fixed income data, we need to determine what we’re referring to in terms of the underlying instruments. Some areas within fixed income, like U.S. Treasuries, U.S. residential mortgages, and vanilla interest rate derivatives, are liquid markets with fairly broad pricing information available, and tight, orderly bid-offer spreads updating essentially in real time.
But there are fragmented, more opaque markets that lack the same degree of liquidity. With the global move towards more regulatory oversight over the past decade, there have been improvements in price availability and transparency. The continued growth of electronic trading across the entire spectrum of fixed income has also been significant. Hundreds of billions of dollars a day trade electronically, all of which generates data that market participants package, normalize, standardize and distribute.
So technology has been a big driver of change, as has regulation, and there’s a lot more data produced, which is a trend we expect to continue.
As a company with deep roots in data, how is Refinitiv approaching the data challenge?
The challenge used to be a lack of data, but now it’s managing the sheer volume of data. To position ourselves technologically to deliver that data to our clients, we have developed a technological and business infrastructure we call RDP, the Refinitiv Data Platform.
When I visualize Refinitiv’s data platform, it looks like an hourglass. At the bottom there is a broad array of data from numerous sources, such as trading venues, exchanges, and evaluated pricing providers. We aggregate, curate and enrich that data, and as we move up the hourglass it becomes more broadly deployable: we authenticate and permission the data for distribution. Distribution can take two paths for us, either internally powered analytic functions, i.e. bespoke capabilities within our various desktop offerings, or distribution across the different managed services that we operate. At the top of the hourglass, it’s back to the notion of delivery across the client communities.
All of this optimizing, normalizing, authenticating, permissioning and distributing of data is done to enhance its usability and improve the client experience.
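As a rough sketch of that flow (the field names and entitlement logic here are hypothetical, for illustration only, and are not Refinitiv’s RDP interfaces), the stages can be read as a pipeline: heterogeneous source records are normalized onto a common schema, then filtered down to what each client is permissioned to see before distribution.

```python
from dataclasses import dataclass
from typing import Iterable

# Hypothetical sketch of an aggregate -> normalize -> permission -> distribute
# flow; schemas and entitlement rules are illustrative, not RDP's API.

@dataclass
class Quote:
    source: str   # venue, exchange or evaluated-pricing provider
    isin: str
    bid: float
    ask: float

def normalize(raw: Iterable[dict]) -> list[Quote]:
    """Map heterogeneous source records onto one common schema."""
    return [Quote(r["src"], r["isin"].upper(), float(r["bid"]), float(r["ask"]))
            for r in raw]

def permission(quotes: list[Quote], entitled_sources: set[str]) -> list[Quote]:
    """Release only the data a given client is entitled to see."""
    return [q for q in quotes if q.source in entitled_sources]

# Two raw records for the same (made-up) instrument from different sources.
raw_feed = [
    {"src": "venue_a", "isin": "us912828xx11", "bid": 99.10, "ask": 99.14},
    {"src": "eval_px", "isin": "us912828xx11", "bid": 99.09, "ask": 99.15},
]

# The distributed, permissioned slice of the aggregated data for one client.
client_view = permission(normalize(raw_feed), entitled_sources={"venue_a"})
print(client_view)
```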
As pricing improves, even for illiquid securities, what is the impact?
The more illiquid the security, generally speaking, the wider the bid-offer spread and the higher the cost of execution. So when you consider the by-product of dynamics like TRACE, the European MiFID II regulation, and other mandates around price transparency, the result is more price information available, which makes determining relative value easier. This compresses bid-offer spreads, which increases the number of willing counterparties and results in more transactions and more pricing data.
There’s another consideration. Historically, when we speak about illiquid securities and opaque markets, it’s in the context of a two-tiered market structure, with the buy side as liquidity consumers and banks as liquidity providers. Within that construct, there has been a perception that liquidity providers have an advantaged position concerning price information, so there is informational asymmetry. In reality, it’s the reverse. It’s the buy side that has access to price information from multiple counterparties; liquidity providers only have their own pricing. So when a buy-side client requests a price quote, the sell-side trader is the one working from a position of relative disadvantage, because they’re seeing only their single view of the market.
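A small numerical sketch (hypothetical quotes, not real market data) makes that asymmetry concrete: the buy side sees every response to its request for quotes and can build a composite view, while each dealer sees only its own price.

```python
# Hypothetical RFQ responses from three dealers for the same bond (illustrative numbers).
quotes = {
    "dealer_a": {"bid": 98.20, "ask": 98.60},
    "dealer_b": {"bid": 98.30, "ask": 98.55},
    "dealer_c": {"bid": 98.25, "ask": 98.70},
}

# Buy-side view: a composite built across all responses.
best_bid = max(q["bid"] for q in quotes.values())
best_ask = min(q["ask"] for q in quotes.values())
print(f"composite bid/ask: {best_bid:.2f} / {best_ask:.2f}, "
      f"spread {best_ask - best_bid:.2f}")   # 98.30 / 98.55, spread 0.25

# Each dealer's view: only its own price, e.g. dealer_a quotes 0.40 wide.
print(f"dealer_a spread: {quotes['dealer_a']['ask'] - quotes['dealer_a']['bid']:.2f}")
```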
How are new technologies like AI and machine learning providing solutions to traders, and how is it changing the way traders interact with counterparties?
AI and machine learning are not new ideas, but they are relatively new in terms of practice and application, because two components were missing from the equation that would allow the real application of this technology.
One of those was scale. Cloud technology allows for a significant increase in access to computational power, enabling firms to scale their computational utilization essentially in real time, based on need. The other critical component in delivering solutions around AI and machine learning is a broader pool of data.
AI and machine learning have the capability to change the fundamental way that counterparties interact with each other. Historically these interactions have been reactive: something happens in the market, and that necessitates a trader response. But AI and machine learning are the inputs and drivers behind newly evolving, continually innovating suites of analytics that one might call predictive analytics. The increase in data is not just the availability of pricing data; it’s also the metadata associated with the transactions themselves, which can be used to construct predictive analytical calculations. How big was the transaction? What type of customer? What else was going on in the market? If a client went out to more than one bank, how many prices came back and what was the range of those prices?
There’s a broad utilization of a much deeper dataset with AI and machine learning.
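As a hedged illustration of what such predictive analytics could look like (synthetic numbers and a toy model, not Refinitiv’s analytics), the metadata described above, such as trade size, client type, and the number and dispersion of quotes returned, can feed a simple supervised model, for example one that estimates the likely bid-offer spread on the next inquiry.

```python
# Illustrative only: a toy model trained on synthetic trade metadata.
# Features mirror the questions above: trade size, client type,
# number of quotes returned, and the range of those quotes.
from sklearn.ensemble import GradientBoostingRegressor

# [size_mm, client_type (0=real money, 1=hedge fund), n_quotes, quote_range]
X = [
    [1,  0, 5, 0.05],
    [10, 1, 3, 0.20],
    [25, 1, 2, 0.40],
    [2,  0, 6, 0.04],
    [15, 0, 3, 0.25],
    [50, 1, 1, 0.60],
]
# Target: the bid-offer spread realized on each historical trade (synthetic).
y = [0.06, 0.22, 0.45, 0.05, 0.28, 0.70]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Predicted spread for a new inquiry: 20mm, hedge fund, 2 quotes, 0.35 range.
print(model.predict([[20, 1, 2, 0.35]]))
```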
As market participants navigate the challenges of COVID-19, what is important for traders? Are they using different tools and solutions? What will be the lasting impact?
Whatever the ‘new normal’ ends up being, we will be more reliant on technology. The most critical input into any financial professional’s job is access to information. There’s a presumption that the box, or the collection of boxes on a trader’s or portfolio manager’s desktop, is their sole information source. But when you look at a trading floor, whether buy side or sell side, there’s so much activity going on and the flow of information can be intentional, like two people sharing information, or almost accidental, like a trader on a crowded floor overhearing information and using that to make a decision.
That ‘accidental’ information is gone now that physical proximity has been removed, and collaboration is 100% intentional and entirely technology-driven. Having those capabilities on the desktop is critical, as is the ability to leverage collaboration tools, whether functionality such as Microsoft Teams or Refinitiv’s client messaging system, Refinitiv Messenger, to communicate.
When we do come back together, we must presume we’re not going back to exactly the way it was, and it will be that way for a meaningful period of time. It probably will be a blend of where we are today and where we were several months ago, and it will be interesting to see how much of the need that has been filled by technology moves back in favor of physical information flow.
Can a ‘single view’ be created for fixed income traders to give them all the data/tools/analytics in one desktop? Or is another model better suited to the needs of this complex market?
On a trader’s desktop 15 years ago, you saw two fairly ubiquitous boxes. These were essentially ‘contained’ desktop solutions, whether they were a legacy Reuters desktop or another from a competitor. Today, you don’t see recognizable branded boxes, but you see monitors that are essentially a proprietary collection of applications. A Treasury trader’s desktop might be an aggregation of capabilities made up of six or eight or 10 monitors, and that is different from what you see on the desktop of a mortgage trader, or a corporate bond trader, or a swaps trader.
As workflows become more automated, the trader’s desktop will be a single desktop; however, that desktop will be a collection of tools that, driven by data, allows the trader to optimize his or her execution and workflow.