For a while now, there has been a lot of debate and hand-wringing about the relative slowness of the Securities Information Processor (SIP). It has been identified as a single point of failure, a risk the three-hour halt in the trading of Nasdaq stocks on August 22, 2013 made quite clear. It has also been blamed for creating a two-speed marketplace, since SIP data moves more slowly than exchange direct-feed data.
Although it seems clear that the shortcomings of the SIP are structural, efforts so far have focused narrowly on workarounds that make the SIP a little faster (but still not as fast as direct feeds). It's time to take the discussion a few steps further and enact concrete proposals to fundamentally update this critical piece of market infrastructure, which is mandated by regulatory directives such as Reg NMS and the vendor display rule.
Last year, BlackRock wrote a letter<http://modernmarketsinitiative.cmail20.com/t/y-l-dtddijk-jiqdidkld-y/> to the Securities and Exchange Commission (SEC) suggesting that exchanges make the necessary investments in technology to reduce the latency between the SIP and private data feeds to market-acceptable standards. Two months prior, SIFMA recommended<http://modernmarketsinitiative.cmail20.com/t/y-l-dtddijk-jiqdidkld-j/> that the central SIP structure be eliminated and replaced with commercially competitive Market Data Aggregators. And nearly 15 years ago, the SEC's venerable Seligman Commission recommended that the current unitary consolidator model evolve toward competing consolidators.
I believe these goals can be reached with a simple change in the technical structure of the SIP that would eliminate the latency difference between direct market data feeds and the SIP, remove the SIP as a single point of failure and create a platform that makes the market better for all investors.
Here is how the SIP currently works for a firm trading NYSE-listed stocks on Nasdaq using SIP data; a rough latency sketch follows the list:
* Nasdaq receives an order to trade an NYSE-listed stock in its data center from a dealer located in that same data center
* That order leads to an update in that stock's quote, which must be reported to the NYSE SIP in NYSE's data center
* Nasdaq sends the update to the NYSE SIP in a data center several miles away
* The NYSE SIP processes the update
* The NYSE SIP sends the updated information back to the trading firm in the Nasdaq data center
* The trading firm in the Nasdaq data center can then process and use the data
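To make that round trip concrete, here is a minimal back-of-the-envelope sketch in Python of the extra delay described above. The latency figures are illustrative assumptions for the sake of the example, not measurements of any real venue or SIP.

```python
# Minimal latency sketch contrasting the SIP round trip described above with a
# direct feed consumed in the same data center. All figures are illustrative
# assumptions (microseconds), not measured values for any real venue.

INTRA_DATACENTER_US = 50    # assumed one-way hop inside the Nasdaq data center
INTER_DATACENTER_US = 180   # assumed one-way hop between the Nasdaq and NYSE SIP data centers
SIP_PROCESSING_US = 300     # assumed time for the SIP to consolidate and publish the update


def direct_feed_latency_us() -> int:
    """Quote update consumed where it is produced: one local hop."""
    return INTRA_DATACENTER_US


def sip_latency_us() -> int:
    """Quote update hauled to the SIP data center, processed, and hauled back."""
    return INTER_DATACENTER_US + SIP_PROCESSING_US + INTER_DATACENTER_US


if __name__ == "__main__":
    direct = direct_feed_latency_us()
    sip = sip_latency_us()
    print(f"direct feed   : {direct:>5} us")
    print(f"SIP round trip: {sip:>5} us  ({sip - direct} us of avoidable delay)")
```

Whatever the actual numbers, the structural point stands: the SIP path adds two inter-data-center hops and a central processing step that the direct-feed path simply does not have.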
If this strikes you as extremely inefficient, that's because it is. It's like speaking to someone on a cell phone when you are close enough to talk to them in person. The lag of your voices pinging a cell tower and bouncing back to your phones is far less efficient than simply communicating directly. Your live voice within earshot is how direct feeds are disseminated; the cell-phone lag is how the SIP is disseminated. It doesn't have to be this way.
A better method is the competing consolidator model. Under this proposal (a minimal sketch follows the list):
* Firms would order SIP data as they do today, by contacting their vendor or the SIP administrator
* The firm or vendor connecting to the SIP would get a connection to each exchange and listen to its data where the data is produced (rather than receiving it from a central location)
* The firm would receive and process the data much as it handles direct market data feeds today
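To illustrate the idea, here is a rough Python sketch of what a competing consolidator could look like: it listens to each exchange's feed where the data is produced and computes the best bid and offer itself, rather than waiting for a central SIP. The exchange names and quote fields are simplified placeholders, not any real feed format or a specific design endorsed here.

```python
# Rough sketch of the competing-consolidator idea: merge the best bid/offer
# from per-exchange feeds locally instead of relying on a central SIP.
# Exchange names and quote fields are simplified placeholders.

from dataclasses import dataclass


@dataclass
class Quote:
    exchange: str
    bid: float
    ask: float


class CompetingConsolidator:
    def __init__(self) -> None:
        self.latest: dict[str, Quote] = {}  # most recent quote per exchange

    def on_quote(self, quote: Quote) -> None:
        """Handle an update received directly from an exchange feed."""
        self.latest[quote.exchange] = quote

    def nbbo(self) -> tuple[float, float]:
        """Best bid and best offer across all feeds seen so far."""
        best_bid = max(q.bid for q in self.latest.values())
        best_ask = min(q.ask for q in self.latest.values())
        return best_bid, best_ask


if __name__ == "__main__":
    c = CompetingConsolidator()
    c.on_quote(Quote("NASDAQ", bid=100.01, ask=100.03))
    c.on_quote(Quote("NYSE", bid=100.02, ask=100.04))
    print(c.nbbo())  # (100.02, 100.03)
```

The consolidation logic itself is simple; the point of the proposal is where it runs and who runs it, with multiple vendors free to compete on the speed and quality of this aggregation.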
A minority of members on the Seligman committee noted the technical challenges associated with moving to this model. But given the technical advances in the 15 years since, those challenges have been solved. And surely there would be a host of vendors competing to consolidate the data far more quickly than the current SIP does. By moving to this more open architecture, we could substantially improve the performance of the SIP and introduce competition into the process. In addition, because there would be multiple consolidators running different code bases, we would remove the single point of failure that exists in the current SIP structure.
Instead of focusing on just increasing the speed of the SIP, I think it is time to consider changing its structure to be quicker, more resilient and better positioned to deliver data that is critical to the efficiency of the securities markets. In the process, it would alleviate the concentration risk of having one processor solely responsible for all Nasdaq stock quotes and a separate processor solely responsible for all NYSE and Tape B quotes, and help bolster investor confidence in the markets. Unleashing these competitive forces would be an important first step in improving investors' access to market data.
Adam Nunes is Head of Business Development at MMI Member Firm Hudson River Trading.