A few years ago, Danish brewer Carlsberg hired 148 of the meanest-looking bikers it could find to sit in a 150-seat movie theater, leaving two seats open smack in the middle. Unsuspecting couples entered one at a time, and most left immediately. The few who stayed were rewarded with two Carlsberg beers and raucous applause.
In a similar way, traders enter orders with a general distrust of their surroundings. They commonly view the market as a hostile environment and rely on their emotional intelligence to be the warden of vigilance. It makes sense, then, that their guard would rise as trading moved from the public forum of open outcry trading pits to the dimly lit chasms of dark pools.
Dark Pool Hopscotch
Most dark pools were originally intended to internalize a broker's institutional order flow, first as blocks, then eventually as smaller lots sliced from algorithmic orders. But as these pools evolved to accommodate the speed and sophistication of our electronic markets, dark pools, like displayed venues, became heavily populated with transient liquidity: in short, a steady stream of orders from competing broker-dealers and HFT firms.
As the same order flow hopscotched between dark pools, this highly processed liquidity produced very low hit rates (in some cases, less than 1 percent) and smallish execution sizes of 300 shares and under.
In an attempt to distinguish their venues from the competition, brokers retrofitted a caste system that allowed for order segmentation and participant exclusion. Anti-gaming, anonymity and adverse price selection became common buzz phrases among brokers encouraging direct or indirect participation in their pools.
For institutional customers, deciphering which pools to route to for best execution became challenging, time-consuming and frustrating. The buyside lacked the resources to continually rebalance orders between pools, as well as the ability to track execution quality and depth of liquidity in real time across dozens of sources.
Aggregation Maturation
As a result, over the last few years brokers progressively began to build dark pool aggregators: algorithms that allocate varying portions of an institutional order across whichever dark pools the strategy determines are best. In theory, the ideal execution report would show minimized information leakage and little to no price impact, with maximized fill rates.
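To make the mechanics concrete, below is a minimal sketch of the kind of allocation heuristic an aggregator might apply, assuming a simple proportional weighting by historical fill rate. The pool names, fill-rate figures and scoring logic are illustrative assumptions, not any broker's actual model.

```python
# A minimal, hypothetical sketch of a dark pool aggregator's allocation
# logic. The pool names, historical fill rates and proportional-weighting
# heuristic are illustrative assumptions, not any broker's actual model.

def allocate_order(total_shares: int, pool_fill_rates: dict[str, float],
                   min_slice: int = 100) -> dict[str, int]:
    """Split a parent order across pools in proportion to each pool's
    historical fill rate, dropping allocations below a minimum slice."""
    total_weight = sum(pool_fill_rates.values())
    allocations = {}
    for pool, rate in pool_fill_rates.items():
        shares = int(total_shares * rate / total_weight)
        shares -= shares % 100                  # round down to round lots
        if shares >= min_slice:
            allocations[pool] = shares
    # Route any unallocated remainder to the highest-weighted pool.
    remainder = total_shares - sum(allocations.values())
    if remainder and allocations:
        best = max(allocations, key=pool_fill_rates.get)
        allocations[best] += remainder
    return allocations

# E.g., 10,000 shares across three pools with assumed historical fill rates.
print(allocate_order(10_000, {"POOL_A": 0.12, "POOL_B": 0.05, "POOL_C": 0.02}))
# -> {'POOL_A': 6400, 'POOL_B': 2600, 'POOL_C': 1000}
```

A production aggregator would, of course, layer on anti-gaming logic, minimum-fill constraints and continuous reallocation as fills arrive; the point here is only the basic routing decision.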
As market complexity has grown, institutions have benefited from the maturation of these routing strategies. But the growing acceptance of aggregation may cause traders to overlook some inherent design flaws and apparent contradictions that should be reason for greater scrutiny.
Blurred Lines (Maybe I’m Goin’ Blind)
A few years back, as the number of dark pools rapidly grew, the brokerage industry began classifying non-displayed venues. This taxonomy was one of the drivers of the initial development of dark pool aggregation, with a strong emphasis placed on the class of pool to which an order was routed.
Initially, these classifications defined dark pools as either principal or agency. Principal pools were further delineated as either bulge brackets, which offered a hybrid of institutional and prop flow in their pools, or electronic market-makers (EMMs), which acted as the counterparty on all of the venue's trades. Agency dark pools were split between traditional brokers, which offered dark liquidity alongside their full range of equity services, and consortiums, essentially standalone vendor solutions funded by a group of brokers (sometimes principal firms themselves) operating via the agency paradigm.
To most, these classifications made sense. Using a sliding scale from prop to agency, the assumption was that the less order flow in a pool that was driven by a firm’s principal trading desk, the less likely the pool contained toxic flow. But over time, with broker proprietary flow and HFT orders gaining access to all categories of dark pools, the lines became blurred.
A quick scan of accusations of impropriety leveled at dark pool providers by the SEC, FINRA or the New York Attorney General shows eight pools recently under examination or fined for regulatory failures, or worse. Interestingly, half of them were managed by agency firms and the other half by principal firms.
As dark pools have rapidly evolved and competition for liquidity has intensified, these categorizations seem useful but may mislead traders into making assumptions based on superficial appearances. If the Carlsberg Group had dressed up a bunch of its business executives in leather jackets, fake ZZ Top beards and temporary tattoos, my guess is that the experiment would have been just as successful.
Constructing and employing classifications is our inherent nature. Overvaluing them is our inherent flaw.
The Catch-22 of Segmentation
Beyond the flawed practice of judging dark pools by type, the growth of customer segmentation within many pools, alongside the use of dark aggregation, creates a conundrum.
That is, if institutional customers are most concerned about controlling segmentation and exclusion of counterparties in dark venues, once they turn over control to a single broker to access all the available dark pools for them, don’t they lose their unique stature in each individual pool? And if their order flow is bearded through an intermediary, is it possible that toxic participants are bearded as well?
The primary purpose of buyside institutions is to make investment decisions, not to build technology to access and manage liquidity. By outsourcing aggregation to a handful of approved brokers, they solve the issue of reaching the breadth of available liquidity. But they ultimately sacrifice their status in pools that implement a hierarchy, the very pools that, ironically, built segmentation specifically for the institutional client base.
As we discussed in Part 2 of this series, on Wall Street it's the trade that gets you paid. In this case, brokers have an incentive to send institutional orders to other brokers' pools, since they receive a healthy commission to perform this act. They are even glad to disclose to their clients the pools in which their orders were filled. But a dark pool aggregator cannot provide the same level of control and segmentation that is available to institutions that route to those pools directly. Therefore, the only pool an institution using an aggregator has full control over is the one run by the aggregating firm.
The buyside may chalk this up to a situation where neither option is ideal yet one is more practical than the other. Assuming this is the case, the institutions should confirm that their best-execution policies and procedures take such variables into consideration, and that the transaction analyses they implement provide the depth needed to track performance across so many disparate execution platforms and venues.
The Infancy of In-Trade Analysis
Given the challenge of overseeing broker technology on a continual basis, it makes sense that institutional traders have incrementally increased their use of transaction cost analysis in recent years. In our buyside survey on TCA usage, Greenwich Associates found increased adoption of transaction analysis in each of the past four years.
Even though TCA adoption has increased, buyside desks appear to still mainly leverage traditional TCA functionality, the most popular uses being post-trade review and oversight/reporting. Today, the TCA tools best suited to tracking dark pool aggregation performance are the real-time, intra-trade systems that deliver venue-by-venue data on routes and fills. Yet only one-quarter of survey participants leverage TCA for real-time analytics.
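To make "venue-by-venue data on routes and fills" concrete, here is a minimal sketch of the kind of per-venue aggregation such a system performs. The event schema and field names below are assumptions for illustration only; real platforms consume FIX execution reports.

```python
# A minimal sketch of the venue-by-venue metrics an intra-trade TCA system
# might compute. The event schema and field names are assumptions made for
# illustration; real platforms consume FIX execution reports.
from collections import defaultdict

def venue_metrics(events: list[dict]) -> dict[str, dict[str, float]]:
    """Aggregate routes and fills into a per-venue hit rate and average fill size."""
    routed = defaultdict(int)   # child orders routed to each venue
    fills = defaultdict(int)    # fills received from each venue
    shares = defaultdict(int)   # shares filled at each venue
    for e in events:
        if e["type"] == "route":
            routed[e["venue"]] += 1
        elif e["type"] == "fill":
            fills[e["venue"]] += 1
            shares[e["venue"]] += e["qty"]
    return {v: {"hit_rate": fills[v] / routed[v] if routed[v] else 0.0,
                "avg_fill_size": shares[v] / fills[v] if fills[v] else 0.0}
            for v in routed}

events = [
    {"type": "route", "venue": "POOL_A"},
    {"type": "fill", "venue": "POOL_A", "qty": 300},
    {"type": "route", "venue": "POOL_B"},   # routed, never filled
]
print(venue_metrics(events))
# -> {'POOL_A': {'hit_rate': 1.0, 'avg_fill_size': 300.0},
#     'POOL_B': {'hit_rate': 0.0, 'avg_fill_size': 0.0}}
```

Metrics like these, tracked while the parent order is still working, are what let a trader notice a pool hopscotch pattern, such as many routes with few fills, in time to act on it.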
One restriction on the adoption of in-trade analytics is that the most comprehensive data is currently available primarily on brokers' own TCA platforms. Although these interfaces give the clearest and deepest view into order handling and the quality of venue fulfillment, the information delivered is limited to the executing broker's own flow. For buyside traders executing across multiple brokers' algorithmic suites, tracking execution quality across multiple, disparate systems would be an inefficient use of their time.
In the survey, three-quarters of respondents rely on third-party TCA for their analytical needs. Third-party systems were initially designed to deliver macro-level reporting across multiple brokers, and as a result they currently lack the depth of data needed to analyze venue-level performance.
There is, however, a new breed of TCA vendor designing technology to perform venue-level analysis across a population of brokers. Adoption is likely in its early stages, but it is encouraging progress for institutional investors, who will benefit from consuming a deeper level of execution performance data.
To Light a Candle Is to Cast a Shadow
New regulations in U.S. equities were designed to create competition between market centers and create a fairer and more transparent trading environment. As part of these changes, dark pools were essentially regulated into our markets. Brokers traded phone receivers for computer servers and never looked back.
When too many venues caused too much confusion for institutional investors, brokers attempted to reduce the white noise by allowing clients to funnel orders through the brokers' dynamic trading strategies and out to the rest of The Street.
To manage quality of execution across multiple trading technologies, the buyside needs to continually intensify its vigilance. This requires regular examination of best-execution policies and procedures as well as a comprehensive use of transaction analysis tools in the investment process.
The irony is that rules designed to shine a brighter light on our markets have arguably also cast larger shadows, creating a new set of challenges for all participants.
Craig Viani is vice president of market structure and technology research at Greenwich Associates.