Nick Nielsen is head of quantitative trading at Marshall Wace, a British hedge fund. He spoke at TradeTech in New York last month about his trading process. Nielsen’s advice to his audience was to develop a “repeatable process” and keep refining it through experimentation. We excerpt some passages from his talk.
On refining the strategy—
Try new things, sample the data, and actually look at the statistics. That’s how you continue to make improvements. That’s how you get better at all the decision-making. The most important decisions, in terms of relevance to the bottom line of your fund, will likely start at the very, very top level with some choice of aggressiveness. Generally, it is immaterial whether you went to Dark Pool “A” or Dark Pool “B”—whether one is a little more toxic than the other—relative to the top-level aggression or strategy choice.
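To make the “look at the statistics” point concrete, here is a minimal sketch of such an experiment (our illustration, not Nielsen’s tooling; the simulated slippage model and all names are hypothetical): randomly sampled orders are split across two aggressiveness settings and the resulting slippage statistics are compared.

```python
# Minimal sketch (not Marshall Wace's tooling): compare two top-level
# aggressiveness settings on randomly sampled orders and look at the
# statistics, rather than tweaking venue choice. All names and the
# slippage model are hypothetical.
import random
from statistics import mean, stdev

random.seed(7)

def simulate_slippage_bps(aggressiveness: float) -> float:
    """Stand-in for the measured implementation shortfall of one order, in bps."""
    # Hypothetical relationship: more aggressive -> pays more spread.
    return random.gauss(mu=2.0 + 3.0 * aggressiveness, sigma=4.0)

# Randomly assign sampled orders to the two experimental arms.
arm_a = [simulate_slippage_bps(0.3) for _ in range(500)]
arm_b = [simulate_slippage_bps(0.7) for _ in range(500)]

def summarize(name, xs):
    se = stdev(xs) / len(xs) ** 0.5  # standard error of the mean
    print(f"{name}: mean {mean(xs):+.2f} bps, std err {se:.2f} bps, n={len(xs)}")

summarize("low aggressiveness ", arm_a)
summarize("high aggressiveness", arm_b)
```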
On avoiding signaling—
You actually see effects in the market that are driven by the first letter of the symbol. So, if you are not thoughtful about how you randomize samples, you can effectively signal to the market that a big player is doing all of the stocks that begin with “A” one way and all of the stocks that begin with “B” another way.
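A minimal sketch of the randomization point (ours, not the firm’s code): assign symbols to experimental arms with a seeded shuffle rather than any deterministic rule, such as the first letter, that an observer could reverse-engineer.

```python
# Illustrative only: assign symbols to experimental arms by a seeded
# shuffle, never by a deterministic rule like first letter, which
# leaks the assignment pattern to the market.
import random

symbols = ["AAPL", "AMZN", "BAC", "BA", "CSCO", "C", "DIS", "D"]

rng = random.Random(20240501)   # fixed seed so the split is repeatable
shuffled = symbols[:]
rng.shuffle(shuffled)

arm_a = set(shuffled[: len(shuffled) // 2])
arm_b = set(shuffled[len(shuffled) // 2 :])

print("arm A:", sorted(arm_a))
print("arm B:", sorted(arm_b))
# A naive split such as {s for s in symbols if s[0] <= "B"} would put
# every "A" name in one arm -- exactly the signature an observer could
# detect and trade against.
```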
On the costs versus benefits of developing and refining in-house trading strategies—
It’s similar to asking: Should we self-clear our own orders? It is technically cheaper on a marginal basis, but the fixed cost is probably too high. You have to look at turnover. Our funds are very, very high in turnover. A single basis point of improvement will likely amount to savings of about $40 million to $50 million per year. So, the marginal improvement makes quite a bit of impact. It doesn’t make sense from a cost perspective for fundamental funds, which have lower turnover.
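As a back-of-the-envelope check on that figure (our arithmetic, not from the talk): one basis point is 0.01%, so $40 million to $50 million of annual savings implies roughly $400 billion to $500 billion of notional traded per year.

```python
# Back-of-the-envelope check (our arithmetic, not from the talk):
# what annual traded notional makes 1 bp of savings worth $40-50M?
BPS = 1e-4                      # one basis point = 0.01%

for annual_savings in (40e6, 50e6):
    implied_notional = annual_savings / BPS
    print(f"${annual_savings/1e6:.0f}M / 1 bp  ->  "
          f"${implied_notional/1e9:.0f}B traded per year")
# -> $400B and $500B: the scale of turnover at which in-house
#    refinement of the process covers its fixed costs.
```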
On the alternative to doing it yourself—
There are a number of brokerage services on the electronic desks that offer outsourced process management. They can frame an experiment or process for you. And repeat it. And give you good data. Still, some people are concerned about a single broker doing it. That’s a problem because they can’t pay multiple brokers. But you could potentially do this with a number of brokers, provided that you have a similar set-up with all of them. And the data is formulaically derived very similarly. And the order flow is handled similarly.
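One way to read the “similar set-up” requirement (our illustration; the talk gives no implementation detail): map every broker’s execution reports into a single common record, with benchmarks captured the same way, before comparing them. Field names and broker formats below are hypothetical.

```python
# Illustrative only: keep data from several brokers comparable by
# mapping each broker's execution report into one common record.
# Field names and broker identifiers are hypothetical.
from dataclasses import dataclass

@dataclass
class Fill:
    broker: str
    symbol: str
    side: str          # "BUY" or "SELL"
    shares: int
    price: float
    arrival_px: float  # benchmark captured the same way for every broker

    def slippage_bps(self) -> float:
        """Implementation shortfall vs. arrival, signed so positive = cost."""
        sign = 1.0 if self.side == "BUY" else -1.0
        return sign * (self.price - self.arrival_px) / self.arrival_px * 1e4

fills = [
    Fill("BrokerA", "XYZ", "BUY", 1_000, 100.05, 100.00),
    Fill("BrokerB", "XYZ", "BUY", 1_000, 100.03, 100.00),
]
for f in fills:
    print(f.broker, f"{f.slippage_bps():.1f} bps")
```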
On running two algorithms side by side for the same order as a test—
I don’t think you should do that. Every part of a trade that you schedule has a permanent and a temporary impact. And if you’re not giving one algo the opportunity to optimize the temporary-impact piece, the two schedules together will have a greater temporary impact than one schedule would.
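A stylized illustration of that argument (our model, not Nielsen’s): if temporary impact is convex in the trading rate, two uncoordinated schedules that bunch their trading in the same window pay more than one schedule trading the same shares evenly.

```python
# Stylized model (ours, not from the talk): temporary-impact cost grows
# with the square of the participation rate, so bunched trading is
# penalized. eta is an arbitrary impact coefficient.
eta = 1.0
Q, T = 100_000, 10          # shares to trade, number of time buckets

def temp_cost(schedule):
    """Quadratic temporary-impact cost of a per-bucket share schedule."""
    return sum(eta * (q / Q) ** 2 for q in schedule)

single = [Q / T] * T        # one algo optimizing the whole order: even schedule
# Two algos each own Q/2 but, reacting to the same signal without
# coordinating, both trade their shares in the first half of the horizon.
combined = [Q / T * 2] * (T // 2) + [0] * (T // 2)

print(f"one schedule : {temp_cost(single):.3f}")
print(f"two bunched  : {temp_cost(combined):.3f}")   # twice the cost
```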
On real-time transaction cost analysis—
Real-time TCA is useless. If you’re going to change things all day long, you’re not going to have a repeatable process. If your slippage is high, does that mean you should change things? If it’s high at 10 o’clock in the morning, should you change everything? I don’t know. I think you need to stay determined and get a dataset that you can actually look over. It can’t be: Well, it was high, so then I changed it to be more aggressive. It got worse, so I changed it to be less aggressive. That’s making the data… There’s enough data and enough complexity. You don’t need to make the decision harder.
On the pros of real-time transaction cost analysis—
We have it. We built it. The only value of real-time TCA is when you see a number that is very, very high or very, very low. It may be indicative of a problem in the process. It’s just an alerting mechanism.
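A sketch of such an alerting mechanism (our construction; the thresholds and names are illustrative): flag in real time only the orders whose slippage is an extreme outlier against recent history, and leave everything else for after-the-fact review.

```python
# Sketch of real-time TCA used purely as an alerting mechanism
# (our construction; thresholds, data, and names are illustrative).
from statistics import mean, stdev

recent_slippage_bps = [3.1, 2.4, 4.0, 2.8, 3.5, 2.9, 3.3, 3.8, 2.6, 3.0]
mu, sigma = mean(recent_slippage_bps), stdev(recent_slippage_bps)

def check(order_id: str, slippage_bps: float, n_sigmas: float = 4.0) -> None:
    """Alert only on very, very high or very, very low numbers."""
    if abs(slippage_bps - mu) > n_sigmas * sigma:
        print(f"ALERT {order_id}: {slippage_bps:.1f} bps "
              f"(more than {n_sigmas:.0f} sigma from recent mean {mu:.1f})")
    # Otherwise do nothing intraday; the number joins the dataset
    # that gets reviewed after the fact.

check("ord-1", 3.4)     # normal: stays silent
check("ord-2", 27.0)    # extreme: fires the alert
```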