With the run of technology snafus this year, Traders Magazine decided to take a look-see at the problem. We present the story in two parts. Part 2 is posted today. Part 1 was posted yesterday.
Continued from yesterday… Shillman and Mathisson made their comments in the aftermath of the ‘flash crash’ of May 6, 2010, when the market abruptly plummeted and rebounded in a matter of minutes. Software glitches were not blamed for the drawdown, but electronic trading was. According to the SEC, the quickie crash occurred after heavy algorithmic selling of E-mini contracts led to a liquidity crunch, one exacerbated by the speed of electronic markets.
Ever since, the event has cast a shadow over the U.S. stock market. The public perception is that the automation of stock trading has created an unstable marketplace. Incidents such as the Knight disaster or the IPO debacles only serve to validate that view.
The truth, however, is that trading mishaps stemming from software glitches have been part of the trading landscape for at least 15 years, ever since markets and brokers began automating their systems in the 1990s.
Ironically, the last time a software glitch put a trading firm’s financial health in jeopardy on this scale, it was also Knight that caused it. In 2004, the firm, then known as Knight Trading Group, traded millions of shares’ worth of QQQ options at injurious prices. Due to a glitch in reading an incoming market data feed, Knight’s systems mispriced QQQ options representing nearly $1 billion in notional value. The trades, executed on the old Pacific Exchange, covered about 300 million shares at about $30 per share.
Unlike in the most recent blow-up, where relatively new erroneous-trade rules prevented Knight from busting or adjusting most of its trades, in 2004 those rules were much looser. The P-Coast stepped aside and let Knight work with individual market-making firms and other brokers to bust the trades, saving Knight millions. Still, Knight decided it had had enough of options trading. In August 2004, it sold the options market-making group, which was run out of Minnetonka, Minn., to Citigroup.
The options industry as a whole had significant teething problems in the 1990s and early 2000s, as the exchanges and their members began to automate. Exchanges and brokers experienced problems almost daily during this period, sources tell Traders Magazine. Exchanges launched new auto-ex systems and the broker-dealers had to build their own systems to work with them.
Capacity was another issue. In 2003, when options volume started its multi-year rise, exchanges began to have concerns over message traffic. At the Pacific, "when OPRA feeds started to go over 25,000 messages per second every so often, we’d go into panic mode," said one ex-P-Coast executive. "If it lasted more than three or four minutes, we were going to have problems."
Software glitches plagued cash equities as well, shutting down both Nasdaq and the New York Stock Exchange several times in the 1990s and 2000s. In March 2007, shortly after the NYSE switched over to its hybrid system, a glitch in its DOT system forced it to retreat temporarily to manual trading. A little over two years later, in July 2009, the NYSE had to extend the trading day by 15 minutes after a replacement for DOT developed problems.
Nasdaq has not been immune. In 2000, it had to halt or slow trading twice due to software glitches: in April of that year, problems with the old SelectNet system halted trading for over an hour, and in December, problems with SOES halted trading for 11 minutes. More recently, the Facebook blunder was not the first incident to raise the ire of market makers. In April 2011, a glitch in new software that allowed Nasdaq to post quotes on behalf of market makers led to losses for those firms, and Nasdaq had to pony up $3 million as recompense.
Longtime observers say exchange reliability is actually greatly improved. "There’s no question," Andresen said. "It’s like any business that matures, you learn from your mistakes."
Andresen suggests that the IPO problems at BATS and Nasdaq actually arose outside the exchanges’ core businesses. The exchange operators are in the business of running continuous matching sessions, not point-in-time auctions. "Running a one-off Dutch auction is not the core business of an electronic market," he said. "That’s a different functionality. It’s not something that is done every day. At BATS, it had never been done."
And that’s the rub. Most software glitches occur in new programs. A broker writes a new trading algorithm or updates an old one. An exchange adds new functionality or updates old functionality. Then…kablooey! The development process is supposed to involve supervision and testing. That doesn’t always happen.
Haywire
At Direct Edge, for instance, programmers created new software to comply with changes to the SEC’s Regulation SHO. According to the SEC, the exchange operator never bothered to test the software. It subsequently went haywire, saddling three hapless brokers with excessive positions.
At Credit Suisse, in 2007, programmers working for one of the firm’s proprietary trading desks added a new feature to an old algo that disrupted trading at the New York Stock Exchange. The NYSE fined Credit Suisse $150,000, charging the broker with failing to supervise the development process and with failing even to monitor the algorithm’s performance.
Technology executives tell Traders Magazine that incidents such as these are not surprising, because software development processes and procedures are often haphazard. Pressure to rush a new feature to market can override the need to get it right, they say.
"There’s a lot of pressure to speed up the development life cycle when it comes to trading-related applications," explained Michael Chin, chief executive at Mantara, a vendor of risk management technology.
Chin added that banks have always claimed software development is not a core competency. "Yet at the same time trading is turning into software development," he said. "Do they have all the checks and balances in place? The proper quality assurance processes in place to roll something out like an IBM or a Microsoft or an Apple does? One could argue that no, they don’t necessarily have those best practices in place, because it’s not their core competency."
Longtime trading technology executive Bill Harts, who designed one of the first program trading systems in the early 1990s, has had firsthand experience with the time pressures inherent in software development. In the middle of the last decade, Harts was working for Bank of America and was responsible for the firm’s NYSE specialist operation. At the time, the NYSE was converting to its hybrid market and was urging its specialists to get ready. Bank of America was taking longer than the rest, and was getting bad press in the New York Post for its tardiness, but Harts would not be rushed.
"We had a lot of code we had to get into place," Harts said. "We were under a lot of pressure from the exchange to get it done. But I insisted on making sure our testing was done. We wanted to be 100 percent ready. We were late, but it worked."
Harts maintains it is impossible to eliminate every bug, especially as trading systems have grown more complex over the years. "There’s more that can go wrong," he said. "Today, you have layers upon layers of algorithms, each with the capability of interacting in unforeseen ways. A typical trading system may have hundreds of thousands of lines of code. The opportunities for problems to arise get greater and greater."
Others agree, saying incidents like the Knight debacle will only increase. According to Mike Gualtieri, an analyst with Forrester Research, part of the problem is shortened development cycles. "The process is getting sloppy," Gualtieri said recently on Bloomberg Television. "Part of it is because of the speed [of development]. Part of it is because the software is inter-related with other software. So sometimes there are unintended consequences."
Balkanized
An executive with a vendor whose systems help other firms build algorithms says software development is often balkanized and uncoordinated. When different groups, including developers, quality assurance, quantitative analysts, and business types, work independently, the result is poorly written code.
"People get tunnel vision," says Richard Tibbetts, a co-founder and chief technology officer at StreamBase. "And then they build problems into software that only come to light later when the system is used or modified. It is important to make sure the whole team is responsible for the quality of the software."
So, what’s to be done? More testing, supervision and monitoring are the typical responses. And for some, that means more and better human involvement. A key aspect of the Managed Funds Association’s proposal is to require a registered principal to be "on duty" whenever a firm is trading. The principal would have the authority to turn off all or part of a given trading algorithm.
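The MFA proposal does not spell out an implementation, and none of the firms cited here have described one. Purely as an illustration, the control it envisions maps to a familiar engineering pattern: every algorithm checks a shared set of enable flags before sending orders, and the on-duty principal can flip a single flag or a firm-wide one. The Python sketch below is hypothetical; the names (KillSwitchRegistry, may_trade, the algorithm IDs) are invented for this example and are not drawn from any actual trading system.

import threading

class KillSwitchRegistry:
    """Thread-safe registry of per-algorithm enable flags (illustrative only)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._enabled = {}          # algo_id -> bool
        self._global_halt = False   # firm-wide stop set by the principal

    def register(self, algo_id):
        with self._lock:
            self._enabled.setdefault(algo_id, True)

    def disable(self, algo_id):
        # Principal turns off a single algorithm.
        with self._lock:
            self._enabled[algo_id] = False

    def halt_all(self):
        # Principal turns off every algorithm at once.
        with self._lock:
            self._global_halt = True

    def may_trade(self, algo_id):
        # Each algorithm calls this before sending an order.
        with self._lock:
            return not self._global_halt and self._enabled.get(algo_id, False)

registry = KillSwitchRegistry()
registry.register("vwap_algo")
registry.disable("vwap_algo")           # shut off one strategy
registry.halt_all()                     # or shut off the whole firm
print(registry.may_trade("vwap_algo"))  # False: no orders go out

The point of such a design is the one the MFA makes: the off switch has to exist before the disaster, and a human with the authority to use it has to know where it is.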
Deep Value’s Devarajan agrees that more human involvement is necessary. "The complexity is such that you don’t know who makes the call and the people who make the call don’t have the information they need at the time that these disasters are unfolding to make the call," he explained.
The exec added: "I don’t have a precise prescription, but there should be some level of required human overlay exercising judgment on top of these automated programs. Human overlay is the only response to a future where machines are going to get even faster, the overall system is going to get even more complex, and specializations are going to get even more."