Volatility and high volume - is your trading framework prepared?
Strategies that work fine during quieter market periods may fail during high volatility. Not because they fail as strategies – but because their underlying infrastructure cannot handle the load of intense trading days. It is important to test that an infrastructure can handle a multiple of the normal peak load.
Volatility is back – and trading volume has multiplied
Traders, rejoice – the markets are going ballistic. On the CME we see high trading volumes in equity index futures, 6E – the Euro/USD future – and crude oil. Politicians play and the markets react. And when there is movement, we can make money. Volatility is back.
But it also means more trades. We track trades for historical backtesting using Nanex. Their MF tape (Nanex classifies trades into tapes) contains all symbols on all exchanges of the CME group. Tape size for a trading day is normally around 1.5 GB to 1.9 GB, with rare spikes up to 2 GB. Nice.
Except now – it is not. Markets are crazy. Volatility is back. We had multiple days with 3 GB per day, and right now I am looking at one from this week that nearly broke 4 GB. That is nearly 2.5 times the average daily volume – and a lot more in the spikes. Traders, be happy. Volatility is back.
With high volatility comes increased stress to the trading infrastructure
Except – now strategies may fail. Bad, but sadly true. And it is not always the fault of the strategy. Yes, strategies may fail when volatility is high – when they are neither tested for it nor able to recognize it and stop trading. More important, though, is that an infrastructure that “barely keeps up” with normal trading peaks can suddenly break when it gets 3 to 4 times the volume in a short timespan (and make no mistake, this is what happens at the peaks of those crazy days). Then processing falls behind. It may result in bad decisions, based on an illusion of where the market is (compared to where it really is, as trade information is queued but not yet processed). It may result in a system crash when buffers overflow.
Strategies must be tested on high volatility data
This can be done in multiple ways – one is to fake the whole feed process: measure the message rate during normal peak hours, then replay the recorded data as fast as possible. This shows how much reserve exists in the infrastructure. This is an important factor because it is relatively easy to measure messages per second as throughput – and to raise an alarm when the rate is exceeded. Windows can do that itself, as long as the trading infrastructure is written in a competent, well-integrated manner and exposes this data via the built-in performance counter system (needless to say, no retail software does so – but then retail software often has more significant issues).
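A minimal sketch of such a replay test, in Python. The handler, message format, and peak rate below are hypothetical stand-ins, not part of any real feed API – the point is only the technique: push recorded messages through the pipeline at full speed and compare the achieved rate with the measured live peak rate.

```python
import time

def measure_replay_throughput(messages, handler):
    """Replay recorded messages as fast as possible through the
    processing pipeline and return the achieved messages/second."""
    start = time.perf_counter()
    for msg in messages:
        handler(msg)
    elapsed = time.perf_counter() - start
    return len(messages) / elapsed if elapsed > 0 else float("inf")

# Hypothetical stand-ins: a trivial handler and synthetic updates.
def handle_update(msg):
    pass  # a real pipeline would parse, normalize and route the message

replay_rate = measure_replay_throughput([{"px": 1.2345}] * 100_000, handle_update)
live_peak_rate = 50_000  # msgs/sec measured during normal peak hours (assumed)
headroom = replay_rate / live_peak_rate  # e.g. a value of 4.0 means 4x reserve
```

If `headroom` is barely above 1, the infrastructure "barely keeps up" and will fall behind on a high volatility day.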
The infrastructure must handle high volatility problems
In the end, it is not only the strategy that must cope – the infrastructure should be able to handle at least 4 times the recorded peak volume of the last 3-5 years. Yes, this means significantly more performance than is normally needed. But that is the price of being able to trade when a flash crash hits the market, or crude oil goes crazy because of Russian-American conflicts with the Islamic State on top.
The price of enough trading servers to handle such peaks is insignificant compared to the potential losses. And any sensible trader has as his first rule: preserve capital. Money is precious. It is too precious to waste on insufficient computing power when volatility comes back.
A good self-written infrastructure can do more – for example, measure internal latency. As all messages (market updates etc.) go through queues, put a special marker message in once per second carrying the current time, then record in the strategy wrapper how much delay this message has accumulated. The result is a per-second latency graph for every strategy – and if the latency rises, the strategy can be suspended.
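A single-threaded sketch of this marker technique (the queue layout and the message tuples are illustrative assumptions, not actual framework internals):

```python
import queue
import time

MARKER = "LATENCY_MARKER"

def feed(q, n_updates, marker_interval=1.0):
    """Push market updates into the queue, injecting a marker message
    carrying the current time once per marker_interval seconds."""
    last_marker = float("-inf")  # forces a marker at the start
    for i in range(n_updates):
        now = time.perf_counter()
        if now - last_marker >= marker_interval:
            q.put((MARKER, now))
            last_marker = now
        q.put(("UPDATE", i))

def drain(q):
    """Strategy wrapper: process the queue, recording the delay of each
    marker (current time minus the timestamp embedded in the marker)."""
    latencies = []
    while not q.empty():
        kind, payload = q.get()
        if kind == MARKER:
            latencies.append(time.perf_counter() - payload)
        # else: hand the update to the strategy
    return latencies

q = queue.Queue()
feed(q, 10_000)
marker_delays = drain(q)  # one entry per marker; graph these per second
```

When the queue backs up on a heavy day, the marker sits behind thousands of unprocessed updates and its measured delay rises – exactly the signal used to suspend the strategy.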
One can also measure the time difference between the exchange messages (which often carry a timestamp) and the local clock – and if this difference goes above acceptable parameters (which obviously depend on the time it takes for the signal to travel to the trading computer), trading is suspended: there is either a technical problem or a very high volatility day.
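This check is simple enough to show in a few lines. The 200 ms threshold below is an arbitrary placeholder – the acceptable value depends on the path from the exchange to the trading computer:

```python
MAX_FEED_DELAY = 0.200  # seconds; acceptable delay - depends on the signal
                        # path to the trading computer (assumed value)

def feed_delay_ok(exchange_ts, local_ts, max_delay=MAX_FEED_DELAY):
    """Return True if the message delay (local receive time minus the
    exchange timestamp, both in seconds on a common clock) is acceptable.
    False means suspend trading: either a technical problem or a very
    high volatility day."""
    return (local_ts - exchange_ts) <= max_delay

feed_delay_ok(10.000, 10.050)  # 50 ms behind: fine
feed_delay_ok(10.000, 11.000)  # a full second behind: suspend trading
```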
The real problem is when a high volatility day is combined with a systemic failure – and the exchange is lying. As long as the exchange timestamps are good, one can handle this. But some exchanges – and this was one cause of the flash crash some years ago – generate timestamps not when the data is generated, but when it leaves the exchange. This becomes a real problem on high volatility days: when volatility is back, the exchange may fall behind in sending the data – and in that case it provides a delayed timestamp. All looks fine, only the timestamp itself is off by some seconds.
This, too, can be detected – but only via executions. When an execution is found outside the bid/ask – and stays there – then obviously the data is stale and the timestamps are bad. When the infrastructure gets an execution and no matching bid/ask update within 25 ms to 50 ms, there is clearly a significant delay in the data AND it is hidden by bad timestamps. The grace period is necessary because a trade outside the bid/ask may be the result of stop orders eating through the whole order book – and exchanges often deliver trade information before the corresponding book updates, with higher priority.
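A sketch of this stale-data check (the class name, the millisecond clock, and the 50 ms grace period are illustrative choices within the 25-50 ms range mentioned above):

```python
GRACE_MS = 50  # wait this long for the book to catch up (assumed value)

class StaleDataDetector:
    """Flags the feed as stale when a trade prints outside the bid/ask
    and no matching book update arrives within the grace period."""

    def __init__(self, grace_ms=GRACE_MS):
        self.grace_ms = grace_ms
        self.bid = None
        self.ask = None
        self.pending = []  # (price, arrival_ms) of trades outside the book

    def on_quote(self, bid, ask, now_ms):
        self.bid, self.ask = bid, ask
        # A book update resolves previously suspicious trades it now covers.
        self.pending = [(p, t) for p, t in self.pending
                        if not (bid <= p <= ask)]

    def on_trade(self, price, now_ms):
        """Record a trade; return True if the feed already looks stale."""
        if self.bid is not None and self.bid <= price <= self.ask:
            return False  # inside the book: nothing suspicious
        self.pending.append((price, now_ms))
        return self.is_stale(now_ms)

    def is_stale(self, now_ms):
        """True if any outside-the-book trade outlived the grace period."""
        return any(now_ms - t > self.grace_ms for _, t in self.pending)
```

Usage: a trade at 101.0 against a 99.0/100.0 book is tolerated for the grace period (it may just be stops sweeping the book); if the book still has not caught up after 50 ms, `is_stale` returns True and trading can be suspended.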
High volatility days can be handled – and losses avoided
In the end, the infrastructure must be prepared to handle the latency fluctuations that appear on high volatility days. These often come with extreme numbers of trades, compressed into short timeframes – pockets of crazy trading followed by pockets of relatively normal trading. We have described the problem and shown some solutions that allow one to build an infrastructure that can not only survive high volatility days but also recognize them and measure its internal health. This is one of the reasons our internal Reflexo Trading Framework is getting a rework in the form of a new trading server – more robustness, and more capability to handle scenarios like high volatility trading days.