Optimal Execution in Algorithmic Trading

Our PhD quant consulting service can canvass the academic literature on trade execution modelling for you, and help you design, backtest and optimize your execution strategy. Contact us to let us know how we can supercharge your trading.

Individual investors who only trade in small volumes may not need to consider an execution strategy. But institutional investors who wish to trade a large number of shares, such as investment banks, hedge funds and mutual funds, encounter the issue that large trades cause adverse price movements. If they attempt to trade the whole amount in one go, liquidity will thin out and they’ll quickly move through less and less favorable bid/ask levels in the limit order book. For this reason, traders will often split a large order into a series of smaller trades spread over a period of time. But there are trade-offs to a more passive execution strategy. Firstly, there is the opportunity risk that prices will move unfavorably before the whole order has been liquidated. Secondly, other traders may notice what you are doing and react accordingly.

Electronic exchanges typically make available not just the best bid and ask prices for a given security, but several layers of bid and ask prices along with the corresponding volumes. This data, known as “market microstructure” data, is exactly what we need to inform algorithms which attempt to optimize execution.

In this article, we’ll explore some of the mathematics around algorithms which optimize execution strategies.

Simple execution models

We first discuss two simple and well-known approaches to order execution, before examining more sophisticated approaches.

TWAP, or Time Weighted Average Price, involves splitting an order into equally sized pieces that are equally spaced in time. When breaking the order into N pieces to be executed over a time T, a fraction 1/N of the order is executed every T/N seconds. The idea is that the liquidity in the market has time to recover in between orders, avoiding slippage. The price achieved is simply the average of the N prices at each execution.
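A minimal sketch of a TWAP schedule in Python is shown below; the function name and arguments are purely illustrative.

```python
# A minimal TWAP schedule: split a parent order of total_shares into N equal
# child orders spaced evenly over a horizon of T seconds.
def twap_schedule(total_shares, N, T):
    """Return a list of (time_offset_seconds, shares) child orders."""
    child_size = total_shares / N
    interval = T / N                      # one child order every T/N seconds
    return [(i * interval, child_size) for i in range(N)]

# Example: 100,000 shares over one hour in 12 slices, i.e. ~8,333 shares every 5 minutes.
print(twap_schedule(100_000, 12, 3600))
```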

VWAP, or Volume Weighted Average Price, also breaks the order up into multiple orders spaced at regular time intervals. It differs from TWAP in that the amount transacted at each time step is weighted according to the relative trading volume. Relative trading volume must be estimated from an average over many days of historical data, and one must assume that the daily trading pattern is persistent. The price achieved is then a weighted average of the N prices at each execution.
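The sketch below shows the VWAP weighting, assuming we have a historical intraday volume profile (the average volume traded in each time bucket over many days); the numbers are illustrative.

```python
# Apportion the order across intraday buckets in proportion to historical volume.
historical_volume_profile = [1.8e6, 1.1e6, 0.7e6, 0.6e6, 0.9e6, 1.6e6]  # e.g. a U-shaped intraday pattern

def vwap_schedule(total_shares, volume_profile):
    """Shares to trade in each bucket, weighted by relative historical volume."""
    total_volume = sum(volume_profile)
    return [total_shares * v / total_volume for v in volume_profile]

print(vwap_schedule(100_000, historical_volume_profile))
```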

POV, or Percentage of Volume, is similar to VWAP except that instead of using historical average volume, the volume over, say, the last ten minutes is used. This has a clear advantage when today’s volume differs substantially from the historical average.

One could improve on both VWAP and POV by making use of today’s volume data while still incorporating the historical volume profile to forecast volume later in the day. More precisely, at each time step the amount remaining to be traded would be apportioned using the most recent volume data for the current time step, and the historical forecast for future time steps. To get even more fancy, you could try to adjust the historical forecast based on the actual volume traded so far today – a machine learning algorithm would be well suited to this.
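Here is one way the hybrid idea might look in code: the current bucket’s volume is estimated from what was just observed (as in POV), the remaining buckets from the historical profile (as in VWAP), and the remaining inventory is traded in proportion to the current bucket’s share of the total forecast. Names and numbers are illustrative assumptions.

```python
def hybrid_slice(remaining_shares, recent_volume, historical_forecast_remaining):
    """Shares to trade now, given observed recent volume and forecasts for later buckets."""
    total_forecast = recent_volume + sum(historical_forecast_remaining)
    return remaining_shares * recent_volume / total_forecast

# Example: 60,000 shares left, 1.2m shares traded in the most recent bucket,
# and the historical profile forecasts 0.8m, 0.9m and 1.5m for the remaining buckets.
print(hybrid_slice(60_000, 1.2e6, [0.8e6, 0.9e6, 1.5e6]))
```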

The main advantage of these three models is their simplicity. However, they are not generally optimal, and their application can potentially be detected by other market participants.

More complex models

A good place to start in exploring more sophisticated execution models is the Almgren-Chriss model. While the model relies on some data that may be difficult to know in practice, it illustrates some of the key issues that should be understood when developing your own strategy. See our article on optimal liquidation using the Almgren-Chriss model.
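As a taster, the sketch below computes the well-known Almgren-Chriss optimal liquidation trajectory in the continuous-time limit, where holdings decay like \(\sinh(\kappa(T-t))/\sinh(\kappa T)\) with \(\kappa^2 = \lambda\sigma^2/\eta\) (risk aversion times variance, divided by the temporary impact coefficient). The parameter values are placeholders, not calibrated estimates.

```python
import math

def almgren_chriss_holdings(X, T, sigma, eta, lam, steps=10):
    """Shares still held at each of steps+1 evenly spaced times in [0, T]."""
    kappa = math.sqrt(lam * sigma**2 / eta)
    return [X * math.sinh(kappa * (T - i * T / steps)) / math.sinh(kappa * T)
            for i in range(steps + 1)]

# Higher risk aversion (lam) front-loads the liquidation; lam -> 0 recovers a TWAP-like schedule.
print(almgren_chriss_holdings(X=100_000, T=1.0, sigma=0.3, eta=2.5e-6, lam=1e-6))
```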

Optimal execution as an optimal stopping problem

The problem of optimally executing a large trade can be cast as one of a familiar class of problems in mathematics – optimal stopping problems.

  • Optimal stopping problems in mathematics are a category of problems where, at any time \(t\), you can choose to “stop” and receive a certain reward which changes with time. The trouble is you only know what the reward is for the current time and earlier times. You do not know what the reward will be in the future. Should you “cash in your chips” now, or wait and hope the reward will be better in the future?

The relevance to finance here is obvious. Choosing when to execute trades or exercise an American option are both examples of optimal stopping problems.
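A classic concrete example is pricing an American put by backward induction on a binomial tree: at each node we compare the reward from stopping (exercising) now with the expected value of continuing. The sketch below is a minimal illustration with placeholder parameters.

```python
import math

def american_put_binomial(S0, K, r, sigma, T, steps):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral probability of an up move
    disc = math.exp(-r * dt)

    # payoff at the final time step, where we have no choice but to stop
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]

    # work backwards, taking the better of stopping now and continuing
    for i in range(steps - 1, -1, -1):
        values = [max(K - S0 * u**j * d**(i - j),                        # stop (exercise) now
                      disc * (p * values[j + 1] + (1 - p) * values[j]))  # wait one more step
                  for j in range(i + 1)]
    return values[0]

print(american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200))
```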

Optimal execution using stochastic control theory

In the book Algorithmic and High Frequency Trading by Cartea et al., the authors describe the use of stochastic optimal control and stopping methods to attack this problem.

  • Control theory is a field of mathematics that has applications to a wide range of engineering problems. Abstractly, the idea is to find an “input” to a dynamical system that achieves a desired “output” from the system. A simple example is cruise control, in which the throttle (system input) must be dynamically adjusted to achieve the desired constant speed (system output), for example, when the car begins going uphill. In the case of finance, the dynamical system is typically a stock price \(S(t)\) (usually assumed to follow geometric Brownian motion) or other market information, the system input is the choice of trading strategy, and the system output is the profit of the market participant. The goal is to choose the strategy or “input” which maximises profit.
  • Stochastic control theory is a subfield of control theory in which the time evolution of the dynamical system is not completely determined by the system input, but also contains a stochastic or probabilistic element. Financial applications of control theory obviously fall squarely into this category, since market variables such as stock prices are not determined solely by the actions of a single market participant, but have a very significant random element.

Finding an optimal strategy or algorithm for market interaction is a problem that arises across a wide range of trading and investing problems. Many of these problems can be cast as stochastic control problems.
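To make the idea concrete, the sketch below casts a liquidation problem as a discrete-time control problem and solves it by dynamic programming. The cost model is illustrative (quadratic temporary impact per trade, a running inventory penalty standing in for price risk, and a terminal penalty for anything left unsold), and because the price shocks are additive and unaffected by our trades, they drop out of the expectation and the backward recursion runs over inventory alone. This is a toy with the same structure as the problems treated by Cartea et al., not their exact formulation.

```python
def optimal_liquidation_dp(X, steps, eta, phi, terminal_penalty):
    """Optimal number of shares to sell at each step, starting from inventory X."""
    # value[x] = minimal cost-to-go when x shares remain
    value = [terminal_penalty * x**2 for x in range(X + 1)]
    policy = []
    for t in reversed(range(steps)):
        new_value, best_q = [0.0] * (X + 1), [0] * (X + 1)
        for x in range(X + 1):
            # cost of selling q now: impact + penalty on inventory carried + cost-to-go
            costs = [eta * q**2 + phi * (x - q)**2 + value[x - q] for q in range(x + 1)]
            q_star = min(range(x + 1), key=costs.__getitem__)
            new_value[x], best_q[x] = costs[q_star], q_star
        value = new_value
        policy.append(best_q)
    policy.reverse()

    # roll the optimal policy forward to get the trade schedule
    schedule, x = [], X
    for t in range(steps):
        q = policy[t][x]
        schedule.append(q)
        x -= q
    return schedule

# A higher inventory penalty phi (more risk aversion) front-loads the selling.
print(optimal_liquidation_dp(X=100, steps=10, eta=1.0, phi=0.2, terminal_penalty=10.0))
```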

Optimal execution using machine learning

Another possible approach to the optimal execution problem is to put to one side attempts to find an optimal theoretical solution, and allow algorithms to trawl through the vast quantities of freely available data and try to determine an effective strategy empirically.

The primary function of such an algorithm would be in estimating the likely shape of the order book. Optimal execution strategies depend critically on how trade volume is distributed among different bid/ask levels, because this determines how fast the price moves as you liquidate increasing amounts of inventory. Since the order book data may often be unknown or partial, various machine learning approaches could be effective in developing an empirical model of the order book. The algorithm would analyse how past trades of various sizes affected the top of the order book (taking into account, of course, how closely spaced the trades were in time).
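As a simple illustration of what such an empirical model might look like, the sketch below regresses observed mid-price moves against trade sizes to back out a crude linear impact coefficient. The arrays stand in for real tick data, and in practice one would also control for the spacing of trades in time, volatility and so on.

```python
import numpy as np

trade_sizes = np.array([500, 2_000, 10_000, 4_000, 25_000, 1_200])   # buy sizes in shares (illustrative)
mid_moves   = np.array([0.001, 0.004, 0.018, 0.009, 0.051, 0.002])   # subsequent mid-price change ($)

# least-squares slope through the origin: estimated impact per share traded
impact_per_share = (trade_sizes @ mid_moves) / (trade_sizes @ trade_sizes)
print(f"estimated impact: {impact_per_share:.2e} $ per share")

# predicted mid-price move from pushing a 50,000 share order straight through the book
print(f"predicted move for 50,000 shares: {impact_per_share * 50_000:.3f} $")
```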

Machine learning is increasingly in vogue in a wide range of fields, including finance. See this useful summary of a report issued by J.P.Morgan about the future of data science and big data in the financial services industry.

In their paper Reinforcement Learning for Optimized Trade Execution, Nevmyvaka et al. examine the effectiveness of machine learning in finding effective execution strategies. See also the more recent paper Double Deep Q-Learning for Optimal Execution by Ning et al.

The execution strategy to be optimized will take as input, at regular time intervals \(t_i\) for \(i=0,\ldots,n\) , a set of observable market variables (principally market microstructure). It produces as output a limit order, or ask price, at which we are willing to execute all remaining inventory. The algorithm may not wait around forever for the best possible price – so it is reasonable to assume that there is a maximum time \(t_n = T\) at which all remaining inventory must be executed regardless of market prices.

By taking a large number of different stocks, and by considering the same stock at different times, we have a large number of data sets of the form \(S(t_i)\) for \(i=0,\ldots,n\). Our execution strategy must depend on time, because as we near the end of the interval, we are running out of time to transact the remaining inventory. As mentioned in the paper by Nevmyvaka, it’s reasonable to make the Markovian assumption that the optimal strategy depends only on market microstructure at the current time step, and not on what it may have been at previous time steps.

Thus, whether proceeding by machine learning or by some other method, determining the optimal execution strategy for each step can be done by working backwards from the final time step, in a similar manner to how one prices an American option using Monte Carlo. At time \(t_n = T\) we already know what the strategy is – we must execute all remaining shares regardless of market prices. At time \(t_{n-1}\), for each individual data set we would like to execute any shares that can be executed at a price equal to or better than they could be at time \( T \). The machine learning algorithm must determine, on average after considering all the data sets, the optimal map from the market microstructure (bid/ask levels and volumes) to a price at which we are willing to transact at that time step. Continuing to work backwards, we eventually come up with an optimal strategy at each step.
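A skeleton of this backward induction, in the spirit of the Longstaff-Schwartz method for American options, is sketched below. For each episode we have, at every step, a feature vector of microstructure variables and the price currently attainable; working backwards, we regress the value of waiting on the current features, and the learned rule at each step is “execute now if the attainable price beats the predicted value of waiting”. All data and features here are simulated purely to illustrate the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_episodes, n_steps, n_features = 5_000, 10, 3

features = rng.normal(size=(n_episodes, n_steps, n_features))   # stand-in microstructure features
prices = 100 + np.cumsum(rng.normal(scale=0.1, size=(n_episodes, n_steps)), axis=1)

value = prices[:, -1].copy()      # at the deadline we must execute at whatever price prevails
models = [None] * n_steps

for t in range(n_steps - 2, -1, -1):
    X = np.column_stack([np.ones(n_episodes), features[:, t, :]])
    beta, *_ = np.linalg.lstsq(X, value, rcond=None)             # regress the value of waiting on features
    models[t] = beta
    continuation = X @ beta
    execute_now = prices[:, t] >= continuation                   # learned stopping rule at step t
    value = np.where(execute_now, prices[:, t], value)

print("average price achieved by the learned rule:", value.mean())
print("average price from always waiting until the deadline:", prices[:, -1].mean())
```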

The question remains – what kind of relationship should we assume between the market microstructure and the optimal transaction price at the same time step? We might, with some careful thought, make a guess as to the form of the mathematical function, so that the machine learning algorithm can optimize the parameters. Another option is to use a neural network, which may succeed in finding the form of the relationship by itself. Experience shows that making little effort and expecting a machine learning algorithm to work magic is not always successful. Guiding the process using human theoretical insight and empirical observation, and then using machine learning techniques merely to optimize, will often yield the best results.