Optimal Liquidation Algorithms – the Almgren-Chriss Model

Unwinding or liquidating a position is a trade-off. Liquidate too quickly and you may suffer price slippage as the market order walks the book. Liquidate too slowly with more conservative limit orders, and you are exposed to the risk of adverse price moves. The concept of splitting a large order into a number of smaller orders to be executed over a certain time period is well-known to traders. Exchanges and many other market participants are therefore motivated to develop liquidation algorithms which behave optimally. In this post we'll discuss the Almgren-Chriss model. For more details, consult The Financial Mathematics of Market Liquidity by Guéant.

We assume a trader wants to unwind a position of \(q_0\) shares over a time interval \([0,T]\). Writing \(q_t\) for the trader's inventory at time \(t\), we have

\[dq_t = v_t dt, \]

where \(v_t < 0\) is the rate of liquidation. If the trades were executed in a finite number of discrete blocks, then \(v_t\) would be a sum of delta functions, for example. The mid price of the stock is modelled as

\[ dS_t = \sigma dW_t + kv_t dt\]

for \(k>0\). The first term is simply (arithmetic) Brownian motion; note that, for simplicity, the price is assumed to be normally distributed rather than the usual lognormal. The second term means that the price drops by an amount proportional to the number of shares our trader executes. This is the permanent market impact.

But the most significant equation here is the equation representing how the rate of liquidation \(v_t\) affects the price obtained for the shares. This is the instantaneous part of the market impact, which in the model has no permanent impact on the market price. We assume that the price obtained for the shares executed at time \(t\) is

\[S_t + g\left(\frac{v_t}{V_t}\right),\]

where \(V_t\) represents the total market volume and \(g<0\) when \(v_t < 0\). The choice of the increasing function \(g\) is actually the key to the model. It quantifies how much worse the average price obtained for the shares traded at time \(t\) is when the rate of liquidation \(v_t\) is higher (i.e. more negative). The original model of Almgren and Chriss chose the function \(g\) to be linear. This means that if the trader liquidates twice as many shares at time \(t\), the average price obtained for those shares will be twice as far from the mid price. The cash earned by the trader is then simply the number of shares liquidated multiplied by the average price obtained, i.e.

\[dX_t = - v_t\left( S_t + g\left(\frac{v_t}{V_t}\right) \right) dt.\]
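To make these dynamics concrete, here is a minimal discrete-time simulation sketch in Python. It assumes a constant market volume \(V\), a linear temporary impact \(g(\rho) = \eta\rho\) and a constant liquidation rate; all parameter values are purely illustrative.

```python
import numpy as np

def simulate_liquidation(q0=1e6, T=1.0, n_steps=250, sigma=20.0, k=2e-7,
                         eta=0.5, V=5e6, S0=100.0, seed=0):
    """Euler simulation of the Almgren-Chriss dynamics for a constant
    liquidation rate v_t = -q0 / T (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    v = -q0 / T                                 # constant (negative) liquidation rate
    q, S, X = q0, S0, 0.0
    for _ in range(n_steps):
        exec_price = S + eta * (v / V)          # temporary impact g(v/V) = eta * v / V
        X += -v * exec_price * dt               # cash earned: dX = -v (S + g) dt
        S += sigma * np.sqrt(dt) * rng.standard_normal() + k * v * dt  # arithmetic BM + permanent impact
        q += v * dt                             # inventory runs down: dq = v dt
    return q, S, X

q_T, S_T, X_T = simulate_liquidation()
print(f"final inventory {q_T:.1f}, final mid {S_T:.2f}, cash {X_T:,.0f}")
```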

If the midprice were assumed to be close to constant over time, the optimal strategy would be to liquidate as slowly as possible. This would mean that the shares would all be sold at close to the mid price. However, liquidators are not only unwilling to wait forever, but also typically wish to liquidate the portfolio at close to the current market price. Liquidating over a longer time interval means that the price may fluctuate away from the current price. Some kind of “risk appetite” consideration must therefore be included in the model.

This requirement is not actually encoded in the differential equation for \(X_t\) above. Rather, it is encoded in the quantity we wish to optimize. The way this is done is to not simply optimize the final cash holding \(X_T\), but also to penalise its variance. This can be done by choosing the function to be optimized as something like \(\mathbb{E}(X_T) – \frac{\gamma}{2} \mathbb{V}(X_T)\) or \(\mathbb{E}(-e^{- \gamma X_T})\), for some constant \(\gamma > 0\). How much one penalises variance by choosing \(\gamma\) is essentially an arbitrary decision in the model. Of course, longer trading horizons give rise to more variance in \(X_T\) because \(S_t\) becomes less predictable when allowed more time to drift. Thus this parameter will determine the rate of liquidation based on risk appetite.

Finding the optimal trading strategy \(q(t)\) is a variational problem which requires minimising the functional

\[J(q) = \int_0^T{\left(V_tL\left(\frac{q'(t)}{V_t}\right) + \frac{1}{2} \gamma \sigma^2 q(t)^2\right)dt},\]

where \(L(\rho) = \rho g(\rho) \).
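In the special case of constant market volume \(V_t \equiv V\) and linear impact \(g(\rho) = \eta\rho\) (so that \(L(\rho) = \eta\rho^2\)), the Euler-Lagrange equation for \(J\) reduces to \(q'' = \kappa^2 q\) with \(\kappa^2 = \gamma\sigma^2 V/(2\eta)\), giving the well-known trajectory \(q(t) = q_0 \sinh(\kappa(T-t))/\sinh(\kappa T)\). A short sketch (parameter values are illustrative only):

```python
import numpy as np

def ac_trajectory(q0, T, gamma, sigma, V, eta, n_steps=100):
    """Optimal inventory path q(t) = q0 * sinh(kappa*(T-t)) / sinh(kappa*T)
    under constant market volume and linear impact g(rho) = eta * rho."""
    kappa = np.sqrt(gamma * sigma**2 * V / (2.0 * eta))
    t = np.linspace(0.0, T, n_steps + 1)
    return t, q0 * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

# Higher gamma (more risk aversion) front-loads the liquidation.
for gamma in (1e-7, 1e-6, 1e-5):
    t, q = ac_trajectory(q0=1e6, T=1.0, gamma=gamma, sigma=20.0, V=5e6, eta=0.5)
    print(f"gamma={gamma:.0e}: half the inventory gone by t={t[np.argmax(q <= 5e5)]:.2f}")
```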

Guéant also discusses several extensions of the model, including:

  • Incorporating a drift term into the equation for the evolution of the stock price to allow the trader an opinion on the future trajectory of the stock
  • Placing a lower and/or upper bound on the liquidation rate
  • Considering the liquidation of portfolios of multiple stocks

The Almgren-Chriss model implemented in practice

If you attempted to implement the Almgren-Chriss model in practice, there are a number of issues that would arise. In particular, you would need to specify the parameters of the model, which may be difficult to determine.

The first is the shape of the market impact function, which represents the manner in which the price moves as you execute a certain volume of the asset. A simple assumption is a linear market impact function. However, it depends on the structure of the order book, which could take many different shapes, and may change over time. If you have access to the order book data, you could investigate whether the order book shape is sufficiently constant over time to warrant doing some kind of backtest/fitting. But your execution strategy would cease to be optimal if the shape of the order book deviated from your assumptions. And if you don’t have access to the order book data, this is going to be much harder.

The second is the risk appetite parameter, or how much one penalizes the variance in the final PnL. There are two competing factors in the optimal solution. First, the slower you liquidate, the better the price you get. Second, the slower you liquidate, the more likely the price is to move. How you choose to balance these two competing factors is essentially arbitrary. And, of course, there may be other reasons why you need to liquidate your entire inventory within a certain amount of time, regardless.

The third is your view on the likely future movement of the asset. Clearly, this will have a profound impact on your execution strategy. For example, if you believed the price was going to drop significantly soon, you’d want to use a high rate of liquidation to make sure you had liquidated your inventory before the asset drops too much. But if you had no view on the future asset trajectory, you could neglect this issue.

And finally, something not considered in the model is the need to make sure your execution strategy is unpredictable so other market participants can’t anticipate your trades. A predictable rate of execution is a great way to get taken advantage of.

Despite the above, studying this model is a great way to clarify your thinking before designing an execution strategy that suits your own specific application.

Volatility smoothing algorithms to remove arbitrage from volatility surfaces

Need help building a volatility smoothing algorithm? Our quant consulting service can help. Contact us today.

See also our article on generating volatility surfaces from options data in C++.

Implied volatility surfaces and smiles constructed by fitting a cubic spline to raw market data may contain arbitrage. In fact, even if the market data points used do not contain arbitrage, cubic interpolation between data points may introduce it. It is therefore usually desirable to find the best fit of a cubic spline to the data points, under the restriction that the result be arbitrage free. Unlike the basic interpolation approach, the spline need not pass through the data points. This is called volatility smoothing.

There are two kinds of arbitrage on volatility surfaces that we need to guard against:

  • Calendar arbitrage. This is where the volatility surface allows a European option with a shorter maturity to be more valuable than an option with a longer maturity, which is impossible (in the absence of dividends). A simple way to see this is to notice that a longer duration has the same effect as a higher volatility, as it gives the volatility more time to act. It’s well-known that higher volatility increases (rather than decreases) the value of the option since it increases the upside but not the downside (since the holder is protected from downside by the strike).
  • Butterfly arbitrage. In the strike direction, it's clear that the price of a call must decrease as the strike increases (more precisely, the first derivative of call price with respect to strike must be less than or equal to zero, with the opposite true for puts). Furthermore, the call price function must be convex, meaning that the second derivative with respect to strike is greater than or equal to zero. To see this, consider selling two calls at strike \(K\), and buying two calls, one at a strike slightly below \(K\), and one at a strike slightly above \(K\). The value of this position is given by the below expression, where \(C\) represents the call price function. It's easy to see that the payoff at maturity of this position is non-negative. It has value 0 if \(S(T) < K-\Delta K\) or \(S(T) > K+\Delta K\), and positive value otherwise (easy to see by plotting the payoff). Since the payoff is non-negative, the position's price must also be non-negative; dividing by \((\Delta K)^2\) and taking the limit as \(\Delta K \to 0\), we see that the second derivative must be non-negative.

\[ C(K-\Delta K) - 2C(K) + C(K+\Delta K)\]

\[ = \left( C(K+\Delta K) - C(K)\right) - \left(C(K) - C(K-\Delta K)\right)\]
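As a quick illustration, the following sketch checks a strip of call prices at a single maturity against these two conditions using finite differences; the strikes and prices are made up for the example.

```python
import numpy as np

def arbitrage_check(strikes, calls, tol=1e-10):
    """Flag violations of dC/dK <= 0 (monotonicity) and d2C/dK2 >= 0 (convexity)
    for a strip of call prices at one maturity."""
    K = np.asarray(strikes, dtype=float)
    C = np.asarray(calls, dtype=float)
    slopes = np.diff(C) / np.diff(K)
    monotone_ok = np.all(slopes <= tol)           # call price non-increasing in strike
    convex_ok = np.all(np.diff(slopes) >= -tol)   # slopes non-decreasing <=> convex
    return monotone_ok, convex_ok

# Made-up call prices containing butterfly arbitrage (the convexity check fails).
strikes = [80, 90, 100, 110, 120]
calls = [21.0, 12.5, 2.0, 1.5, 0.4]
print(arbitrage_check(strikes, calls))   # (True, False)
```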

We recommend the approach of M. R. Fengler in his paper Arbitrage-Free Smoothing of the Implied Volatility Surface. Instead of fitting a spline to the graph of volatility vs moneyness, Fengler fits call price vs moneyness. An advantage of this is that the no-arbitrage restrictions take a simpler form in terms of call price.

The surface fitting is done using a least squares fit, with a number of constraints. The heart of the algorithm is therefore a constrained quadratic optimization procedure. In Python, this can be achieved using scipy.optimize.minimize with the parameter method='SLSQP'. The mathematical difficulty is mainly around understanding the constraints and implementing them accurately.

We’ve implemented Fengler’s algorithm in python. The algorithm runs very quickly on a single vol surface. However, since historical volatility data has, for each date, a large number of vol surfaces (one for each tenor), the number of surfaces to be processed can easily proliferate into the millions. In this case one may wish to consider a C++ implementation or at least a multicore implementation in python.
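To give a flavour of the computation, here is a toy sketch of the constrained least-squares step. It fits call prices directly at the pillar strikes (rather than fitting spline coefficients, as Fengler does) subject to monotonicity and convexity constraints, using scipy.optimize.minimize with method='SLSQP'. It is a deliberate simplification intended only to show the structure of the optimization; the weights argument corresponds to the per-point weights discussed below.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_call_prices(strikes, raw_calls, weights=None):
    """Toy smoothing: find fitted call prices close to the raw prices (weighted
    least squares) subject to a non-increasing, convex shape in strike."""
    K = np.asarray(strikes, float)
    C = np.asarray(raw_calls, float)
    w = np.ones_like(C) if weights is None else np.asarray(weights, float)

    objective = lambda x: np.sum(w * (x - C) ** 2)

    cons = []
    for i in range(len(K) - 1):
        # monotonicity: C[i] - C[i+1] >= 0  <=>  dC/dK <= 0
        cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i] - x[i + 1]})
    for i in range(1, len(K) - 1):
        # convexity: slope on the right >= slope on the left
        cons.append({'type': 'ineq',
                     'fun': lambda x, i=i: (x[i + 1] - x[i]) / (K[i + 1] - K[i])
                                         - (x[i] - x[i - 1]) / (K[i] - K[i - 1])})

    res = minimize(objective, C, method='SLSQP', constraints=cons)
    return res.x

strikes = [80, 90, 100, 110, 120]
raw = [21.0, 12.5, 2.0, 1.5, 0.4]          # contains butterfly arbitrage
print(np.round(smooth_call_prices(strikes, raw), 3))
```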

To illustrate the algorithm, we start with 8 pillar points (moneyness/volatility pairs) which make up the raw data of a vol surface. We’ve deliberately chosen data which contains significant arbitrage. We’ve calculated the Black-Scholes call prices corresponding to these points and plotted them as the blue dots in the below graph.

The orange line is the arbitrage free cubic spline generated by our implementation of Fengler’s approach. You can see that it very effectively solves the problem of the out of place at-the-money data point which is entirely inconsistent with an arbitrage free surface.

We can also convert the call prices back to implied volatilities, yielding the following graph. For this graph, we have simply joined the data points by straight lines for illustration purposes.

We found we had to make one addition to Fengler’s approach as described in his paper. Fengler considers a set of weights for each data point in the fitting. We found we had to weight each data point by 1/vega to achieve an accurate result. This is because at the wings of the volatility surface, where vega is very small, a small change in call price corresponds to a huge change in volatility. This means that when converting the fitted call prices back to volatilities, the surface will otherwise be a very poor fit in the wings.

Fengler’s paper is not limited to one dimensional volatility surfaces (that is, smiles). It can also be used for two dimensional volatility surfaces which incorporate both moneyness and maturity. His paper details how to extend the method to include maturity.

We provide volatility smoothing consulting, along with a wide range of quantitative finance consulting services.

You may also wish to check out our article on converting volatility surfaces between moneyness and delta.

Does barrier option valuation depend on volatility and interest rate term structure?

It's well-known that vanilla option valuation does not depend on the term structure of volatility and interest rates. This means that the price depends only on the total variance (equivalently, the root-mean-square volatility) and the average interest rate between the valuation date and maturity, not on how those quantities are distributed within the interval.

A way to visualize this and understand it intuitively is as follows. Consider a large set of paths of the underlying which have been generated by a Monte Carlo routine. The value of the option is the average over all paths of the quantity \(\max(S(T) - K, 0)\). Now, imagine stretching and compressing the paths in different places as if they were plasticine, corresponding to concentrating volatility more in some places than others. It's as if the underlying were moving faster in some regions, and slower in others, yet \(S(T)\) remains the same for each path. Thus, the price remains the same.

Interest rates affect the underlying’s drift term. Yet, as for volatility, \(S(T)\) depends only on the total proportional increase that the drift term bestows on the underlying, not on where in the interval this increase occurs.

What about barrier options? There are a few cases to consider.

First, we consider the case of a full barrier option. This means that the barrier is monitored for the full length of the deal from the valuation date to maturity, as opposed to only being monitored for a subset of it. We also assume that the underlying's drift term is zero (as occurs when interest rates are zero, for example). In this case, valuation is actually still independent of volatility term structure. This can be understood by realizing that stretching or compressing the paths in different places does not change whether they breach the barrier, but only when they breach the barrier. Thus whether a given path has knocked in or knocked out remains unchanged.

Next, we consider the case of a partial or window barrier option. This means that the barrier is only monitored some of the time, with the monitoring period starting after the valuation date and/or ending before maturity. We still assume that the underlying drift is zero. As mentioned above, while a different volatility term structure does not change whether a path breaches the barrier, it does change when it does. Thus, it can affect whether the path breaches the barrier inside the monitoring window or outside, thus changing whether the path knocks in/out or not. Thus, for partial and window barrier options, valuation is not independent of volatility term structure.

Finally, let's consider the case of a non-zero drift term. In this case, valuation is not independent of volatility or interest rate term structure, regardless of whether it is a full barrier option or a partial/window barrier option. To understand this, consider that the movements in the underlying due to volatility are proportional to the current underlying price. If the underlying is monotonically drifting upwards throughout the monitoring window, then volatility applied early on will cause smaller changes in the underlying than the same volatility applied towards the end of the monitoring window. Thus, if the volatility term structure concentrates volatility towards the end of the interval, after the underlying has had time to drift upwards, it is more likely to push the underlying above an upper barrier. Thus, volatility term structure and interest rate term structure affect knock-out/knock-in probabilities and therefore affect valuation.
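The following Monte Carlo sketch illustrates the zero-drift case numerically. It prices a discretely monitored up-and-out call under two volatility term structures with the same total variance, once with the barrier monitored over the whole interval and once over a window covering only the second half. With zero drift, the full-barrier prices agree up to Monte Carlo and discretisation error, while the window-barrier prices can differ noticeably. All parameters are illustrative.

```python
import numpy as np

def up_and_out_call_mc(sigmas, r=0.0, S0=100.0, K=100.0, B=120.0, T=1.0,
                       window=(0.0, 1.0), n_paths=100_000, n_steps=252, seed=1):
    """Monte Carlo price of a discretely monitored up-and-out call.
    `sigmas` gives the volatility on each time step (a term structure);
    the barrier is only monitored for t inside `window`."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.arange(1, n_steps + 1) * dt
    sig = np.asarray(sigmas, float)
    logS = np.full(n_paths, np.log(S0))
    alive = np.ones(n_paths, dtype=bool)
    for i in range(n_steps):
        z = rng.standard_normal(n_paths)
        logS += (r - 0.5 * sig[i] ** 2) * dt + sig[i] * np.sqrt(dt) * z
        if window[0] <= t[i] <= window[1]:
            alive &= logS < np.log(B)            # knock out if the barrier is breached
    payoff = np.where(alive, np.maximum(np.exp(logS) - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

n = 252
flat = np.full(n, 0.20)                                       # flat 20% vol
backloaded = np.concatenate([np.full(n // 2, 0.10),
                             np.full(n - n // 2, np.sqrt(0.07))])  # same total variance
for sig in (flat, backloaded):
    full = up_and_out_call_mc(sig, window=(0.0, 1.0))
    partial = up_and_out_call_mc(sig, window=(0.5, 1.0))
    print(f"full barrier: {full:.3f}   window barrier: {partial:.3f}")
```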

GPS consulting – mathematics and software development for global positioning systems

GPS satellites and receivers are being applied in a huge number of industries including aviation, agriculture, financial fraud identification, robotics (navigation), and landscape surveying.

Developing software to process GPS data requires an understanding of the mathematics involved in GPS coordinate systems, including coordinate transformations between latitude/longitude/height and ECEF coordinates. GPS data often must be combined with other sensor data and run through a mathematical calculation to produce the required output data or system behaviour.

Our consultants can assist you in formulating the correct mathematical equations for your GPS application, and in implementing them in a variety of languages such as Python or C++.

Financial Computation using Nvidia GPUs

While GPUs were originally invented for image processing, their powerful capabilities are now being applied to computation problems that have nothing to do with graphics. As GPUs have far more cores than CPUs (thousands of simpler cores rather than tens), they can be up to 100x faster for highly parallelizable computations such as machine learning and data analysis.

Did you know that Google has used Nvidia GPUs to train its Google Translate machine learning algorithms?

In particular, Nvidia GPUs find many applications in the financial services industry, which is increasingly making use of massive data sets and AI / deep learning. GPU computation is ideal for Monte Carlo simulations, used extensively in the finance industry, as each path can be processed independently and simultaneously.

CUDA is a parallel computing platform and programming model from Nvidia which allows users to execute the highly parallelizable parts of their code on an Nvidia GPU.
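As a simple illustration of the one-path-per-thread idea, here is a sketch of a Monte Carlo vanilla call pricer written with CuPy, a Python library whose NumPy-like array operations run on an Nvidia GPU via CUDA (replace cupy with numpy for a CPU fallback). Parameters are illustrative.

```python
import cupy as cp  # requires an Nvidia GPU with CUDA; numpy is a drop-in CPU fallback

def mc_call_price(S0=100.0, K=100.0, r=0.02, sigma=0.2, T=1.0, n_paths=10_000_000):
    """Price a European call by Monte Carlo; every path is independent,
    so the whole batch is evaluated in parallel on the GPU."""
    z = cp.random.standard_normal(n_paths)
    ST = S0 * cp.exp((r - 0.5 * sigma**2) * T + sigma * T**0.5 * z)
    payoff = cp.maximum(ST - K, 0.0)
    return float(cp.exp(-r * T) * payoff.mean())

print(mc_call_price())
```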

Converting Volatility Surfaces from Moneyness to Delta Using an Iterative Method

It often comes up in quantitative finance that you want to convert a vol surface plotted against moneyness, to a vol surface plotted against delta.

See Options, Futures and Other Derivatives by John Hull for a reference on pricing formulas for European options. In the Black-Scholes framework, the delta of a call option is given by

\[\Delta = N(d_1), \]

where \(N\) represents the cumulative distribution function of the standard normal distribution, and

\[d_1 = \frac{\log(S_0/K) + (r + \sigma^2/2)T}{\sigma \sqrt{T}}. \]

(For a put, \(\Delta = N(d_1) - 1\).) Rearranging for moneyness, we have

\[ \frac{S_0}{K} = \exp\left(N^{-1}(\Delta) \sigma \sqrt{T} – (r + \sigma^2/2)T \right). \]

Now, our volatility surface would typically be specified using a number of moneyness and volatility pairs \((m_i,v_i)\) where the moneyness values would typically be something like

\[ \{m_i\} = \{0.7, 0.8, 0.9,1,1.1,1.2,1.3\}. \]

When a volatility value is needed for a moneyness in between these points, the firm will typically have implemented an interpolation function,

\[I: \text{ moneyness} \to \text{ volatility},\]

which would typically use a monotonic cubic spline. Inverting this function may be a lot of work, as it would require working out the exact coefficients generated by the cubic spline fitting. Even with an explicit formula, the spline is defined piecewise, which makes inverting it complicated.

Given some delta \(\Delta\) , we want to find a volatility \(\sigma\) such that the moneyness corresponding to that volatility according to the cubic spline interpolation is the same as the moneyness from the above formula. This requires solving the following equation for moneyness \(m\):

\[ m = \exp\left(N^{-1}(\Delta) I(m) \sqrt{T} – (r + I(m) ^2/2)T \right). \]

An equation like this should be solved numerically. This is doubly true due to the complicated definition of the function \(I\). While inverting \(I\) would be difficult, evaluating it is easy. This motivates solving using fixed point methods which only require the function to be evaluated.

What we are looking for is a fixed point of the map \(f\) defined below, i.e. a point \(m\) such that \(f(m) = m\). Thus, in the remainder of the article, we'll look at an iterative fixed point method for solving this equation. The idea is simple. We start with some initial point \(m_0\), and repeatedly apply the map

\[ f(m) = \exp\left(N^{-1}(\Delta) I(m) \sqrt{T} – (r + I(m) ^2/2)T \right) \]

until the change in \(m\) is less than some small tolerance.
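Here is a minimal Python sketch of the iteration (assuming for now that it converges, which we discuss below). Scipy's PCHIP monotonic cubic interpolator stands in for the firm's interpolation function \(I\), and the pillar volatilities are made up.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.stats import norm

# Made-up vol smile: moneyness (S0/K) pillars and volatilities.
moneyness = np.array([0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3])
vols      = np.array([0.28, 0.25, 0.22, 0.20, 0.21, 0.23, 0.26])
I = PchipInterpolator(moneyness, vols)        # monotonic cubic interpolation

def moneyness_from_delta(delta, T, r, tol=1e-10, max_iter=50):
    """Fixed-point iteration m_{n+1} = f(m_n) for the moneyness S0/K
    corresponding to a given call delta."""
    m = 1.0                                    # start at the money
    for _ in range(max_iter):
        sigma = float(I(m))
        m_new = np.exp(norm.ppf(delta) * sigma * np.sqrt(T)
                       - (r + 0.5 * sigma**2) * T)
        if abs(m_new - m) < tol:
            return m_new, float(I(m_new))
        m = m_new
    raise RuntimeError("fixed-point iteration did not converge")

m25, vol25 = moneyness_from_delta(delta=0.25, T=0.5, r=0.02)
print(f"25-delta call: moneyness {m25:.4f}, vol {vol25:.4f}")
```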

The critical question is: under what circumstances does this iterative procedure actually converge?

According to the Banach fixed-point theorem, this process will converge to a unique fixed point if \(f\) is a contraction mapping, which in the context of a real-valued function means

\[|f(m_1) - f(m_2)| \leq L |m_1 - m_2|, \]

for some constant \(L \in [0,1)\). This is also known as the Lipschitz condition, and it is well known that

\[L = \sup_m |f'(m)|, \]

where the supremum is of course taken over the domain of interest. Thus, our procedure will converge if \(|f'(m)|<1.\) We calculate,

\[ f'(m) = f(m) I'(m) \left( N^{-1}(\Delta) \sqrt{T} - I(m)T \right). \]

Numerical evidence shows that this derivative does not in general have an absolute value smaller than one, but typically does after just one iteration of our map. Our experience is that this method will almost always converge for all “reasonable” volatility surfaces, and usually within only 2 or 3 iterations!

A possible alternative to searching for a fixed point is to use Newton’s method to search for a zero of the function \(F(m) = m – f(m).\)

Order Imbalance in Algorithmic Trading

An order imbalance occurs when the buy volume significantly exceeds the sell volume in the order book, or vice versa. Order imbalances are often caused by news of a significant development that is perceived to affect the value of the stock. It is well-known that order imbalances are an effective predictor of future stock price movement. If demand to buy exceeds the available liquidity, the price will likely move up. If demand to sell is too high for the interest on the buy side to absorb, the price will likely fall. Thus, anyone engaging in algorithmic trading will want to develop algorithms that respond effectively to imbalance signals.

A reasonable definition of order imbalance is

\[ I = \frac{V_b – V_a}{ V_b + V_a },\]

where \(V_b\) and \(V_a\) are the best (or L1) bid and ask volumes. Alternatively, and depending on the application, these volumes may be defined to include multiple levels of the limit order book (a machine learning algorithm would be well suited to determining the complicated relationship between the volume at different levels and the most probable price movement).

A simple approach is described in the book High Frequency Trading by Easley et al. The authors define a “microprice” quantity as a weighted average of the bid and ask price by

\[P_\text{micro} = P_b \frac{V_a}{V_a + V_b} + P_a \frac{V_b}{V_a + V_b}, \]

where \(P_b\) and \(P_a\) are the best (or L1) bid and ask prices, and \(V_b\) and \(V_a\) are the corresponding L1 volumes. The micro price will be closer to the bid price if there is higher volume on the ask side, and closer to the ask price if there is higher volume on the bid side. They then propose to cross the spread on a buy order when \(P _\text{micro}\) is sufficiently close to the ask price, i.e.,

\[P_\text{micro} > P_a – k(P_a – P_b), \]

and analogously for a sell order. Here, \(k\) is some constant specifying the tolerance, which would have to be determined by some kind of tick data analysis technique such as machine learning.
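A toy sketch of these quantities and the resulting signal, with made-up L1 quotes and an arbitrary tolerance \(k\):

```python
def imbalance(bid_vol, ask_vol):
    """Order imbalance I = (V_b - V_a) / (V_b + V_a), in [-1, 1]."""
    return (bid_vol - ask_vol) / (bid_vol + ask_vol)

def microprice(bid, ask, bid_vol, ask_vol):
    """Volume-weighted average of the L1 bid and ask prices."""
    return (bid * ask_vol + ask * bid_vol) / (bid_vol + ask_vol)

def buy_signal(bid, ask, bid_vol, ask_vol, k=0.2):
    """Cross the spread with a buy when the microprice is within k spreads of the ask."""
    return microprice(bid, ask, bid_vol, ask_vol) > ask - k * (ask - bid)

# Made-up L1 quotes: heavy bid-side volume pushes the microprice towards the ask.
bid, ask, bid_vol, ask_vol = 100.00, 100.02, 900, 150
print(imbalance(bid_vol, ask_vol), microprice(bid, ask, bid_vol, ask_vol),
      buy_signal(bid, ask, bid_vol, ask_vol))
```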

In the book Algorithmic and High Frequency Trading by Cartea et al. the authors discuss a Markov chain approach to modelling the order imbalance. To discretise the problem, order imbalance values are placed into five buckets. A transition matrix is fitted to data. The transition matrix represents the probability of being in each of the five buckets at the next time step, given the current bucket. They also generate data showing the probability of positive and negative price moves based on the current order imbalance bucket.
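A sketch of the bucketing and transition-matrix estimation step; the imbalance series below is just random noise, purely to illustrate the mechanics.

```python
import numpy as np

def fit_transition_matrix(imbalance_series, n_buckets=5):
    """Bucket an order-imbalance series into n_buckets equal-width bins on [-1, 1]
    and estimate the Markov transition matrix by counting transitions."""
    edges = np.linspace(-1.0, 1.0, n_buckets + 1)
    states = np.digitize(imbalance_series, edges[1:-1])   # bucket index 0..n_buckets-1
    counts = np.zeros((n_buckets, n_buckets))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

rng = np.random.default_rng(0)
series = np.clip(rng.normal(0.0, 0.4, size=10_000), -1, 1)   # stand-in for real imbalance data
P = fit_transition_matrix(series)
print(np.round(P, 3))   # row i: probabilities of moving from bucket i to each bucket
```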

Financial Model Validation Consulting and Advisory Services

Are you looking for model validation consulting or advisory services? Our PhD quants have you covered! We provide model validation and model creation consulting services to the financial services industry including banks, hedge funds and trading firms. Contact us to learn more.

Mathematicians have an ability to think clearly and precisely that is rare among finance professionals. We’re excellently placed to provide model validation consulting services. Learn how I found a critical conceptual error in risk modelling work by one of the largest financial consulting firms in the world.

There are two main kinds of models that quantitative analysts are called on to validate in the financial services: derivative pricing models, and risk models.

Validating derivative pricing models

Much of derivative pricing theory is now pretty standard and well-worn. However, there are some choices to be made when validating the appropriateness of the choice of model.

Firstly, there's the choice of whether to use a computationally slower but more accurate numerical model (such as Monte Carlo, local volatility or stochastic volatility), vs a fast but approximate analytical model. This choice arises with Asian options, where a fast analytic method is known (moment matching), but it relies on the assumption that a sum of lognormal distributions is lognormal (which is not actually true). Similarly, there exist analytic Black-Scholes formulae for pricing barrier options. However, these models assume that volatility and interest rates are constant. Since volatility term structure has a huge impact on the valuation of barrier options, these models sacrifice a lot of accuracy for speed and simplicity. Whether the trade-off is worth it can depend on whether the model is being used for risk purposes (such as a market risk VaR calculation), or for front office pricing.

Another issue that arises is the choice of volatility input. Since exotic options are typically not liquid enough to allow for the construction of an implied volatility surface, the use of the European volatility surface must be justified somehow.

Once a model is chosen, there is often no question, in principle, of how to price the derivative. Validating derivative pricing models is thus often mainly about checking the correctness of the coding implementation. A standard way to do this is to build a second, independent model against which to compare the output of the original model. Since it's impossible to run the two models with all possible inputs, usually one would try to generate a set of test parameters which cover every significant discrete case, such as each possible ordering of date parameters and date coincidences. It is also important to compare the behaviour of the model to the product description, since the fact that the two models agree does not necessarily mean they correctly implement the intent of the product description. Finally, one should check boundary cases, such as pricing very close to a barrier, very far from a barrier, or after knock-out/knock-in (in the case of barrier options).

An important step is checking the model under stressed scenarios, including very low or very high volatility, and near-zero or negative rates.

However, not all derivatives can be priced with a well-known and standard method. Monte Carlo and other numerical models can require careful work to ensure the model is converging correctly under all circumstances. Custom derivatives can arise which require some ingenuity to price. Examples like high-dimensional derivatives with a large number of underlying assets can require novel mathematics to price, as standard methods are simply not fast enough on current computer hardware. In some cases, pricing early exercise optionality is mathematically non-trivial and/or computationally challenging. As mathematicians, we’re excellently placed to help you price these bespoke derivatives.

See also our derivative pricing consulting services.

Validating risk models

We can build and validate financial risk models including operational risk, market risk and credit risk.

In some cases such as market risk, there are industry standard methodologies (see also our market risk consulting services). However, there are still key choices to be made, such as whether to use filtered historical simulation, where data may be weighted by recentness or adjusted for volatility. One must also decide whether to use absolute or relative shifts, what historical period to use for shift generation, and what time horizon to use for shifts (e.g. 1 day or 10 days).

For market risk calculations for fixed income products, conceptual pitfalls can arise around calculating shifts in credit spreads (e.g. bond Z-spread). These kinds of subtleties are often missed by the major financial consulting firms, who lack the rigorous mathematical thinking required to detect these errors.

In other cases, such as operational risk, there is no standard approach and much more room for creativity.

Looking for an external model validation consultant? Please get in touch to discuss how we can meet your needs.

Derivative Pricing Consulting and Advisory Services

Financial derivative valuation requires advanced mathematical skills, coding ability, and financial experience.

Whether you’re looking for a single algorithm or sizable software development, we offer professional cloud-based PhD derivative pricing consulting and advisory services, including

  • Equity derivatives
  • FX / Forex derivatives
  • Interest rate derivatives
  • Convexity corrections for swaps and FRAs
  • Asian options, barrier options, local volatility models and exotic derivatives
  • Bitcoin and cryptocurrency derivatives
  • Calculation of equity and interest rate volatility surfaces from market data
  • Calculation of greeks including delta, gamma, vega and theta.

We write code scripts or design derivative valuation software to price everything from vanilla options to exotic derivatives, including:

  • Vanilla Black-Scholes for calls and puts
  • Forwards and futures
  • Interest rate derivatives like swaps, caps and floors
  • Local volatility and stochastic volatility models
  • American options and exotic options with callability or early exercise optionality
  • SABR models
  • Fixed interest derivatives like bond futures
  • Derivatives on baskets
  • Knock in / knock out barrier options and window barrier options. See our article about barrier options and volatility/interest rate term structure. Also, be sure to see the paper by KS Moon for improving the efficiency of Monte Carlo pricing using a Brownian bridge.
  • Fixed and variable coupons
  • Warrants
  • Pnotes (promissory notes)
  • Dividend futures

We use a variety of derivative pricing methods including Monte Carlo, Black-Scholes, Finite Difference, and Longstaff-Schwartz. For interest rate derivatives, see the SABR volatility model.

Also check out our article on converting volatility surfaces from moneyness to delta using an iterative procedure.

Need a cloud-based PhD quant to solve all of your derivative pricing problems? Contact us today!

Algorithmic Trading Consulting Services

Use the power of Mathematics and Statistics to backtest and optimize your trading strategies against historical data.

Automate your trading strategies with C++/python code to interact directly with the exchange

Ask us how our PhD consultants can help you utilize AI and machine learning in your trading strategies.

Do you have an idea for a trading strategy, but want to prove that it will work through backtesting against historical data? Or do you have a successful trading strategy but want to optimize the parameters of the strategy to maximise returns?

Or perhaps you’ve heard about machine learning and would like to find out how you could incorporate it into your trading. Machine learning can be used to trawl through large amounts of data looking for statistically significant signals to use in your trading. It can also be used to determine the optimal way to combine a number of possible signals or ideas into a single algorithm.

We provide cloud-based PhD quant support for traders. We offer trading algorithm development services for equity and FX markets on all major exchanges. We also offer bitcoin and cryptocurrency algorithmic trading services on major exchanges like Binance and Bitmex.

Our consulting services for algorithmic trading include:

  • Backtesting of strategies, strategy optimization and statistical analysis
  • Automating algorithms (trading bots) in languages like C++ and python
  • Applying machine learning techniques like neural networks to trading
  • Processing and analysis of large amounts of data to search for trading signals.
  • Pricing of vanilla and exotic derivatives
  • Mathematical and statistical research projects
  • General quantitative analysis – see our main page Quant Consulting.

Individual traders and smaller financial institutions may lack the quantitative expertise to design or implement trading algorithms, which involves elements of coding, mathematics, statistics and data analysis. Quantitative finance is a field where complex mathematics thrives, so that even sizable firms may wish to undertake projects which are beyond their in-house mathematical expertise. In particular, many firms are interested in dipping their feet into machine learning trading techniques, but lack the necessary internal resources.

Our staff of experienced mathematical researchers can solve sophisticated quantitative problems efficiently, and communicate the results clearly to professionals of all backgrounds. We specialize in advanced mathematical and statistical analysis, and we love a challenge! We can explain and implement the results of sophisticated academic papers and turn them into practical outcomes for your business.

Want to learn more about how cloud-based quant support can supercharge your trading? Contact us today for a frank discussion about the merits of quantitative (or algorithmic) trading.

For examples of the applications of algorithms to trading, see our article on optimal execution algorithms, our article on market making, or our article on algorithms to take advantage of order imbalances.

If you’re just getting started with algorithmic trading, check out our introductory guides to algo trading on various exchanges.

For some examples of backtesting and optimizing trading strategies, take a look at the following articles.