Services to rebuild and document legacy mathematical/engineering/financial software and spreadsheets.

Do you have legacy code and spreadsheets for mathematical models that are:

  • Difficult to understand, with little or no documentation
  • Fragile or error-prone
  • Slow or inefficient, limiting your ability to scale or run large analyses
  • Built in outdated languages or complex spreadsheets that are hard to follow and maintain
  • Dependent on key individuals, creating operational and knowledge risk

We can help you transform them into modern, robust mathematical software with clear documentation, and uplift the underlying mathematical methodology as we do so.

We can transform spreadsheets into production-quality code, or migrate code from one language to another (e.g. Excel/VBA/Matlab to Python or C#, or legacy systems into high-performance modern architectures). The result is software that is faster, more reliable, easier to maintain, and fully aligned with your current and future business needs.

In many cases, legacy systems not only suffer from technical issues but also embed outdated or suboptimal assumptions. As part of the rebuild process, we review and, where appropriate, enhance the underlying mathematical models—whether that involves improving numerical methods, correcting approximations, or extending functionality to support more realistic scenarios. This ensures that the new system is not just a cleaner implementation, but a genuine upgrade in capability and accuracy.

We also place a strong emphasis on testing, validation, and transparency. In fact, model validation consulting is one of our key services. Rebuilt systems are delivered with comprehensive test suites, clear audit trails, and documentation that allows your team to fully understand and confidently maintain the software going forward. The end result is a system that reduces operational risk, removes key-person dependencies, and provides a solid foundation for future development and scaling.

We also offer a wide range of other services relating to mathematical software development, engineering tools and financial modelling. Contact us today to learn more.

Investment management software using Monte Carlo simulation – how AI can now help you build it in-house.

Monte Carlo simulation is used in investment management and portfolio optimization to model possible future outcomes for asset prices and portfolios. Instead of relying on a single forecast, it generates tens of thousands or millions of simulated paths based on asset data such as returns, volatility, and correlations, allowing managers to see the full distribution of potential results. This helps in assessing risk (such as drawdowns and tail losses), evaluating the robustness of investment strategies, and estimating the probability of achieving specific financial goals. It is particularly valuable for portfolios involving derivatives or dynamic strategies, where outcomes depend on the path of market movements rather than just the final price.
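
To make this concrete, here is a minimal Python sketch of the idea. The parameters (expected returns, volatilities, correlations, weights) are purely illustrative assumptions; a production system would calibrate them to your own data.

```python
import numpy as np

# Illustrative (made-up) annual parameters for three assets
mu = np.array([0.06, 0.04, 0.08])           # expected returns
vol = np.array([0.15, 0.07, 0.20])          # volatilities
corr = np.array([[1.0, 0.2, 0.7],
                 [0.2, 1.0, 0.1],
                 [0.7, 0.1, 1.0]])           # correlation matrix
cov = np.outer(vol, vol) * corr              # covariance matrix

weights = np.array([0.5, 0.3, 0.2])          # portfolio weights
n_paths, n_years, dt = 20_000, 10, 1 / 12    # monthly steps over 10 years
n_steps = int(n_years / dt)

rng = np.random.default_rng(42)
# Simulate correlated monthly log-returns for every path and step
chol = np.linalg.cholesky(cov * dt)
shocks = rng.standard_normal((n_paths, n_steps, 3)) @ chol.T
log_returns = (mu - 0.5 * vol**2) * dt + shocks

# Portfolio value paths (rebalanced to fixed weights each step, starting at 1.0)
asset_growth = np.exp(log_returns)
port_growth = asset_growth @ weights
terminal_value = np.cumprod(port_growth, axis=1)[:, -1]

print("Median terminal value:      ", np.median(terminal_value))
print("5th percentile (tail risk): ", np.percentile(terminal_value, 5))
print("Prob. of ending below start:", np.mean(terminal_value < 1.0))
```

From the simulated distribution you can read off tail statistics such as drawdown percentiles or the probability of failing to meet a target, rather than relying on a single forecast.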

Why buy when you can build?

AI-accelerated development has significantly reduced the cost and development time of investment management software. This means that now is an excellent time to reduce costs by bringing this functionality in-house, or to increase your modelling capabilities for a competitive advantage.

Commercial investment management platforms that incorporate Monte Carlo simulation are often expensive, inflexible, and not fully aligned with a firm’s specific needs, with ongoing licensing and customization costs adding up over time. In many cases, firms end up paying for generic functionality while still having to work around limitations in the system.

By contrast, building a tailored in-house solution can deliver both cost savings and a significantly better fit to your investment strategies and workflows. With the support of Genius Mathematics Consultants, your firm can design and implement high-performance Monte Carlo-based systems that are fully customized, transparent, and adaptable, without the long-term burden of vendor fees. This approach not only reduces costs but also provides greater control over models, assumptions, and future development.

What are the advantages of Monte Carlo simulation in investment management and portfolio optimization?

  • A pension fund can model cashflows and asset returns jointly to estimate the probability of funding shortfalls under different contribution and allocation strategies, allowing it to choose a policy that minimises the risk of needing emergency capital injections.
  • In asset allocation, Monte Carlo can be used to test how a portfolio performs under correlation breakdown scenarios—such as equities and bonds falling together—revealing vulnerabilities that standard mean-variance optimisation would miss.
  • For structured products or illiquid investments, it can simulate path-dependent payoff profiles to understand how returns behave under stress, such as early drawdowns or prolonged low-return environments.
  • A wealth manager can model different withdrawal strategies for clients—such as fixed percentage vs inflation-linked drawdowns—to quantify the probability of portfolio depletion under varying market conditions, enabling more robust retirement planning advice (see the sketch after this list)
  • A multi-asset fund can simulate liquidity stress scenarios, modelling how quickly positions can be unwound during market dislocations and estimating the impact on portfolio value, helping to avoid forced selling at distressed prices.
  • A systematic trading strategy can be stress-tested by simulating execution delays, spread widening, and regime shifts to evaluate whether its apparent edge survives real-world trading frictions rather than idealised backtest assumptions.
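
To illustrate the retirement-drawdown example above, here is a small sketch comparing two withdrawal rules. All parameters are illustrative assumptions.

```python
import numpy as np

def depletion_probability(withdrawal_rule, years=30, n_paths=50_000, seed=7):
    """Probability that a retirement portfolio is exhausted within `years`."""
    rng = np.random.default_rng(seed)
    mu, sigma, inflation = 0.05, 0.12, 0.025        # illustrative annual assumptions
    balance = np.full(n_paths, 1_000_000.0)
    depleted = np.zeros(n_paths, dtype=bool)
    for year in range(years):
        returns = rng.normal(mu, sigma, n_paths)
        withdrawal = withdrawal_rule(balance, year, inflation)
        balance = np.maximum(balance * (1 + returns) - withdrawal, 0.0)
        depleted |= balance <= 0.0
    return depleted.mean()

# Rule 1: withdraw a fixed 5% of the current balance each year
fixed_pct = lambda bal, yr, infl: 0.05 * bal
# Rule 2: withdraw 50,000 in year one, growing with inflation thereafter
inflation_linked = lambda bal, yr, infl: 50_000 * (1 + infl) ** yr

print("Fixed percentage:", depletion_probability(fixed_pct))
print("Inflation-linked:", depletion_probability(inflation_linked))
```

A fixed-percentage rule can never fully exhaust the portfolio (the income simply shrinks in bad markets), whereas an inflation-linked rule can; this is exactly the kind of behavioural difference the simulation makes visible.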

Importantly, Monte Carlo simulations can be calibrated to

  • Historical asset behaviour
  • Historical asset behaviour during stressed scenarios such as the GFC
  • Hypothetical scenarios

This allows asset managers to build up a comprehensive picture of how their portfolio could behave across a wide range of historical and hypothetical scenarios.
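
As a simple illustration of this calibration point, the same simulator can be driven by parameters estimated from different historical windows. The sketch below assumes a hypothetical CSV of daily asset prices; the stressed window chosen (the GFC) is just one example.

```python
import numpy as np
import pandas as pd

# Hypothetical input: a CSV of daily asset prices, one column per asset
prices = pd.read_csv("asset_prices.csv", index_col=0, parse_dates=True)
returns = np.log(prices / prices.shift(1)).dropna()

def calibrate(window):
    """Annualised mean vector and covariance matrix from a window of daily returns."""
    mu = window.mean().values * 252
    cov = window.cov().values * 252
    return mu, cov

# Full-history calibration
mu_base, cov_base = calibrate(returns)

# Stressed calibration, e.g. the GFC period
stressed = returns.loc["2008-01-01":"2009-06-30"]
mu_stress, cov_stress = calibrate(stressed)

# Either (mu, cov) pair can now be fed into the Monte Carlo simulator,
# giving a view of portfolio behaviour under normal and stressed dynamics.
```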

Let’s get the ball rolling

Keen to take advantage of AI efficiency gains to increase the sophistication of your investment modelling? Or keen to bring an existing vendor platform in-house to reduce costs and increase customization? Either way, we’ve got you covered.

Drop us a message today to get the ball rolling.

Hire a Freelance PhD Quantitative Developer or Researcher

Are you looking to hire a freelance quant dev or quant researcher? Forget the extra admin work and fees of freelancer aggregation sites like Upwork, Toptal or Arc – work directly with a PhD-level quantitative expert. We help you with things like:

  • Financial modelling in languages like Python and C++/C#
  • Automated/algorithmic trading systems including backtesting, execution logic and machine learning trading strategies. We focus on realistic modelling (slippage, latency, transaction costs) to avoid overstated performance.
  • Risk systems including market risk and liquidity risk, including scenario analysis and stress testing. We help quantify risk in a way that supports clear decision-making.
  • Development of derivative pricing libraries for vanilla and exotic derivatives, including Monte Carlo methods and XVA
  • Trading and modelling infrastructure for Crypto firms

Our approach combines modern AI-accelerated development with rigorous human expert validation:

• AI-assisted coding is used to accelerate development and reduce cost
• All models are then reviewed, tested, and validated by an experienced quantitative specialist

Engagements can be either project based, or take the form of a fractional retainer (e.g., 10 hours per week).

Contact us today to discuss how we can help solve your problem.

Build vs Buy – How AI Has Changed the Economics of Mathematical Software, and Why In-House Systems Now Make Sense

Are you paying expensive subscriptions to vendors for mathematical software? Learn how we can help you bring the capability in-house, lowering costs and customizing the software for your needs at the same time.

For decades, companies from mining to medical technology to financial services have relied on large and expensive software vendors for mathematical tools — simulation, industry optimisation and logistics, financial analytics, and domain-specific modelling software.

Building such systems internally required:

  • A large team
  • A long development cycle
  • Deep specialised expertise

In many cases, firms could not justify the expense of building their own software in-house, leaving them at the mercy of high, ongoing software subscription fees.

Thanks to AI, that has now changed — fundamentally.

The shift: AI has collapsed the cost of mathematical coding

Modern AI tools like ChatGPT and GitHub Copilot have dramatically accelerated:

  • implementation of mathematical models and numerical solvers
  • building graphing and visualization tools
  • creation of unit test frameworks
  • documentation of models

What previously required a team of 5–10 experts over months can now often be achieved by 1–2 strong PhD developers with AI assistance in weeks.

AI does not eliminate the need for expertise, but it dramatically increases the productivity of experts.

The old model: buy expensive, general-purpose software

Historically, firms had little choice but to purchase systems which were:

  • expensive
  • large and complex
  • general-purpose

And crucially: they were designed for everyone, not for your specific problem.

The new model: build exactly and only what you need

An important shift is this: Software no longer needs to be general-purpose to justify its cost.

With AI-assisted development, firms can now build highly specialised, mathematically rigorous tools tailored to their exact workflows.

Large vendors still offer:

  • standardisation
  • support
  • regulatory acceptance

But they also come with:

  • high costs
  • rigid systems
  • potentially steep learning curve and complex configuration
  • poor alignment with specific workflows

In many cases, firms are paying for complexity they do not need.

Bespoke in-house systems offer:

  • exact alignment with business processes
  • faster iteration and adaptation
  • ownership of intellectual property
  • lower long-term cost

And now, thanks to AI, they are far more economically viable than they were previously.

How we help

At Genius Mathematics Consultants, we specialise in:

  • designing and building bespoke mathematical software
  • replacing expensive vendor systems with targeted solutions
  • delivering high-performance, production-ready tools

We focus on simulation, modelling, optimisation, and analytics across a wide range of industries.

Conclusion

AI has significantly changed the economics of mathematical software.

What was once too expensive, too complex, and too slow to build is now practical, fast, and highly cost-effective.

For many firms, the question is now:

“Why are we still paying a vendor for something we could own?”

Examples of tools now viable in-house

Engineering simulation tools

  • custom finite element solvers for specific components
  • thermal or stress models tailored to a single product line
  • simplified computational fluid dynamics models for internal use

Financial services

  • derivative pricing models
  • risk analytics
  • trading tools and backtesting
  • portfolio optimization

Scientific and laboratory systems

  • automated experiment pipelines
  • data analysis and visualisation systems
  • parameter estimation and model fitting tools

Medical and healthcare operations

  • patient flow simulation
  • scheduling and resource allocation models
  • treatment pathway optimisation

Construction and architecture tools

  • site layout optimisation tools
  • cost estimation and material usage models
  • structural sanity-check systems
  • project simulation tools

Logistics and operational systems

  • classic optimization including scheduling, logistics and supply chain
  • route simulation under uncertainty
  • warehouse layout models
  • demand forecasting systems
  • real-time operational dashboards

Manufacturing and process engineering

  • defect detection using computer vision
  • process control models
  • yield prediction systems
  • predictive maintenance tools

Digital twins and simulation environments

  • digital twins of operations
  • training simulations
  • scenario testing environments

XVA Consulting for Derivative Pricing – Techniques for Efficient Calculation

XVA (valuation adjustments) arise when extending classical derivative pricing to account for credit risk, funding costs, capital, and margin. Since XVA involves time integrals of stochastic quantities and usually must be computed across a large portfolio, efficient calculation remains challenging.

In this article, we’ll remind you what the basic formulas are, before discussing techniques to compute them efficiently.

Formulae for XVA

The total XVA adjustment is typically decomposed into the following components:

\[
\mathrm{XVA} = \mathrm{CVA} + \mathrm{DVA} + \mathrm{FVA} + \mathrm{KVA} + \mathrm{MVA}
\]

The first is the Credit Valuation Adjustment (CVA), which represents the expected loss due to counterparty default:

\[
\mathrm{CVA} = (1 - R_c)\int_0^T \mathbb{E}^{\mathbb{Q}}\big[ D(0,t)\,\mathrm{E^{+}}(t) \big] \, dPD_c(t)
\]

where \(R_c\) is the counterparty recovery rate, \(D(0,t)\) is the discount factor, and \(\mathrm{E^{+}}(t) = \max(V(t) - C(t),0)\) is the positive exposure, with \(V(t)\) the netted portfolio value and \(C(t)\) the collateral held.
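
In practice the time integral is discretised into buckets. As a minimal sketch (assuming the exposure paths have already been produced by a separate simulation engine, and that default probabilities and discount factors are supplied on the same time grid):

```python
import numpy as np

def cva(exposure_paths, discount_factors, default_probs, recovery=0.4):
    """
    Discretised CVA: (1 - R) * sum_k EPE(t_k) * [PD(t_k) - PD(t_{k-1})].

    exposure_paths   : (n_paths, n_times) simulated netted values V(t) - C(t)
    discount_factors : (n_times,) discount factors D(0, t_k)
    default_probs    : (n_times,) cumulative counterparty default probabilities PD_c(t_k)
    """
    positive_exposure = np.maximum(exposure_paths, 0.0)
    # Discounted expected positive exposure profile EPE(t_k), averaged over paths
    epe = (positive_exposure * discount_factors).mean(axis=0)
    # Default probability increment in each time bucket
    dpd = np.diff(default_probs, prepend=0.0)
    return (1.0 - recovery) * np.sum(epe * dpd)

# Toy example: 10,000 paths on a quarterly grid out to 5 years
rng = np.random.default_rng(0)
times = np.linspace(0.25, 5.0, 20)
paths = rng.normal(0.0, 1e6 * np.sqrt(times), size=(10_000, 20))  # stand-in exposures
df = np.exp(-0.03 * times)
pd_c = 1.0 - np.exp(-0.02 * times)   # flat 2% hazard rate
print(cva(paths, df, pd_c))
```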

The Debit Valuation Adjustment (DVA) is similar, but reflects the institution’s own default risk:

\[
\mathrm{DVA} = (1 - R_b)\int_0^T \mathbb{E}^{\mathbb{Q}}\big[ D(0,t)\,\mathrm{E^{-}}(t) \big] \, dPD_b(t)
\]

where \(\mathrm{E^{-}}(t) = \min(V(t) - C(t),0)\) represents the negative exposure.

Funding valuation adjustment (FVA):

\[
\mathrm{FVA} = \int_0^T \mathbb{E}^{\mathbb{Q}}\big[ D(0,t)\,(f(t)-r(t))\,E(t) \big] \, dt
\]

where \(f(t)\) denotes the institution’s funding rate, \(r(t)\) is the risk-free rate, and \(E(t)\) is the funding exposure or requirement, typically representing the amount of uncollateralised exposure that must be funded. Intuitively, FVA measures the discounted expected cost arising from funding at a rate above the risk-free benchmark.

Similarly, the Margin Valuation Adjustment (MVA) reflects the cost of funding initial margin, replacing exposure \(E(t)\) with the margin profile \(IM(t)\), where \(IM(t)\) denotes the initial margin posted at time \(t\).

Finally, the Capital Valuation Adjustment (KVA) accounts for the cost of holding regulatory capital:

\[
\mathrm{KVA} = \int_0^T \mathbb{E}^{\mathbb{Q}}\big[ D(0,t)\,\gamma\,K(t) \big] \, dt
\]

where \(K(t)\) is the regulatory capital requirement, and \(\gamma\) denotes the institution’s cost of capital, i.e. the required return demanded by shareholders for committing capital. KVA measures the discounted expected cost of holding capital over the lifetime of the transaction.

Techniques for efficiently calculating XVA

What makes XVA so hard to calculate is:

  • The integrals for CVA and DVA must be calculated over a large number of risk factor paths. Since the exposure is floored at zero (just like an option payoff), we can’t simply replace the risk factor paths with their average (as we can when valuing a swap).
  • For each risk factor path, the trades need to be valued at every time step (typically once per day) to expiry. For a large portfolio, this is a huge number of trade valuations.

If we were using Monte Carlo to value the trades, the two points above would lead to a nested Monte Carlo, which is beyond almost any amount of computing power.

For CVA and DVA, the trade valuations must be floored (or capped) at zero. An astute reader may note that the payoff of an option is always non-negative for the holder, so that calculating \(E^+\) and \(E^-\) is trivial. However, it’s important to note that netting is done per counterparty. This means that options are netted together with other trade types, and the net exposure to that counterparty cannot be assumed to be positive (or negative).

Another key consideration is wrong-way risk. It’s tempting to assume that exposure and probability of default are independent, but in practice they can be adversely correlated: exposure is often largest precisely when the counterparty is most likely to default.

Common techniques used to speed up XVA calculation are:

  • Using analytic or faster approximate models to value trades, instead of more accurate numerical models
  • Using fast “proxy” models for trade valuation, such as Taylor series, linear regressions of pricing functions on (potentially non-linear functions of) risk factors, or neural nets fitted to model prices (see the sketch below)
  • Reducing the number of trades by “bucketing” or grouping similar trades together
  • Utilizing GPUs which are good at highly parallel calculations
  • Using algorithmic differentiation to speed up calculation of XVA Greeks

Of course, with any approximation one needs to be able to quantify the error and make sure it is within some acceptable tolerance.
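
To illustrate the proxy-model idea referenced above, the sketch below fits a simple polynomial regression of a pricing function on its risk factors, so that revaluations inside the XVA simulation can call the cheap fitted function instead of the slow model. The “slow” pricer here is just a stand-in (a Black-Scholes call), and the choice of basis functions is an assumption you would tailor to the trade type.

```python
import numpy as np
from scipy.stats import norm

def slow_price(spot, vol):
    """Stand-in for an expensive numerical pricer (e.g. PDE or nested Monte Carlo)."""
    k, t, r = 100.0, 1.0, 0.03
    d1 = (np.log(spot / k) + (r + 0.5 * vol**2) * t) / (vol * np.sqrt(t))
    d2 = d1 - vol * np.sqrt(t)
    return spot * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

# 1. Sample the risk-factor space and call the slow pricer once per sample
rng = np.random.default_rng(1)
spots = rng.uniform(50, 150, 2000)
vols = rng.uniform(0.1, 0.5, 2000)
prices = slow_price(spots, vols)

# 2. Fit a cheap polynomial proxy: price ~ f(spot, vol)
def basis(spot, vol):
    return np.column_stack([np.ones_like(spot), spot, vol,
                            spot**2, vol**2, spot * vol,
                            spot**3, spot**2 * vol, spot * vol**2])

coeffs, *_ = np.linalg.lstsq(basis(spots, vols), prices, rcond=None)

def proxy_price(spot, vol):
    return basis(spot, vol) @ coeffs

# 3. Quantify the approximation error before relying on the proxy
test_s = rng.uniform(50, 150, 500)
test_v = rng.uniform(0.1, 0.5, 500)
err = proxy_price(test_s, test_v) - slow_price(test_s, test_v)
print("max abs error:", np.max(np.abs(err)))
```

Step 3 is the error quantification mentioned above: the proxy is only acceptable if its error stays within your tolerance across the region of risk-factor space the simulation will actually visit.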

XVA consulting services

At Genius Mathematics Consultants we:

  • Build production quality XVA engines
  • Optimize existing XVA systems
  • Implement algorithmic differentiation, GPU acceleration and proxy models

The major difficulty with XVA calculation in quantitative finance is that it is computationally intensive when calculated over a large portfolio. Are you interested in working with PhD quant consultants to research and develop more efficient methodologies for XVA calculation? Drop us a message today.

How to Build a Production-Grade Options Pricing Library in Python or C++ using AI

Building a production-grade options pricing library has traditionally been one of the most demanding and time-consuming tasks in quantitative finance. Financial services firms often have as many as three such libraries: one for front office pricing, one for market risk calculation and one for independent validation of the others. They must be accurate, robust and flexible.

Recent advances in AI-assisted development tools such as Visual Studio 2026 and GitHub Copilot have fundamentally changed how these systems can be built. Instead of manually implementing and checking tens of thousands of lines of code, developers can now guide AI to generate complete pricing libraries, including curve and market data infrastructure.

This approach dramatically accelerates development, while potentially raising some new issues around code correctness and validation. However, these concerns can be well mitigated by using AI to generate libraries of unit tests, and generating more than one pricing model for cross-checking.

Step 1: Install Visual Studio 2026 and enable GitHub Copilot

Although there are many IDEs, AI models and coding assistants available, I’m going to focus on Visual Studio 2026 and GitHub Copilot, which is fully integrated into Visual Studio. Visual Studio is a full-featured professional IDE, and the Community edition can be downloaded for free. GitHub Copilot Pro requires a small monthly fee to avoid hitting a usage cap, but is inexpensive. The first step is to install Visual Studio 2026 and subscribe to GitHub Copilot, which allows selection of many different AI models, including ChatGPT and Claude.

I’d recommend using either C# or C++ for a pricing library due to the faster execution speed for numerical models. The main drawcard of Python is faster and easier human development and maintenance, an advantage that matters less when using AI code generation.

When using Copilot to generate code, it’s a good idea to give it specific instructions about how you want the code structured or modularised. For example, you might tell it to create a volatility class which defaults to “constant vol”. That way, if you later want to implement a local or stochastic vol model, you can simply extend this class and the rest of the pricing code will still work. And if at some point you want to manually check the correctness of any piece of the code, having the code structured in a modular way that you find clear will make this process much faster.
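
For example, a minimal Python sketch of the kind of structure you might ask Copilot for (class and method names here are illustrative, not prescriptive):

```python
from abc import ABC, abstractmethod

class Volatility(ABC):
    """Interface that the rest of the pricing code depends on."""

    @abstractmethod
    def vol(self, strike: float, expiry: float) -> float:
        ...

class ConstantVolatility(Volatility):
    """Default implementation: a single flat volatility."""

    def __init__(self, sigma: float):
        self.sigma = sigma

    def vol(self, strike: float, expiry: float) -> float:
        return self.sigma

# Later, a local or stochastic vol model can subclass Volatility
# (e.g. interpolating a fitted surface) without touching the pricers,
# which only ever call vol(strike, expiry).
```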

Step 2: Use AI to generate curve objects, market data infrastructure and trade loading/parsing functionality

Before pricing any options, the pricing library must be able to create curve objects representing interest rates, FX forward curves, discount factors, and other market inputs. The curve objects need appropriate interpolation functions. Volatility surface construction and interpolation is slightly more involved, though potentially much faster when using Copilot. If you just want to specify the volatility directly, you can skip this aspect for now. If required, now is also the time to create code to load and parse trade data.
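
As an example of the kind of curve object you might prompt for, here is a minimal sketch of a discount curve. The log-linear interpolation scheme and the pillar points are assumptions you would specify yourself:

```python
import numpy as np

class DiscountCurve:
    """Discount curve with log-linear interpolation in the discount factors."""

    def __init__(self, times, dfs):
        self.times = np.asarray(times, dtype=float)       # year fractions, increasing
        self.log_dfs = np.log(np.asarray(dfs, dtype=float))

    def df(self, t):
        """Discount factor D(0, t), log-linearly interpolated."""
        return np.exp(np.interp(t, self.times, self.log_dfs))

    def zero_rate(self, t):
        """Continuously compounded zero rate implied by the curve."""
        t = np.asarray(t, dtype=float)
        return -np.log(self.df(t)) / np.where(t > 0, t, np.nan)

    def forward_rate(self, t1, t2):
        """Simple forward rate between t1 and t2."""
        return (self.df(t1) / self.df(t2) - 1.0) / (t2 - t1)

# Example usage with illustrative pillar points
curve = DiscountCurve([0.5, 1.0, 2.0, 5.0], [0.985, 0.970, 0.940, 0.860])
print(curve.df(1.5), curve.forward_rate(1.0, 2.0))
```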

Step 3: Use AI to generate a Monte Carlo pricing engine

I would suggest beginning by asking GitHub Copilot to generate a Monte Carlo pricing engine, because Monte Carlo can price all kinds of options. While slower than analytic pricing models (where they exist), it can be used as a validation tool to cross-check the faster models. Make sure you tell Copilot to make it multithreaded!

Start with European options, then add functionality one step at a time for Asian payoffs, barriers, and early exercise (Longstaff-Schwartz).
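
For reference, here is a minimal sketch of what the European case boils down to. It is deliberately simplified (single-threaded, vectorised NumPy, flat volatility and rates) rather than the multithreaded engine you would ask Copilot to produce:

```python
import numpy as np

def mc_european_price(spot, strike, rate, sigma, expiry,
                      is_call=True, n_paths=1_000_000, seed=0):
    """Price a European option under Black-Scholes dynamics by Monte Carlo."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal spot under geometric Brownian motion (a single step suffices for a
    # European payoff; path-dependent payoffs need a full time grid)
    s_t = spot * np.exp((rate - 0.5 * sigma**2) * expiry + sigma * np.sqrt(expiry) * z)
    payoff = np.maximum(s_t - strike, 0.0) if is_call else np.maximum(strike - s_t, 0.0)
    disc = np.exp(-rate * expiry)
    price = disc * payoff.mean()
    std_err = disc * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, std_err

print(mc_european_price(100, 105, 0.03, 0.2, 1.0))   # (price, standard error)
```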

Step 4: Use AI to generate analytic pricing models

While Monte Carlo is extremely versatile, it’s also slow and sometimes has numerical issues. The next step is to generate analytic or otherwise faster models:

  1. Black-Scholes vanilla pricer (of course; a minimal sketch follows below)
  2. The analytic Black-Scholes barrier equations
  3. Method of moments for Asian options (not an exact model, but good enough for many purposes)
  4. Binomial tree model for American options / early exercise
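
As a reference for the first item, a minimal Black-Scholes vanilla pricer might look like the following (no dividends; the layout is just one way to write the standard formula):

```python
import numpy as np
from scipy.stats import norm

def black_scholes_price(spot, strike, rate, sigma, expiry, is_call=True):
    """Black-Scholes price of a European vanilla option (no dividends)."""
    d1 = (np.log(spot / strike) + (rate + 0.5 * sigma**2) * expiry) / (sigma * np.sqrt(expiry))
    d2 = d1 - sigma * np.sqrt(expiry)
    if is_call:
        return spot * norm.cdf(d1) - strike * np.exp(-rate * expiry) * norm.cdf(d2)
    return strike * np.exp(-rate * expiry) * norm.cdf(-d2) - spot * norm.cdf(-d1)

# Should agree with the Monte Carlo engine to within its standard error
print(black_scholes_price(100, 105, 0.03, 0.2, 1.0))
```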

Step 5: Use AI to generate comprehensive unit tests

One of the most powerful uses of AI in building pricing libraries is automated validation. GitHub Copilot can rapidly generate long lists of unit tests. Hundreds or even thousands of tests can be created quickly. You want to focus on:

  1. Checking that the Monte Carlo pricer agrees with the analytic or faster models for a comprehensive set of combinations of trade parameters
  2. Checking special or corner cases such as:
  • An American call with no dividends has the same price as a European call
  • An American call should be exercised before a large dividend
  • A knock-out option with spot already breaching the barrier should have value 0 (and likewise a knock-in should have the same value as a vanilla)
  • An Asian option with only one averaging date at maturity should have the same price as a vanilla

Once these unit tests have been created, it takes just a few clicks to run them all again after any code change.
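
As a flavour of what such tests look like, here are a few written in pytest style against the illustrative pricers sketched earlier. The module name `pricing` is a placeholder, and the tolerances are assumptions you would tune:

```python
import numpy as np
import pytest
from pricing import mc_european_price, black_scholes_price   # hypothetical module collecting the sketches above

def test_mc_agrees_with_analytic():
    # The Monte Carlo engine should match the analytic price to within a few standard errors
    analytic = black_scholes_price(100, 105, 0.03, 0.2, 1.0)
    mc, std_err = mc_european_price(100, 105, 0.03, 0.2, 1.0, n_paths=2_000_000)
    assert abs(mc - analytic) < 4 * std_err

def test_put_call_parity():
    # C - P = S - K * exp(-rT) must hold exactly for the analytic model
    call = black_scholes_price(100, 105, 0.03, 0.2, 1.0, is_call=True)
    put = black_scholes_price(100, 105, 0.03, 0.2, 1.0, is_call=False)
    assert call - put == pytest.approx(100 - 105 * np.exp(-0.03))

def test_deep_out_of_the_money_call_is_nearly_worthless():
    assert black_scholes_price(100, 1000, 0.03, 0.2, 0.25) == pytest.approx(0.0, abs=1e-6)
```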

Step 6: Use GitHub Copilot to generate documentation

It must be said that the necessity of documentation may be reduced when you can at any time ask Copilot to explain a part of the code for you!

But if you do require documentation of the pricing library, either for regulatory purposes or for other staff without access to the source code, Copilot will do that for you in seconds.

GitHub Copilot can generate complete documentation for the pricing library automatically, but be sure to give it detailed instructions about the format and content you require for the documentation. For example, you could ask it to include a section for each model describing pros/cons/limitations of the choice of pricing model.

Conclusion

AI tools such as Visual Studio 2026 and GitHub Copilot have transformed how production-grade options pricing libraries can be built.

Instead of manually implementing every model and unit test, developers can guide AI to generate complete pricing libraries, curve infrastructure, trade loading and unit test frameworks.

In this example, Monte Carlo serves as a reference model, while analytic and tree models provide more efficient pricing. AI-generated unit tests give confidence around model correctness at a fraction of the time cost of manually validating the entire library.

The result is a comprehensive pricing library that can be developed dramatically faster than traditional manually implemented systems.

Triangular Arbitrage in FX and Crypto trading

In a previous article we discussed the unexpected complexities of trying to take advantage of cross-exchange arbitrages. In this article we’ll focus on triangular arbitrages either on a single exchange or between multiple exchanges.

A triangular arbitrage is where we convert currencies \(C_1 \to C_2 \to C_3 \to C_1\). If the exchange rates satisfy \(R_{12}R_{23}R_{31} > 1\), then we end up with more of \(C_1\) than we started with.

We can formulate this as a graph theory problem as follows. Taking the logarithm of both sides gives

\[\log(R_{12})+\log(R_{23})+\log(R_{31}) > 0.\]

Let’s construct a complete directed graph where the vertices are the currencies \(C_1,C_2,C_3,\ldots\), and the edge from \(C_i\) to \(C_j\) has weight \(\log(R_{ij})\).

We’re interested in finding cycles where the sum of the edges is greater than 0. In fact, despite the name “triangular arbitrage”, there’s no reason to restrict ourselves to a cycle involving only three currencies. If we can find an arbitrage arising from a cycle of more than three currencies, that’s potentially exploitable as well.

It turns out that there are good algorithms for this. If we negate the edge weights (using \(-\log(R_{ij})\)), a cycle with positive log-sum becomes a negative-weight cycle, and the Bellman-Ford algorithm can detect negative-weight cycles efficiently.
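
Here is a minimal sketch of that approach in Python. The exchange rates are illustrative mid prices, and fees, bid-ask spreads and trade sizing are ignored for now (they are discussed below):

```python
import math

# Illustrative mid exchange rates: rates[a][b] = units of b received per unit of a
rates = {
    "USD": {"EUR": 0.92, "BTC": 1 / 60000},
    "EUR": {"USD": 1.09, "BTC": 1 / 55000},
    "BTC": {"USD": 60500, "EUR": 55500},
}

def find_arbitrage(rates):
    """Bellman-Ford on edge weights -log(R): a negative cycle is an arbitrage cycle."""
    currencies = list(rates)
    n = len(currencies)
    dist = {c: 0.0 for c in currencies}       # zero start distances: find a cycle anywhere
    pred = {c: None for c in currencies}
    edges = [(a, b, -math.log(r)) for a, row in rates.items() for b, r in row.items()]

    last_relaxed = None
    for _ in range(n):                        # n passes; a relaxation in the final pass
        last_relaxed = None                   # implies a negative-weight cycle exists
        for a, b, w in edges:
            if dist[a] + w < dist[b] - 1e-12:
                dist[b] = dist[a] + w
                pred[b] = a
                last_relaxed = b

    if last_relaxed is None:
        return None                           # no arbitrage found

    # Walk n predecessors back to land on the cycle, then read the cycle off
    node = last_relaxed
    for _ in range(n):
        node = pred[node]
    cycle, cur = [node], pred[node]
    while cur != node:
        cycle.append(cur)
        cur = pred[cur]
    return list(reversed(cycle))

print(find_arbitrage(rates))   # prints a list of currencies forming a profitable cycle, or None
```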

Challenges in practice

Just like in the previous article, this idealised model encounters several complexities when you try to apply it in practice:

  • Each edge should actually be replaced by two edges representing bid and ask, for example \(R^{\text{bid}}_{12}\) and \(R^{\text{ask}}_{12}\).
  • Due to slippage, the graph weights may need to depend on trade size.
  • The edge weights need to be adjusted for trading fees
  • Latency makes the graph weights effectively stochastic, meaning they need to be described by an appropriately chosen probabilistic model
  • If you place limit orders, partial execution risk is far greater due to the larger number of trades involved

In essence this problem has more moving parts, but is otherwise quite similar to the cross-exchange arbitrage we previously considered. The main difference is we now have a large number of stochastic equations to formulate and calibrate to the data (one for each edge). Conceptually, it’s not too different.

Note also that 1) arbitrage opportunities may exist only briefly, and 2) the edge weights change constantly. Thus the algorithm needs to be re-run continuously, and its efficiency is paramount: opportunities must be detected with minimal latency, before they disappear.

Cross-exchange arbitrage detection algorithms

You might think that taking advantage of a price discrepancy between venues sounds pretty simple – if the prices aren’t the same, buy on the low exchange and sell on the high exchange. It seems entirely simple and totally risk-free! Unfortunately, as we’ll see in this article, the reality is neither simple nor risk-free.

The complexities of execution

The first things we need to consider that deviate from the simple picture above are fees, slippage and the bid-ask spread. If the bid on exchange A is higher than the ask on exchange B, \(\text{Bid}_A > \text{Ask}_B\), then the profit is actually

\[\text{profit} = \text{Bid}_A - \text{Ask}_B - \text{fees} - \text{slippage}.\]

The slippage (price movement when we transact a volume larger than the first order book entry) contains a component from each exchange, and must be calculated from the order book data. Alternatively, we could drop the slippage term in this equation and instead replace the bid and ask with their volume weighted average prices (VWAP).
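
To make the slippage and VWAP point concrete, here is a small sketch that walks hypothetical order book levels to compute the volume weighted average price actually achievable for a given size, and the resulting net profit. The book levels and fee rate are made-up numbers:

```python
def vwap(levels, size):
    """
    Volume weighted average price for trading `size` units by walking the book.
    `levels` is a list of (price, quantity) tuples, best price first.
    """
    remaining, cost = size, 0.0
    for price, qty in levels:
        take = min(remaining, qty)
        cost += take * price
        remaining -= take
        if remaining <= 0:
            return cost / size
    raise ValueError("Order book too thin for requested size")

# Hypothetical books: asks on exchange B (we buy), bids on exchange A (we sell)
asks_b = [(100.00, 2.0), (100.05, 5.0), (100.20, 10.0)]
bids_a = [(100.30, 1.0), (100.25, 3.0), (100.10, 10.0)]

size = 4.0
buy_px = vwap(asks_b, size)        # worse than the top-of-book ask of 100.00
sell_px = vwap(bids_a, size)       # worse than the top-of-book bid of 100.30
fees = 0.0005 * (buy_px + sell_px) * size   # illustrative 5 bps per leg

print("gross edge (top of book):      ", (100.30 - 100.00) * size)
print("net profit after VWAP and fees:", (sell_px - buy_px) * size - fees)
```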

However, there’s still more to consider here – we have to consider latency and price movement. If an arbitrage opportunity enters existence at time \(t\), your two trades aren’t actually executed until times \(t+L_A\) and \(t+L_B\), where the latencies \(L_A\) and \(L_B\) can be different and consist of things like:

  • The time between an arbitrage opportunity entering existence on the exchanges, and the information reaching your system
  • The time taken for your system to process the data and become aware of the opportunity
  • The time taken for your buy/sell orders to reach the exchanges
  • The time taken for your orders to be processed and executed by the exchanges

Adverse price movement may occur during the latency periods, leading to the sum of the two trades no longer being profitable.

No free lunch

Now you may be thinking – what if I only post limit orders? Then the worst case scenario is that my trades don’t execute and I lose nothing!

But hold on, that’s not true – the worst case scenario is actually that one of the trades executes and the other doesn’t – leaving you holding inventory you didn’t want, whose value may fall.

To avoid this difficulty, cross-venue arbitrage algorithms often use market orders. Although this means the two trades could lose money, it also ensures that the trades always execute, and execute quickly, which reduces adverse price movement risk.

What this means is that, contrary to what you might have assumed, there is no risk-free way to try to exploit cross-exchange arbitrage. It also means that price prediction and probabilistic modelling becomes a significant part of any cross-exchange arbitrage system.

Probabilistic modelling and machine learning

As we’ve seen, by the time your system detects an arbitrage, the relevant prices are already stale. And the prices may move still further by the time your orders are executed on the exchange. In fact our profit equation is now

\[\text{profit} = \text{Bid}_A (t+L_A) - \text{Ask}_B (t+L_B) - \text{fees} - \text{slippage},\]

where the bid and ask are now random variables. The slippage is also a random variable, and should really be separated into \( \text{slippage} = \text{slippage}_A + \text{slippage}_B\).

A simple way to model the price at a time in the future is to assume a normal model

\[dP = \sigma dW,\]

where \(\sigma\) is the volatility and \(W\) is a Wiener process (Brownian motion). Now you may object that market prices are often modelled using geometric Brownian motion, where price moves get bigger as the price gets bigger. But keep in mind that as long as the normal model is periodically recalibrated (after the price has changed substantially), this effect is captured anyway. By recalibrating \(\sigma\) to the most recent data, we naturally arrive at the idea of a profitability threshold, where trades are only executed if the observed arbitrage is sufficiently large relative to recent volatility.
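
As a minimal sketch of that idea (the sampling frequency, window length and confidence multiplier \(k\) are all assumptions you would calibrate), we can estimate \(\sigma\) from recent price moves and only trade when the observed edge exceeds costs plus the likely adverse move over the latency window:

```python
import numpy as np

def recent_sigma(prices, dt_seconds):
    """Volatility of absolute price moves, per sqrt(second), from a recent window."""
    moves = np.diff(np.asarray(prices, dtype=float))
    return moves.std(ddof=1) / np.sqrt(dt_seconds)

def should_trade(gross_edge, fees, slippage, sigma, latency_seconds, k=2.0):
    """
    Trade only if the edge exceeds costs plus k standard deviations of the
    price move expected over the latency window (normal model dP = sigma dW).
    """
    adverse_move = k * sigma * np.sqrt(latency_seconds)
    return gross_edge > fees + slippage + adverse_move

# Illustrative numbers: 1-second price samples over the last few minutes
prices = 100 + np.cumsum(np.random.default_rng(3).normal(0, 0.02, 300))
sigma = recent_sigma(prices, dt_seconds=1.0)

print(should_trade(gross_edge=0.30, fees=0.10, slippage=0.05,
                   sigma=sigma, latency_seconds=0.25))
```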

Of course, more complex models are possible, including models that try to predict price movement by looking at volume imbalances on the order book, models that use trends/momentum or mean reversion, and models that attempt to use machine learning on a large number of signals. To undertake this kind of project involves both 1) developing a theoretical model, and 2) calibrating the model to recent historical data.

If your trades are large enough that slippage becomes significant, you would also want to model and calibrate optimal trade size. And if you were to use limit orders, you’d want to model the probability that an order would be filled and estimate when the trade would be filled.

Conclusion

Cross-exchange arbitrage appears simple and risk-free, but once fees, slippage, latency, and execution risk are taken into account, it’s a far more subtle and mathematically involved problem than it at first appears. It becomes a probabilistic trading strategy rather than a deterministic one, and it carries risk.

The role of the arbitrage detection algorithm is not simply to identify price differences, but to estimate expected profit under uncertainty. This requires careful modelling of order book dynamics, execution latency, and price movement.

See also our article on triangular arbitrage.

How can you use mathematical algorithms and models in trading?

You’re probably aware that modern trading firms can utilize mathematical models and algorithms to make faster, more informed, and more profitable decisions, and you may be interested in increasing the sophistication of your own trading infrastructure. However, you might be unclear where to begin, which techniques are most relevant, or how to implement them in a way that produces measurable improvements rather than theoretical complexity. Building robust quantitative trading systems requires not only mathematical and statistical expertise, but also careful calibration with real data, and integration with execution workflows and risk management processes. Here are a number of applications of quantitative models to the world of trading to get you started!

  • Optimal execution algorithm – Construct a statistically calibrated execution model (e.g. based on Almgren–Chriss or related frameworks), fitted to your historical trade and order book data, to determine optimal trade slicing and timing to minimise market impact and slippage.
  • Trading algorithms – machine learning methods such as ridge regression to test signals, optimize signal weighting, and statistically optimize decision making.
  • Liquidity and slippage prediction engine – Develop predictive models that estimate expected slippage and available liquidity as a function of trade size, volatility, order book structure, and market regime. This enables better pre-trade decision-making and more accurate transaction cost modelling.
  • Cross-venue arbitrage detection algorithm – Build a real-time system to monitor price discrepancies across exchanges and trading venues, identifying statistically significant arbitrage opportunities while accounting for execution latency, transaction costs, and liquidity constraints.
  • Anomaly detection engine for trading signals and market data – Implement statistical and machine learning methods to identify data feed errors, model failures, or abnormal trading signal behaviour before they can lead to incorrect decisions or financial losses.
  • Market regime detection engine – Use statistical regime-switching models to identify shifts in market conditions such as volatility spikes, liquidity deterioration, or trend vs mean-reversion regimes. This allows trading strategies and risk models to adapt dynamically.
  • Pricing engine for illiquid or complex assets – Develop fair-value models for instruments lacking reliable market prices, using Monte Carlo simulation, or market factor fitting approaches.
  • Option pricing and volatility modelling infrastructure – Build or extend your options pricing capability including volatility surface construction, calibration of local or stochastic volatility models, and versatile numerical pricing methods like Monte Carlo.
  • Independent model validation and model documentation – Perform rigorous validation of existing pricing, risk, or trading models, including correctness verification, stress testing, numerical stability analysis, and preparation of clear documentation describing model assumptions, limitations, and behaviour.
  • Market risk modelling and Value-at-Risk calculations – Implement robust VaR and risk analytics frameworks, including historical simulation, Monte Carlo methods, and stress testing, providing accurate measurement of portfolio risk and tail exposure.

Consulting and Expert Witness Services for Lawyers in Mathematics, Quantitative Finance, and Financial Risk

Legal disputes, regulatory investigations, and financial litigation often turn on technical details: pricing models, risk calculations, valuation assumptions, statistical methods, and quantitative systems. Legal outcomes can depend critically on whether the underlying mathematics and models are technically sound.

We provide expert testimony and independent consulting and litigation support services for law firms, legal teams, and expert witnesses requiring specialist support in mathematics, quantitative finance, and financial risk.

Quantitative support for litigation and regulatory matters

Modern legal disputes in financial services frequently involve derivatives pricing disagreements, alleged model errors or mis-calibration, risk models used for capital, automated or AI-based decision systems, statistical claims requiring scrutiny, and regulatory expectations around model governance and validation.

These matters require deep technical analysis. General financial expertise is often insufficient. What is needed is expert quantitative insight: the ability to reconstruct the mathematics, identify hidden assumptions, test internal consistency, and assess whether conclusions are supported by the underlying models.

How we work with law firms and expert witnesses

Our role is to provide independent technical analysis and clarity.

We support lawyers and expert witnesses by analysing quantitative models and calculations used by banks, funds, or vendors; identifying incorrect assumptions, implementation errors, or conceptual flaws; assessing whether the models behave as claimed under realistic conditions; evaluating whether methodologies align with regulatory or industry standards; and translating complex mathematical findings into clear, structured explanations suitable for legal and expert reports.

This work is commonly used in litigation, disputes, regulatory responses, internal investigations, expert witness preparation, and early-stage technical assessments before proceedings escalate.

Areas of expertise

Mathematics and statistics
Probability and differential equations, coding and algorithms, numerical methods and approximation error, statistical inference, misuse of data, sensitivity analysis, and robustness testing.

Quantitative finance
Market risk and VaR calculation, derivatives pricing including interest rate, FX and equity options, structured products, and exotics; operational risk modelling; credit and liquidity risk models; reconciliation disputes; and analysis of model assumptions versus real-world behaviour.

Financial risk and model governance
Model validation and independent challenge, risk systems, AI and automated decision tools in finance, and alignment with regulatory expectations in disputed or investigative contexts.

Independent expert analysis, not advocacy

We work independently of banks, vendors, and large consulting firms. Our role is not advocacy, but objective technical assessment.

An alternative to large consulting firms

Large consulting firms are frequently engaged in financial disputes and regulatory matters, but their operating model is not always well suited to focused, technically precise analysis.

Law firms often require clear, independent quantitative insight rather than large delivery teams, generic reporting, or broad advisory scopes. Major consulting firms typically operate with higher overheads and layered staffing models, which can dramatically increase cost without improving technical clarity.

Our consulting approach is deliberately different. You work directly with a single, independent specialist, who provides direct analysis in mathematics, quantitative finance, and financial risk. Engagements are narrowly scoped and focused on the specific quantitative questions relevant to the matter.

For many legal teams, this offers greater proportionality, clearer accountability, faster turnaround, and substantially lower cost than large consulting engagements.

When legal teams engage quantitative experts

Law firms and expert witnesses typically engage this type of support when a case hinges on technical modelling or financial calculations, internal explanations from financial institutions do not withstand scrutiny, expert opinions require rigorous quantitative backing, regulators or counterparties raise modelling objections, or early independent analysis could materially affect legal strategy.

Identifying technical weaknesses early often changes the trajectory of a matter.

Getting in touch

If you are a lawyer or expert witness working on a matter involving mathematics, quantitative finance, or financial risk and require independent technical analysis, contact us to discuss how we can help.