Hire a Freelance Algorithm Developer

Looking to hire a freelancer to work on your next algorithm development project?

We combine PhD level mathematics expertise with expert coding skills in a wide range of languages. Furthermore, we have solid industry experience in mathematical algorithm development. We’re perfectly positioned to assist professionals in a wide range of industries as their fields become increasingly technical from a mathematical and computational perspective.

Global internet communications, the explosion of knowledge in the 21st century, and the ever-increasing specialization it entails mean that freelance consultants are a powerful new way to augment your business. According to Morgan Stanley, the freelance economy could represent more than 50% of the US working population within ten years. And as noted by the Financial Review, freelancers can increase a firm’s talent agility while reducing costs.

We can design all kinds of mathematical algorithms.

Learn more about our algorithm consulting services.

Derivative Pricing Consulting – the Longstaff-Schwartz Method using Machine Learning and Optimization Techniques

Interested in pricing callable or early exercise derivatives using the Longstaff-Schwartz method? Our consulting service can design and implement a Longstaff-Schwartz algorithm to meet your specific needs, in languages like C++ and python. Learn more about our derivative pricing consulting services.

In this article we’ll make some interesting observations concerning the well-known Longstaff-Schwartz method for pricing derivatives. Specifically, we’ll look at how to determine an optimal function fitting or regression approach. Most interestingly, we’ll see that one can actually do away with function fitting altogether (!) and apply machine learning and optimization techniques in its place.

In the famous paper of Longstaff and Schwartz, the authors introduced an approach for pricing derivatives that allow for early exercise using Monte Carlo. This includes a vanilla American option and also more complicated exotic options. Their method dramatically reduces the number of paths required by assuming that the relationship between the value of the underlying (or underlyings) and the expected value of continuing (that is, not exercising yet) can be described by a smooth function. In their original paper, they illustrated their method by fitting a quadratic polynomial. This raises the question of what function fitting method should actually be used. What order polynomial should be chosen? Or should a more sophisticated non-parametric fitting method be used, one which doesn’t require deciding in advance what kind of function to fit?

We’ll start with a brief recap of the Longstaff-Schwartz method.

Consider pricing an American call option with expiry \(T\). At any earlier time \(t_0\), the value of exercising the option depends on the current underlying asset price as shown in the following graph.

This graph also gives you the fair price of the option at expiry \(T\). We consider that the option can be exercised at a set of time steps \(t_0, t_1, t_2, \ldots, t_n = T\).

Since Longstaff-Schwartz is a Monte Carlo method, we begin by generating a large number of possible future paths for the underlying asset. We can price the option by finding the average payoff over all the asset paths. The complexity comes from the possibility of early exercise, which requires that we determine where it is optimal to exercise for each path. Once we know at which time step a given path will be exercised, the payoff for that path is simply the exercise value at the time step where it is exercised.

  • Assuming a given path reaches time \(t_n = T\) without being exercised, it is trivial to determine whether we will exercise based on whether the price is above the strike price or not.
  • At \(t_{n-1}\), we assume the option has not yet been exercised, and we have to determine whether to exercise at this step for each path. To do this we compare the value of exercising at this step with the expected value of not exercising.
  • We can iterate the procedure to determine whether we will exercise earlier at time \(t_{n-2}\). Continuing to iterate, we eventually get back to \(t_0\), having determined for each path the earliest time step where it is optimal to exercise.

So pricing a derivative with early exercise comes down to determining, for each path, the expected value of continuing at each time step.

In the graph below, each blue dot represents one of the paths at some given time step \(t_i\). The horizontal axis shows the payoff from exercising at this time step, and the vertical axis shows the value of continuing based on the known future trajectory of the path. This is the value of exercising at the earliest future time step where we have determined that it is optimal to exercise. But since we wouldn’t normally know the asset path at future time steps, we need to work out the expected value of continuing.

An obvious way to do this would be to generate, for each path above, a large number of future trajectories for that path to find the expected value of continuing. But then the number of paths would grow exponentially at each time step.

The insight of Longstaff and Schwartz was that we can assume that the expected value of continuing is a smooth function of the asset price. This function can be found using some kind of function fitting or regression technique on the data in the above graph. It turns out that the least squares fit to a set of points stacked vertically above a single asset value is exactly their average. This means that regression can be viewed as a kind of averaging which is able to utilize paths at nearby x-values, so we do not need an inordinate number of points vertically above every asset value. The graph below shows a function fitted to the data above.

In their original paper, Longstaff and Schwartz considered using least squares to fit a simple polynomial to the data. However, the literature on function fitting is vast, and a practitioner needs to consider which method to adopt. Since a function must be fitted at each of a potentially large number of time steps, the computational efficiency of the method becomes important. This becomes even more critical when one considers the multidimensional case for basket options with more than one underlying. In this case, we must fit a function of multiple variables. Of course, one also then encounters the curse of dimensionality.
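To make the backward induction concrete, here is a minimal one-dimensional sketch in Python of the procedure described above, using a simple polynomial least squares regression for the continuation value. The parameter choices, function names and the use of an American put payoff (chosen purely because early exercise is then genuinely optimal) are illustrative assumptions of ours, not a definitive implementation.

```python
import numpy as np

def longstaff_schwartz(S0, r, sigma, T, n_steps, n_paths, payoff, poly_order=2, seed=0):
    """Minimal Longstaff-Schwartz sketch: simulate GBM paths, then work backwards,
    regressing the continuation value on the asset price at each exercise date."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate geometric Brownian motion paths (rows: paths, columns: time steps)
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), np.cumsum(log_increments, axis=1)]))

    # Cash flows initialised to the payoff at expiry
    cashflow = payoff(S[:, -1])
    # Iterate backwards through the earlier exercise dates
    for i in range(n_steps - 1, 0, -1):
        cashflow *= np.exp(-r * dt)                 # discount one step
        exercise = payoff(S[:, i])
        itm = exercise > 0                          # regress only on in-the-money paths
        if itm.sum() > poly_order + 1:
            coeffs = np.polyfit(S[itm, i], cashflow[itm], poly_order)
            continuation = np.polyval(coeffs, S[itm, i])
            exercise_now = exercise[itm] > continuation
            idx = np.where(itm)[0][exercise_now]
            cashflow[idx] = exercise[idx]           # exercising replaces the future cash flow
    return np.exp(-r * dt) * cashflow.mean()

# Example usage with an illustrative American put payoff
price = longstaff_schwartz(S0=100, r=0.05, sigma=0.2, T=1.0, n_steps=50, n_paths=50_000,
                           payoff=lambda s: np.maximum(100 - s, 0.0))
print(round(price, 3))
```

Any payoff function can be substituted, and the regression step is exactly where the choices discussed below (polynomial order, or a non-parametric alternative) come in.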

Polynomial fitting has a number of drawbacks. Firstly, a polynomial may not be the appropriate choice for some data sets. Secondly, you have to decide in advance what order polynomial to choose. If the order is too low, the function will not be able to fit the data. If the order is too high, it will suffer from overfitting.

Another approach is to use non-parametric regression methods. For example, local linear regression fits a polynomial only locally, using only points within some “bandwidth” to do the fitting. The fitting can also be weighted so that points further away contribute less. This is the method used to generate the graph above. However, choosing an appropriate bandwidth raises an issue similar to choosing the polynomial order. If the bandwidth is too large, you will miss important features of the function. Too small, and you will overfit.
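As a sketch of the idea, the routine below performs a kernel-weighted local linear fit. The Gaussian kernel and the way the bandwidth enters are illustrative choices on our part; other kernels and bandwidth rules are equally valid.

```python
import numpy as np

def local_linear_fit(x_train, y_train, x_eval, bandwidth):
    """Kernel-weighted local linear regression: at each evaluation point,
    fit a weighted straight line using mainly the nearby observations."""
    x_eval = np.asarray(x_eval, dtype=float)
    y_eval = np.empty_like(x_eval)
    for j, x0 in enumerate(x_eval):
        # Gaussian weights: points further than roughly one bandwidth contribute little
        w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(x_train), x_train - x0])
        # Weighted least squares for the local intercept and slope at x0
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
        y_eval[j] = beta[0]   # the fitted value at x0 is the local intercept
    return y_eval
```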

The efficacy of different function fitting techniques, particularly non-parametric techniques, is an interesting question, and in considering it one would examine both the accuracy and the computation time of these methods. But as we’ll see in this article, in considering this one is led to an even more interesting question – do we need to fit a function at all? Can we simply use optimization and machine learning techniques to determine when it is optimal to exercise?

To see why, let’s start by placing the value of exercising and our fitted function representing the value of continuing on the same graph.

Remember that the only piece of information used in pricing is whether we are going to exercise at this time step. This in turn depends only on whether the fitted function, representing the expected value of continuing, is above or below the exercise value. In the above graph, if the asset has price around 100, it is optimal to continue (not exercise). But once the asset price passes about 122, a slightly higher average payoff comes from exercising now rather than continuing. Thus, if we know that the value of continuing is above the value of exercising on the left hand side, and “crosses over” at about 122, then we have all the information we need to price the option. The exact shape of the blue fitted function is entirely irrelevant beyond that.

This raises a very interesting question. Do we need to fit a function at all?

In the graph below, we introduce a new function labelled “linear fit”. This is a straight line that has been determined using an optimization method. Precisely, for our candidate function \(f(x)\), we have maximised the following objective, where the sum runs over all paths \(p_i = (x_i,y_i)\):

\[V(f) = \sum_i \begin{cases} y_i, & \text{if } f(x_i) > E(x_i)\\ E(x_i), & \text{if } f(x_i) \leq E(x_i) \end{cases}\]

Here, \(E(x)\) is the function representing the value of exercising for asset price \(x\).

Actually, we first did a linear regression to provide a rough initial point for the optimization procedure. But this line has been fitted by optimization, not by regression. Note that the optimization procedure has succeeded in finding the exact crossover point at 122 found by the much more sophisticated non-parametric fitting. And although a poor fit to the data, the linear fit is above and below the exercise value in exactly the right regions, so it makes exercise decisions perfectly.
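A minimal sketch of this optimization, assuming an exercise value function supplied by the caller, might look as follows. The choice of Nelder-Mead is our own; since the objective is piecewise constant in the line’s parameters, any derivative-free or heuristic optimizer could be substituted.

```python
import numpy as np
from scipy.optimize import minimize

def fit_linear_decision(x, y, exercise_value):
    """Fit a straight line f(x) = a + b*x by directly maximising the objective
    V(f) described above, rather than by regression."""
    E = exercise_value(x)

    def neg_V(params):
        a, b = params
        f = a + b * x
        # If the candidate continuation value exceeds the exercise value, we continue
        # and collect the path's known future value y; otherwise we exercise and collect E.
        return -np.sum(np.where(f > E, y, E))

    # An ordinary linear regression gives a rough starting point, as described in the text
    b0, a0 = np.polyfit(x, y, 1)
    # The objective is piecewise constant in (a, b), so use a derivative-free method
    result = minimize(neg_V, x0=[a0, b0], method="Nelder-Mead")
    return result.x  # fitted (intercept, slope)
```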

So it seems like we can dispense with function fitting! The only information we need is the places where the value of continuing “crosses over” the value of exercising (combined with knowing whether it is above or below in at least one region).

It’s clear that our optimization procedure will work splendidly whenever there is only one crossover point. What if there are many crossover points? One option is to use an optimization procedure to find a higher order polynomial to serve as our decision surface, as in the illustration below. We would need a polynomial of order equal to the number of crossovers.

But do we need to bother with a function at all? It does offer one advantage – the ability to use a polynomial regression to generate a rough initial point for the optimization procedure. But the astute reader will notice that what we are looking at here is really a classification problem, not a function fitting problem at all. The information we are really trying to extract is simply this: for a given asset price (or prices), is it optimal to exercise or to continue?

In the higher dimensional case where there is more than one underlying, the cross-over points are not points, but hypersurfaces. In the case of two underlyings for example, the cross-over boundary consists of one or more curves in the plane. Let’s consider what this would look like in the two dimensional case where there are two underlying assets to the derivative:

Here, each red “x” represents a data point where the value of exercising is greater than the value of continuing. Each blue “o” represents a data point where the reverse is true. Our task is to come up with the decision boundary which most accurately “classifies” the points into exercise vs continue. But this is exactly a classification problem from machine learning! In particular, it is the sort of problem one can solve using a support vector machine (SVM).
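As a rough illustration (on synthetic, made-up data standing in for the paths pictured above), a support vector machine can be trained directly on the exercise/continue labels and then used as the decision boundary:

```python
import numpy as np
from sklearn.svm import SVC

# Each row of X holds the two underlying asset prices for one path at this time step;
# label 1 means exercising beats continuing, 0 means the reverse.
# The data here is a toy stand-in for the scatter plot described above.
X = np.random.default_rng(0).uniform(80, 140, size=(500, 2))
labels = (X.sum(axis=1) > 240).astype(int)   # invented exercise region, for illustration only

# A support vector machine learns the exercise/continue decision boundary directly
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, labels)

# The fitted classifier now plays the role of the crossover boundary:
# for new pairs of asset prices we simply ask whether to exercise.
print(clf.predict([[100.0, 105.0], [130.0, 125.0]]))
```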

For the moment, we’ll leave this exploration there. Hopefully the reader has become convinced that there are some exciting possibilities for applying machine learning and optimization methods to the pricing of derivatives using the Longstaff-Schwartz method. Except, since we have dispensed with function fitting entirely, I’m not sure we can call this the Longstaff-Schwartz method anymore! Whatever it’s called, it’s an intriguing new approach to pricing derivatives with early exercise using Monte Carlo. It would be interesting to conduct a study of these new methods and how they compare to a more conventional Longstaff-Schwartz approach.

Interested in developing code to price derivatives with early exercise? We offer derivative pricing consulting services, and a wide range of general quantitative analysis consulting services.

Monte Carlo Risk Model Development

Looking for PhD level Monte Carlo risk models for your business? We design and develop high quality Monte Carlo models in languages like python, C++ and VBA. Contact us to learn how we can help your business.

The key concept behind the Monte Carlo method is to model risk or profit by generating a very large number of future paths or outcomes. It is a conceptually simple, flexible and very powerful approach to solving mathematical problems which would otherwise be very difficult or impossible to handle analytically. And with the power of modern computers, it’s pragmatic as well.

In the financial services industry, Monte Carlo risk models have extensive applications including:

  • Market risk simulation of value at risk, calibrated to historical market data
  • Operational risk calculations, in particular aggregating lognormal distributions to calculate diversification benefit
  • Modelling and optimizing portfolio returns
  • Pricing exotic financial products and derivatives

However, risk is something that must be estimated and managed across a wide range of industries. Monte Carlo risk models are also used in other sectors including telecommunications, electricity, and oil and gas.

When combining distributions, there is typically no manageable closed-form formula for the result. This situation frequently arises when trying to aggregate multiple risks which are not normally distributed. In other situations, the relevant distributions are not known at all and must be simulated from scratch. Monte Carlo risk models are the ideal approach for all but the simplest models.
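As a small illustration of the idea, the sketch below aggregates two correlated lognormal risk cells by simulation and reports the diversification benefit at a chosen quantile. All numbers, parameter names and the correlation structure are illustrative assumptions, not a calibrated model.

```python
import numpy as np

def aggregate_lognormal_risks(mus, sigmas, corr, n_sims=1_000_000, q=0.999, seed=0):
    """Monte Carlo aggregation of correlated lognormal risks. Returns the aggregate
    quantile and the diversification benefit relative to adding standalone quantiles."""
    rng = np.random.default_rng(seed)
    # Correlated standard normals via Cholesky, then exponentiate to get lognormal losses
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_sims, len(mus))) @ L.T
    losses = np.exp(np.asarray(mus) + np.asarray(sigmas) * z)

    aggregate_var = np.quantile(losses.sum(axis=1), q)
    standalone_var = np.quantile(losses, q, axis=0).sum()
    return aggregate_var, standalone_var - aggregate_var

# Illustrative numbers only: two risk cells with 30% correlation
corr = np.array([[1.0, 0.3], [0.3, 1.0]])
agg, benefit = aggregate_lognormal_risks(mus=[0.0, 0.5], sigmas=[1.0, 0.8], corr=corr)
print(round(agg, 2), round(benefit, 2))
```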

In addition to developing custom-built Monte Carlo risk models, we provide a wide range of cloud-based quant consulting services, market risk advisory, and more generally, diverse mathematical modelling, research and software development services.

Are you interested in developing a Monte Carlo model to estimate risk for your business? Our mathematicians will provide you with an industry-leading solution. Contact us today.

Sensor Fusion Techniques – How to Combine the Output of Multiple Sensors

In many sensor applications there is a network of sensors observing the same phenomenon. The sensors may completely duplicate each other, or they may overlap only partially, perhaps observing different angles or aspects of the situation. They may be identical sensors or they could be sensors of entirely different types. Multiple different sensor types may be required as they may be more effective at different speeds, distances or light levels, all of which may vary throughout the observation period. The addition of multiple sensors can create a more accurate and complete picture, but their output also needs to be integrated somehow and reconciled where they disagree.

When creating algorithms to process sensor data from a network of multiple sensors, some complex mathematics can be required to “fuse” the readings together into a single coherent picture. Since modern sensing systems can generate vast amounts of data, efficient fusion algorithms are a necessity to quickly combine and condense the data into manageable quantities.

When sensors are being used to influence the decision making of an automated system, deciding which sensors should be given priority and how discrepancies should be resolved can be a matter of critical importance. When the machine controls a manufacturing production line or an aeroplane’s angle, a sub-optimal algorithm that fails to manage its many inputs effectively could mean lost money or lost lives.

Sensor fusion can be “competitive”, where sensors present potentially differing measurements of the same quantity, or “cooperative”, where the sensors work together to build up a complete picture.

The sensing systems used to make decisions in modern technology leave little room for ineffective algorithms that can’t balance multiple sensors, or handle conflicting, erroneous or unexpected sensor input.

Sensor fusion for identical sensors making identical measurements

Let’s look first at the simplest case of sensor fusion, which occurs when multiple identical sensors have made the same measurement (this situation could also result from the same sensor making a measurement multiple times). There are two reasons why such a setup is desirable. Firstly, since sensors always have some finite accuracy limitation, multiple measurements are likely to differ due to an element of random noise. By taking multiple measurements, you can average out some of the noise. This may be preferable to developing or purchasing more accurate (and more expensive) sensors. The second reason for fusing together multiple identical measurements is redundancy, to handle sensor errors or sensor failure. When an aeroplane is making decisions based on the readings from a number of sensors, it is critical that it be able to identify and discard the readings from an erroneous sensor.

The simplest way to combine the output from the sensors in this case is to take the average of their measurements. Going a little further, if some of the sensors are known to be more accurate than others (as measured by the variance in their outputs), you can give more weight to the more accurate sensors in a mathematically precise way. At further levels of sophistication, one can use Kalman filters or Bayesian statistics to combine these measurements.
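For independent, zero-mean measurement noise, one standard way to make this weighting precise is inverse-variance weighting, where each sensor’s reading is weighted by the reciprocal of its variance. A minimal sketch:

```python
import numpy as np

def fuse_measurements(values, variances):
    """Fuse several estimates of the same quantity by inverse-variance weighting:
    more accurate sensors (smaller variance) receive more weight."""
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * values) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)   # the fused estimate is tighter than any single sensor
    return fused, fused_variance

# Example: three sensors reading the same temperature with different accuracies
print(fuse_measurements([20.1, 19.8, 20.5], [0.04, 0.09, 0.25]))
```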

More problematic is that, in practice, sensor readings don’t just suffer from statistical noise: sensors can sometimes fail in ways that cause them to give completely random readings, or no reading at all. Thus, it is wise to implement algorithms that can recognise and discard outliers before feeding the data into your sensor fusion algorithm. Fault tolerance is a key motivation for multi-sensor systems in applications that require robustness.

For some applications, it may be known that the data ought to obey some functional relationship, such as a straight line or an exponential. In this case the averaging is effectively carried out by assuming this functional form and performing a function fit (i.e. regression). Erroneous readings and sensor errors can then be eliminated using algorithms like RANSAC. Furthermore, principal component analysis can be used to discard irrelevant measurements and simplify the data.
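As an illustration of the RANSAC idea on synthetic, made-up readings (using scikit-learn’s RANSACRegressor; the noise levels and injected outliers are our own assumptions):

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

# Synthetic sensor readings that should lie on a straight line, plus a few gross errors
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100).reshape(-1, 1)
y = 2.0 * x.ravel() + 1.0 + rng.normal(0, 0.2, 100)
y[::10] += rng.uniform(5, 15, 10)            # inject completely erroneous readings

# RANSAC repeatedly fits on random subsets and keeps the model with the most inliers,
# so the erroneous readings are identified and discarded automatically
model = RANSACRegressor(residual_threshold=1.0)
model.fit(x, y)
print(model.estimator_.coef_, model.estimator_.intercept_)
print("inliers kept:", model.inlier_mask_.sum(), "of", len(y))
```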

Sensor fusion for disparate sensor types or sensors making different measurements

Multiple sensors of the same type may be set up in a network to observe different angles or aspects of a situation. A simple example of this would be sensors each observing a different part of the scene, with possibly some degree of overlap. Where the sensor domains overlap, their measurements might be fused using the techniques mentioned in the previous section.

A more interesting situation occurs when sensors are set up in a way that means their readings are not directly comparable. For example, a network of sensors might be observing a scene from different angles. In this case the sensors will sometimes compete with each other, and sometimes cooperate to provide additional information that other sensors can’t see. An effective algorithm must decide which is appropriate at any given time.

Mathematical transformations on sensor data

When sensors are making measurements that are not directly equivalent (such as different angles of the same scene), their outputs cannot be immediately integrated. A similar situation occurs with sensors of different types that measure different physical properties. The sensor outputs are not estimates of the same quantities, but may imply estimates of the same quantities after mathematical calculations are performed on them. In this situation, mathematical transformations must be performed to extract the desired quantities before fusion can occur. Furthermore, their uncertainties cannot be directly compared either. Rather, the sensor uncertainties must be processed through the same calculations to determine the error in the quantities of interest. This leads to some interesting statistics when trying to fuse together the outputs of a diverse sensing system.
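As a simple sketch of this kind of transformation, consider a hypothetical range-and-bearing sensor whose output we want in Cartesian coordinates. A first-order (Jacobian-based) propagation of the measurement uncertainties, one common approach among several, might look like this:

```python
import numpy as np

def polar_to_cartesian_with_uncertainty(r, theta, var_r, var_theta):
    """Transform a range/bearing measurement into (x, y) and propagate the sensor
    uncertainties through the same transformation (first-order 'delta method':
    the covariance is mapped through the Jacobian of the transformation)."""
    x, y = r * np.cos(theta), r * np.sin(theta)
    # Jacobian of (x, y) with respect to (r, theta)
    J = np.array([[np.cos(theta), -r * np.sin(theta)],
                  [np.sin(theta),  r * np.cos(theta)]])
    cov_polar = np.diag([var_r, var_theta])
    cov_xy = J @ cov_polar @ J.T
    return (x, y), cov_xy

# Example: a target 50 m away at a bearing of 30 degrees, with illustrative noise levels
xy, cov = polar_to_cartesian_with_uncertainty(50.0, np.radians(30),
                                              var_r=0.5**2, var_theta=np.radians(1)**2)
print(xy)
print(cov)
```

Only after both sensors’ outputs (and their covariances) have been mapped into a common set of quantities like this can the fusion techniques of the previous sections be applied.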

Sensor fusion consulting services

Multiple-sensor systems are how machines observe the world. As automated devices like self-driving cars become more common, sophisticated sensing systems and the algorithms that drive them will become more mainstream.

We design sensor fusion algorithms for scientists and engineers. We also offer many other sensor data analysis consulting services. See our main article here.

Manufacturing algorithms and industrial process data science consulting

Algorithms are being used across a wide range of industries to optimize and monitor manufacturing and industrial processes. We develop algorithms in languages like python and C++, custom-made from scratch for your business.

While industrial optimization (or “operations research”) has existed for a while, data science and machine learning are now also finding increasing applications in industry. According to the Royal Society, demand for data scientists has more than tripled in five years. Yet, particularly in industrial applications, there is a shortage of people with the necessary skills. This makes our consulting service quite unique.

Industries left and right are being disrupted by the applications of algorithms made possible by fast processors and reams of cheap data. What can algorithms accomplish for your business? Here are a few examples:

  • Optimizing manufacturing processes to minimize cost per unit and maximize product quality. For example, assembly line balancing, which is an application of the mathematical assignment problem (see the sketch after this list). This also includes queuing, scheduling, shipping and supply chain problems.
  • Predictive maintenance – predicting which machine parts are likely to fail and when, to optimize when machine parts are replaced or scheduled for maintenance. A solution must be found which optimizes how a limited maintenance and replacement budget is spent, while also minimizing failure and lost profit due to downtime.
  • Predicting which units are faulty early in a production line rather than late, to reduce wastage
  • Optimizing network designs to reduce costs and minimize the probability of network failure
  • Algorithms to analyse sensor data gathered from industrial processes, including multiple sensor fusion and compensating for erroneous or incomplete data.
  • Data science and machine learning techniques can be used to create algorithms that adjust themselves based on data gathered from machinery in real time.
  • See also our page on algorithms for business strategy.
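To give a flavour of the optimization involved, the sketch below solves a tiny instance of the assignment problem mentioned in the first bullet, using SciPy’s Hungarian-style solver. The cost numbers are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative cost matrix: cost[i, j] is the time (or cost) for worker/station i
# to perform task j. The numbers are made up for the example.
cost = np.array([[4.0, 2.0, 8.0],
                 [4.0, 3.0, 7.0],
                 [3.0, 1.0, 6.0]])

# Find the assignment of tasks to stations that minimises the total cost
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
```

Real assembly line balancing problems add capacity, precedence and scheduling constraints on top of this basic structure, but the same optimization mindset applies.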


The astonishing complexity of modern industrial facilities means there is a lot of money to be saved through automation, optimization and data science techniques.

Just like with the game of chess, algorithms can be either completely autonomous, or merely provide information to augment the decision making capability of human operators. Automating decisions not only allows for the more efficient use of human labor, but can in various ways improve upon human decision making. After all, an algorithm can process large amounts of relevant data, monitor the factory continuously and react more quickly.

Machine learning algorithms are in vogue due to their ability to detect patterns and relationships in data, and to automate human decision making. But not all effective algorithms need machine learning techniques.

The impact of algorithms on every aspect of the economy is only going to grow. Algorithms are taking over the world.

We develop manufacturing algorithms and provide consulting services for all kinds of industrial data science. Interested in leveraging algorithms and data science to take your industry into the digital age? Drop us a message.

Business algorithms and data-based decision making consulting services

We offer business algorithm consulting services such as:

  • Data science techniques to analyse your data and extract business value
  • Decision making algorithms in languages like python and C++
  • Algorithms to automate processes in your business, including machine learning and AI
  • Optimization algorithms to optimize business processes

Algorithmic business is the approach of using mathematical algorithms to make decisions or optimize business activities. Most companies have already recognised the importance of investing in data-driven business algorithms. Forbes has called it the golden age of algorithms. See also Fortune’s article The Algorithm CEO. In this digital era of big data and fast computers, we are already seeing the impact of algorithms on business decision making. However, there is much more to come.

You’ve no doubt heard of examples like product recommendations and advert targeting, demand-based (dynamic) pricing, and predictive forecasting. But the scope of business algorithms extends far beyond these.

In fact, business algorithms can be used in any situation where data contains relevant information, but they are particularly useful where machines can leverage the data in ways humans cannot. Algorithms can make decisions within a fraction of a second, which is important for time-critical applications such as stock trading. The ability to act on data in real time, rather than wait days or weeks while human eyes analyse the information, can be highly advantageous for businesses. Algorithms are also capable of analysing and integrating a vast array of disparate data sources, which is simply beyond the capability of a human decision maker. And machine learning algorithms can discover relationships within data that humans wouldn’t suspect.

The process typically begins by noting all the data you have available to you, or which additional data sources you need to obtain. Then, mathematicians can get to work and, through data science, develop an algorithm which outputs information of business value. Algorithms can be used to forecast the future, anticipate where faults are most likely to develop in a network or manufacturing process, predict which patients are most likely to develop a certain disease, automate processes that previously required human labor, find mathematically optimal solutions for resource deployment, match business operations to future demand, and much more.

In particular, industrial algorithms are extremely important in configuring and optimizing manufacturing and logistics operations. See our page on machine learning. To learn more about the value that an algorithm and automation consulting service can bring to your business, see how algorithm consulting is taking over the world.

So, you’re a business owner who has heard stories about how your competitors are leveraging data and algorithms, and you want to get on board. Maybe you have a specific idea you want to discuss, or you’re just broadly interested in the possibilities. Either way, feel free to drop us a message to get the conversation started.

Sensor data analysis and sensor algorithm consulting services

Our firm offers sensor data consulting services such as:

  • Writing algorithms in languages like python, C++ and Matlab to process data collected from sensors
  • Function fitting techniques to generate smooth functions from discrete sensor observations
  • Mathematical techniques to compensate for noisy data
  • Sensor fusion – integrating the output from multiple sensors
  • Machine learning techniques on sensor data
  • Inertial Measurement Unit (IMU) and GPS data processing
  • Algorithms for 3D reconstruction of faces and objects
  • Mathematics and software development for sensor systems involving global positioning systems (GPS).

A spectral sensor, useful in remote sensing

The development of cheap wireless sensors and mobile devices is causing an explosion in the ability of scientists and engineers to gather huge volumes of data. At the same time, our society is steamrolling towards artificial intelligence and automation. Since almost any autonomous device must have a means of gathering data on which to act, there is a close relationship between automation and sensor analytics. Since the scientists and engineers who gather this data are usually not mathematicians or data scientists, this rapid growth has created a shortage of people with the expertise to develop mathematical algorithms to process all this data and extract useful conclusions.

What’s surprising is how quickly one runs into difficult mathematical problems when dealing with sensor data, even with relatively simple sensors. Mathematical transformations are often required to convert the raw data into the relevant quantities. Since sensors often return partial or imperfect data, remarkably complex algorithms must be developed to determine which data points are likely erroneous, and to fill in the gaps. Random noise in the data can necessitate sophisticated 2D or 3D function fitting algorithms. Difficult statistical problems arise in estimating the confidence intervals in the final metrics, such as key features of functions that have been fitted to noisy data.

Complex function fitting algorithms can be required to estimate key values from sensor data and determine the confidence intervals in those estimates
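For instance, here is a minimal sketch of fitting an assumed functional form to noisy readings and extracting rough confidence intervals for the fitted parameters. The exponential model, noise levels and parameter names are illustrative assumptions, not a recommendation for any particular sensor.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical noisy sensor readings assumed to follow an exponential decay
def model(t, a, k):
    return a * np.exp(-k * t)

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 60)
readings = model(t, 3.0, 0.7) + rng.normal(0, 0.1, t.size)

# Fit the curve, then use the parameter covariance for rough 95% confidence intervals
params, cov = curve_fit(model, t, readings, p0=[1.0, 1.0])
stderr = np.sqrt(np.diag(cov))
for name, p, s in zip(["a", "k"], params, stderr):
    print(f"{name} = {p:.3f} +/- {1.96 * s:.3f}")
```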

Customers utilizing a sensing device would prefer not to have to wait several seconds to see the output metrics. In fact, since storing the massive amount of data generated by a continuously active sensor is impractical, algorithms to process the data may even need to run in real time. This creates the challenge of writing fast, efficient algorithms, since even simple sensors can require algorithms that are computationally intensive. Since mobile devices like smartphones and iPads are often used to receive and process sensor data, the algorithms may need to complete quickly using only the computational resources of a mobile device.

The problem becomes even more interesting when you have multiple sensors. A network of similar and disparate sensor types whose data ranges overlap requires some interesting mathematics and statistics in order to combine the data sets into a single coherent picture. This is known as sensor fusion or data fusion.

Sometimes, the person designing the sensor system does not themselves know how to interpret the output of the sensor system. For example, if a network of sensors is monitoring different aspects of complicated machinery, which sensor outputs might indicate that some part of the machine is likely to fail and requires maintenance? This is where machine learning comes in. The mathematics of machine learning allows us to analyse historical data and find hidden features in the data that reliably predict a given outcome. Sometimes, these relationships may not be easily discernible by a human operator. Machine learning marries beautifully with sensor data analysis and has the potential to lead to engineering outcomes that are both effective and mathematically elegant. Check out this page on machine learning for remote sensing applications.
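As a toy sketch of this idea, with synthetic data standing in for historical sensor features and failure labels (every value and feature name below is invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical data: each row holds summary features computed from a machine's
# sensors (e.g. vibration, temperature, current draw), and the label records whether that
# part failed soon afterwards. The values here are synthetic.
rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 3))
y = (0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 2000) > 1.0).astype(int)

# Train a classifier on historical outcomes, then check it on held-out data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```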

The proliferation of cheap sensors and computing technology is causing the rapid growth of sensor technologies in countless fields, including medical research, biomedical device development, geology, landscape mapping, manufacturing, and oil and gas industries. This is creating a strong demand for sensor data analysis and sensor algorithm development services.

At Genius Mathematics Consultants, we’re as excited about the interesting mathematics of sensor data analysis as we are about making it work for your business. There are few things more satisfying than solving interesting mathematics problems and seeing the result succeed commercially. Are you a scientist or engineer developing sensing technologies? We’re often told that with increasing specialisation, interdisciplinary collaboration is the future of research.

How algorithm consulting is taking over the world

The title is, of course, a play on the phrase, “Algorithms are taking over the world”.

But what makes an algorithm consultant more valuable to you than any other kind of consultant?

We all know that algorithms are now doing many jobs that used to be done by people, with the speed of this transformation increasing. We also know that in some cases algorithms not only replace, but exceed the capabilities of human workers. According to an analysis by management consulting firm McKinsey, about half of the activities currently carried out by human workers are susceptible to automation, and 15% of the global workforce could be displaced through automation by 2030. A report by the World Economic Forum has described it as the fourth industrial revolution.

But more jobs than those lost will be changed as algorithms complement human labour. Firms urgently need algorithm consultants to not just assist them in developing algorithms, but to retrain staff in their use.

From a finance perspective, consider that upwards of 80% of US stock trading is now done using algorithms. An algorithm can monitor such a huge volume of data and execute so rapidly that it renders direct human decision making obsolete. Instead, in algorithmic trading the human task is to design and monitor the algorithms.

OK, so we’re all familiar with the Google Home assistant. We’ve heard about medical diagnostic algorithms which can identify diseases with higher reliability than trained medical professionals. We’ve heard about facial recognition technology that can identify people from security cameras. And we know how algorithms are increasingly being used to automate business decisions. We know how important algorithms, sometimes called machine learning or artificial intelligence, are to the modern economy.

But what are the challenges?

Algorithms are often highly mathematical. They typically need to correctly analyse large amounts of imperfect data from several disparate sources, and integrate them to reach a decision (a process known as data fusion). Designing an algorithm that responds correctly in each situation isn’t easy, and often involves PhD level mathematics.

Machine learning algorithms are all the rage these days. But machine learning algorithms begin with a human expert specifying the form the algorithm will take. Machine learning techniques then use data to optimise the parameters of a model which has been specified and constructed by a human beforehand. And this is to say nothing of the propensity of machine learning techniques to learn spurious relationships. It’s important to understand that algorithm design still very much requires human experts, and it will be a very long time before AI advances to the point that this changes.

We believe our algorithm consulting service is ideally situated to move your business into the future. We’re passionate about collaborating with science and industry on research and development projects. To get the conversation started, don’t hesitate to contact us.

How math consultants can collaborate with your industry to drive innovation

As our world becomes more complex, fields of expertise are becoming more and more specialised. As knowledge grows, the amount any one person can be an expert in gets smaller. This means that now, more than ever, collaboration is the name of the game. You might like to take a look at Sciencemag’s article on successful collaboration, or the European Science Foundation’s publication on mathematics in industry.

There is no shortage of success stories demonstrating the applicability of mathematics to science and industry. As one of the most fundamental and abstract subjects, mathematics may be without peer in the broadness of its applications. Perhaps for no other discipline is it so important to form lines of communication and collaboration with other experts. And as the world becomes increasingly technically sophisticated, this fact will become ever more true. It’s important for both sides to determine a strategy through which mathematicians and other experts can strengthen their interactions to prepare for our highly technological future.

It sometimes doesn’t occur to people that their industry should seek the expertise of mathematicians. Sometimes, this is because mathematics in their industry is working its magic under a different name or job title, such as engineer. They may not even be aware that there is such a thing as a math consultant, and are therefore unlikely to seek one out. The completion of a PhD in a field of mathematics not only qualifies the holder within that field, but testifies to an ability to think creatively and solve difficult problems. Ironically, these skills make mathematicians the ideal consultants – even for problems that are not explicitly mathematical. They lend themselves so well to applied research and development tasks that the concept of a math consulting firm should be part of the lingo.

Encouragingly, surveys have found that managers express enthusiasm for collaborating with mathematicians, and genuinely believe that mathematics can provide them with a competitive edge. Yet, their own lack of familiarity with mathematics can make it difficult for them to drive the interaction from their side, and to generate project ideas.

So how can math consultants and other professionals improve collaboration?

  • Both sides should work to build professional connections with each other, even before any possible collaborative projects are apparent to either side. These connections may yield unforeseen fruit in the future.
  • Scientists, engineers and other professionals should discuss with mathematicians problems they are working on or facing in their fields. After all, you never know what value they may be able to add if they were only aware of the problem!
  • Since other professionals may not know enough about what mathematicians do to realise when they are needed, math consultants may need to “take it to them”. Mathematicians need to invest time in learning about the work being done in science and industry, and develop their own project proposals to present to industry leaders.
  • Many businesses and teams do not include a mathematician who would have the skills to solve difficult quantitative problems that arise in their field. Historically, this made it difficult for collaboration to occur. And increasing specialisation means teams would get larger and larger if they needed a permanent team member for every area of expertise that arises. Fortunately, the internet has created an unprecedented flexibility for working, which means that you needn’t employ a full-time mathematician to reap the benefits. The expertise you need is only a few clicks away. Industry professionals should embrace online consulting as a convenient and cost-effective way to tap into the expertise of mathematicians.