Big data and artificial intelligence – the future of commodity market analysis

“Computers are starting to open their eyes.” – Jeff Dean, Google

AI colloquially refers to machines exhibiting intelligent cognitive behavior such as learning and complex problem solving. 2015 was a big year for AI, and 2016 continues the trend, with big players such as IBM, Facebook, Google, Apple and Tesla making headlines. Today computers can perform medical diagnostics, recognize human speech, interpret body language, create art, drive cars, comprehend abstract concepts and even perform high-level strategic work. In many of these tasks they already perform on par with or better than humans. Think about how far we have come, and let that sink in for a moment.

A new wave of AI

Research is not yet unified around a common AI paradigm and takes many different approaches inspired by biology, evolution, statistics and more. The end goal is general intelligence, and the closest we get today is deep neural networks that can be adapted to different tasks with minor changes and can even perform unsupervised learning. Deep learning is a branch of machine learning (ML), the most promising subfield of AI. Surprisingly, many of today’s breakthroughs are not based on new ideas, but rather on new implementations of old algorithms; some were invented as early as the 1940s, when the first artificial neuron was conceived. So if today’s algorithms are not new, what is driving the recent surge in AI? It is the result of many factors, the most important being cheap computational power, the availability of data and open-source frameworks such as Google’s TensorFlow and Theano. Already today, you can put together a self-driving car by following simple online tutorials with no more than a couple hundred lines of code, a weekend to spare and a few hundred dollars. To be fair, that price would only get you a toy car, but the point is that you no longer need millions of dollars, advanced equipment and a PhD in computer science to work with AI.

Today you can build and train a self-driving car with limited knowledge and a few hundred dollars (Source: Fossbytes.com)

Financial markets

Financial markets are quick to adapt. Big, successful quantitative hedge funds like Renaissance Technologies and Two Sigma have been using machine learning techniques for years. These algorithms might not be very far from what traditional analysts have in their toolbox; some use techniques inspired by evolution, others advanced search and optimization models. But simple is often better, and the main difference lies in size, complexity and the speed at which they are able to perform calculations and optimize trading strategies. The beauty of these trading strategies is not the calculations themselves, but rather the ingenious use of big, unstructured data. With recent developments in AI and ever more accessible data, the possibilities are growing at an incredible speed. The industry is picking up on the trend, and since 2015 we have seen a surge of investments in more advanced AI and big data analytics. Take a look at companies such as Sentient, Rebellion Research and Aidyia if you are interested in learning more.

 “Artificial intelligence software solutions will likely be the top disruptor in technology in the next decade (…) Data-intensive industries such as financial services and those using the internet may be among the first disrupted by artificial intelligence.” – Bloomberg Intelligence

Commodity markets

More exotic markets like the electricity market are different: prices are deeply rooted in real physical events, liquidity is lower, and some companies may have a competitive advantage when it comes to information itself. The forecasting tool of the trade is stochastic dynamic programming and related optimization algorithms. They do a very good job because they simulate the real physical processes that define the market: everything from water reservoirs, inflow and grid losses to power plant unavailability, consumption and other events. If we were to replace this directly with AI tomorrow, we would likely get strange results that make no sense at all. There are many reasons why we don’t see rapid development and heavy investments in AI in these sectors. The most important are listed below (a simplified sketch of the incumbent approach follows after the list):

  1. Many fundamental drivers and complex relations require extreme amounts of input data – the curse of dimensionality
  2. Risk of finding complex causality that does not make sense in the real world – risk of overfitting
  3. Difficult to directly interpret relations in the model – a black box model
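
To make the contrast concrete, here is a deliberately tiny, purely illustrative sketch of the incumbent approach: backward stochastic dynamic programming for a single hydro reservoir. Every number, stage and scenario below is made up for illustration, and real production models are of course vastly more detailed.

```python
# Toy sketch of backward stochastic dynamic programming for one hydro reservoir.
# All figures (stages, prices, inflow scenarios, reservoir size) are hypothetical.
import numpy as np

T = 4                                    # planning stages (e.g. weeks)
levels = np.arange(0, 11)                # discretized reservoir levels, 0..10 units
releases = np.arange(0, 4)               # possible release per stage, 0..3 units
prices = [30.0, 50.0, 40.0, 60.0]        # expected power price per stage (EUR/unit)
inflows = np.array([0, 1, 2])            # stochastic inflow scenarios per stage
inflow_prob = np.array([0.3, 0.5, 0.2])  # probabilities of those scenarios

# value[t, l] = expected future profit from stage t onwards at reservoir level l
value = np.zeros((T + 1, len(levels)))

# Backward induction: at each stage and level, choose the release that maximizes
# immediate revenue plus the expected continuation value over inflow scenarios.
for t in reversed(range(T)):
    for l in levels:
        best = -np.inf
        for r in releases:
            if r > l:                    # cannot release more water than is stored
                continue
            next_levels = np.clip(l - r + inflows, 0, levels[-1])
            cont = inflow_prob @ value[t + 1, next_levels]
            best = max(best, prices[t] * r + cont)
        value[t, l] = best if best > -np.inf else 0.0

print("Expected profit from a full reservoir:", value[0, levels[-1]])
```

Even this toy version shows why the approach is attractive: every state and decision maps directly to a physical quantity, so the results are easy to sanity-check against reality.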

As for the second point, it is all too easy to find spurious relationships:

Groundbreaking research has found a way to save the planet from global warming: increase the number of pirates in the world! Unfortunately we make errors like this all the time without realizing it – the risk of overfitting and data mining is not always this obvious.
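
To see just how easily trending series appear related, the small simulation below generates pairs of completely independent random walks – think “number of pirates” and “global temperature” – and counts how often they nevertheless look strongly correlated. All of the data is random; nothing in it is real.

```python
# Spurious correlation demo: two independent random walks often look related.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_trials = 100, 1000
strong = 0
for _ in range(n_trials):
    pirates = np.cumsum(rng.normal(size=n_years))       # independent random walk
    temperature = np.cumsum(rng.normal(size=n_years))    # another, unrelated walk
    if abs(np.corrcoef(pirates, temperature)[0, 1]) > 0.5:
        strong += 1

print(f"{strong / n_trials:.0%} of unrelated pairs show |correlation| > 0.5")
```

Run it and typically a sizable share of the unrelated pairs will show correlations you would never dismiss if you stumbled over them in real data.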

Not that simple

Why should we care about AI if we will just get lost in the curse of dimensionality, end up overfitting data and stare into a black box? Throwing AI out the window would be a hasty decision based on faulty logic. The fact that AI probably shouldn’t replace a complex fundamental market model does not mean that it cannot add value to different data streams. If we dig deeper and think about what we are actually trying to predict, it is not only market prices and optimal hydro power dispatch. We are actually trying to predict a whole range of data, including inputs such as:

  • Consumption forecast
  • Inflow/reservoir forecast
  • Wind production
  • Solar production
  • Production availability
  • Transmission availability
  • Fuel prices
  • Expansion/dismantling events
  • Economic growth, and much more

The list goes on, and the opportunities are ample! The fact that we can’t simulate the electricity market as a whole does not mean that we can’t apply AI to individual subsystems. Take electricity consumption prediction as an example (a small code sketch follows below). ML algorithms such as support vector machines (SVM) and deep neural networks have been predicting consumption better than traditional regression analysis for years. We can do this because there are concrete, logical relationships between parameters such as seasonality, day of week, hour of day, holidays, weather, geographic area, long-term trends, economic activity, etc.

By building a simple model we can avoid the three main problems mentioned above. First, a model with fewer dependent and independent variables avoids the curse of dimensionality. Second, we can perform statistical analysis the same way we build normal regression models, controlling for stationarity, heteroscedasticity, cointegration, normality, etc. Further, it is fairly straightforward to normalize the data and benchmark different models by splitting the data into a training set and a test set for cross-validation. These are all powerful methods to avoid overfitting. For such models it is also possible either to deduce the inner workings of the model directly by studying its parameters, or to use simulation techniques such as Monte Carlo, which makes it possible to avoid the black box phenomenon to a certain extent. The same reasoning applies to most model inputs and parameters if applied with care. The point is that the problems only arise when you try to build an overly complex model and throw in data with a lot of noise, too much data or random data. Von Neumann had a point:

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” – Von Neumann on the risk of overfitting
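
Returning to the consumption example: a minimal sketch of such a model, using scikit-learn, could look something like the snippet below. The file name, column names and hyperparameters are hypothetical placeholders, not a recipe.

```python
# Minimal sketch of a consumption-forecasting model along the lines described
# above: calendar and weather features, scaling, an SVM regressor, and a
# chronological train/test split to guard against overfitting.
# The file "hourly_consumption.csv" with columns "timestamp", "temperature" and
# "consumption" is hypothetical - plug in your own data.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("hourly_consumption.csv", parse_dates=["timestamp"])

# Simple, interpretable features: hour of day, day of week, month, temperature
X = pd.DataFrame({
    "hour": df["timestamp"].dt.hour,
    "weekday": df["timestamp"].dt.dayofweek,
    "month": df["timestamp"].dt.month,
    "temperature": df["temperature"],
})
y = df["consumption"]

# Chronological split: train on the first 80%, test on the most recent 20%
split = int(len(df) * 0.8)
X_train, X_test = X.iloc[:split], X.iloc[split:]
y_train, y_test = y.iloc[:split], y.iloc[split:]

# Normalize features and fit a support vector regressor
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)

print("Test MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```

The chronological train/test split and the small, interpretable feature set are exactly the kind of guardrails discussed above: they keep the model simple enough to benchmark, inspect and trust.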

The way forward

We have seen that it is possible to apply AI to forecast model inputs. There are other possibilities, such as fundamental model calibration, real-time sensitivity analysis and more. Soon we will see a whole new generation of predictive models being applied in stock and commodity markets all over the world. These models will be driven by the ever-growing availability of data and by low-latency automated systems supported by both traditional models and AI. But humans cannot be fully replaced: computers can’t analyse the impact of climate treaties or changes in market design, model new power plants, and so on. Nonetheless, there is little doubt that we will see big changes in the near future. The question is who will take the lead, and who will follow?
