On 7 May 2024, the Bank of England (BoE) published a speech by Jonathan Hall (an external member of the Financial Policy Committee) in which he discusses how developments in Artificial Intelligence (AI) could affect financial stability. The speech is entitled "Monsters in the deep?".

Mr Hall states that AI in general, and deep learning in particular, has the potential to be incredibly powerful. Regulators should try to reduce the downside risks whilst allowing, and enabling, the positive possibilities.

Deep learning is a kind of machine learning, inspired by the human brain, in which neural networks are trained on vast amounts of data. An artificial neural network is an information processing system: it converts inputs into outputs via a series of hidden layers, the nodes of which are connected by weights that are optimised through a training process. The output can take the form of information given to humans or other machines, or of actions. In the case of a deep trading algorithm, the output could be an electronically generated, tradeable order. Whilst deep learning is powerful, it can go wrong through either model failure or model misspecification. The AI literature provides a well-known example of each: the panda/gibbon task, in which an imperceptible perturbation to an image causes a classifier to confidently mislabel a panda as a gibbon (model failure), and the paperclip maximiser, a thought experiment in which a system pursuing an unconstrained objective causes harm precisely because that objective was badly specified (model misspecification).
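To make that structure concrete, the following minimal sketch (not taken from the speech; the layer sizes, activation function and random weights are illustrative assumptions) shows how a small feed-forward network maps an input vector to an output through a series of weighted hidden layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Standard rectified-linear activation applied at each hidden node."""
    return np.maximum(0.0, x)

# Arbitrary layer sizes: 4 inputs -> two hidden layers of 8 nodes -> 1 output.
# In a real system the weights would be optimised during training;
# here they are random placeholders.
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 1))]
biases = [np.zeros(8), np.zeros(8), np.zeros(1)]

def forward(x):
    """Pass an input vector through each weighted layer in turn."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)                   # hidden layers apply a nonlinearity
    return x @ weights[-1] + biases[-1]       # final layer produces the output

signal = forward(np.array([0.2, -1.0, 0.5, 0.1]))  # e.g. a raw trading signal
```

The point of the sketch is simply that the mapping from input to output is determined entirely by the learned weights, which is why the behaviour of such models can be hard to predict or explain.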

In his speech Mr Hall highlights two key themes: (i) the complexity and unpredictability of deep learning models; and (ii) the possibility of collusion or destabilising behaviour arising from an unconstrained profit-maximising function. Together these create performance and regulatory risks for trading firms, and they explain the current caution about using neural networks for trading applications. Mr Hall also warns that the adoption of deep trading algorithms might raise system-wide concerns, either because it could lead to a less resilient and more highly correlated market ecosystem, or because neural networks could learn the value of actively amplifying an external shock. For these reasons, regulators, market participants and AI safety experts should work together to ensure that the behaviour of any future algorithms can be constrained and controlled.

Mr Hall adds that his analysis suggests three main areas of focus going forward:

  • Training, monitoring and control: Any deep trading algorithms will need to be trained extensively, tested in multi-agent sandbox environments and constrained by tightly monitored risk and stop-loss limits (illustrated in the sketch after this list).
  • Alignment with regulations: Any deep trading algorithms must be trained in a way that ensures their behaviour is consistent with the regulatory rule book.
  • Stress testing: New kinds of stress test may be needed. Stress scenarios should be created using adversarial techniques, as managers and regulators cannot rely on neural networks behaving in a smooth manner. Stress tests should be used not just to check performance and solvency, but also to better understand the reaction function of deep trading algorithms. Testing must be ongoing to ensure that the reaction function has not changed due to forgetting or opponent shaping.
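As a purely illustrative sketch of the first point (none of this comes from the speech; the `DeepTradingAlgo` interface, the `Order` type and the limit values are hypothetical), a risk and stop-loss constraint might sit outside the learned model as a wrapper that can veto or halt its orders:

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int   # signed: positive = buy, negative = sell
    price: float

class RiskLimitMonitor:
    """Wraps a (hypothetical) deep trading algorithm and blocks any
    order that would breach a position limit or the stop-loss."""

    def __init__(self, algo, max_position: int, stop_loss: float):
        self.algo = algo
        self.max_position = max_position   # absolute position cap per symbol
        self.stop_loss = stop_loss         # maximum tolerated cumulative loss
        self.position = {}                 # symbol -> signed position
        self.pnl = 0.0                     # realised profit and loss
        self.halted = False

    def record_pnl(self, realised: float) -> None:
        """Update realised P&L and trip the stop-loss if it is breached."""
        self.pnl += realised
        if self.pnl <= -self.stop_loss:
            self.halted = True             # hard stop: no further orders accepted

    def propose(self, market_state) -> Order | None:
        """Ask the algorithm for an order, then veto it if it would
        breach a limit. Returns None when the order is blocked."""
        if self.halted:
            return None
        order = self.algo.generate_order(market_state)  # hypothetical interface
        if order is None:
            return None
        new_position = self.position.get(order.symbol, 0) + order.quantity
        if abs(new_position) > self.max_position:
            return None                    # would breach the position limit
        self.position[order.symbol] = new_position
        return order
```

The design point is that the limit checks live outside the neural network, so they bind whatever the model has learned; this is one simple way of making an otherwise unpredictable reaction function controllable.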