A Confused Fed
Chair Yellen’s June announcement that the United States Federal Reserve (Fed) would keep its target federal funds rate between 0.25 and 0.5 percent marked the fourth consecutive such decision and the continuation of an eight-year regime of near-zero interest rates.
With most members of the Federal Open Market Committee (FOMC) predicting interest rates of between 2.5 and 3.5 percent by 2018, a rate rise in the near future is to be expected. The opinions of individual FOMC members reveal a confused Fed, however: predictions for 2018 given at the March and June 2016 meetings range from 0.5 percent to 4 percent.
The fundamental role of a central bank is the promotion of macroeconomic stability. Given the size and influence of America’s economy, when required rate rises are repeatedly delayed, the effects are large and widespread. The growing success of machine learning in predicting economic trends, particularly at investment management firms, has led some in the academic community to question the reluctance of central banks to deviate from conventional statistical methods.
Help from Computer Science
Presently, no single reductive mechanism exists through which interest rates are set, and with good reason: any fixed rule would quickly be incorporated into market expectations, rendering monetary policy permanently neutral. At best, central banks are seen to loosely observe the Taylor Rule, though John Taylor himself notably describes the rule as a method for “checking where rates should have been” rather than a rule through which rates should be set.
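For concreteness, Taylor’s original 1993 rule can be written down in a few lines. The 2 percent neutral real rate and 2 percent inflation target below are the values from Taylor’s paper; the inputs in the example are illustrative, not actual Fed data.

```python
def taylor_rule(inflation, output_gap, neutral_real_rate=2.0, inflation_target=2.0):
    """Taylor's (1993) rule: the nominal policy rate equals the neutral real
    rate plus inflation, plus half the inflation gap and half the output gap
    (all in percentage points)."""
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Illustrative inputs, not real data: 1% inflation, output 1% below potential.
print(taylor_rule(1.0, -1.0))  # -> 2.0
```

With inflation at target and the output gap closed, the rule returns the familiar 4 percent “neutral” nominal rate (2 percent real plus 2 percent inflation), which is why the rule is a natural benchmark for “checking where rates should have been.”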
Where the goal of conventional econometric techniques is to extract a causal relationship from data, most machine learning algorithms have no underlying model. Predictions arrived at through machine learning therefore often provide little intuitive explanation of the relationship governing the data in question. By way of example, given a data set in (x, y) space, simple linear regression finds the best possible linear relationship of the form Y = b_0 + b_1*X. A machine learning algorithm, by contrast, will gradually ‘learn’ some previously unknown function Y = f(X) based on the ‘shape of the data.’
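As a minimal sketch of this contrast, the following compares a closed-form least-squares fit against a k-nearest-neighbour regressor (one simple model-free learner) on deliberately non-linear data. The data set and parameters are invented for illustration only.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = b0 + b1*x, via the closed-form solution."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return lambda x: b0 + b1 * x

def knn_regress(xs, ys, k=3):
    """k-nearest-neighbour regression: no functional form is assumed; the
    prediction simply averages the k closest observations -- the 'shape of
    the data' does all the work."""
    def predict(x):
        nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x))[:k]
        return sum(ys[i] for i in nearest) / k
    return predict

# A genuinely non-linear relationship: y = x^2.
xs = [float(i) for i in range(-5, 6)]
ys = [x ** 2 for x in xs]

linear = fit_linear(xs, ys)
knn = knn_regress(xs, ys)
print(linear(4.0), knn(4.0))  # the linear fit misses the curvature; k-NN tracks it
```

On this symmetric data set the best linear fit is a flat line (the true y at x = 4 is 16, but the regression predicts 10), while the neighbour-averaging learner lands close to the truth despite never being told the data are quadratic.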
Machine learning algorithms deal more easily with non-linearity and non-normality (linearity and normality being two common assumptions in econometrics), and for this reason frequently yield more accurate predictions than econometric methods. The importance economics places on explicitly defined causal links has led to a general reluctance within the academic community towards non-parametric techniques. As increasingly large quantities of economic data blur the boundary between econometrics and applied statistics, however, some economists have begun to apply machine learning techniques to economic problems.
Possibilities for Machine Learning
In its most extreme incarnation, it is not entirely infeasible that the adoption of machine learning in economics will lead to the complete automation of monetary policy. A defining feature of machine learning is its reliance on large quantities of high-quality data. In fields such as quantitative finance, the availability of such data has already led to the widespread adoption of algorithmic techniques.
While algorithms to determine policy will likely never run unsupervised, in countries such as Hong Kong and Bulgaria, where monetary policy is set by a currency board, it is easy to see how large parts of the monetary authority could become redundant.
More realistically, in a recent interview on EconTalk (09/05/2016), Pedro Domingos of the University of Washington described how, through a technique known as split testing, web developers are already beginning to extract causal understanding from machine-learnt predictions. In marketing, split testing is performed by presenting two variants of the same product to two representative groups at the same time. In the case of machine-learnt predictions, causal links can be determined by performing a variation of split testing on the predictions output by the algorithm.
For economists less comfortable with abandoning a model a priori, Noah Smith’s blog post describes how Hal Varian (Chief Economist at Google), among others, has proposed using machine learning techniques to decide between multiple competing models. Varian links machine learning techniques to experimental economics, in which posited theories are tested on representative groups under experimental conditions.
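This style of model selection can be caricatured in a few lines: score each competing theory by its out-of-sample predictive error and keep the winner. The ‘theories’ and held-out observations below are invented for illustration.

```python
def mse(model, data):
    """Mean squared prediction error of a model over a data set of (x, y) pairs."""
    return sum((y - model(x)) ** 2 for x, y in data) / len(data)

# Two competing 'theories' of the same data-generating process.
theories = {
    "linear":    lambda x: 2 * x,
    "quadratic": lambda x: x ** 2,
}

# Held-out observations the theories were not built on (here, truly y = x^2).
holdout = [(x, x ** 2) for x in (1.5, 2.5, 3.5, 4.5)]

# Out-of-sample accuracy acts as the arbiter between the competing models.
best = min(theories, key=lambda name: mse(theories[name], holdout))
print(best)  # -> quadratic
```

The appeal for economists is that the model chosen this way is still a model, with all its causal interpretability intact; machine learning merely supplies the adjudication procedure.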
Limitations and the Future of Machine Learning
There are fundamental differences between monetary policy and website design. The first is adaptive dynamics, a process similar to the ‘observer effect’ in physics, whereby observing a system changes the system. In setting monetary policy, the very fact that there is an ‘entity with agency’ setting interest rates must be controlled for. It is hard, though not impossible, to envisage an algorithm capable of controlling for this.
Bill Dudley of the Federal Reserve Bank of New York has often stated that the greatest failing of the Fed’s models was their inability to account adequately for financial markets. Though computerisation has reduced its extent, a large part of market movement is due to sentiment (or pure psychology). The inability of computers to either replicate or understand the irrationalities of human behaviour thus presents a further obstacle.
If the above problems can be surmounted, machine learning is likely to become a powerful economic tool in the near future.