The idea that, with the right algorithm, the future can be mostly predicted is among the most dangerous examples of human hubris.
This faith in the ideal algorithm underpins initiatives as varied as Karl Marx’s dialectical analysis of history, Soviet-style “Five-Year Plans,” and the self-assured television punditry that covers everything from stock markets to geopolitics.
Readily accessible regression models and Excel spreadsheets now lend the technique an intellectual veneer and the appearance of objective methodology.
The striking failure of these “forecasting models” to foresee the most significant turns in recent history, such as the fall of the Soviet Union, the global financial crisis of 2007–2008, and the Covid-19 pandemic, has had remarkably little impact on this hubris.
When a model (or expert) does succeed once in a while, the performance is seldom repeated. Occasionally, even monkeys throwing darts hit the target.
This human hubris is poised to get a frightening new tool in the form of artificial intelligence (AI). Its capacity to sift through vast volumes of data and offer highly targeted recommendations will instill in consultants, social scientists (particularly economists), and other readers of tarot cards a completely unwarranted confidence that they can finally foresee the course of the universe.
The possibility that extensive data mining could sometimes produce real insights into how the world works will serve to justify this hubris.
The financial industry is already abuzz with AI’s capacity to mine data. Fintech companies and even conventional banks are intrigued by the notion that the ideal loan decision can be made automatically once computers have thoroughly mined massive amounts of data.
Some of the risks are readily apparent. First, the entire strategy is predicated on data from the past, whereas the central task of financial risk management is to be resilient to unforeseen, random shocks in the future.
AI can be a useful tool, but there is a risk that financiers will become dependent on it and lose sight of fundamentals, as they did with sophisticated derivatives in the run-up to the 2007–2008 financial crisis.
Second, if multiple AI models are fed comparable data sets, they will probably reach similar conclusions. The result will be self-reinforcing cycles that generate blind spots, excessive concentration, and systemic groupthink.
We can all observe how social media algorithms produce exactly this effect. Fintech algorithms will likely suffer the same fate.
Third, because the emphasis is on extracting information from pre-existing databases, new technologies and market niches may be starved of funding simply because they lack a historical performance “track record.” Data mining works well in stationary fields, where structural relationships are broadly static and there are no structural breaks. Paradoxically, the increased use of AI may therefore discourage innovation in more dynamic domains.
This piece does not aim to discourage the use of AI-based analysis, which is a valuable tool. It aims, rather, to highlight that its potential is not as great as its supporters claim.
AI-based analytical tools risk being used indiscriminately in many contexts: academia, banking, investing, and policy-making. Given their emergent characteristics, AI-based systems may even “grow” into new domains on their own.
AI would then quickly turn into a Black-Scholes pro-max: a super-charged version of the mathematical derivatives-pricing model that contributed to the global financial crisis of 2007–2008.
Our world is complex. To deal with it, we must acknowledge that it is inherently unpredictable and non-deterministic. One can make an informed prediction in the short term, but most forecasting tools perform poorly over the long term.
We can only speculate on potential outcomes and directions in broad terms. For this reason, rather than being purely scientific, fields like finance and economic policy-making have elements of art. This won’t change with fad-driven AI applications.
AI-based systems today resemble conceited children, and their supporters are behaving like indulgent parents. Without early discipline, these systems will grow into aggressive adults who can do a great deal of harm.