What the bear case on AI is missing
TwentyFour
We have had an eventful few weeks of AI-driven volatility in markets, with sentiment seemingly swinging from “everyone’s a winner” to “everyone’s a loser” faster than technological progress itself.
The Citrini Research article published a few days ago, which presented a somewhat dystopian view of the AI threat to white-collar jobs, sparked a fresh sell-off in the software sector and in other sectors potentially caught up in the AI disruption trade. While the situation seems calmer after Anthropic’s comments that it was not looking to exterminate every software company in the world with “agentic” AI, nervousness is still palpable.
This episode highlights a couple of interesting topics. First, financial markets are extremely anxious about the consequences AI might have, not only in the tech sector but in many others as well. This is of course not helped by valuations being stretched after multi-year rallies in some stock prices. A research article that begins “What follows is a scenario, not a prediction…” is usually not cause for blue-chip stocks to sell off by 7% in a day. Uncertainty will remain high in coming quarters and will only begin to dissipate gradually as companies report concrete figures on forward-looking indicators such as new subscriptions, order backlogs and other metrics relevant to their industries. This should in turn lead to more dispersion in returns as winners and losers become easier to tell apart, but the process will take some time.
Second, as many critiques of the Citrini piece have highlighted, it is very difficult to go from a few bearish predictions on unemployment to a general macro equilibrium scenario that is internally consistent – one that traces the broader consequences of those bearish predictions through all relevant macro variables. One example is the labour market. Substituting capital for labour (i.e. reducing the workforce and replacing it with AI) certainly has limits, from operational, legal and economic points of view. At least for the time being, the legal responsibility for fulfilling a contract lies with human beings. An AI cannot enter into a contract to manage an investment fund, for example, which places a limit on the substitutability of capital for labour. From an economic point of view, AI has a cost of its own, which at the moment comprises electricity, R&D and possibly maintenance. To the extent that these marginal costs are higher than those of a human employee, the substitution effect will be limited.
Another example is the potential impact that lower prices for certain goods and services might have on the volumes of goods and services traded in the economy. If prices decline, then real income should increase, which makes it likely that consumption also increases – this would be real GDP growth. Another way of looking at this is that a greater supply of goods and services in real terms must be met with more demand. For a period, companies might accumulate inventories, but eventually the additional goods and services brought about by productivity gains will find their way into one of the components of aggregate demand: private consumption, government consumption, investment or net exports. In other words, if we agree that productivity gains boost the supply of goods and services, then we must also agree that one of the components of aggregate demand will increase to match it. You can’t have a sustained increase in production at the same time as a sustained decline in demand.
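The reasoning above rests on the standard expenditure-side national accounts identity; as a simple illustration (our own notation, not taken from the Citrini article):

```latex
% Expenditure-side GDP accounting identity: output produced
% must equal the sum of the aggregate demand components.
Y = C + G + I + NX
% Y  = real GDP (goods and services supplied)
% C  = private consumption
% G  = government consumption
% I  = investment (including inventory accumulation)
% NX = net exports
```

Because the identity holds by construction, any sustained rise in Y from productivity gains must show up in C, G, I or NX. Inventory build-up is counted within investment, which is why it can absorb extra supply only temporarily.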
One final example of offsetting forces in the economy not considered in the Citrini article is this: if AI takes off as an economically viable technology and companies such as OpenAI start delivering profits, then those profits will come back into the economy via a combination of reinvestment, investment in other areas, or simply consumption by the owners of these companies. All of these are components of the demand side of GDP and would therefore contribute to growth.
For fixed income investors, who do not benefit from the upside of investing in world-changing technologies, there are three main takeaways.
First, the best we can do for now is monitor closely how earnings and business models may or may not be disrupted by AI. At this stage, we think the application of AI technologies is for the most part likely to drive certain costs down. In a reasonably competitive market, this should mean that at least part of these cost savings translates into lower end prices. If companies are slow to adapt, they might be priced out by more efficient competitors and run into financial trouble, which would translate into a significant deterioration in fundamentals for many companies.
Second, it seems that some players in the market might be too concentrated in illiquid, lower-rated or unrated tech credit. We need to monitor the extent to which trouble in these assets might cause a tightening of financial conditions, as investors face losses that sat outside what they considered a reasonable range of outcomes when they invested. Private credit issues will continue to make the front pages where exposures are held in inappropriate vehicles or where concentration in a given sector is too high.
Third, fear levels are running high and there will be contagion into companies and sectors that will in fact be able to navigate the uncertainty ahead. This could create opportunities to pick up undervalued bonds. As always, careful credit analysis and active portfolio management remain critical to delivering attractive risk-adjusted returns over the medium term.