AI's Diminishing Returns: The Slowdown in Model Improvements and Profitability
As the AI industry matures, the rate of improvement in models like ChatGPT is slowing, raising questions about the path to profitability and the future of the technology.
The Facts:
The rate of improvement for AI models, including the most capable ones such as OpenAI's ChatGPT and Google's Gemini, appears to be slowing. Building and running AI systems is incredibly expensive, and although new models keep arriving, it takes a long time for them to have a meaningful impact on how people work. This raises questions about whether AI will become commoditized, whether it can generate meaningful revenue and profit, and whether a new economy is genuinely being born.
The measurable and qualitative improvements in large language models stem primarily from feeding them ever-increasing amounts of data. However, these models have already been trained on more or less the entire internet, and the supply of additional data is limited. Attempts to use "synthetic data" generated by other AIs to train the next generation of models have not been successful, as this can lead to an irreversible, degenerative process called "model collapse." Recursively training generative models on their own content can also produce self-consuming loops that degrade model quality and diversity, a phenomenon called "Model Autophagy Disorder (MAD)."
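The degenerative dynamic is easy to see in miniature. The toy simulation below is an illustrative sketch, not the methodology of the cited research: a Gaussian is repeatedly refitted to samples drawn from its own previous fit, standing in for a model trained purely on its predecessor's synthetic output, and its fitted spread drifts toward zero over generations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: fit a Gaussian to "real" data (standard normal, n samples).
n = 100
real_data = rng.normal(loc=0.0, scale=1.0, size=n)
mu, sigma = real_data.mean(), real_data.std()
print(f"generation   0: mu={mu:+.3f}, sigma={sigma:.3f}")

# Each later generation is "trained" only on synthetic data sampled from the
# previous generation's fitted model -- a self-consuming loop.
for gen in range(1, 301):
    synthetic = rng.normal(loc=mu, scale=sigma, size=n)
    mu, sigma = synthetic.mean(), synthetic.std()
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")

# The printed sigma drifts steadily toward zero: each fit tends to
# underestimate the spread of the one before it, and the lost tail
# behaviour is never recovered -- a toy analogue of "model collapse".
```

In the same spirit, models trained on web data that is increasingly AI-generated can gradually lose the tails of the real-world distribution, which is the concern behind the "bland AI echo chamber" discussed below.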
The slowdown in AI improvement is evident in the closing performance gaps between various AI models, both proprietary and open-source. As the technology matures, no one has a significant edge in performance, and the focus shifts from who is in the lead to who can cut costs the most. This phenomenon has been observed in other mature technologies, such as electric vehicles.
The commoditization of AI is seen as a potential threat to the future of AI startups, such as OpenAI and Anthropic. While larger companies like Microsoft and Google may be able to attract enough users to justify their AI investments, the comparatively limited resources of AI startups may make it difficult for them to compete. Some AI startups have already encountered turmoil, with employees leaving and merger talks taking place.
The staggering gap between the industry's spending on AI and the revenue it generates is a cause for concern. The industry spent $50 billion on Nvidia chips to train AI in 2023 but brought in only $3 billion in revenue, and the cost of running AI-powered services, such as delivering AI-generated answers in search results, can significantly exceed the cost of training the models. The challenge is compounded by slow adoption: there is a massive gulf between the number of workers who are merely experimenting with AI tools and the much smaller subset who rely on them and pay for them. AI is not yet a significant productivity booster for most workers because the technology cannot fully replace human roles, much as self-driving trucks have been slow to displace human drivers.
Additionally, while 61% of the CEOs surveyed by PwC expect generative AI to improve the quality of their products and services in 2024, actual adoption and real profits may not materialize as quickly as expected, given the slower-than-expected uptake of Microsoft's Copilot and AI's current inability to replicate human context and skills.
The View:
The AI industry is at a critical juncture, where the hype and expectations around the technology have far outpaced its actual capabilities and adoption. The evidence suggests that the rate of improvement in AI models has slowed, and the path to profitability and sustainable business models is not as clear-cut as many have assumed.
The business community's expectations for AI are the natural result of the breathless hype of the last two years, with Wall Street analysts promising a "generational transformation" akin to the advent of electricity. Those expectations have reached a fever pitch that far exceeds the current realities of the technology.
One significant barrier to the swift adoption of AI is the need to change people's mindsets and habits, a consistent pattern in the rollout of new technologies: individuals and organizations resist change and need time to adapt to new tools and processes. Shifting entrenched ways of thinking and established workflows is a substantial challenge and may delay the widespread acceptance and integration of AI.
There is also a concern that future AI models could regress and become less useful if they are predominantly trained on their own outputs, producing a "bland AI echo chamber" that loses touch with the real-world data distribution. If not addressed, this could undermine the continued progress and usefulness of AI systems.
The current level of investment in the AI industry is predicated on the assumption that the technology will improve rapidly and be adopted swiftly, but the mounting evidence suggests otherwise. It is now clear that 2024 will be a crucial year for AI, as business leaders and investors grapple with the reality of what the technology can and cannot deliver in the near term.
TLDR:
- The rate of improvement for AI models, including OpenAI’s ChatGPT and Google’s Gemini, is slowing down
- The measurable and qualitative improvements in large language model AIs primarily stem from feeding them ever-increasing amounts of data, but there is a limited supply of additional data
- Attempts to use "synthetic data" generated by other AIs to train the next generation of models have not been successful, leading to "model collapse" and "Model Autophagy Disorder (MAD)"
- The performance gaps between various AI models are closing, shifting the focus to cost-cutting rather than performance
- The AI industry's spending on chips and infrastructure far exceeds the revenue it generates, with the costs of running AI-powered services significantly exceeding the costs of training the models
- Slow adoption of AI tools by workers, similar to the slow adoption of self-driving trucks, is a major challenge
- The expectations for AI have reached a fever pitch, far exceeding the current realities and capabilities of the technology
Know More:
Has Generative AI Already Peaked? - Computerphile
Have we reached peak generative AI? - FRANCE 24
Is the Generative AI Bubble Bursting Already? - Gartner
Insights From:
The AI Revolution Is Already Losing Steam - Wall Street Journal