Even for experts, it’s currently very hard to make sense of the incredibly rapid and dramatic trends in the AI market. But that’s not surprising for a massive new tech wave.
The tech industry doesn’t move in a linear fashion. S-curve-like patterns can be found everywhere: not much seems to happen for a long time, then things suddenly take off and grow exponentially (both in technical capabilities and business success), only to flatten out again eventually.
Humans have a hard time intuitively grasping S-curves because they rarely occur in nature. It’s even more difficult to analyze and understand overlapping S-curves that influence each other.
What we are currently experiencing in AI is exactly that: a wild mix of different S-curves that overlap, move at different speeds, and either amplify or cancel each other out.
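The shape described above is the classic logistic curve. As a rough illustration (the function names and parameters below are invented for this sketch, not a model of any real market), summing two logistic curves with different midpoints and speeds already produces the kind of staggered dynamic described here: an early wave that looks exponential, a plateau, and a second wave kicking in later.

```python
import math

def logistic(t, midpoint, rate, ceiling=1.0):
    """Classic S-curve: slow start, rapid exponential-looking middle, flat top."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def combined(t):
    """Two hypothetical, overlapping technology waves (parameters are made up)."""
    wave_a = logistic(t, midpoint=3.0, rate=2.0)  # fast, early wave
    wave_b = logistic(t, midpoint=7.0, rate=1.0)  # slower wave arriving later
    return wave_a + wave_b

# Sample the combined curve over time to see the staggered takeoffs
for t in range(0, 11, 2):
    print(t, round(combined(t), 3))
```

Each curve is near zero well before its midpoint, crosses half of its ceiling at the midpoint, and saturates afterwards; the sum is what an observer of "the market" actually sees, which is why the individual waves are hard to disentangle.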
This kind of dynamic is nothing new in tech. For example, during the dot-com boom of the late 90s, browser technology went through a very fast S-curve. From 1995 to 1998, the definition of what a web browser was and what it could do changed fundamentally — from a passive, dumb terminal to a fully programmable application platform. Since then there has been some innovation in the browser market, but it feels very incremental by comparison.
Meanwhile, the development of Internet access technology lagged behind. Most households were still accessing the web over slow analog modems in the early 2000s. It took a breakthrough in broadband penetration to trigger the wave of adoption we have seen since. Social media would have been unthinkable as a mainstream phenomenon in the era of slow modems, and it wouldn’t have become ubiquitous without smartphones, which went through similar curves.
The most visible S-curve in AI in recent years has of course been the adoption of capable LLMs. ChatGPT is the iconic product in the space, and many others have followed. The release of GPT-4 in March 2023 seemed like a dramatic leap in capabilities, but at least from the perspective of a typical end user, not much has changed since. Are we already on the flat top of the LLM S-curve? Several players (including Google and the leading open-source models) are now catching up to GPT-4, but we have not seen a massive leap forward in recent months.
Some areas of the AI market are still waiting for a breakthrough. The many generic “chat with your data” apps and copilots are certainly somewhat useful, but it would be an exaggeration to say that these are game-changers for most people. We have yet to see a true breakthrough that is so obviously more useful than existing technology that it triggers exponential adoption.
In some areas there are signs that things could turn exponential soon. AI-driven web search is getting increasingly exciting with the compelling work that is currently being done by new players such as You.com and Perplexity. Highly customizable AI assistants such as TextCortex are becoming even more useful with every new feature. And there are of course several verticals such as life sciences, media production and marketing where AI is having a real impact.
Similar patterns can be seen in hardware. NVIDIA is currently by far the dominant player thanks to its GPUs. But startups such as Groq are making headlines with alternative approaches, and there is a whole range of ways in which traditional GPUs could be augmented and improved upon.
Other segments seem more elusive. There is a lot of talk about proactive agents built on top of LLMs: independently acting pieces of software that can fulfill a complex task according to a user’s instructions. Early experiments seem promising, but there is no truly useful product yet that goes beyond fairly basic automation; most products are stuck at the level of a cool demo. A plausible explanation is that the current generation of LLMs is simply not reliable or fast enough to provide the underpinnings for capable agents, and other elements, such as flexible data integration, are still lacking. But that is probably a question of time, and the agent S-curve could be massive once all the preconditions are in place.
It’s a clear lesson from previous tech waves that each new S-curve unlocks the potential for further waves of innovation, and often very quickly. LLMs have already enabled several new categories of applications and created demand for more sophisticated tooling. Even if LLMs remain stuck at their current performance level for a while, they have already unlocked countless opportunities.
Navigating this complex terrain of asynchronous S-curves will be crucial for entrepreneurs and investors alike as they try to identify the next wave of transformative AI technologies before it hits exponential growth. S-curves are hard to predict, but the rewards for getting it right are massive.