Unpopular opinion: While everyone talks about AI’s ability to efficiently produce cognitive deliverables — such as writing code, creating content, and analyzing data — a massive opportunity is hiding in plain sight: coordination.

Some studies suggest that employees in large corporations spend up to 50-60% of their time coordinating rather than creating value. Tackling this problem could be AI’s most transformative impact on the economy, fundamentally reshaping how organizations operate.

There are three dimensions of coordination inefficiency where AI could make a difference:

1. The Context Window Problem

Humans, like AI models, have limited processing capacity — relatively short context windows, to borrow the AI term. We can hold only so much information at once and can grasp facts and connections only up to a certain degree of complexity.

Organizations compensate by breaking down complexity into individual goals — essentially creating personal “reward functions” for each employee. Much like a reinforcement-learning model is trained to optimize a specific reward, human workers get goals that are specific enough for one person to handle. The same applies at higher levels to teams, departments, and divisions.

What’s the problem with that? People optimize their own metrics without seeing the bigger picture. These personal reward functions don’t necessarily add up to an optimal organization-wide reward function.

Procurement departments are a classic example. Anyone who’s sold to enterprise clients knows this pain. While procurement teams serve vital cost-saving and quality-control functions, they often operate in silos. They’ll spend months negotiating a 10% discount on a strategically important software product or service, even though the delay could cost the company millions in missed opportunities. They’re optimizing their own reward function (cost savings) without considering the organization-wide impact.
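The mismatch between local and global optimization can be sketched as a toy model. Everything here is hypothetical — the departments, options, and payoff numbers are made up purely to illustrate how individually rational choices can sum to a company-wide loss:

```python
# Toy sketch: departments greedily optimizing local metrics vs. a joint
# optimization of the company-wide outcome. All numbers are invented.
from itertools import product

# Each option maps to (local metric, company-wide value).
options = {
    "procurement": {
        "negotiate_3_months": {"local": 10, "global": -5},  # big discount, costly delay
        "approve_in_1_week":  {"local": 2,  "global": 8},   # small saving, fast unblock
    },
    "engineering": {
        "trendy_rewrite": {"local": 6, "global": -2},  # resume-friendly, risky
        "ship_feature":   {"local": 3, "global": 7},   # boring, valuable
    },
}

def greedy_local(options):
    """Each department maximizes its own metric in isolation."""
    return {dept: max(opts, key=lambda o: opts[o]["local"])
            for dept, opts in options.items()}

def joint_global(options):
    """Pick the combination of choices that maximizes the company-wide sum."""
    depts = list(options)
    best = max(
        product(*(options[d] for d in depts)),
        key=lambda combo: sum(options[d][o]["global"]
                              for d, o in zip(depts, combo)),
    )
    return dict(zip(depts, best))

local_plan = greedy_local(options)    # each silo picks its best metric
global_plan = joint_global(options)   # the organization-wide optimum
```

With these invented payoffs, the greedy per-department plan (long negotiation, trendy rewrite) actually produces a negative company-wide sum, while the joint optimum picks the opposite options. Real organizations have vastly larger option spaces, which is exactly why no single human can do this optimization in their head.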

In the near future, AI agents might be able to collaborate toward a unified organizational objective. They could process vastly more context and adhere to more complex priority structures than any individual human. Companies have long used computer models to optimize complex goal hierarchies; what’s new is the prospect of integrating goal optimization with actual work execution and dynamically adjusting priorities based on current reality.

2. The Social Factor

Humans are inherently social beings. We seek approval, status, and connection — goals that don’t always align with corporate objectives. Office politics, relationship management, and status optimization consume significant mental bandwidth and decision-making capacity in many organizations. I have personally had the privilege of working in very socially inspiring teams with great cultures, but have also experienced quite the opposite. And even the most pleasant work environment suffers from social friction at times.

Of course, social dynamics between humans can lead to magical moments of innovation or culture building that AI can’t replace. But it’s probably fair to say that this upside often comes at a considerable price. Consider how many meetings exist primarily to manage social dynamics rather than to make decisions.

AI doesn’t need to be liked, doesn’t compete for promotions, and doesn’t get distracted by interpersonal dynamics. This neutrality allows for purely objective optimization toward defined goals.

The interesting thing to watch will be how autonomous AI agents work alongside human co-workers. Will social dynamics in these “cyborg” teams be very different from human-only teams?

3. The Principal-Agent Problem

Every employee is, understandably, optimizing for their entire life, not just their current job role. Career advancement, work-life balance, skill development — these personal priorities naturally influence professional decisions, potentially creating misalignment with company objectives.

For example, software engineers sometimes choose technologies or architectures that enhance their personal marketability rather than serve the company’s needs. Using the latest trendy framework might make an engineer more hirable elsewhere, even if a boring, proven technology would better serve the current project. It is often very difficult for engineering managers to balance these needs.

AI agents don’t have careers to manage or personal lives to balance. They can dedicate 100% of their processing power to organizational outcomes.

This will certainly be a source of conflict. We’re already hearing senior software engineers complain that AI coding tools don’t produce “beautiful” code with all the fancy frameworks and abstractions engineers tend to be proud of. But what if AI is simply more efficient, ignoring secondary aspects that matter only to humans?

What Will an AI-Human Hybrid Company Look Like?

The companies of the future — maybe the near future — will likely be structured as dynamic networks of AI agents coordinated by strategic human leaders who define the overarching organizational reward function.

AI might become the organization’s operational backbone, autonomously handling routine tasks, data analysis, real-time decision-making, most sales and marketing tasks, and internal communication, while human leaders focus exclusively on strategic vision, innovation, critical decisions, and building meaningful stakeholder relationships.

The efficiency gains would be staggering. No more endless email chains, misaligned incentives, or coordination overhead consuming the majority of productive time. The coordination revolution may prove even more transformative than AI’s impressive cognitive capabilities.

How far away is this future? It’s hard to say. AI is amazingly good at certain cognitive tasks, but the challenge remains feeding it all the necessary context, giving it the right guardrails, and making sure it sets the right priorities. The necessary technical capabilities won’t be ready overnight, but they are progressing every week.

Most of all, adopting AI as a coordination layer will require established organizations to transform fundamentally, and fundamental transformations take time and tend to be messy.

I therefore think we will see this kind of AI-first organization first in relatively young companies — startups and scale-ups — that don’t have to deal with a lot of legacy. The first signs are already here. And it’s possible that these AI-first companies will be so much more efficient than incumbents that they have a relatively easy time winning their markets.

The Social Impact

To state the obvious: Yes, this will have a massive societal impact. We are already seeing many companies become much more efficient (i.e., employ fewer humans) thanks to AI, and we’ve probably only scratched the surface. AI has not even really started to solve the coordination problems described in this post, and the impact on employment could be massive. Who needs middle managers if AI can do their jobs much better?

And consider the impact on human identity and self-image: It’s one thing to use AI as a coding buddy or copywriting assistant. But what happens to human identity when AI starts making decisions? What if it gives you orders in your daily work because it’s better at determining what’s good for the company? What if AI becomes your manager? There is little doubt that there will be job functions where humans are going to be superior to AI for quite a while, but if AI is better at coordination and management, we could see inverted roles.

If you’re the CEO of an AI-first company, how do you know that the AI is presenting correct and unmanipulated data for your own decisions? How do you supervise AI agents that operate at incredible speed?

As so often, the emergence of a fundamental innovation raises more questions than it answers.

(Thanks to Claude Code, o3-pro and GPT-4.5 for their suggestions, as well as DeepL Write for copy editing and proofreading. And yes, em-dashes are all mine.)

