The Efficiency Trap Inside Corporate A.I. Spending

Illustration: three office workers hunched over their desks after hours, surrounded by empty desks.

When Meta cut 8,000 employees in April and framed it explicitly as an investment in artificial intelligence, it joined a growing list of highly profitable companies trading headcount for computing power. Coinbase followed weeks later, cutting around 14 percent of its workforce, citing market volatility and the proliferation of A.I. tools in the same breath.

The direction of travel is understandable. The economics of A.I. are real, the competitive pressure is real, and at some point every company with serious A.I. ambitions will have to make tough capital allocation decisions in the face of these realities.

What is less clear is whether the approach some companies are taking is wise. There is, for instance, a clear transatlantic split in the speed and scale of this reallocation, with U.S. firms in particular appearing willing to move more aggressively, cutting deeper and faster. The bets being made are large. And some of the risks attached to them are genuinely hard to foresee. But at least two of them are entirely predictable, and neither is getting nearly enough attention.

The efficiency trap

The first concerns what happens to organizations when they strip out capacity in the name of efficiency. The word itself is doing a lot of work in these announcements. Because when Zuckerberg talks about efficiency in the same sentence as A.I. investment, he is using a word that, in organizational terms, carries a specific and well-documented set of consequences that go well beyond finances. It's not always clear that they are about efficiency, either.

For instance, research into how organizations respond to efficiency programs shows that stripping out slack and compressing the number of forums for alignment and discussion does not make organizations more adaptive. Instead, counterintuitively, it makes them more fragile. The reason is structural. What looks like waste, including people and processes that don't look critical, is often the system's capacity for self-correction. It's where misalignments get spotted early, where new ideas find a testing ground, where the organization senses what's changing before it becomes visible in the numbers. Remove it, and the organization becomes faster in the short term and blinder in the medium term.

The irony for companies like Meta is notable. The innovation that A.I. investment is supposed to unlock depends on exactly the kind of organizational behavior that efficiency cuts tend to suppress: experimentation, initiative and a willingness to surface problems early and openly. These are not soft things. Google's own research, the long-running People Analytics work that identified the characteristics of effective teams, found that psychological safety, the degree to which people feel it's safe to take risks and speak up, was the single most important variable. It sat above talent, above resources, above everything else. And it is, almost by definition, the first casualty of a large-scale redundancy program framed around the idea that machine capacity is being substituted for human labor.

The signal problem

The second risk runs deeper, and it connects to something I've spent considerable time researching in the context of power and organizational information flow. When leaders make decisions, the quality of those decisions depends on the quality of the information reaching them. And that information flow is not a neutral pipe. It is constantly shaped by what people choose to say and what they hold back.

The research on this is consistent and sobering. Subordinates speak less openly to people with power over them. Bad news travels slowly upward. Ambiguous information gets simplified and sanitized as it moves through layers of management. Leaders end up with a cleaner, more coherent picture of reality than actually exists. In my book The Power Trap, I called this the clarity gap: the distance between what senior leaders believe is happening and what is actually happening below them.

Layoffs do not cause this problem. It already exists in every organization with any degree of hierarchy. But large-scale redundancy programs, particularly those framed the way Meta’s was, dramatically accelerate it. The moment employees realize their role might be next in line for replacement by A.I., they begin calculating what is safe to say. They become more politically cautious, more careful about raising problems, more inclined to tell leaders what they think leaders want to hear. Dissent becomes expensive. Experimentation becomes risky. The bad news that leaders most need in order to make good A.I. investment decisions starts taking the long route, or doesn’t arrive at all.

This is not just hypothetical. I have worked with CEOs across sectors who, after extensive restructuring, have found that their organizations have become curiously quiet: strategy lands without pushback, problems surface late, and middle managers begin optimizing for what the boss notices rather than what the organization needs. The restructuring achieved its financial goal, but the organization became harder to steer.

What leaders signal, whether they intend to or not

There is a third dynamic worth naming, because it compounds both of the above. Under conditions of uncertainty, research shows that people become significantly more sensitive to signals from those in authority, especially negative ones. A leader who expresses doubt creates more anxiety than the same doubt expressed by a peer. A senior announcement that implies job insecurity lands harder and spreads further than the same message in a stable environment. Small missteps get amplified.

This means that leaders pursuing the cut-to-invest approach are operating with a wider blast radius than they may realize. The language they use matters more right now, not less. When efficiency and A.I. investment appear in the same sentence, the inference employees draw is not a narrow financial one. It is existential. And the behavioral consequences, a more risk-averse, information-filtering, upward-managing workforce, are precisely the conditions in which large technology bets go wrong.

None of this means the capital reallocation itself is wrong. The structural shift from human capital to compute is real, and it is going to accelerate. The question for leaders is whether they are managing it in a way that preserves the organizational conditions their A.I. investment will need in order to work. Right now, a significant number of them are not. And that is a risk that won’t show up in the capital allocation model but will eventually surface in the results.

Nik Kinley is a leadership psychologist, executive coach, and author of The Power Trap, who has worked with the CEOs of national banks, heads of national security and hedge fund bosses, as well as royalty, criminals, politicians and children.