ServiceNow’s Customer Chief Warns ‘Tokenmaxxing’ Is an A.I. Hype Cycle



Tech workers are burning through massive amounts of A.I. compute in a race to code faster and automate more work. In Silicon Valley, the practice is known as “tokenmaxxing,” where employees push their use of ChatGPT and other large language models to the limit to maximize productivity.

The trend has accelerated alongside the rise of A.I. coding tools and agents that can handle increasingly complex tasks. Unlike casually asking ChatGPT or Claude for writing help, generating code and running agentic workflows consumes far more tokens, the units of data A.I. models process when reading prompts and generating responses. Inside startups and tech firms, token usage is increasingly becoming a proxy for how heavily employees rely on A.I. in their daily work, with some engineers even leaving coding agents running overnight to speed up product development.
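Because providers typically bill per token, with output tokens priced higher than input tokens, the gap between casual chat use and agentic workflows shows up directly in cost. A minimal sketch of that arithmetic, using hypothetical per-million-token rates rather than any provider's actual pricing:

```python
# Rough token-cost estimate: tokens consumed times a per-million-token rate.
# The rates below are hypothetical placeholders, not any provider's real prices.
def token_cost(input_tokens, output_tokens,
               in_price_per_m=3.00, out_price_per_m=15.00):
    """Return an estimated dollar cost for a single request or session."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# An agentic coding session can consume orders of magnitude more tokens
# than a short chat exchange:
chat = token_cost(500, 300)              # one brief chat prompt and reply
agent = token_cost(2_000_000, 500_000)   # a long-running coding agent

print(f"chat:  ${chat:.4f}")   # fractions of a cent
print(f"agent: ${agent:.2f}")  # double-digit dollars per session
```

At these assumed rates a single agent session costs thousands of times more than one chat exchange, which is how overnight agent runs can compound into the five- and six-figure monthly bills described below.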

But not every enterprise A.I. leader believes that more usage automatically translates into better results. “I think this [tokenmaxxing] will be a short-lived hype cycle,” Chris Bedi, ServiceNow’s chief customer officer, told Observer at the 2026 ServiceNow Knowledge event this week. “There’s a bill to pay for those tokens.”

ServiceNow is a major cloud platform that helps enterprises manage, automate and design workflows. Roughly 90 percent of Fortune 500 companies use its products, including Nvidia, AT&T and Delta Air Lines. In the first quarter of 2026, ServiceNow generated $3.67 billion in subscription revenue, up 19 percent year over year, as the company has made A.I. central to its business strategy.

One of its flagship products is A.I. Control Tower, a platform that allows customers to oversee A.I. deployments, including tracking agentic behavior and measuring return on investment (ROI). The company also offers an “autonomous workforce” suite of “A.I. specialist” agents, which it has expanded to execute workflows across IT, customer relationship management, and security and risk, among other functions. It also invests heavily in A.I. upskilling through ServiceNow University, a platform designed to train workers to use A.I. in their jobs.

As A.I. agents take on larger roles in the workforce, Bedi says ServiceNow aims to help customers maximize value without overspending. But in his conversations with enterprise customers, tokenmaxxing isn’t top of mind. “When I talk to the C-suite, tokenmaxxing does not come up,” Bedi said.

Bedi argues the trend conflates activity with value. “It’s almost like measuring a restaurant based on how many ingredients they buy,” he said. “You don’t measure a restaurant that way. I wouldn’t.” A worker prompting an A.I. chatbot dozens of times to generate code, for example, may end up with the same result as someone who gets there in only a few prompts.

His skepticism comes as A.I. usage inside tech companies is exploding. According to The New York Times, employees at A.I. firms are consuming staggering amounts of compute internally, with one Anthropic employee allegedly racking up a $150,000 bill in a single month using Claude Code. Tokenmaxxing has become emblematic of how quickly A.I. experimentation costs can spiral as workers face pressure to supercharge their workflows. As employers increasingly mandate A.I. adoption on the job, token usage is expected to climb even higher.

That surge has been a boon for A.I. model providers that charge based on token consumption. OpenAI says its ChatGPT APIs process more than 15 billion tokens per minute. Google’s Gemini models now process more than 16 billion tokens per minute—a 60 percent year-over-year increase, according to its latest earnings report. For A.I. providers, enterprise adoption creates a powerful incentive structure: the more workers rely on A.I., the more revenue those systems generate.

Some tech companies have encouraged employees to maximize token usage internally. At Meta, an employee created an internal leaderboard tracking token usage and highlighting top users, The Information reported in April. After the project leaked publicly and sparked debate over the value of ranking token consumption, the leaderboard was taken down.

Generous token budgets are increasingly being treated like premium software stipends or free meals. At Nvidia’s annual GTC conference in March, CEO Jensen Huang said engineers should expect annual token budgets worth roughly half their already high salaries, on top of base pay, so their output “could be amplified 10X.”

Still, a growing number of executives argue that tokenmaxxing risks becoming another Silicon Valley vanity metric. Yamini Rangan, CEO of HubSpot, recently wrote on LinkedIn that “[Outcome maxxing >> token maxxing],” meaning measurable business outcomes matter more than usage. Andrew Lau, CEO of engineering intelligence firm Jellyfish, shared a similar view, calling tokenmaxxing a “starting point” to amplify growth.

According to Bedi, the value of A.I. is best measured by whether it meaningfully improves performance. Companies are still relying on familiar metrics: how much time workers save, how much output they produce, and whether A.I. improves operational efficiency.

While less flashy than token counts, traditional business outcomes remain key to quantifying A.I. ROI, Bedi said. “The overall goal is, how do I help my workforce be as capable as possible on A.I., and how do I help them get comfortable using it?”