
The Bandwidth Tax: AI Gave You Infinite Intelligence. It Didn't Give You Infinite Time.

TL;DR: The AI bottleneck isn't the models. It's that work is still designed around humans touching every output. Redesign where humans sit in the workflow, or just keep getting busier.


A few months ago I was in a conversation with a CTO who ran me through her company's AI rollout with quiet pride. Five teams, each with its own tool, each moving fast. Marketing, engineering, support, analytics, sales, all adopted something within six months.

Then she pulled up the numbers. Cycle times hadn't improved. Deal velocity was flat. Engineering was actually shipping slower. Her senior developers were spending two hours a day reviewing AI-generated code instead of writing their own. Twenty-three people across the company had "AI review" as a weekly recurring calendar block.

"We automated the production," she said, "and accidentally created a review factory."

I've heard versions of this story from enough teams now that I've stopped treating it as an edge case. It's the central paradox of enterprise AI: the tools are genuinely capable, and everyone is genuinely busier.

It's Not an Edge Case

Microsoft's 2025 Work Trend Index put a number on it: 80% of the global workforce says they lack the time or energy to do their job. Not because they don't have tools. Because the tools generate output faster than humans can process it. McKinsey's state of AI survey tells the same story from the other side: 88% of organizations use AI in at least one function, but only 39% see bottom-line impact. More than half haven't scaled past the pilot stage.

Sequoia Capital calls this AI's $600B question. The industry is spending hundreds of billions on AI infrastructure, and end users can't tell you what they're getting out of it. Sequoia's analysis is blunt: the chatbot phase of AI suffered from poor retention because users had to work too hard to extract value. It didn't save time. It moved the work from "creating" to "evaluating, editing, and approving."

That's the Bandwidth Tax in action: not the cost of the tools themselves, but the cost of all the human attention they generate around them.

How You End Up Paying It

The pattern is almost always the same. A team bolts AI onto an existing process, expecting the process to handle more volume. It does. In the worst possible way.

Marketing adopts a writing tool. Now instead of one person crafting three subject lines, one person reviews thirty. Sales gets an AI prospecting assistant. Now an SDR evaluates fifty personalized emails instead of writing ten. Engineering integrates a code-generation tool. Now a senior developer becomes a full-time code reviewer.

In every case, the creation bottleneck vanished. A curation bottleneck appeared in its place. Nobody planned for it, nobody staffed for it, and nobody is measuring it.

And the real damage isn't the time spent reviewing. It's the time not spent on the work that actually needs you. When your senior engineer is reviewing boilerplate two hours a day, that's two hours not spent on system design. When your VP of marketing is evaluating fifty AI headlines, that's an afternoon not spent on positioning.

Menlo Ventures' State of Generative AI in the Enterprise research tracks this shift: enterprise budgets are moving away from general-purpose chat assistants and toward vertical agents that execute end-to-end. An AI assistant that generates options but never closes the loop isn't saving time. It's consuming it.

The Fix

Last month I wrote about companies mispricing AI externally, charging by the token when they should charge for outcomes. There's a parallel mistake happening inside every organization: mispricing human attention. Treating every AI output as if it needs a sign-off, when most of it doesn't.

The framework I keep coming back to is simple. Three tiers, one question: how much human involvement does this workflow actually need?

Delegate. AI executes end-to-end. No human in the loop. Email categorization, log monitoring, meeting summaries, scheduling, data formatting. If a mistake costs you an hour of cleanup, let the agent handle it. Don't spend a human's hour preventing it.

Monitor. AI executes, a human spot-checks on a cadence. Not every output, not in real time. A weekly review, a sampled audit. Support ticket resolution, analytics reports, security alert triage, routine code reviews. One owner, fixed cadence. If you're reviewing every single output, you've turned a Monitor task into a Direct task and you're paying double.

Direct. The human leads. AI provides analysis, options, and expertise, but the human owns the call. Pricing strategy, hiring, product direction, key customer escalations. High-stakes, hard-to-reverse decisions. This is where human bandwidth creates the most leverage. Protect it.
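The three tiers reduce to a triage rule you can write down. Here's a minimal sketch; the `Workflow` fields, the one-hour cleanup threshold, and the example workflows are all illustrative assumptions, not a prescription:

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    DELEGATE = "delegate"   # AI executes end-to-end, no human in the loop
    MONITOR = "monitor"     # AI executes, a human spot-checks on a cadence
    DIRECT = "direct"       # the human owns the call, AI advises

@dataclass
class Workflow:
    name: str
    mistake_cost_hours: float   # estimated cleanup cost if the AI gets it wrong
    reversible: bool            # can a mistake be undone after the fact?

def assign_tier(wf: Workflow) -> Tier:
    """Toy triage rule: irreversible decisions stay human-led; cheap,
    reversible mistakes get delegated; everything else gets sampled."""
    if not wf.reversible:
        return Tier.DIRECT
    if wf.mistake_cost_hours <= 1:
        return Tier.DELEGATE
    return Tier.MONITOR

workflows = [
    Workflow("email categorization", 0.25, True),      # → delegate
    Workflow("support ticket resolution", 4, True),    # → monitor
    Workflow("pricing strategy", 100, False),          # → direct
]
for wf in workflows:
    print(f"{wf.name}: {assign_tier(wf).value}")
```

The point isn't the thresholds; it's that the rule is explicit. Once the triage criteria are written down, "keep a human involved just in case" stops being the default.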

The exact split will depend on what your company does, but the direction is consistent across every team I've seen get this right: far more workflows belong in Delegate than most leaders assume. The instinct is to keep humans involved "just in case" — and that instinct is usually wrong. Most organizations are running the ratio backwards. They leave a human in nearly every loop, and the decisions that actually need judgment get squeezed into whatever time is left after the review factory closes.

Who's Actually Doing This

The organizations McKinsey identifies as AI "high performers" (the 6% that capture real enterprise value) are three times more likely to have fundamentally redesigned workflows rather than just layering AI on top. The distinguishing factor isn't the technology. It's that they designed humans out of the loops that don't need them, so they could be fully present in the loops that do.

a16z has been making a similar point from the product side: companies that bolt a chatbot onto legacy software are vulnerable to AI-native startups that redesign the entire interface, often replacing the traditional UI with an approval-based dashboard where the human's only job is handling exceptions. That's Monitor as a product design philosophy, not just an internal process.

The Monday Morning Checklist

For Founders / CEOs:

  • [ ] Map every AI tool your company uses to a tier. For each one, ask: is the human in this loop adding judgment, or just adding latency?
  • [ ] Count how many Direct-tier decisions your leadership team makes per week. If it's more than five, the most important ones aren't getting the attention they deserve
  • [ ] Kill the fragmentation tax. Every standalone AI tool that lives outside the primary workflow platform adds coordination overhead that eats the productivity gains

For CTOs / Engineering Leaders:

  • [ ] List every AI workflow, assign a tier, ask your team: "where are we sitting in a loop an agent should own?"
  • [ ] Swap your success metric from "options generated" to "time to outcome." An AI that surfaces fifty options and takes a week to evaluate is worse than one that executes the best option and flags exceptions
  • [ ] Pick one domain and start a hill-climbing loop: build an eval set, train the agent to meet the bar, raise the bar, repeat
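That hill-climbing loop is simple enough to sketch. Everything below is a toy stand-in: the "agent" is just a quality score and "training" is a fixed bump per round, but the eval-gated structure (train until the agent clears the bar, then raise the bar, repeat until the target is met) is the part that carries over:

```python
def hill_climb(eval_fn, improve_fn, agent, bar=0.70, bar_step=0.05,
               target=0.95, max_rounds=50):
    """Eval-gated improvement loop: each round, either the agent clears the
    current bar (so raise the bar) or it doesn't (so keep training)."""
    for _ in range(max_rounds):
        score = eval_fn(agent)
        if score >= target:
            return agent, score             # good enough: stop climbing
        if score >= bar:
            bar = min(bar + bar_step, target)  # cleared the bar: raise it
        else:
            agent = improve_fn(agent)          # below the bar: keep training
    return agent, eval_fn(agent)

# Toy stand-ins: the agent is a quality number, training nudges it upward.
agent, score = hill_climb(eval_fn=lambda a: a,
                          improve_fn=lambda a: round(a + 0.02, 2),
                          agent=0.60)
```

In a real setup, `eval_fn` runs the agent against your eval set and `improve_fn` is whatever you do to close the gap (better prompts, better retrieval, fine-tuning). The loop's only job is to make "good enough to delegate" an explicit, rising threshold instead of a gut feeling.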

For Team Leads / ICs:

  • [ ] Before any AI-assisted task, ask: am I the bottleneck? If you're spending time reviewing Delegate-tier work, step out of the loop
  • [ ] Audit your calendar. Which recurring meetings exist because a human sits in a loop that could be automated? Push for async, agent-driven updates
  • [ ] Stop supervising AI output. Start building a team of agents that expands what you're capable of. Delegate the mechanical work, keep the judgment

What Comes Next

The most sophisticated AI users I know are quietly moving past the chatbot model — not because they've given up on AI, but because they've realized that asking a person to sit in front of a chat window, evaluate outputs, copy-paste results into another system, and approve every step wasn't a workflow redesign. It was delegation theater.

The shift toward agents — software that executes workflows end-to-end rather than answering questions — is already showing up in where enterprise budgets are going. Menlo, Sequoia, and a16z are all tracking it. The move isn't dramatic. It's quiet, team by team, workflow by workflow. Companies replacing chat interfaces with systems that close loops rather than open tabs.

What separates the teams making that shift from the ones still stuck in the review factory usually isn't the technology. It's that someone stopped and asked the uncomfortable question: which of these loops actually needs a human, and which ones just have one out of habit?

That question is harder than it sounds. But it's the right one to start with.

AI gave you infinite intelligence. It didn't give you infinite time.


Where are you paying the Bandwidth Tax? I'd love to hear. Reply or drop a comment.

© 2025 Emre Tezisci