Velocity, not headcount, is becoming the ultimate moat.
Generative AI is the star of every board deck and every X or LinkedIn thread, but new telemetry from Anthropic suggests the real drama is happening below the headlines. After poring over 500,000+ live interactions with its Claude models, Anthropic found that its most agentic offering—Claude Code, a tool‑using "agent" optimised for building software—has been embraced unevenly: a handful of teams are already running laps while others are still stretching on the sidelines.
This pattern is straight out of the history books of General‑Purpose Technologies (GPTs). Electricity, the internet, the smartphone: each rewired the economy, but only after early adopters redesigned their workflows around the new capability. AI, it turns out, is following the same adoption S‑curve—just compressed into quarters instead of decades.
Anthropic’s study compared behaviour on the vanilla Claude.ai chatbot with usage of Claude Code. Where the former is an all‑purpose conversationalist, the latter can chain tools, write and execute code, and return finished artefacts—think co‑pilot versus co‑builder.
Three findings jump off the spreadsheet.

Startups account for roughly one‑third of all Claude Code usage. Enterprises, despite their headcount advantage, show up in only 13–24 % of those conversations.
Why the gap? Brynjolfsson and McAfee’s classic HBR piece on GPTs argued that the real cost isn’t the tech; it’s the complementary investments—process redesign, up‑skilling, governance tweaks. Startups, unburdened by legacy ERPs and eight layers of sign‑off, treat those investments as Tuesday afternoon projects. Enterprises file a ticket and form a steering committee.
On Claude Code, 79 % of sessions culminated in the agent finishing the task autonomously—writing production code, shipping a script, closing the loop. The generic chatbot managed that feat in 49 % of cases.
This is the difference between AI as a search engine with swagger and AI as an execution layer. The former speeds up the thinking; the latter collapses the doing.

Which tasks go agent‑first the soonest? Anything user‑facing. JavaScript, HTML and CSS dominate the language mix; UI component scaffolding tops the task list. Anthropic engineers nicknamed the phenomenon "vibe coding": spinning up a polished interface in hours, then iterating live with customers.
For startups, the front door is the product. Shaving days off the design‑build loop is existential.

Put the pieces together:
Speed compounds. Every cycle completed ahead of the competition widens the moat—not linearly but exponentially, the way compound interest does. The late adopters still have the same number of engineers on paper; they just ship half as often and learn half as fast.
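The compound‑interest analogy can be made concrete. A minimal sketch with hypothetical numbers (the cycle counts and per‑cycle gain below are illustrative assumptions, not figures from Anthropic's study): if each completed learning cycle multiplies a team's capability by a fixed factor, a team that ships twice as often doesn't just stay ahead—the ratio between the two teams itself grows exponentially over time.

```python
def capability(cycles_per_quarter, quarters, gain_per_cycle=0.05):
    """Capability after compounding a fixed gain on each completed
    learning cycle. All parameter values here are hypothetical."""
    return (1 + gain_per_cycle) ** (cycles_per_quarter * quarters)

# A team shipping 8 cycles a quarter vs. a rival shipping 4.
fast = capability(cycles_per_quarter=8, quarters=4)
slow = capability(cycles_per_quarter=4, quarters=4)

# The gap is the ratio fast / slow = (1.05)^(4 * quarters):
# it widens exponentially with every additional quarter, which is
# the precise sense in which the speed advantage "compounds".
gap = fast / slow
```

The point of the sketch is the shape of the curve, not the numbers: halving the cycle rate doesn't halve the outcome, it exponentiates the shortfall.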
We’ve seen this movie before with electricity, the internet and the smartphone: in each case, technology was the spark; organisational redesign was the fuel.
If a code‑writing agent can lop hours off a sprint, imagine domain‑specific agents doing the same for other business functions.
Expect the same adoption curve: startups sprint, enterprises jog—and the speed divide widens function by function.
The question is no longer if AI will reshape work. It’s how many learning cycles you can afford to surrender to faster rivals before the gap becomes unbridgeable.
Speed is about to become a balance‑sheet item. Which side of the divide will you be on?