AI Services - Wrong Mental Models, Right Moment
Every major fund published a thesis this year. Sequoia. a16z. YC. GC. Bessemer. Emergence. Read them together and the argument is identical, just in different packaging.
Services is the next software. For every dollar spent on software, six go to services. Sell the work, not the tool. Capture the labor budget, not the software budget.
The theses are not wrong.
What is absent from every single one: the questions that determine whether any specific company actually wins. Market size. Entry wedge. Copilot-to-autopilot transition timing. All answered. The three harder questions are not.
What accumulates. Who the story is for. Why the $16T number hides the structure of the actual competition.
Mental Model 1 - $16T is the opportunity
Services is six times the size of software. AI makes services delivery cheap. Therefore AI services companies capture an enormous market.
That $16T isn't one market. It's three: work that evaporates, work that goes human-in-the-loop, and work that stays human because no training set contains it. They're collapsing at completely different speeds.
High-volume, low-novelty work: document processing, standard claims, routine accounting. Compressing toward machine rates in 18–24 months. Capital has already arrived. The window for new entrants is closing. When machines do the work, the work reprices to machine rates. Most of the $16T does not migrate to AI services companies. Most of it evaporates.
The part that does not evaporate is the judgment layer. The acquisition that falls apart because the founder’s body language changed in the third meeting. The compliance ruling nobody saw coming because the regulator had a different reading of the statute. The negotiation where the number was right and the deal still died. Work no training set contains. That market is real. Also the hardest to win and the hardest to scale.
TAM is not what makes the deal move. Something else has to force the decision. The question is which specific slice has a client with two senior engineers retiring in 14 months and no one who can read the codebase, a pending acquisition that requires tech diligence, or a compliance deadline the procurement team already missed once. Something that makes them act this quarter rather than over the next four years. That slice is the actual opportunity. Much smaller than $16T. Also the slice that closes, pays, and renews.
Mental Model 2 - Outcome-based pricing is how you win
YC: don’t sell access to a tool for $50 a month, use the AI yourself and sell the finished work for $5,000. Bessemer: outcome-based pricing, time-to-value, measurable ROI. Emergence: domain credibility is existential.
They’re right about where this ends. They’re early on how it actually starts.
Enterprise procurement runs on headcount and rate cards. The CFO compares your outcome price to last year’s T&M rate and experiences category mismatch, not price shock. The number isn’t really the issue. The client just doesn’t have a reference point for it. The client is trying to buy a chair. You’re asking them to sign up for “sitting-as-a-service.” That mismatch is the problem.
Most outcome-based proposals require VP or C-suite approval because they don’t fit standard procurement categories. The deal stalls there. The value case is fine. The problem is nobody in the approval chain has a budget code for “AI-delivered outcomes.”
Most teams pitching outcome pricing are talking to the wrong person. The person who needs to approve it is not the same person who wants it. The path runs through a procurement bypass: a first engagement small enough that a business owner can approve it from discretionary budget. The outcome is demonstrated. The reference class is established internally. The second engagement is sized against the demonstrated outcome, not the old rate card.
Only after that sequence does outcome pricing actually land.
Mental Model 3 - Large IT firms are the competition
The structural argument is correct. If AI handles 80% of junior work, the headcount pyramid inverts. The cost arbitrage that built Infosys and TCS erodes from both sides. A project that needed twenty offshore developers might need eight. Margin squeezed simultaneously from the client side and the efficiency side.
The February 3 selloff ($285 billion wiped from global IT services valuations in a single morning when Anthropic shipped a legal workflow plugin) was partly narrative panic and partly a valuation correction the market wanted to execute anyway. Indian IT services firms had been trading above their historical 15–18x multiple on growth promises that AI just made harder to keep.
Large IT firms are not competing on the same thing as small AI services companies. Large firms sell breadth plus brand plus relationships that survived multiple technology cycles. Small firms can sell depth in a specific domain.
In most deals, the real competition isn’t Infosys or Cognizant. It’s other small AI services companies positioned identically, and the client’s own internal team with a “build it yourself” mandate. That second threat is the one nobody names.
The client asks: why pay $200K for your managed agentic services when I can give three engineers access to Claude Code and have them build it in eight weeks?
The answer has to be specific enough to be unchallengeable: "Your engineers have never deployed an AI agent inside a wealth management firm's trading workflow where the audit trail has to satisfy SEC examination, where the model's reasoning for every flagged transaction has to be explainable to a compliance officer who doesn't trust AI, and where one unexplained decision costs the client a regulatory action. We have done this three times. Here are the things that broke."
In practice, you’re up against the client’s internal team and a bunch of small firms that look exactly like you. Cognizant rarely shows up in that decision. Winning on both fronts requires specificity that neither the internal team nor the lookalike firms can supply.
Mental Model 4 - Vertical depth is the moat
A year ago I wrote that vertical AI will eat the world. Pick a vertical. Go deep. Tomorrow’s software giants won’t optimize departments — they’ll eliminate them.
The direction is right. The moat framing was incomplete in a way that makes it operationally useless.
“Go deep in a vertical” as a strategy instruction implies you accumulate domain expertise by doing work in a vertical. Half true. The other half is what determines whether the depth compounds.
Domain expertise becomes a moat only if you treat every engagement as raw material for something that compounds. The services business model runs on utilization. Utilization rewards staying in scope, closing engagements cleanly, starting the next one fresh. This is exactly the behavior that prevents accumulation.
Every engagement produces things that evaporate if you let them. The edge case on engagement three that cost you a week because you assumed the client’s data pipeline was clean and it wasn’t. The client who said “our compliance team has a different read on that regulation” and taught you something no prior vendor had learned. The failure mode on engagement seven that you only understood in retrospect, when the same thing almost happened on engagement nine.
Most companies let this walk out the door with the project file. The tenth client gets exactly the same quality of work as the first. Nothing compounded.
The companies getting this right treat every engagement as a deliberate context-building exercise. Someone asks after every project: what did we learn that we can keep? What failure mode can we encode? The AI executing work for the fifth client in the vertical performs better than it did for the first. That improvement is the moat. If nothing improves from client one to client ten, there is no moat.
If nothing accumulates, “vertical depth” just ends up being a narrowly defined ICP.
Going narrow isn’t enough. What matters is whether anything from each project actually carries forward.
Mental Model 5 - Brand is what you say about yourself
In February I wrote that harness and context is where value capture goes. The model does the work. The harness decides what work is worth doing. The agent that accumulates institutional memory becomes irreplaceable not because it is better but because it knows things no competitor can access without sitting in the execution path for years.
Right for a software company using a services motion to deploy. Wrong for a services company trying to reposition around AI.
For a services company, the harness isn’t the orchestration layer. It’s the story that gets you into the right conversation before any engagement even begins.
Peer standing cannot be recovered once vendor standing is established. If the first conversation positions you as a vendor (responding to an RFP, sending a capability deck, doing a demo when asked, following up on timelines set by the client), the relationship is set. Deliver exceptional work; the client still will not give you the strategic conversation. Their mental model was formed in that first interaction, and work quality does not change it.
The companies winning on repositioning are not doing it through marketing copy. They initiate the conversation, don’t respond to it. They publish something specific enough about the domain that the client finds them — not a post summarizing what Sequoia already said, but a finding that only someone who has done the work could produce. What percentage of AI agent deployments in regulated financial services fail their first compliance audit. Why the second deployment at a firm is cheaper than the first but the third is not. The specific failure modes that appear at scale that don’t appear in pilots.
In “Value Capture Has Nothing to Do with Value Creation” I wrote that markets pay for the story about value, not value itself. Cognizant’s revenue turnaround under Ravi Kumar, large deal TCV up 50%, five mega deals including a billion-dollar engagement, followed the repositioning story. Revenue followed narrative.
The narrative has to be backed by something real or it collapses when someone asks a hard question. The sequence: a story at the right level to earn the right conversation, work that accumulates real proof during that conversation, then a story with the evidence underneath it.
The companies that get this wrong build impressive capability and spend eighteen months trying to get clients to recognize it. The story stays a year behind the reality. The right clients never show up because the entry point was commercial rather than intellectual.
In services, brand is mostly what people remember you said or did before they ever hired you. Built through publishing findings specific enough that a competitor without the same client history could not have produced them. The window for claiming that position in most AI services verticals is open for another 12–18 months.
The Question Nobody Is Asking
Every VC thesis describes the transition. None describe what survives it.
The transition is not the game. The game is what you are building inside the transition that compounds after it ends.
Most companies in this space are building revenue. If nothing compounds underneath, revenue starts looking like a countdown.
The theses describe the wave. Very few people are asking what actually survives once it passes.

We bought an HVAC company specifically to run AI on it as the owner, not the hired gun. The depth of engagement is completely different - we see what actually moves the needle versus what just looks good in a demo. So far, getting the team to absorb AI is the biggest challenge. Our bet is that the knowledge compounds into something we can take to other operators in the industry. Stay tuned to see how that works.
Can there be a model where a services firm offers engineers well versed in AI tools to an enterprise that is a laggard on AI? The client pays for the engineers and the AI tool costs, while the services firm gets the job done.