Peter Aaron
Strategy

The Build vs Buy Decision Nobody Makes Properly

12 min read
Product Strategy · AI · Enterprise Software · Build vs Buy · SaaS
Railway tracks splitting at a junction — the build-versus-buy decision is a switch you only get to throw in one direction at a time.

Someone in leadership hears a pitch, another team is already trialling a competitor's tool, someone else has a preference formed from a podcast, and within a fortnight there's a Slack channel called #ai-tools-evaluation that goes quiet after three weeks. A decision gets made. Nobody's entirely sure when, or based on what.

The build vs buy question has always had this problem. It's the decision that gets made most emotionally and least rationally. Generative AI hasn't fixed that. It's made it categorically harder, because the options have multiplied, the stakes have risen, and the pace of change means any decision you make today has a meaningful chance of looking wrong in eighteen months.

A brief history of the same mistake

This debate is older than most people realise, and each era thought it had settled the question. It never had.

Six Eras of Build vs Buy

  1. Era 1 (1950s – 1968): Build Everything
     • Software bundled with hardware; no commercial market
     • 1965: IBM System/360 standardises the platform
     Build was the only option because buy didn't exist.

  2. Era 2 (1969 – 1980s): Unbundling
     • 23 June 1969: IBM unbundles software under DoJ pressure
     • SAP (1972) and Oracle (1977) born
     First real build vs buy decision; build still dominant.

  3. Era 3 (1980s – late 1990s): On-Premise Licence
     • Packaged ERP, CRM and HR software arrives
     • $1–2M licences, 6–12 month implementations
     Buy became viable but expensive; build still owned the core.

  4. Era 4 (1999 – 2015): SaaS Revolution
     • Salesforce launches with the "No Software" campaign
     • AWS (2006) makes cloud cheap and accessible
     Buy becomes the default; integration debt accumulates silently.

  5. Era 5 (2015 – 2023): Fragmentation
     • Average enterprise runs ~897 apps, integrates ~29%
     • Shadow IT becomes endemic; governance lags adoption
     Over-buying created its own problem; SaaS sprawl as liability.

  6. Era 6 (2024 – present): AI / Agent Era
     • Every SaaS bolts on AI; AI-native challengers emerge
     • Nov 2024: MCP launches and is adopted across providers
     • 2026: agentic skills become the dominant configuration pattern
     Build, buy AND configure are viable; skills compound fastest.

Sources: Computer History Museum, MuleSoft Connectivity Benchmark.

In the 1950s and 60s, software wasn't a market at all. IBM and its competitors bundled software with hardware at no separate charge, and what enterprises needed, they built (or had IBM build) for their specific machines. There was no real "buy" option. Then, in June 1969, under pressure from a DoJ antitrust suit, IBM announced it would unbundle software from hardware, pricing each separately. Overnight, as the Computer History Museum notes, software changed from a giveaway to a competitive commercial product. The software industry was born.

Throughout the 1970s and 80s, packaged software slowly developed, but the default for enterprises was still to build. SAP, Oracle and the first generation of commercial vendors spent those decades proving you could sell generalised systems to large businesses. By the early 1990s, "buy" had become viable for finance and HR, but was still considered risky for anything close to core operations. The on-premise licence was the deal: a large upfront payment, a six-to-twelve month implementation, an army of consultants, and a product that would require painful upgrades every few years.

The economics broke in 1999 when Marc Benioff founded Salesforce in a San Francisco apartment and launched with a campaign that staged traditional enterprise software as literal hell. The "No Software" tagline was on taxis and billboards, mocking the idea that you should install, maintain and upgrade anything yourself. The SaaS era began. Within a decade the model generalised across every business function: marketing automation, finance, support, analytics. Each function got its own best-in-class tool. Buy became the default assumption, not the exception.

By the 2010s the pendulum had fully swung, and with it came a new problem. MuleSoft's 2025 Connectivity Benchmark Report found the average enterprise now runs around 897 applications, yet integrates only roughly 29% of them. The buy-everything era created the fragmentation era. Enterprise IT became an exercise in managing a SaaS estate that nobody had fully planned. The integration cost, hidden in every buying decision, became enormous.

Generative AI is the third moment. It hasn't just added new tools to the catalogue. It's changed what software can do and, more importantly, changed the economics of building. The question is no longer simply whether to automate a workflow yourself or buy from someone who already has: it's whether to replace the entire workflow with an agent, and whether you need a specialist product to do it or whether your existing platform, properly configured, can get there.

What you're actually deciding

Here's the mistake most teams make: they treat "build vs buy" as a single binary, when it's actually four distinct questions that require different answers.

What Are You Actually Deciding?

  • Tool, internal use: Lightweight Buy. Transcription, code review, contract summarisation. Buy fast, validate value, and don't over-govern a $20/seat tool.
  • Tool, customer-facing: Capability via API. Embedding an LLM into your product through an API contract. Treat it as an integration, not a product; own retrieval and evaluation.
  • Application, internal use: Heavyweight Buy. CRM, HRIS, support platform, internal knowledge base. Apply application-level scrutiny: data, workflows, exit cost.
  • Application, customer-facing: Build the Layer. AI features inside the product you sell. If it's how you create value, build; use vendors below the line.

Treating every cell with the same caution is the most common mistake.

Are you acquiring a tool, or an application? A tool is scoped to a task: transcription, code review, contract summarisation. An application is a system that people work inside: your CRM, your support platform, your internal knowledge base. The stakes are entirely different. Replacing a tool is relatively low risk. Replacing an application carries data migration, workflow change, retraining and a much longer tail of hidden costs. The error is treating tool purchases with application-level caution (which slows you down) or making application decisions at tool speed (which creates serious exposure).

Are you buying a product or an integration? Some things you're paying for are products with interfaces, onboarding and support. Others are effectively contracts: you're buying access to a capability you'll wire into your own infrastructure. The Anthropic API, for instance, is not a product you use directly. It's a capability you integrate. Treating an integration like a product, or expecting a product to behave like a clean API, creates mismatched expectations and wasted effort. Know which one you're evaluating.

If you're a software company: are you deploying this in your product, or for your team? These are completely different decisions. Deploying an LLM in customer-facing infrastructure is an engineering and compliance decision: you own the runtime, the latency contract, the failure modes and the regulatory exposure. Picking a tool for your internal team is more like traditional SaaS procurement. Many teams conflate these, especially in product-engineering hybrids where both are happening simultaneously. The governance requirements, the vendor scrutiny and the cost of getting it wrong are different by an order of magnitude.

Are you solving a commodity problem or a differentiating one? If three vendors solve 90% of your requirement out of the box, buying is almost always right. If the capability is core to how your product creates value and cannot be replicated by what any vendor offers, build is almost always right. The honest answer is that most enterprise functions are commodity, and most enterprise teams overestimate how differentiated their needs are. At the same time, most software products underestimate how much of their AI layer will become core within two years.

The new shape of the choice

Once you've identified which question you're actually answering, the option set looks different.

For internal enterprise tooling, the options in 2026 are broadly three. First, established SaaS with AI added. Salesforce adds Einstein. Notion adds AI writers. Your HRIS adds intelligent document processing. These tools already hold your data and your workflows. Their AI features are rarely the best available, but they're inside your governance perimeter and the integration cost is near zero. The trade-off is you're accepting whatever AI capability your vendor chooses to ship, at their pace, on their pricing terms.

Second, AI-native applications. Products built from scratch with intelligence as the core rather than a feature: Cursor for coding, Decagon for customer support, Harvey for legal. According to a16z's survey of 100 enterprise CIOs, users who adopted Cursor showed notably lower satisfaction with previous tools like GitHub Copilot, illustrating how quickly AI-native products reset expectations. The risk is immaturity: smaller teams, shorter track records, pricing models still evolving, and enterprise security and compliance features that may be twelve months behind where regulated businesses need them.

Third — and this is the one most evaluations still underweight, despite being one of the loudest conversations in the room right now — a configured primary with agentic skills and connectors. A single AI model, given a library of organisation-specific skills and wired into your existing SaaS applications via MCP, creates a different kind of workflow: one AI brain that reaches across your stack, carries your operating context, and executes multi-step tasks without your team switching surfaces per job. The Claude user community in particular has spent the opening months of 2026 turning this from a curiosity into a default configuration — skills marketplaces, organisation-wide skill libraries and "agent-of-agents" patterns are now the dominant topic in enterprise AI forums.

Anthropic's Model Context Protocol, open-sourced in late 2024, has been adopted by every major AI provider and reached 97 million monthly SDK downloads by early 2026. It is no longer "quietly" becoming the integration substrate for the industry; it is the substrate, and the conversation has moved on to what you build on top of it. Skills — small, named, reusable capabilities the model can invoke — turn the configured-LLM option from "a chatbot with plugins" into something closer to an internal platform. Once an organisation has fifty or a hundred well-scoped skills, the marginal cost of automating the next workflow drops to near zero. That is the dynamic shifting build vs buy calculations under most teams' feet right now: the third option is compounding faster than the first two.
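The compounding effect of a skill library is easiest to see in code. The sketch below is a minimal, hypothetical illustration of the pattern described above — a named registry of small capabilities an agent can discover and invoke — not the actual Anthropic Skills API or MCP SDK; all names here are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    """A small, named, reusable capability the model can invoke."""
    name: str
    description: str
    handler: Callable[[dict], str]

class SkillRegistry:
    """Organisation-wide skill library; an agent selects skills by name."""
    def __init__(self) -> None:
        self._skills: dict[str, Skill] = {}

    def register(self, skill: Skill) -> None:
        self._skills[skill.name] = skill

    def catalogue(self) -> list[str]:
        # What the model sees when deciding which skill to call.
        return [f"{s.name}: {s.description}" for s in self._skills.values()]

    def invoke(self, name: str, args: dict) -> str:
        return self._skills[name].handler(args)

# A skill "one engineer can write in an afternoon":
registry = SkillRegistry()
registry.register(Skill(
    name="summarise_contract",
    description="Return a one-line summary of a contract record",
    handler=lambda args: f"Contract {args['id']}: {args['status']}",
))
print(registry.invoke("summarise_contract", {"id": "C-101", "status": "renewal due"}))
```

Once the registry exists, each additional workflow is one more `register` call, which is the near-zero marginal cost the paragraph above describes.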

The implication for buyers is sharp. A point solution you bought in 2024 to do one thing well now competes with a skill that one of your engineers can write in an afternoon, governed centrally and available to everyone in your organisation. Vendors who do not have a credible MCP and skills story are increasingly being asked, in renewal conversations, to justify why their workflow shouldn't just be a skill.

For software teams building products, the decision tree has a fourth branch: build on an API, and own the layer. Using an LLM via API while building your own retrieval, evaluation and orchestration layer gives you the capability without handing over the intelligence that makes your product different. The risk is maintenance burden. As one analysis from Mavik Labs notes, the hidden cost of internal builds is often five times the initial development cost once you factor in ongoing maintenance, model updates and scaling complexity.
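"Build on an API and own the layer" amounts to keeping the vendor behind a thin seam while the retrieval and orchestration logic stays yours. This is a minimal sketch of that separation, with invented names and a stub in place of a real SDK client; the naive keyword retrieval stands in for the embedding index you would actually own.

```python
from typing import Protocol

class ModelClient(Protocol):
    """Thin seam around any LLM API; swapping vendors touches only this."""
    def complete(self, prompt: str) -> str: ...

class StubModel:
    """Stand-in for a real API client, so the layer is testable offline."""
    def complete(self, prompt: str) -> str:
        return f"[answer based on: {prompt[:40]}...]"

class AnswerService:
    """The proprietary layer: retrieval, prompt assembly, orchestration."""
    def __init__(self, model: ModelClient, documents: list[str]):
        self.model = model
        self.documents = documents

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Naive keyword overlap; in practice, an embedding index you own.
        words = query.lower().split()
        scored = sorted(self.documents,
                        key=lambda d: -sum(w in d.lower() for w in words))
        return scored[:k]

    def answer(self, query: str) -> str:
        context = "\n".join(self.retrieve(query))
        return self.model.complete(f"Context:\n{context}\n\nQuestion: {query}")

docs = ["pricing policy for enterprise renewals", "holiday rota and leave policy"]
service = AnswerService(StubModel(), docs)
print(service.answer("What is the enterprise pricing policy?"))
```

The design point is the `ModelClient` seam: the retrieval and evaluation code above it is the part competitors can't copy by calling the same API.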

Security, governance and the shadow AI problem

This is the part of the decision that gets least attention and matters most in regulated industries.

Shadow AI is already inside your organisation. Deloitte's 2026 State of AI in the Enterprise report found worker access to AI rose by 50% in 2025, yet only one in five companies has a mature governance model overseeing how it's actually being used. IBM's 2025 Cost of a Data Breach report found that incidents involving shadow AI add an estimated $308,000 per breach. Employees paste customer data into free-tier chatbots and call it productivity. Without a sanctioned enterprise AI toolchain, you don't reduce shadow AI. You just lose visibility into it.

A configured primary LLM has a real security advantage over fifteen point solutions here: a single governed access point, with MCP servers scoped per team and audit logging at the gateway layer, gives compliance teams something they can reason about. Compare that to managing data residency and data processing agreements across thirty AI-native vendors, each at a different stage of enterprise readiness.
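The "single governed access point" pattern is concrete enough to sketch. This is a hypothetical illustration of per-team scoping plus gateway-level audit logging, not a real MCP gateway implementation; every name is invented.

```python
import time
from typing import Callable

class GovernedGateway:
    """Single access point: per-team tool scoping plus an audit trail."""
    def __init__(self, team_scopes: dict[str, set[str]]):
        self.team_scopes = team_scopes        # team -> tools it may call
        self.audit_log: list[dict] = []       # one entry per attempt

    def call(self, team: str, tool: str, handler: Callable[[], str]) -> str:
        allowed = tool in self.team_scopes.get(team, set())
        # Log every attempt, allowed or not, before executing anything.
        self.audit_log.append(
            {"ts": time.time(), "team": team, "tool": tool, "allowed": allowed}
        )
        if not allowed:
            raise PermissionError(f"{team} is not scoped for {tool}")
        return handler()

gateway = GovernedGateway({"finance": {"invoice_lookup"}})
print(gateway.call("finance", "invoice_lookup", lambda: "invoice found"))
```

A denied call still lands in `audit_log`, which is exactly what gives a compliance team something to reason about: one log, one policy, instead of thirty vendor dashboards.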

Gartner has identified vendor lock-in as a critical GenAI blind spot for enterprises, predicting that by 2030 it will divide the enterprises that scale AI safely from those that become trapped. MCP's model-agnostic architecture helps, but it's only as useful as the governance layer you build around it.

The honest answer

There is no universal answer, and anyone who tells you there is one is selling you something. But the honest answer has more texture than most frameworks offer.

If you're an internal team: buy, almost always. The fastest path to understanding where AI creates genuine value in your workflows is to use something polished, even if imperfectly tailored. Buy to learn, then build to last. The enterprises getting the best return are those that started with vendors to validate business cases, then built proprietary layers where the intelligence started to look like a differentiator.

If you're a product team: the question is where your defensibility lives. If your moat is data and proprietary business logic, build the AI layer on top of it. If your moat is workflow or distribution, use whatever capability is best today and iterate. The error is neither of these: treating the LLM API as a permanent vendor relationship with no proprietary layer on top, which means the capability is as accessible to your competitors as it is to you.

In either case, before you decide anything: get to know the product teams you're betting on. This never appears in any build vs buy framework, and it's arguably the most important factor for AI tools specifically. The gap between v1 and v2 of an AI-native product can be enormous, and the pace is faster than traditional software cycles. When you commit to a vendor, you're not just buying the current product. You're buying the roadmap and the team's willingness to let enterprise customers influence it. Do they have a design partnership programme? Can you get on a customer advisory board? Have they shipped the compliance features their last cohort asked for? These are the questions that separate a considered buy from an expensive regret.


The build vs buy decision has never really been about technology. It's about where your organisation's attention and capital should go, and what you're willing to let someone else control. Generative AI has made the options more genuinely interesting, the timelines shorter and the consequences of getting it wrong harder to reverse. The worst thing you can do is let the decision settle itself in a Slack channel, because it will, and nobody will be accountable when it turns out to be wrong.

Notable Quotes
  • "What I spent in 2023 I now spend in a week."
    Enterprise CIO (anonymous), quoted in a16z's 2025 enterprise AI survey, 2025
  • "Risks like shadow AI, technical debt, skills erosion, data sovereignty demands, interoperability issues and vendor lock-in represent hidden undercurrents that can undermine long-term success."
    Gartner, GenAI Blind Spots press release, 2025
  • "Every team should be a software team."
    Andreessen Horowitz, Notes on AI Apps in 2026, 2026