The Backhoe and the Blue Screen: Why the “Build vs. Buy” Debate is Broken in the Age of AI
- Govind Davis

- Jan 30
- 5 min read
The audio flickered. Connections dropped. The stream went live before we were quite ready, catching unpolished banter about harmonicas and the polarizing, bagpipe-like nature of blues instruments. In many ways, the chaotic start to our latest broadcast was the perfect metaphor for the current state of Enterprise AI: it’s messy, the connection is intermittent, and everyone is trying to figure out if they are actually “live” or just talking to themselves.
I sat down—virtually, through the glitches of modern streaming platforms—with Mike H, a veteran enterprise software sales guy and former engineer.
Mike came to the table as “The Buy Guy,” a thirty-year veteran of the software industry, armed with a perspective shaped by the Internet of Things, firmware, and the heavy machinery of enterprise architecture. Our agenda was simple yet deceptively complex: Is the age-old IT commandment of the “70/30 Rule” (Buy 70%, Build 30%) still valid when Artificial Intelligence enters the stack?
The answer, as we discovered between audio dropouts and heated debates, is that the math has changed. The shovel has been replaced by a backhoe, but we might be running out of operators.
The 70/30 Fallacy in the AI Era
For decades, the golden ratio for IT leadership—CIOs, CTOs, and the consultants who whisper in their ears—has been the 70/30 split. The logic holds that to achieve speed to market and reduce technical debt, an enterprise should buy 70% of its functionality “out of the box” (think SAP, Oracle, standard ERPs) and reserve only 30% of the budget and effort for custom building, or the “secret sauce” that differentiates the business.
“I am the Buy guy,” Mike H argued early in the stream. “Salespeople like to buy. CEOs want to go to market with AI as a competitive advantage. They don’t want to wait twenty-four months for a build.”
From the boardroom perspective, this makes sense. The driver today is not just automation; it is intelligence. It is the desperate need to move away from the uncertainty of spreadsheets and dirty data silos toward market-driven dominance. If you aren’t leveraging AI, you are losing market share. You are losing your shirt. Therefore, the instinct is to buy the solution, wrap it in a little bit of IP, and deploy.

However, from my perspective, with the scars of 300 low-code implementations for giants like Walmart, P&G, and Disney, I had to push back. I argued that while the *financial* transaction might look like a 70/30 split, the *effort* inevitably flips the paradigm.
“The tool isn’t going to solve your problem,” I countered. “You’re going to buy the license, sure. But then you’re going to have twenty or thirty people in your IT department who need to train that AI. You’re going to have an army of AI-enabled builders going around the company asking, ‘What do you want your agent to do?'”
This is the crux of the modern enterprise paradox. You can buy the Large Language Model (LLM). You can license Anthropic or Gemini or OpenAI. But that is just the engine. The chassis, the steering, and the destination—the “Build” component—is where the real cost and value lie. The 70% “Buy” is just the entry fee. The 30% “Build” ends up consuming 100% of your attention.
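To make the engine-versus-chassis point concrete, here is a minimal sketch in Python. Everything in it is hypothetical for illustration: the `LLMEngine` interface, the `ExpenseRequest` record, and the `expense_approval_agent` policy are placeholders, not any vendor’s actual API. The “bought” part is a single swappable completion call; everything around it is code your own team has to build and maintain.

```python
from dataclasses import dataclass

# The "Buy": a thin, swappable interface over whichever licensed model you choose.
# (Hypothetical wrapper -- in practice this would call the vendor SDK you license.)
class LLMEngine:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("Plug in your licensed model here")

# The "Build": business data, policy, and guardrails that no vendor ships.
@dataclass
class ExpenseRequest:
    employee: str
    amount: float
    category: str

def expense_approval_agent(engine: LLMEngine, request: ExpenseRequest) -> str:
    # Guardrail written by *your* team: hard policy limits never go to the model.
    if request.amount > 10_000:
        return "ESCALATE: above automatic-approval limit"

    # Only then do we ask the bought intelligence for a judgment call.
    prompt = (
        f"Employee {request.employee} submitted a {request.category} expense "
        f"of ${request.amount:.2f}. Reply APPROVE or REVIEW with one reason."
    )
    return engine.complete(prompt)
```

The license buys you the `complete()` call; everything else in that sketch is the 30% that quietly becomes 100% of the work.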
The 95% Failure Rate: Dirty Data and Integration Drag
Why does this distinction matter? Because the current track record is abysmal. Depending on whose statistics you believe—my notebook said 95%, and industry reports often echo similar grim figures—the vast majority of AI pilots fail to deliver measurable ROI.
It isn’t because the AI isn’t smart enough. It’s because the enterprise isn’t ready enough.
We unpacked the reasons for this staggering failure rate. It usually boils down to a few usual suspects:
1. Leadership Disconnect: Buying a tool without a clear mandate or strategy.
2. Integration Drag: The inability to connect the new AI brain to the old legacy nervous system.
3. Dirty Data: You cannot build a skyscraper on a swamp. If your data is fragmented across spreadsheets and legacy servers, the AI will only hallucinate faster than a human could err.
I recounted a project for Procter & Gamble involving a highly customized interface for IT project submissions. It required global deployment, low-bandwidth optimization for the Philippines, and complex logic for budget allocation. You couldn’t just “buy” that. You had to dig deep into the code to make the user experience work.

In the AI world, this complexity is magnified. An AI agent is not a static form. It is dynamic. It needs to interact with an Oracle ERP, check purchase orders, analyze delivery times, and make decisions. If the underlying data is “dirty,” the agent fails. If the integration is brittle, the agent breaks. The failure of the 95% isn’t a failure of technology; it’s a failure of preparation. It is the result of treating AI as a product to be installed rather than a capability to be cultivated.
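Here is a small illustration of why dirty data kills agents, again as a hypothetical Python sketch (the purchase-order fields are invented for the example, not pulled from any real ERP schema). The agent’s decision is only as good as the rows it is fed, so the build work starts with validation, not prompting:

```python
from datetime import date
from typing import Optional

def late_po_report(purchase_orders: list[dict]) -> list[str]:
    """Flag purchase orders that look late -- but refuse to guess on dirty rows."""
    findings = []
    for po in purchase_orders:
        promised: Optional[date] = po.get("promised_date")
        received: Optional[date] = po.get("received_date")

        # Dirty-data check: a missing promised date can't be reasoned about,
        # it can only be hallucinated about. Surface it instead of deciding.
        if promised is None:
            findings.append(f"PO {po.get('po_number', '?')}: missing promised date, needs cleanup")
            continue

        if received is not None and received > promised:
            days_late = (received - promised).days
            findings.append(f"PO {po['po_number']}: delivered {days_late} days late")
    return findings
```

The boring validation branch is exactly the preparation work the failed 95% tends to skip.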
The Pivot: From “Big Bang” to the 7-Day POC
The market is getting anxious. It is greedy for results. But greed leads to bloated projects that collapse under their own weight. The antidote to the 95% failure rate is not a bigger budget; it is a smaller scope.
We discussed the absolute necessity of the Proof of Concept (POC). But not the POC of yesteryear, which took six months and a committee to approve. We are talking about rapid, low-code, AI-driven prototyping.
“If someone can’t build you an AI agent and show you that right away…” I trailed off, the implication being that a vendor who can’t is already obsolete. I used Google AI Studio to build a functioning agent in an hour. An enterprise-grade solution takes longer, but the timeline has compressed from months to days.
My low-code shop, MCFTech, was built on this model: a free Proof of Concept on a single use case. Give us one problem. Give us one dataset. We will turn it around in three to seven days. If it works, we scale. If it doesn’t, you haven’t burned a quarter’s worth of budget.
This approach forces discipline. It strips away the “Integration Drag” and focuses on the output. It turns the conversation from “What software should we buy?” to “What problem are we solving right now?”
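To give a sense of how little scaffolding a tightly scoped POC needs, here is a hedged sketch: one exported dataset, one question, one checkable answer. The file name, column names, and the question are placeholders I invented for the example; the prepared prompt would go to whichever engine you have already licensed.

```python
import csv

def build_poc_prompt(
    path: str = "open_purchase_orders.csv",
    question: str = "Which three suppliers are most likely to deliver late next month, and why?",
) -> str:
    """Single-use-case POC scaffold: one dataset in, one auditable prompt out."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Keep the context small and reviewable -- the point of a POC is a fast, checkable answer.
    context = "\n".join(
        f"{r['po_number']}, {r['supplier']}, promised {r['promised_date']}, status {r['status']}"
        for r in rows[:200]
    )
    return f"Purchase orders:\n{context}\n\nQuestion: {question}"
```

The POC passes or fails on whether the model’s answer survives review by the people who already know the data.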
The “Backhoe” and the New Operator
Toward the end of our conversation, we landed on an analogy that stuck. The fear in the market is that AI replaces people, that IT departments get downsized because we found tools that automate the coding. I challenged Mike H on this—was he resisting the change?
“I’m not resisting it. I’m all in on it. It’s ridiculous,” he laughed. But he clarified the shift.
The developer isn’t disappearing. The developer is evolving from a guy with a shovel to an operator of a backhoe.
“You want a shovel or a backhoe?” I asked.
“I want to be the bulldozer,” Mike H replied.
But here is the catch: A backhoe moves more dirt than a shovel, but if you hit a gas line, you blow up the whole neighborhood. The tool is more powerful, but the requirement for skill—for “intellectual capital”—is higher. We aren’t just writing syntax anymore; we are orchestrating logic. We are entering the era of the AI Builder or the Agent Builder.
These new professionals will command high value. They aren’t just coding; they are teaching the AI how to navigate the business. They are soldering the connections between the “Bought” LLM and the “Built” internal processes.
Conclusion: The New Hybrid Strategy
So, is it Build or Buy? The binary choice is dead.
The future of enterprise AI is a hybrid. You buy the foundational intelligence (the 70%) because you cannot build your own LLM. But you must ruthlessly build the agents, the guardrails, and the integrations (the 30%) that make that intelligence useful for your specific business model.
The winners won’t be the ones who buy the most expensive licenses. They will be the ones who empower their people to drop the shovels, climb into the backhoes, and start building—one Proof of Concept at a time.


