Why your AI transformation budget should come from the Big-4 line item

Alex Kim, VP Engineering
Mudassir Mustafa

Every CTO and CAIO I've talked to in the last six months is asking some version of the same question: "The board wants us to do AI transformation, finance hasn't allocated a new line for it, and the platforms cost real money. Where does the budget come from?"

The wrong answer is "find new money." Finance won't approve it on the timeline AI is moving on, and even when they do, the procurement cycle eats most of the year.

The right answer is harder to say out loud but easier to defend: stop paying Accenture, Deloitte, BCG, EY, and McKinsey to do software work that now ships as software. The AI transformation budget already exists in your P&L. It's sitting in the Big-4 consulting line item, earmarked for engagements that were scoped before the substrate existed to do this work as product.

This is the pricing-thesis post. It's not a critique of consulting firms. It's an argument about which line item the money should come out of.

The budget already exists

Industry estimates put enterprise AI-flavored consulting spend in the tens of billions of dollars a year. The Big-4 firms collectively report billions in annual AI services bookings. Every Big-4 and MBB firm has shifted a meaningful portion of their consulting book toward AI-themed engagements because that's where the demand is.

That money has to come from somewhere on your P&L. It usually sits in one of four places: a digital transformation budget owned by the CIO or COO, a multi-year master services agreement with one or more of the Big-4 firms, a CIO or CTO discretionary spend line, or a PE sponsor mandate budget if your company is PE-backed. Sometimes it's a separate AI line that's already been created and quietly allocated to a Big-4 engagement.

The point is the budget exists. It's allocated. It's being spent. The question isn't whether your company has money for AI. The question is what shape of work that money is buying.

What the budget pays for today

A typical Big-4 enterprise AI engagement looks like this. Eighteen to twenty-four months. Two to five million dollars. A team of twelve consultants, mostly junior, who rotate off at the end of the engagement and take the institutional knowledge with them.

The first six months are discovery and workshops. Current-state assessments. Stakeholder interviews. Industry benchmark analysis. The deliverable is a slide deck. A long, well-formatted, expensive slide deck.

The next six months are future-state architecture. More slides. Reference architectures. Vendor evaluations. A roadmap that lists thirty initiatives prioritized in a 2x2 matrix.

Then somewhere in months twelve to eighteen, a proof of concept gets built. Usually in a sandbox. Usually disconnected from the production systems it would need to touch to be useful. Sometimes it demos well. Almost never does it ship to production within the engagement window.

What you have at the end: a thick PowerPoint, a sandbox demo, and a recommendation to engage the same firm for the build phase. Which is another twelve to eighteen months and another few million dollars.

What you don't have: an AI agent running against your production systems, doing real work, that your team operates.

This isn't every Big-4 engagement. There are exceptions. But the pattern is consistent enough that most CTOs reading this are nodding.

The work isn't consulting work anymore

Here's the accounting-identity argument.

Five years ago, building a system that connected an enterprise's fragmented data sources, ran retrieval over them, governed model access, deployed agents with audit trails, and orchestrated workflows across SAP and Salesforce and a dozen homegrown apps was custom integration work. Every layer had to be built. There was no off-the-shelf context engine. There was no off-the-shelf agent framework. There was no off-the-shelf governance substrate. The work was bespoke, the work was hard, and the work was correctly priced as professional services because that's what it was.

Today, that substrate ships as product. The integration layer is product. The context engine is product. The agent runtime is product. The governance and observability layer is product. The model gateway is product. The memory store is product. What used to require eighteen months of senior engineering work now ships as software you deploy in your cloud.

Same problem. Same outcome. Different delivery model.

This isn't a Rebase-specific claim. The same shift is true whether the buyer ends up with Rebase, a competitor in our category, or builds it themselves on open-source components. The point is that the work has moved from services to product, and the pricing should follow.

What hasn't moved: change management, organizational design, business process redesign, executive education, regulatory navigation, training the operations team to use AI in their daily workflows, restructuring teams to absorb new capabilities. That work is real consulting work. It's specific to your business. It doesn't ship as a product because it can't. Pay for that.

What has moved is the platform build itself. That's the line item to look at.

The reframe

Stop asking "can we afford an AI platform." Start asking: "are we paying consultants to do software work that now ships as software?"

The test is the SOW.

If the deliverables read like a software build (build a context layer, integrate three systems, deploy a retrieval pipeline, configure governance policy, ship an agent that does X), it's software work. The market price for software work is software pricing.

If the deliverables read like a change program (assess current state, redesign business processes, train two hundred users, restructure the operations team, run executive workshops), it's consulting work. Pay consulting prices.

Most enterprise AI engagements blend both. The math gets sharper when you separate them.
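As a rough illustration, the SOW test above can be sketched as a verb filter over deliverable lines. Everything here is hypothetical: the verb lists and the sample deliverables are illustrative, not drawn from any real SOW.

```python
# Hypothetical sketch: bucket SOW deliverables by their leading verb.
# Verb lists and sample lines are illustrative, not from a real SOW.
SOFTWARE_VERBS = {"build", "integrate", "deploy", "configure", "ship"}
SERVICES_VERBS = {"assess", "redesign", "train", "restructure", "run"}

def bucket(deliverable: str) -> str:
    """Classify one SOW deliverable line by its first word."""
    verb = deliverable.split()[0].lower()
    if verb in SOFTWARE_VERBS:
        return "software"
    if verb in SERVICES_VERBS:
        return "services"
    return "review manually"

sow = [
    "Build a context layer",
    "Integrate three systems",
    "Train two hundred users",
    "Run executive workshops",
]
for line in sow:
    print(f"{bucket(line):>15}  {line}")
```

A real audit would obviously be done by a human reading each deliverable in context; the point of the sketch is only that the two buckets are separable, line by line.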

Where to find the money in your P&L

There are five places to look. Do this audit before you tell finance you need new budget.

1. Active Big-4 master services agreements. Most enterprises have at least one MSA already signed with one of the Big-4, often two or three. The unspent portion of those MSAs is the first place to look. Pull the open SOWs. Read what they're actually scoped to deliver.

2. Digital transformation budget. Usually owned by the CIO or COO. Often allocated to consulting first because that's the historical default for transformation programs. The line item exists because the company knows it needs to modernize. The line item doesn't care which vendor it gets spent with.

3. AI-adjacent retainers. Boutique strategy firms, "AI advisory" engagements, fractional executive arrangements that include AI scope. These tend to be smaller line items individually but add up across the org.

4. CIO and CTO discretionary spend. Varies enormously by company size. Often a few hundred thousand to a few million dollars a year that doesn't require board approval. The discretionary line is the fastest path to a parallel pilot.

5. PE sponsor mandate budget. If you're PE-backed and the sponsor told the portfolio "we need to be AI-ready by next year," that mandate almost always came with budget attached. That budget lives somewhere. Usually with the CIO, sometimes with the CFO, occasionally with the sponsor's operating partner who's running portfolio-wide AI initiatives.

The audit question is the same for all five: of every dollar in this line, how much is buying software work that now ships as software?

What this looks like in practice

Here's the pattern we've watched play out in conversations with prospect CTOs over the last six months.

A public-company manufacturer is eighteen months into a Big-4 AI transformation engagement. Original scope: five million dollars over twenty-four months, branded "AI-enabled operations transformation." Three million of that has been spent. Deliverables to date: a two-hundred-page current-state assessment, a future-state architecture document, three proof-of-concept demos running in sandbox environments that don't connect to production systems.

Zero AI agents running against the production stack. Zero workflows changed. The CTO gets asked at the board meeting what the company has to show for three million dollars, and her answer is some version of "we're building the foundation." Which is true, and which lands badly with directors who expected something to ship.

She does the audit. The remaining two million dollars of the engagement is scoped for five workstreams: build the data integration layer, build the agent framework, build the governance substrate, deploy the first three production agents, and train the operations team to operate them.

Three of those workstreams are software builds. Two are a blend of software and change work. None of them are pure services work.

She reallocates four hundred thousand dollars from the remaining two million into a parallel platform pilot. Eight weeks of pilot. First agent in production at week six. By the end of the original engagement window, three agents are live, the ops team is trained, and the platform is running in the company's cloud. The Big-4 engagement narrows to organizational design, training, and change management, which is what it was always good at, and closes out at one-point-two million instead of two million.

The board gets a different answer at the next meeting.
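The reallocation math in this story is simple enough to check directly. The numbers below are the ones from the example above:

```python
# Numbers from the case study above; the arithmetic is the whole point.
original_remaining = 2_000_000   # unspent scope of the Big-4 engagement
pilot = 400_000                  # reallocated to the 8-week platform pilot
narrowed_engagement = 1_200_000  # consulting narrowed to change work

total_spend = pilot + narrowed_engagement
saved = original_remaining - total_spend
print(f"spent {total_spend:,}, saved {saved:,}")
# spent 1,600,000, saved 400,000
```

Four hundred thousand dollars less than the original scope, and the difference in outcome is three agents in production instead of none.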

What to do this quarter

Six steps. Most CTOs can run the first four without escalation.

Step 1: Audit the consulting line items. Pull every active SOW that has an AI, automation, or digital component. Most of you already know which ones these are.

Step 2: Bucket the deliverables. For each SOW, sort the deliverables into two columns: software-shaped (build, integrate, deploy, configure, ship) vs services-shaped (assess, redesign, train, restructure, change-manage).

Step 3: Test the software-shaped deliverables against product reality. For each one, ask: does this capability exist as a product today? Almost all of them do. Context layers exist as product. Agent frameworks exist as product. Governance substrates exist as product. Integration layers exist as product.

Step 4: Pick one workstream and run a parallel pilot. Don't kill the consulting engagement. Run an eight-week paid platform pilot in parallel on one workstream. Pick the workstream that has the clearest measurable outcome. Invoice reconciliation. Vendor onboarding. Inventory exception handling. Whatever's tractable.

Step 5: Compare results at week eight. Time to first agent in production. Total cost. Working code in your cloud. Whether the platform passed your security review. Whether the operations team can actually use it.

Step 6: Reallocate next quarter's spend based on what shipped. This is the conversation with finance and the sponsor. Bring the comparison. Bring the working agents. Bring the math.

The honest version

Most enterprises don't have an AI budget problem. They have an AI delivery problem.

The money is allocated. It's just being spent on the wrong shape of work. Pay software prices for software work. Building a platform that connects your systems, holds context, runs agents, and enforces governance was services work in 2020. It's product work in 2026.

The CTOs who figure this out first stop having the "where does AI budget come from" conversation. They start having a different conversation, which is about which workstream to ship first.

What to do next

Request a demo. Thirty minutes. Bring your active consulting engagement scope, or just describe what your current AI workstream is supposed to deliver. We'll give you an honest read on what could ship as software, what should stay as services, and where the math actually lands.

If the answer is "your current engagement is doing real consulting work and you don't need a platform yet," we'll tell you that too.

WHITE PAPER

The AI Infrastructure Gap

Why scaling AI requires a new foundation and the nine components every enterprise ends up needing.


BECOME AI-FIRST

Transform your enterprise in weeks.

Thirty minutes. Your actual stack. We'll show you what AI-first looks like running on your cloud, connected to your real systems.
