The pressure on enterprise marketing leaders in 2026 is not ambiguous. Boards want AI. Vendors are announcing agent platforms. Every major marketing technology provider - Adobe, Salesforce, Microsoft - is racing to demonstrate autonomous capability. Budgets are following. The question is not whether to invest in AI capability. The question is whether the stack underneath it is ready to support what AI actually requires.

Most are not. And the failure, when it arrives, is being attributed to the wrong cause.

The misdiagnosis that compounds the problem

When an AI initiative underdelivers - when the predictive scoring model produces results that don't translate to pipeline, when the personalisation engine generates content no one can act on, when the autonomous agent can't complete a workflow because it keeps hitting data dead ends - the diagnosis tends to focus on AI maturity. The tools aren't good enough yet. The model needs more training data. The vendor promised capabilities it hasn't delivered.

These explanations are rarely the actual problem. The actual problem, in most cases, is the foundation the AI is sitting on.

AI tools require three things that most enterprise marketing stacks have not systematically built: clean, resolved identity data at the individual and account level; governed consent frameworks that determine which data can be used and for what; and orchestration architecture that connects the AI output to the execution layer where it can actually drive a decision or action. These are not AI problems. They are foundation problems. They exist independently of which AI platform you've chosen. And they will follow you from vendor to vendor if you don't address them first.

"The pattern that emerges consistently: every enterprise that has moved beyond the AI pilot stage has done the same thing first. They fixed the base."

What the evidence from the field shows

At the Salesforce Agentforce World Tour in Utrecht in April 2026 - one of the larger enterprise AI events in the Benelux - the most attended session of the day had no autonomous agents in it. E.ON One, the IT and digital services arm of one of Europe's largest energy utilities, presented a revenue transformation story built on consolidating fragmented sales data, standardising pricing globally, and building a single governed Customer 360 view. Unglamorous work. The kind that rarely gets a board presentation of its own. The room was packed.

The pattern was consistent across the sessions at that event. Shell's agentic journey framework was built on governance guardrails before capability leaps. MuleSoft framed API-led integration as the prerequisite for the agentic enterprise. Gen25 opened with a diagnosis that matched the Value Gravity™ model directly: AI initiatives stall in organisations that treat agents as a bolt-on, leaving IT and business initiatives disconnected and compliance risk growing in the process.

This is not a coincidence or a one-event observation. McKinsey's research on enterprise AI transformation points to the same bottleneck: organisations that have not built robust data and technology foundations cannot realise the value of the AI investments layered on top. The capability exists at the AI layer. The infrastructure beneath it does not.

Why this happens structurally

The Value Gravity™ framework describes why this misalignment is predictable rather than accidental. Enterprise marketing stacks have three layers: a Commercial Foundation at the base, an orchestration layer in the middle that connects AI output to execution, and an AI Capability layer at the top. The AI Capability layer - copilots, agents, predictive analytics, attribution - has the highest innovation velocity and the lowest switching cost. New tools appear constantly. Capability gaps between platforms close quickly. This is where announcements are made, and where vendor marketing concentrates.

The Commercial Foundation at the base - CRM, CDP, identity resolution, data governance, consent management - has the highest switching cost and the highest integration depth. Value accumulates here slowly, over years, through data quality work and governance decisions that are operationally intensive and rarely promotable. But every layer above depends on it. AI capability built on an unresolved identity layer produces outputs that look sophisticated and are functionally unreliable.

Investment follows announcement velocity. Announcement velocity is highest at the top. Foundation work has almost no announcement velocity at all. This is why, left to its natural trajectory, enterprise investment concentrates upward while the foundation remains underdeveloped - not through poor decision-making, but through the structural logic of how technology budgets get approved and where executive attention lands.

The diagnostic gap

The problem is not that organisations don't know their foundation has gaps. Most marketing operations leaders can name the issues: the identity model that was never properly unified after the last acquisition, the consent records that live in three different systems and don't reconcile, the CRM data quality that everyone knows is unreliable but that no programme has had the authority to fix. These problems are not hidden. They are on the mental list of every VP of Marketing Operations in the Benelux.

What is missing is a structured account of those problems that carries enough specificity and authority to move a budget line. An internal review produces a list that confirms what people already suspected. A vendor audit produces a list that reflects the vendor's portfolio. Neither of these gives an AI mandate the foundation assessment it actually needs before investment at the AI Capability layer can be justified.

If your organisation is facing an AI mandate in 2026 - and most enterprise marketing functions are - the question worth asking before the next platform decision is not which AI platform to buy. It is whether the foundation underneath the platform you already have is mature enough to support what you're about to build on top of it. The Gravity Scan is designed to answer that question in three weeks, before the contract is signed and the 18-month implementation begins.

The Gravity Scan maps your stack across 28 assessment areas - identifying where foundation maturity is sufficient, where it is not, and what to address before the next AI investment decision.

Learn about the Gravity Scan →