Marketers Bought The AI. Trust Stayed In The Cart.
Two reports published in April found marketers using AI widely and trusting it almost nowhere. The gap is not maturity; it is the cost of verifying what the model produces.
Neritus Vale
Two reports published in April describe the same gap from opposite ends: most have bought AI, few have given it permission to act. Digiday+ Research, drawing on Q4 2025 fieldwork, found 82% of brand and agency professionals using AI for creative production; the Supermetrics 2026 Marketing Data Report, polling 435 marketers across five markets, found 6% have fully embedded it in operations. One figure measures uptake; the other measures authorisation. Between them sits a question about verification that three years of marketing-AI promotion did not have to answer. The ceiling on AI marketing is no longer the model. It is how much of what the model does a marketer can stand behind.
The pressure to close the gap is arriving from above. Supermetrics found 80% of marketers feeling pressure to adopt AI. The buyer of marketing AI and the operator of it are different people, and the operator carries the verification cost. The board sees uptake numbers; the marketer sees the work that shipped. The asymmetry has been visible before, in CRM, in attribution, in personalisation; what is new this cycle is how quickly leadership pressure overrides the operator’s caveats.
The deployment data tracks audit cost almost line by line. Digiday found 82% using AI for creative production, where every draft is reviewed before it ships, and 54% not yet deploying agentic systems that would run unattended. Generation rises where humans can verify and falls where they cannot. The pattern is consistent: the higher the cost of inspecting a task, the lower the adoption. AI is welcome where a marketer can grade the homework and rare where they cannot.
Trust is not a soft variable in this picture; it is a budget line for the time a marketer must spend reading the work.
The standard reply is that the gap is a maturity problem, not a structural one. Supermetrics’ own report blames data ownership for the lag: 52% of marketing teams do not control their data strategy, and the implementation figure is presented as a curve every team will eventually walk. That argument has force; ownership and integration throttle adoption, and the curve has bent before for analytics, attribution, and personalisation. But the reply misses what verification means inside agentic systems. Once an agent chains five tasks together without a human checkpoint, no audit log written afterward substitutes for the check a marketer would have run in the middle. The break point inside an agentic chain is where verification ends, and no maturity curve closes it.
The CMO Council reached the same gap from the opposite direction. Its survey of 371 marketing leaders found a 51-point ROI spread between teams it called “Power Partners” and “Emerging Partners,” and credited the gap to workflow redesign rather than tool spend. The teams winning are not the ones running more models; they are the ones engineering verification points back into the work. Where Supermetrics reads “ownership,” the CMO Council reads “judgment.” Both measure how much of the AI’s output a human will stand behind.
Beauty and fashion marketers stand to gain most from generation and face the steepest cost on review. Unilever’s Selina Sykes, speaking from the company’s beauty and wellbeing division, told Digiday: “Before, we’d be doing 20 assets per campaign, and now we’re doing hundreds.” The production barrier collapsed, and the bottleneck became inspection. The same dynamic holds in fashion: Gap, cited in the CMO Council report among brands providing executive perspectives, sells in a category where a mis-styled lookbook can undermine conversion for a season. The verification cost rises with every asset because the brand sits inside the asset; agentic systems that ship creative without a checkpoint expose the brand at the speed of generation. Fashion teams keep the human between model and market even when they have already paid for the autonomy.
The ceiling holds because verification, unlike compute, does not get cheaper with scale. A model twice as capable produces twice as much for the same human to read, and the marketer’s day does not double to match. If the next generation of marketing AI cannot supply its own audit trail, one a human can sample rather than re-read in full, adoption will rise slowly, because trust requires evidence the technology does not yet produce. The vendors that win this round will be the ones whose output a brand can defend in a single read. The buy was the easy part. What stays in the cart is permission.