The real shift: people stopped “searching” and started delegating
The most important change in marketing right now isn’t that AI is “in the funnel.” It’s that your customer has quietly stopped searching and started delegating.
Google’s AI Overviews, “AI Mode,” semantic search, agentic assistants, AI voice agents, Threads/Pinterest/Reddit algorithms, AI CRM – all the headlines are about one thing:
machines mediating intent. Your buyer increasingly says:
- “Find me the best…”
- “Plan my trip…”
- “Compare these vendors and book a demo…”
- “Summarize the top options and just pick one.”
That’s not search behavior. That’s delegation. And in a delegated world, your real customer is no longer just a human. It’s the AI system deciding what they see, what gets summarized, and what gets silently dropped.
CMOs, performance marketers, and media buyers who keep optimizing for “10 blue links” and last-click ROAS are quietly losing share to teams who are optimizing for AI intermediaries.
The new funnel: human on top, AI in the middle, brand at the bottom
The classic funnel assumed a direct line:
Awareness → Consideration → Intent → Purchase
In an AI-first environment, there’s a new layer:
Human → AI layer → Brand
That AI layer sits between your buyer and:
- Search results (AI Overviews, semantic search, intent extraction)
- Social feeds (algorithmic curation, “recommended for you” content)
- Customer service and sales (AI voice/chat agents, AI CRMs)
- Measurement (modeled conversions, black-box campaign types like Performance Max)
The operators who win will treat that AI layer as a distinct channel with its own:
- Targeting logic (how the AI infers and groups intent)
- Ranking signals (what the AI “trusts” and surfaces)
- Attribution quirks (what the AI hides or compresses)
What this actually breaks in your current plan
This isn’t a thought experiment. You can already see the cracks:
1. Your SEO strategy is still written for humans, not models
Ahrefs and Moz are hammering semantic search, AI Overviews, cannibalization, and title tag rewrites for a reason. The game is no longer “rank a page for a keyword.” It’s:
- Be the source AI Overviews cite.
- Avoid having your own pages compete with each other so heavily that the model never confidently picks one.
- Structure content so a model can extract, summarize, and reuse it.
If your content is written only for human skimming, it’s increasingly invisible to the systems doing the summarizing.
2. Your paid media is optimized for the wrong “brain”
Google’s Performance Max, centralized Experiment Center, and new intent extraction methods are all about giving the machine more control. Most teams respond with:
- More budget into black-box campaigns
- More surface-level tests (ad copy, images) instead of structural tests (audience, offer, data feeds)
- More reporting, less learning
But if the AI is the real media buyer, your job shifts from “tweak bids” to “feed the machine the right signals and constraints.” Different skill set. Different metrics.
3. Your AI budget is getting cut because your metrics are fake
CFOs are already cutting AI budgets and asking for three things:
- Clear revenue impact
- Defensible attribution
- Evidence that AI is doing something humans couldn’t do as well
Most AI line items in marketing fail that test. “We bought a co-pilot” is not a strategy. “We reduced response time by 40% and increased qualified pipeline by 18% using AI routing” is.
Designing for the AI layer: four workstreams that actually matter
Here’s how to treat AI intermediaries as a real channel instead of a buzzword.
1. Make your brand legible to machines
AI systems can’t recommend what they can’t parse. You’re optimizing for:
- Semantic clarity: Use explicit, structured language around who you serve, what you do, and in what situations you’re the best choice.
  Think “we help B2B SaaS companies with $10-50M ARR reduce churn by 20%” instead of “we’re a customer success platform for modern teams.”
- Structured data: Schema markup, clean product feeds, consistent pricing, clear feature lists. Machines love tables, bullet points, and consistent formats.
- Canonical authority: Reduce cannibalization. Decide which page is the authority on each core topic and align internal links, titles, and meta accordingly.
Practical moves for the next 90 days:
- Audit your top 50 pages: is it obvious, in the first 100 words, who this is for and what problem it solves?
- Implement or fix schema for products, FAQs, reviews, and organization.
- Kill or consolidate overlapping pages that confuse both humans and models.
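To make the structured-data point concrete, here’s a minimal sketch of Organization and FAQPage markup built as Python dicts and emitted as JSON-LD. Every name, URL, and answer is a placeholder, not a prescription:

```python
import json

# Minimal JSON-LD sketch. All names and URLs below are placeholders --
# swap in your real company data before shipping.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCo",
    "url": "https://www.example.com",
    "sameAs": ["https://www.linkedin.com/company/exampleco"],
}

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Who is ExampleCo for?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "B2B SaaS companies with $10-50M ARR looking to reduce churn.",
            },
        }
    ],
}

# These payloads go into <script type="application/ld+json"> blocks in your templates.
print(json.dumps(organization, indent=2))
print(json.dumps(faq, indent=2))
```

The point isn’t the specific schema types; it’s that a model parsing your page gets an unambiguous, machine-readable statement of who you are and who you serve.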
2. Optimize for “AI trust” as a real KPI
AI systems are increasingly deciding: recommend or reject. That’s as true for Google’s AI Overviews as it is for social feeds and AI CRMs.
Signals that drive “trust” for AI systems:
- Consistency: Aligned claims across site, ads, social, and third-party profiles. Models cross-reference.
- Reputation: Reviews, mentions, citations, and inclusion in trusted lists. Think “what would a model see if it had to justify recommending us?”
- Safety: Clear policies, transparent pricing, and low complaint volume. Risk-averse models prefer boring, predictable brands.
Turn this into something measurable:
- Track AI Overview mentions and citations for your brand and competitors.
- Monitor inclusion in “top X tools/platforms/solutions” roundups across the web.
- Set a quarterly “AI trust audit”: can an external analyst, using only public data, confidently explain why you’re a safe recommendation?
3. Feed the ad platforms better signals, not more budget
If Performance Max and similar black-box systems are the real media buyers, your job is to:
- Define what “good” looks like in data the machine can see
- Give it clean, timely feedback
- Constrain it where it tends to go off the rails
For performance marketers and media buyers, that means:
- Event hygiene: Fix your conversion events. Remove junk micro-conversions. Map events to meaningful funnel stages (qualified lead, SQL, opportunity, closed-won).
- Offline conversion uploads: Especially for B2B. Feed CRM outcomes back into Google/META so the AI can see beyond lead form fills.
- Experiment discipline: Use Google’s Experiment Center and equivalent tools not to test button colors, but to test:
- Different value props
- Different audience definitions
- Different landing page frameworks
A simple rule: if your team spends more time debating ROAS definitions than defining conversion events and offline upload processes, you’re feeding the AI garbage.
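As one sketch of the offline-upload step: turn CRM outcomes into a click-conversion upload file, feeding back only the funnel stages you actually want the machine to optimize toward. The column names follow Google Ads’ offline click-conversion template at the time of writing, and all GCLIDs and values are made up; verify the current format against Google’s docs before uploading.

```python
import csv
import io

# Hypothetical CRM export rows. GCLIDs, values, and timestamps are fake.
crm_rows = [
    {"gclid": "gclid_001", "stage": "closed-won", "value": 25000, "ts": "2024-05-01 14:32:00"},
    {"gclid": "gclid_002", "stage": "mql",        "value": 0,     "ts": "2024-05-02 09:10:00"},
    {"gclid": "gclid_003", "stage": "sql",        "value": 0,     "ts": "2024-05-03 11:05:00"},
]

# Only feed back the stages you want the bidding system to chase.
UPLOADABLE = {"sql": "SQL", "closed-won": "Closed Won"}

def build_upload(rows):
    """Build a CSV of deep-funnel outcomes, dropping shallow stages like MQL."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for r in rows:
        if r["stage"] in UPLOADABLE:
            writer.writerow([r["gclid"], UPLOADABLE[r["stage"]], r["ts"],
                             r["value"], "USD"])
    return out.getvalue()

print(build_upload(crm_rows))
```

Note what the filter does: the MQL row never reaches the ad platform, so the machine learns from SQLs and closed-won deals instead of form fills.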
4. Rebuild measurement around “assisted by AI” journeys
AI Overviews, semantic search, social algorithms, and AI agents compress and obscure paths. You will not get every click. You will not see every touch.
Instead of chasing perfect attribution, design for directional truth that your CFO will accept:
- Minimalist analytics: Use a small set of reports that actually drive decisions (e.g., five core reports: queries, landing page performance, assisted conversions, device breakdown, geographic performance).
- Incrementality testing: Use geo splits, holdout groups, and time-based tests to answer “what happens if we turn this off?” for key AI-driven channels.
- AI-specific KPIs: Track:
- AI Overview visibility and click loss for top queries
- Share of traffic from branded vs. non-branded queries post-AI rollout
- Downstream metrics from AI agents (CSAT, resolution time, conversion rate)
You’re not going to get pixel-perfect journeys in a world where half the “journey” is a model summarizing the internet. Aim for “confident enough to make budget decisions.”
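The incrementality logic is simpler than the attribution debates make it sound. A toy geo-holdout sketch with illustrative numbers: keep the AI-driven channel on in treatment geos, pause it in control geos, and compare.

```python
# Toy geo-holdout: conversions per geo with the channel on (treatment)
# vs. paused (control). All numbers are illustrative only.
treatment = {"geo_a": 120, "geo_b": 95, "geo_c": 110}   # channel on
control   = {"geo_d": 80,  "geo_e": 90, "geo_f": 85}    # channel paused

def mean(values):
    return sum(values) / len(values)

lift = mean(treatment.values()) - mean(control.values())
lift_pct = lift / mean(control.values())

# "What happens if we turn this off?" -> roughly this many conversions
# per geo appear attributable to the channel.
print(f"Incremental conversions per geo: {lift:.1f} ({lift_pct:.0%} lift)")
```

A real test needs matched geos, a long enough window, and a significance check, but the decision output is exactly this shape: a lift number finance can act on, with no claim to per-click attribution.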
Where CMOs should actually move money in the next 12 months
The headlines make it sound like you need to chase every new AI feature. You don’t. You need to re-balance your portfolio around the AI layer.
1. Spend less on “AI-branded” tools, more on data and structure
Cut:
- Random AI writing tools that add more content but not more clarity
- Shiny AI “assistants” that duplicate existing workflows
Reinvest into:
- Data engineering support to clean up events, CRM, and product feeds
- Technical SEO and information architecture
- Analytics resources that can run proper incrementality tests
2. Shift from channel specialists to “AI layer” specialists
You still need channel experts, but you also need people who think horizontally:
- Someone who owns “how we show up in AI Overviews, semantic search, and recommendation systems”
- Someone who owns “how we feed and govern black-box campaigns and AI CRMs”
- Someone who can talk to legal and security about AI trust, privacy, and brand risk
Title them however you want. The function matters more than the org chart.
3. Treat trust as a performance metric, not a brand slogan
At Davos, trust is being framed as a performance metric. That’s not fluffy. In a delegated world, if the AI doesn’t trust you, you don’t get recommended.
Make “trust” operational:
- Track complaint rates, refund rates, and review velocity alongside CAC and LTV.
- Give your CX and legal teams a real seat at the media planning table.
- Include “AI-safe” guidelines in creative briefs (claims, evidence, disclaimers).
What to do this quarter
To make this concrete, here’s a 90-day plan for a CMO or head of growth:
- Run an AI visibility audit
- Identify your top 50 non-branded queries by revenue.
- Check how often AI Overviews appear and whether you’re cited.
- Do the same for key social platforms: where do algorithms already favor you?
- Clean your conversion data
- Standardize event names across web, app, and CRM.
- Remove low-signal events from optimization (e.g., time-on-site, generic clicks).
- Set up offline conversion uploads for at least one major ad platform.
- Restructure 10-20 core pages for semantic clarity
- Add explicit “who this is for / not for” sections.
- Use FAQs and bullet lists that models can easily summarize.
- Resolve obvious cannibalization: one page per core intent.
- Launch one serious incrementality test
- Pick a major AI-heavy channel (Performance Max, META Advantage+, etc.).
- Design a geo or time-based test with a clean control.
- Agree upfront with finance on what outcome counts as “keep, cut, or scale.”
- Define “AI trust” for your brand
- List the top 5 public signals a cautious AI system would use to judge you.
- Assign owners and quarterly targets for each (reviews, complaints, citations, etc.).
- Bake those into your media and content briefs.
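The “standardize event names” step in the plan above can be sketched as a single mapping layer: translate platform-specific event names (all hypothetical here) into one canonical funnel taxonomy, and drop low-signal events before they ever reach optimization.

```python
# Hypothetical event names from web, app, and CRM, mapped to one
# canonical funnel taxonomy.
CANONICAL = {
    "form_submit": "lead",
    "demo_request": "lead",
    "LeadCreated": "lead",
    "sql_created": "sql",
    "OpportunityCreated": "opportunity",
    "deal_won": "closed_won",
}

# Events that should never feed bidding or reporting.
LOW_SIGNAL = {"scroll_50", "time_on_site_30s", "generic_click"}

def normalize(events):
    """Drop low-signal events and rename the rest to canonical stages."""
    return [CANONICAL.get(e, e) for e in events if e not in LOW_SIGNAL]

print(normalize(["form_submit", "scroll_50", "deal_won", "unknown_event"]))
```

One boring dictionary like this, agreed across teams, does more for black-box campaign performance than another round of creative tests.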
The teams that adapt to delegated behavior now will quietly compound an advantage: they’ll be the default answer the AI gives when your next customer says, “Just pick the best option for me.”