Structured reference documents for the questions senior measurement people get asked: how to evaluate a vendor, how to verify independence, when to use which methodology, how to audit programmatic delivery, where to find search allocation efficiency, how to sequence conversion data, and how to read AI search visibility. Built for the conversation you take into a CFO or operating partner meeting.
Each reference document is a 3-page PDF: 12 questions plus an at-a-glance summary on the final page. Bring them to any vendor conversation, internal review, or board discussion — including ours.
Twelve questions across independence, capability, privacy, and partnership dimensions, with a vendor scorecard for comparing up to three vendors side by side. The choice of measurement vendor shapes every optimization decision the marketing team makes; this is the framework for making that choice deliberately.
Independence is a structural property, not a marketing claim. This document covers data origin, environment control, consent and data use, and economic independence. The verifiable mechanism: how the vendor's tag is classified by the Consent Management Platform on your own site — a check that takes under a minute and reveals what the vendor's brochure cannot.
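For teams that want to run that check themselves: a minimal browser-console sketch, assuming the site runs an IAB TCF v2 CMP. The vendor ID below is a placeholder; the real one comes from the vendor's entry in the IAB Global Vendor List.

```ts
// Browser-console sketch: ask an IAB TCF v2 CMP how a given vendor is
// classified on this page. VENDOR_ID is hypothetical — look up the real ID
// in the IAB Global Vendor List for the vendor you are evaluating.
const VENDOR_ID = 123; // placeholder GVL vendor ID

const tcfapi = (window as any).__tcfapi;
if (typeof tcfapi !== 'function') {
  console.log('No TCF v2 CMP exposed on this page.');
} else {
  tcfapi('getTCData', 2, (tcData: any, success: boolean) => {
    if (!success) {
      console.log('CMP returned no TC data.');
      return;
    }
    console.log({
      // Does the vendor's tag fire only with user consent?
      vendorConsent: tcData.vendor?.consents?.[VENDOR_ID],
      // Or is it classified under a legitimate-interest basis?
      vendorLegitimateInterest: tcData.vendor?.legitimateInterests?.[VENDOR_ID],
      // Which processing purposes the user actually granted.
      purposeConsents: tcData.purpose?.consents,
    });
  });
}
```

A vendor whose tag is classified under marketing purposes, or that fires regardless of the consent state above, is worth a direct question.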
Each methodology answers a different question and assumes a different set of conditions. The decision framework surfaces methodology fit as a deliberate choice rather than a default. The page-3 grid maps eight decision criteria across MMM, MTA, incrementality, and combined approaches. When methodologies disagree, the disagreement is information.
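As an illustration of the decision shape only (not the page-3 grid itself), a sketch that treats methodology fit as an explicit lookup over stated criteria; the criteria names and routing rules are simplified assumptions.

```ts
// Illustrative fragment: methodology fit as a deliberate lookup, not a default.
// Criteria and routing are simplified assumptions, not the page-3 grid.
type Methodology = 'MMM' | 'MTA' | 'Incrementality' | 'Combined';

interface DecisionCriteria {
  question: 'budget-allocation' | 'journey-level-credit' | 'causal-lift';
  userLevelDataAvailable: boolean;
}

function suggestMethodology(c: DecisionCriteria): Methodology {
  // Causal lift is an experiment question regardless of data availability.
  if (c.question === 'causal-lift') return 'Incrementality';
  // Journey-level credit needs user-level data; without it, fall back to MMM.
  if (c.question === 'journey-level-credit')
    return c.userLevelDataAvailable ? 'MTA' : 'MMM';
  // Cross-channel budget allocation: MMM alone, or MMM calibrated with
  // experiments when a causal read is available to anchor it.
  return c.userLevelDataAvailable ? 'Combined' : 'MMM';
}
```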
Platform-reported invalid traffic metrics describe what the platform's quality systems chose to surface. Independent delivery-quality auditing reads the raw signal — impression logs, beacon fires, conversion records — for patterns that platform reporting does not flag. Includes the four-signal pattern C3 looks for in client engagements: viewthrough beacon ratio, impression-spike timing, peer-volume outliers, and CPM-weighted fraud cost.
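A sketch of how those four signals read out of a raw impression log; the record shape, field names, and thresholds are illustrative assumptions, not the production logic C3 runs in engagements.

```ts
// Four-signal read over raw delivery logs (illustrative assumptions throughout).
interface ImpressionRecord {
  placementId: string;
  hour: string;              // e.g. '2024-06-01T13'
  impressions: number;
  viewthroughBeacons: number;
  cpm: number;               // dollars per 1,000 impressions
}

function auditPlacement(rows: ImpressionRecord[], peerHourlyMean: number) {
  const totals = rows.reduce(
    (t, r) => ({ imps: t.imps + r.impressions, vt: t.vt + r.viewthroughBeacons }),
    { imps: 0, vt: 0 }
  );

  // Signal 1 — viewthrough beacon ratio: beacon counts far out of line with
  // impression counts suggest machine-driven firing.
  const vtRatio = totals.vt / Math.max(totals.imps, 1);

  // Signal 2 — impression-spike timing: hours far above the placement's own
  // mean (3x here is an arbitrary illustrative threshold).
  const hourlyMean = totals.imps / Math.max(rows.length, 1);
  const spikes = rows.filter(r => r.impressions > 3 * hourlyMean);

  // Signal 3 — peer-volume outlier: this placement's volume vs. comparable
  // placements' hourly mean.
  const peerMultiple = hourlyMean / Math.max(peerHourlyMean, 1);

  // Signal 4 — CPM-weighted fraud cost: dollars at risk in the flagged hours.
  const flaggedCost = spikes.reduce((c, r) => c + (r.impressions / 1000) * r.cpm, 0);

  return { vtRatio, spikeHours: spikes.map(r => r.hour), peerMultiple, flaggedCost };
}
```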
Each search platform reports conversions on its own attribution basis. The cost-per-conversion figures are internally coherent but structurally not comparable across platforms. This framework describes what a credible cross-platform allocation methodology requires, plus a reference grid of typical findings: independently-attributed CPA differentials of 50–80%+, same-spend efficiency recovery of 6–23%, day-of-week CPC differentials of up to 140%+.
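The core move, sketched below with illustrative field names: restate every platform's spend against the same independently-attributed conversion basis before comparing cost per conversion. The sketch assumes nonzero conversion counts.

```ts
// Put every platform on one independently-attributed basis before comparing
// CPA. Field names and the record shape are illustrative assumptions.
interface PlatformSpend {
  platform: string;
  spend: number;                        // dollars
  platformReportedConversions: number;  // the platform's own attribution basis
  independentConversions: number;       // same conversions, independently attributed
}

function compareCpa(rows: PlatformSpend[]) {
  return rows.map(r => {
    const reportedCpa = r.spend / r.platformReportedConversions;
    const independentCpa = r.spend / r.independentConversions;
    return {
      platform: r.platform,
      reportedCpa,
      independentCpa,
      // The differential is the decision-relevant number: two platforms with
      // similar reported CPAs can diverge sharply on an independent basis.
      differentialPct: (100 * (independentCpa - reportedCpa)) / reportedCpa,
    };
  });
}
```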
Conversion data arrives from several sources with different confidence levels. Digital deterministic at the foundation; independent offline attribution at a structural ceiling of 4–20%; online-to-CRM matched in the middle; platform offline imports and modeled inference filling out the lower tiers. This framework sequences the tiers by confidence and surfaces the structural ceilings every measurement vendor's match-rate claims should disclose.
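A sketch of the sequencing step, assuming illustrative tier labels and record shapes: when the same conversion arrives from several sources, keep the record from the highest-confidence tier.

```ts
// Tier-ordered dedup: highest-confidence source wins per conversion.
// Tier labels follow the framework above; names are illustrative assumptions.
const TIER_ORDER = [
  'digital-deterministic',    // foundation
  'independent-offline',      // structural ceiling of 4–20% match
  'online-to-crm',            // middle
  'platform-offline-import',  // lower tiers
  'modeled-inference',
] as const;
type Tier = (typeof TIER_ORDER)[number];

interface ConversionRecord { conversionId: string; tier: Tier; value: number; }

function sequenceByConfidence(records: ConversionRecord[]): ConversionRecord[] {
  const rank = (t: Tier) => TIER_ORDER.indexOf(t);
  const best = new Map<string, ConversionRecord>();
  for (const r of records) {
    const seen = best.get(r.conversionId);
    // Lower index = higher confidence; replace only on a stronger tier.
    if (!seen || rank(r.tier) < rank(seen.tier)) best.set(r.conversionId, r);
  }
  return [...best.values()];
}
```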
AI search surfaces show up in measurement at two distinct layers: click-through traffic captured at the site tag, and citation visibility captured by publisher-side tools. Standard reporting covers parts of each layer with different reliability across surfaces. The page-3 inventory maps current AI surfaces — ChatGPT, Copilot, Bing AI summaries, Gemini, Perplexity, Claude, Apple Intelligence — against the reporting access available for each.
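The click-through layer is the one a site tag can observe directly; the citation-visibility layer is not visible at the tag at all. A sketch of referrer classification follows, where the hostname list is an illustrative assumption that needs maintenance as surfaces add and change domains.

```ts
// Click-through layer only: classify inbound sessions by referrer hostname at
// the site tag. The domain list is illustrative, not exhaustive or current.
const AI_REFERRERS: Record<string, string> = {
  'chatgpt.com': 'ChatGPT',
  'chat.openai.com': 'ChatGPT',
  'copilot.microsoft.com': 'Copilot',
  'gemini.google.com': 'Gemini',
  'perplexity.ai': 'Perplexity',
  'www.perplexity.ai': 'Perplexity',
  'claude.ai': 'Claude',
};

function classifyAiSurface(referrer: string): string | null {
  try {
    const host = new URL(referrer).hostname;
    return AI_REFERRERS[host] ?? null;
  } catch {
    return null; // empty or malformed referrer — common from in-app browsers
  }
}

// classifyAiSurface('https://www.perplexity.ai/search?q=...') -> 'Perplexity'
```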
The reference documents describe the questions; a direct conversation with our team applies them to your specific program, spend, and channel mix. No deck. The relevant document above is the agenda.
Talk this through →