If a marketing service can’t tell you what it’s for—and what it will be judged by—assume you’re buying vibes.
I’m not anti-agency. I’m anti-fog. The stuff that pays back tends to look boring on a slide deck: clean measurement, tight iteration loops, ruthless pruning. The stuff that doesn’t pay back usually sounds incredible in a pitch and collapses the second you ask, “Cool—how do we prove incremental lift?”
One-line truth: ROI isn’t a feeling. It’s an accounting problem with better creative.
The ROI that actually counts (not the “we grew awareness” kind)
Look, clicks are not revenue. Neither is “engagement” unless it correlates with a downstream event you can name and track.
When digital marketing earns its keep, it’s because someone did the unglamorous work:
– Defined a conversion that matters (purchase, booked call, qualified lead, retained customer)
– Connected it to a real cost (fully loaded CAC, not just ad spend)
– Measured payback over time (LTV, retention curves, repeat purchase rate)
– Built attribution you can defend in a room full of skeptics
Technical aside: if you’re reporting ROAS without accounting for margin, refunds, retention, and incremental lift, you’re not reporting ROI—you’re reporting a partial metric that can be gamed.
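To make the gap concrete, here’s a toy calculation (every number invented for illustration) showing how a healthy-looking ROAS can hide a roughly break-even ROI once margin, refunds, and incrementality enter:

```python
# Hypothetical figures only -- not benchmarks. The point is the divergence
# between platform-reported ROAS and margin-adjusted, incrementality-adjusted ROI.

ad_spend = 10_000.0
platform_revenue = 40_000.0          # what the ad platform claims it drove

roas = platform_revenue / ad_spend   # 4.0x -- the number in the slide deck

gross_margin = 0.45                  # revenue is not profit
refund_rate = 0.08                   # refunded orders still count toward ROAS
incrementality = 0.60                # share of conversions actually caused by ads

net_revenue = platform_revenue * (1 - refund_rate) * incrementality
contribution = net_revenue * gross_margin
roi = (contribution - ad_spend) / ad_spend

print(f"ROAS: {roas:.1f}x")          # ROAS: 4.0x
print(f"ROI:  {roi:+.0%}")           # ROI:  -1%
```

With these assumptions, a 4x ROAS is actually slightly underwater. Change any one input and the story changes, which is exactly why the inputs belong in the report.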
A helpful anchor: Google’s Economic Impact Report has repeatedly estimated that businesses earn multiple dollars back per $1 spent on Google Ads, with figures varying by year and methodology; one commonly referenced claim is roughly $8 in revenue per $1 of ad spend. That stat is not a guarantee (honestly, it’s not even a plan), but it’s a reminder that paid channels can print returns when measurement and offer economics are sound.
What I’ll happily pay for again
Some services don’t just “perform.” They compound because they create a system you can iterate.
Performance creative + landing page testing (the unsexy gold mine)
I’ve seen more growth come from fixing message-market mismatch than from doubling budgets. A tight loop here wins:
Run ads → isolate the message that gets qualified clicks → land those clicks on a page that answers objections → measure down-funnel → iterate weekly.
Not quarterly. Weekly.
A good provider won’t obsess over cleverness; they’ll obsess over clarity. They’ll have a testing backlog. They’ll kill losing variants fast. And they’ll argue with you (politely) when you want to keep the “pretty” creative that converts like a brick.

Analytics + attribution cleanup (yes, pay for it)
Now, this won’t apply to everyone, but if your tracking is shaky, everything downstream becomes theater.
Worth-it work looks like:
– Server-side tracking where appropriate
– Event taxonomy that doesn’t change every two weeks
– UTMs that match reality (not chaos)
– Consistent definitions for MQL/SQL/qualified lead
– A source-of-truth dashboard that doesn’t mysteriously “update”
If an agency can’t explain how conversions are deduped across platforms, they’re guessing. Guessing gets expensive.
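As a sketch of what dedup even means: if every platform claims the same order, summing their dashboards double-counts. One minimal approach (field names, data, and the priority order are all assumptions for illustration) keys on a shared order ID:

```python
# Toy dedup: several platforms each claim credit for the same order.
platform_conversions = [
    {"order_id": "A1", "platform": "meta",   "value": 120.0},
    {"order_id": "A1", "platform": "google", "value": 120.0},  # same order, claimed twice
    {"order_id": "B2", "platform": "google", "value": 80.0},
    {"order_id": "C3", "platform": "email",  "value": 45.0},
]

# If two platforms claim one order, credit the platform earlier in this list.
priority = ["google", "meta", "email"]

deduped = {}
for conv in platform_conversions:
    existing = deduped.get(conv["order_id"])
    if existing is None or priority.index(conv["platform"]) < priority.index(existing["platform"]):
        deduped[conv["order_id"]] = conv

total_claimed = sum(c["value"] for c in platform_conversions)  # 365.0
total_deduped = sum(c["value"] for c in deduped.values())      # 245.0
print(f"claimed: {total_claimed}, deduped: {total_deduped}")
```

A last-click priority list is crude; the point is that the rule is explicit, written down, and the same every month.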
Lifecycle marketing (email/SMS) that’s tied to economics
Email is where a lot of “hidden ROI” lives—if you treat it as a product, not a newsletter.
In my experience, the best lifecycle programs are brutally practical: abandoned cart flows, post-purchase education, replenishment reminders, upsell based on behavior, winback sequences with actual segmentation. Personalization is great, but only when it scales and produces measurable uplift (not just nicer copy).
Short section, big point: Retention is cheaper than reacquisition. Always has been.
Hot take: Influencer marketing is mostly overpriced… until it isn’t
The problem isn’t creators. The problem is lazy structure.
When influencer spend works, it’s because you treat it like performance media with creative upside:
You demand trackability: unique codes, tagged links, landing pages, clear terms on usage rights. You evaluate audience relevance with more than follower counts. You compare results against what that budget would’ve done in paid social.
When it fails, it fails in a predictable way: you buy reach, not intent. Then you report impressions like they’re deposits in a bank account.
Here’s the thing: if you can’t pause it without “damaging the relationship,” you’ve built a program that’s emotionally fragile and financially dangerous.
The 5 criteria I use to pin down value (no wiggle room)
Some teams call this “vendor evaluation.” I call it self-defense.
- Outcome alignment: KPIs map to business goals, not platform metrics.
- Pricing transparency: fees, pass-through costs, tooling, creative production—spelled out.
- Attribution sanity: a model you can explain, with known limits, and a plan to improve it.
- Test plan: hypotheses, success thresholds, stop criteria, iteration cadence.
- Accountability rhythm: monthly readouts, what changed, what’s next, and what got cut.
If a provider acts like any of that is “extra,” you’re about to buy a lot of activity and very little impact.
Services that tend not to move the needle (and why they keep getting sold)
This is where people get mad, but… you asked.
Generic “SEO packages” with templated deliverables
If the pitch sounds like: “We’ll do X blog posts, Y backlinks, Z optimizations,” run a little diagnostic in your head: How does that translate to revenue? For which queries? For which pages? Over what timeline?
Template SEO often produces busywork: content no one searches for, links that don’t move rankings, reports full of “visibility” charts that never connect to pipeline.
Good SEO is strategy plus information architecture plus ruthless prioritization. Bad SEO is a content treadmill.
“Brand awareness” campaigns with no measurement plan
Awareness can matter. It can also become a blanket excuse.
If there’s no incrementality design—geo tests, lift studies, matched markets, holdouts, anything—then you’re essentially trusting that the money worked because the money was spent.
Opinionated line: Awareness without a measurement thesis is just expensive hope.
Dashboard-only “consulting”
I like dashboards. I don’t like dashboards as a substitute for decisions.
If the service is 80% reporting and 20% action, you’re buying narration, not performance. A strong partner uses reporting as a trigger: keep, cut, scale, test, fix.
A practical ROI framework you can run next quarter (without rebuilding your whole stack)
You don’t need a grand transformation. You need a controlled loop.
Step 1: Baseline like a skeptic
Capture current conversion rate, CAC, LTV, payback period, lead quality rates, and channel mix. Document assumptions. Freeze definitions.
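The baseline math itself is simple; the discipline is in freezing the inputs. A sketch with invented numbers:

```python
# Illustrative baseline -- every figure here is a made-up assumption.
marketing_spend = 50_000.0      # fully loaded: media + agency + tooling + creative
new_customers = 400
cac = marketing_spend / new_customers              # $125 per customer

avg_order_value = 60.0
orders_per_year = 3.5
gross_margin = 0.50
annual_contribution = avg_order_value * orders_per_year * gross_margin  # $105/yr

payback_months = cac / (annual_contribution / 12)  # ~14.3 months
print(f"CAC: ${cac:.0f}, payback: {payback_months:.1f} months")
```

Write these numbers down with their definitions (what counts as “marketing spend,” what counts as a “new customer”), and don’t let the definitions drift mid-quarter.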
Step 2: Pick 2–3 bets, not 12 “initiatives”
Examples that are scoped enough to measure:
– New offer + landing page rebuild for a single audience segment
– Creative testing sprint: 10 angles, 3 formats, measured on qualified conversion
– Lifecycle improvement: post-purchase flow that targets repeat rate within 45 days
Step 3: Predefine success and failure
Not “we’ll see.” Put numbers on it: +15% qualified lead rate at flat CAC, or -10% CAC at stable close rate. Add stop-loss rules so you don’t bleed budget.
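Those thresholds are easy to encode so nobody can re-litigate them mid-quarter. A sketch, where the baseline figures and the 20% stop-loss band are assumptions chosen for illustration:

```python
# Predefined verdict for one bet: +15% qualified lead rate at flat CAC,
# with a stop-loss if CAC drifts more than 20% above baseline.
def verdict(lead_rate, cac, base_rate=0.20, base_cac=125.0):
    if cac > base_cac * 1.20:
        return "stop"      # stop-loss hit: kill the bet, no debate
    if lead_rate >= base_rate * 1.15 and cac <= base_cac:
        return "scale"     # predefined success reached
    return "iterate"       # inside guardrails, but not yet a win

print(verdict(0.24, 120.0))  # scale
print(verdict(0.18, 160.0))  # stop
print(verdict(0.21, 130.0))  # iterate
```

The specific numbers matter less than the fact that they were agreed on before the spend, not after.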
Step 4: Control for noise
Seasonality, promos, inventory constraints, sales team changes—write them down. If you can, use holdouts or split tests. If you can’t, at least avoid changing five variables at once.
Step 5: Audit tracking before you celebrate
This is where teams get fooled. Duplicated conversions, misfired events, platform-reported sales that don’t match backend numbers (it happens constantly).
If you can’t trust the measurement, don’t trust the win.
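A minimal reconciliation pass (toy data, hypothetical order IDs) that surfaces all three failure modes before anyone celebrates:

```python
# Compare platform-reported orders against the backend source of truth.
platform_order_ids = ["A1", "A1", "B2", "D4"]  # "A1" fired twice; "D4" isn't in the backend
backend_order_ids = {"A1", "B2", "C3"}

duplicates = len(platform_order_ids) - len(set(platform_order_ids))
ghosts = set(platform_order_ids) - backend_order_ids  # platform-only "sales"
missed = backend_order_ids - set(platform_order_ids)  # real orders tracking never saw

print(f"duplicate events: {duplicates}")    # 1
print(f"platform-only orders: {ghosts}")    # {'D4'}
print(f"untracked orders: {missed}")        # {'C3'}
```

Run something like this against real order IDs before the monthly readout; a clean pass is cheap, and a dirty one saves you from reporting a win that isn’t there.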
Patterns I keep seeing: what shines vs. what fizzles
The winners have a few things in common, even across industries:
– They’re built to be repeated, not “launched”
– They reward honesty about what’s working
– They make it easy to cut losses quickly
– They connect creative to conversion, not creative to applause
The losers? Misaligned incentives and vague attribution. You overpay for reach. You celebrate vanity metrics. You keep spending because stopping would mean admitting the story wasn’t true.
And yeah, sometimes you’ll run a campaign that pops once and never again. Bright-but-brittle wins happen. The teams that grow treat those like data, not destiny.
Where I’d draw the line before signing anything
Ask one question and wait for the reaction:
“What would make you tell us to stop spending?”
A real operator will answer quickly. They’ll name thresholds, scenarios, and leading indicators. A fog merchant will pivot to reassurance.
That difference—comfort versus clarity—is usually the difference between services that were worth every penny and the ones you pay for twice: once in fees, and again in opportunity cost.