
It wasn't a creative problem. It just looked like one.

An epoxy flooring account had a month of solid creative and a CPL that kept climbing. The issue was not the ads. Testing and scaling were sharing the same campaign and the algorithm had already decided who it liked.

Vertical
Epoxy Flooring
Platform
Meta
Monthly Spend
$25k → $85k over 6 months
Role
Lead Media Buyer
Five-month performance delta from engagement start
44%
CPL reduction
22% → 37%
Set rate improvement within 5 months
3
Hook types tested to isolate winner

The account looked like it had a creative problem. It had a structure problem dressed up as one.

When I took on this account, a regional epoxy flooring installer operating across three states in the Midwest, it had already been running paid ads for three months. The previous setup had one campaign handling everything: broad cold traffic, retargeting, and testing all sitting together under a CBO. The client had been consistently briefing and delivering new creative assets from their side, four to five concepts a month, all going into that same campaign alongside whatever was already running. That was the setup I inherited.

CPL was sitting at $65 and had drifted up steadily over the prior six weeks. The natural read was that the creative had fatigued. But when I looked at the spend distribution, one ad from the previous month was absorbing roughly 65% of the daily budget. New concepts were each getting $30 to $50 in spend before the algorithm moved on.

Nothing new had a fair chance. The algorithm had already settled on a comfortable winner and had no reason to go looking for something else. More creative was not the fix. Separate infrastructure was.

Core constraint

"Testing and scaling inside the same campaign means your winners always win the budget before the tests have run."

In this vertical, set rate is the signal that matters most. A CPL that looks acceptable can mask a lead quality problem that only surfaces when the sales team starts making calls. The structural fix had to come before any creative conclusions could be trusted.
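To put numbers on that masking effect, here is a minimal sketch of the arithmetic. The cost-per-set-appointment framing and the cost_per_set helper are my own illustration, plugging in the $65 CPL and the 15% and 37% set rates quoted elsewhere in this case study; this is not an account export.

```python
# Illustrative only: why CPL on its own can hide a lead quality problem.
# The helper name and the inputs are hypothetical, using figures quoted
# elsewhere in this case study.

def cost_per_set(cpl: float, set_rate: float) -> float:
    """Effective cost per set appointment = cost per lead / set rate."""
    return cpl / set_rate

# Two ads at the same $65 CPL look identical until set rate enters the read.
print(round(cost_per_set(65, 0.15)))  # 433: a sub-15% set rate ad
print(round(cost_per_set(65, 0.37)))  # 176: the account's later set rate
```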
Budget allocation: inherited state
Dominant ad (1 ad) 65%
New concepts (4–5 ads) 35%
Each new concept averaging $30–$50 in spend before the algorithm moved on.

Separate the infrastructure. Then read the creative data.

The account was generating leads, but the signal behind them was unclear. It was not obvious which buyer the system was finding or why.

01
Separate testing from scaling: ABO for new concepts, CBO for proven winners only
02
Audit every running ad at the set rate level, not just CPL
03
Hook type as a deliberate buyer-selection test, not a copy variable
04
Geo concentration backed by zip-level data, not geographic ambition
01
Separate testing from scaling before judging creative
The account was producing leads, but the setup made the signal hard to trust. New concepts were competing against proven winners inside the same system, which meant the algorithm protected what already had history. I separated testing from scaling first so future creative reads would mean something.
02
Audit what was actually running
Pulled all-time performance by ad and evaluated at the set rate level, not just CPL. Two ads that looked acceptable on cost per lead had set rates below 15%. Both came off. Budget concentration shifted to the two ads producing qualified leads at a defensible cost. A sketch of this read follows the step list.
03
Hook type as a buyer selection test
Once the structure was clean and producing readable data, ran a deliberate hook comparison. Three versions of the same ad body: a problem-aware hook calling out garage floor damage, a curiosity hook about what professional installers use that homeowners don't, and a social proof hook leading with local installs completed. Same body, same CTA, different entry point. The goal was to understand which buyer the algorithm found with each one.
04
Geo concentration for quality
The account was running broad across three states. Pulled zip-level CPL and set rate data. One state was generating volume but the leads were not converting at the sales level. Reallocated that budget into the two geos producing installs and added location callouts in the headline copy.
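A minimal sketch of the two data reads behind steps 02 and 04, assuming a CRM lead export and an ad-level spend export. The file names and columns (leads.csv, ads.csv, ad_id, zip, set_appointment) are assumptions for illustration, not the client's actual schema.

```python
import pandas as pd

# Hypothetical exports. leads.csv: one row per lead with the ad id, zip code,
# and a 0/1 flag for whether the lead set an appointment. ads.csv: spend and
# lead count per ad from the ad platform. All names are assumptions.
leads = pd.read_csv("leads.csv")   # ad_id, zip, set_appointment
ads = pd.read_csv("ads.csv")       # ad_id, spend, leads

# Step 02: set rate per ad, read alongside CPL rather than instead of it.
set_rate = (
    leads.groupby("ad_id")["set_appointment"].mean()
    .rename("set_rate")
    .reset_index()
)
report = ads.merge(set_rate, on="ad_id")
report["cpl"] = report["spend"] / report["leads"]

# Ads whose CPL looks acceptable but whose set rate sits under the 15% floor.
print(report[report["set_rate"] < 0.15].sort_values("spend", ascending=False))

# Step 04: the same read at zip level, which is what exposed the state
# producing volume without installs.
zip_quality = leads.groupby("zip")["set_appointment"].agg(["mean", "count"])
print(zip_quality.sort_values("mean"))
```

The point of the sketch is only that set rate and CPL get read together, per ad and per zip, before any verdict is drawn.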
Creative

Three hooks. One clear winner. One finding that reshaped how I think about this vertical.

The hook test ran for three weeks inside the ABO testing campaign. Equal daily budget per ad set, same landing page, same offer. What came back was not just a performance ranking. It was information about who the algorithm was finding with each hook type and how far along the buying decision they were.

Winner
Problem-aware hook
Opened with a direct call-out: stained, cracked garage floor described plainly. No hyperbole. The viewer either recognized it or did not. The ones who did were closer to the decision already and that showed up in the downstream numbers.
Highest set rate of the three. The audience it found was further along the awareness curve than what the other hooks reached.
Learning
Curiosity hook
"What professional installers use that most homeowners never hear about." Generated strong hook rate. But the leads it produced were earlier in their thinking and set rate reflected that. Good for awareness stage, wrong for where this account needed to focus.
High hook rate, lower set rate. Different buyer stage from the winner.
Tested
Social proof hook
Led with installs completed locally. Solid in accounts where brand recognition is low. In this market the client had some existing presence and the proof angle did not differentiate meaningfully from what competitors were already running.
Mid-table across all metrics. Not the right starting point for this geography.
Winner
Before and after timelapse
Video showing a worn garage floor being coated. No voiceover, just the transformation. Ran alongside the hook test as a format comparison. Outperformed static on set rate, lower on raw volume. Promoted to the CBO scaling campaign once it had enough history behind it.
Best cost per demo of any creative in the account over the test period.
Learning
Financing angle copy
Tested monthly payment framing instead of total project cost. Generated leads but set rate dropped. Buyers responding to the payment hook were earlier in the financial decision and not as close to booking as the problem-aware creative was reaching.
Lower set rate than problem-aware hook. Pattern holds across this vertical consistently.
Tested
Geo callout static
Static with city name in headline and a tight service radius message. Tested in the two concentrated geos after the reallocation. Set rate was comparable to the problem-aware hook in those markets. Worth keeping for geo-specific ad sets but not a broad campaign asset.
Strong in market-specific ad sets. Too narrow for the main testing pool.

The structure unlocked the data. The data guided the creative.

The structural changes produced cleaner data within the first month. The creative work that followed was built on a foundation where the numbers meant something. The delta below reflects five months of active management from the start of the engagement; the account had been live for about eight months in total by that point.

$36
Cost per lead
Down from $65 at the start
44%
CPL reduction
Structural cleanup drove most of this before any creative changes
37%
Set rate
Up from 22% account-wide at baseline; the worst ads had been below 15%
2
States (down from 3)
Budget concentrated into geos producing qualified installs

What I took from this one

01
Structural problems produce creative symptoms. The CPL drift looked like fatigue. It was the algorithm protecting a comfortable winner while new creative starved. The fix was not a better brief. It was a campaign structure that gave new concepts room to generate signal without competing against a proven ad's head start.
02
Hook type tells you which buyer shows up, not just how many. The curiosity hook and the problem-aware hook had similar click-through rates. Their set rates were eight points apart. The hook is not just about stopping the scroll. It is selecting for a specific person at a specific stage of the decision. In lead gen, that distinction shows up downstream, not in the ad metrics.
03
Geographic concentration beats geographic ambition at moderate spend levels. Running three states at $85k a month sounds like scale, but split three ways each market was getting roughly $28k. That is not enough to build meaningful signal across a spread that wide when the geos have different competitive dynamics. The more useful move was stepping back and reassessing with the client: which geos actually aligned with their growth priorities, and which ones were just generating noise? Concentrating spend in the two markets where installs were closing, with location callouts in the creative, improved both CPL and lead quality without changing the total budget. The third state did not get abandoned. It got treated differently at a lower spend level until it could justify more.
Before the work
Testing and scaling in one CBO with the algorithm deciding allocation before new ads had generated any meaningful signal
CPL at $65 with no systematic way to evaluate which ads were producing qualified leads versus raw volume
Two ads with sub-15% set rates running because their cost per lead looked acceptable in isolation
Budget spread across three states with no zip-level data showing where leads were actually converting to installs
Hook type treated as a copy decision rather than a targeting and buyer-stage decision
After the work
ABO testing campaign giving each concept equal budget for a minimum of five days before any verdict was drawn
CBO scaling campaign running confirmed winners only, with performance history worth trusting behind each one
Set rate used alongside CPL for all creative decisions, evaluated at the same level as the sales team read the account
Budget concentrated in two high-converting geos with location callouts reinforcing geographic relevance in the creative
Hook type tested deliberately to understand which buyer each angle qualifies before any scaling decisions were made
The structural fix came from asking the right diagnostic question first
The Diagnosis playbook covers how to separate creative problems from structural ones before touching anything in an account. The question this case study starts with (creative problem or structure problem?) is documented there with the signals that point to each. Read it →
Why the hook type test was a targeting decision, not a copy decision
The Hooks Are Targeting playbook covers how different hook types signal different buyer states to the algorithm and qualify different people. The eight-point set rate gap between the curiosity hook and the problem-aware hook is exactly what that piece explains. Read it →
The ABO and CBO split is documented in the Creative Coverage playbook
It covers the three-campaign model behind this fix, why CBO starves new creative in a mixed testing and scaling setup, and how to give each stage its own infrastructure so both actually work. Read it →