Case Studies / Windows & Doors

The fix wasn't a new campaign. It was removing three old ones.

A US windows and doors installation company spending $60k a month on Meta had the creative assets and the market demand. What it lacked was focus. A structured consolidation and scaling approach fixed the account and doubled monthly spend over eight months.

Vertical
Windows & Doors
Platform
Meta
Monthly Spend
$60k–$130k/mo
Role
Lead Media Buyer
Eight-month performance delta
38%
CPL reduction
$170 → $105
+10pt
Set rate lift
18% → 28%
23%
Duplicate lead rate
eliminated

The account was generating leads. The math just didn't work.

CPL had drifted to $170. Cost-per-demo was above the $750 target. The set rate, the share of leads that booked a sales appointment, had fallen to around 18%. On paper, volume looked acceptable.

Underneath, too many campaigns were competing with each other. Budgets were spread across unvalidated geographies. The best creatives had been running long enough that frequency was quietly eroding performance. Nobody had applied a consistent framework to decide what stays on and what comes off.

The account kept growing horizontally: more campaigns, more adsets, more markets. What it needed was the vertical discipline of cutting what wasn't earning its budget.

Core constraint

Three problems running at once: inflated CPL, falling set rate, and a 23% duplicate lead rate that made everything look better than it was.

Set rate is the percentage of leads that book a sales appointment. Cost-of-marketing (CoM%) is total ad spend divided by revenue generated. In home improvement lead gen, CoM% is the primary profitability signal at the business level.
Baseline metrics
Cost per lead $170
Set rate ~18%
Duplicate lead rate 23%
Cost of marketing % ~29%
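The definitions above reduce to three ratios. A minimal sketch with illustrative figures — the lead, appointment, and revenue counts below are hypothetical, chosen only so the ratios reproduce the baseline metrics:

```python
# Illustrative figures only; not the account's actual monthly totals.
ad_spend = 60_000      # monthly Meta spend
leads = 353            # hypothetical count implying CPL ~ $170
appointments = 64      # hypothetical count implying set rate ~ 18%
revenue = 207_000      # hypothetical revenue implying CoM% ~ 29%

cpl = ad_spend / leads            # cost per lead
set_rate = appointments / leads   # share of leads that book an appointment
com_pct = ad_spend / revenue      # cost of marketing as a share of revenue

print(f"CPL ${cpl:.0f}, set rate {set_rate:.0%}, CoM {com_pct:.0%}")
```

Any two of the three counts pin down the third ratio, which is why a distorted lead count (the duplicate problem below) corrupts every downstream metric at once.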

Four decisions, applied in sequence.

Every strategic move came from the data already in the account. No new campaigns launched until the existing structure was cleaned up and the budget freed from underperformers had somewhere productive to go.

[Chart: CPL trajectory, from $170 (scattered) to $105 (consolidated)]
01
Enforce the performance filter
Every live ad and adset was evaluated against four thresholds: CPL below $150, set rate above 24%, cost-per-demo below $750, CoM% below 23%. Anything failing two or more thresholds across a 28-day and a trailing 14-day window was cut. The rule sounds mechanical. The discipline is not making exceptions for ads that feel like they're about to turn around.
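A sketch of how that filter can be mechanized. The `WindowMetrics` structure and field names are hypothetical (the real evaluation ran on exported Meta and CRM reporting), and requiring two-plus failures in both windows is one plausible reading of the rule:

```python
# Hypothetical sketch of the four-threshold performance filter.
from dataclasses import dataclass

# Each threshold: (limit, direction the metric must sit relative to it).
THRESHOLDS = {
    "cpl": (150.0, "below"),           # CPL below $150
    "set_rate": (0.24, "above"),       # set rate above 24%
    "cost_per_demo": (750.0, "below"), # cost-per-demo below $750
    "com_pct": (0.23, "below"),        # cost of marketing below 23%
}

@dataclass
class WindowMetrics:
    cpl: float
    set_rate: float
    cost_per_demo: float
    com_pct: float

def failures(m: WindowMetrics) -> int:
    """Count how many of the four thresholds this window fails."""
    count = 0
    for field, (limit, direction) in THRESHOLDS.items():
        value = getattr(m, field)
        ok = value < limit if direction == "below" else value > limit
        count += not ok
    return count

def should_cut(d28: WindowMetrics, d14: WindowMetrics) -> bool:
    """Cut if the ad/adset fails two or more thresholds in both the
    28-day and the trailing 14-day window. No exceptions."""
    return failures(d28) >= 2 and failures(d14) >= 2
```

The point of encoding the rule is precisely that it leaves no room for "about to turn around": an ad either clears the thresholds in both windows or it comes off.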
02
Concentrate budget by geography
Kansas had consistently produced the best cost-per-sold metrics. It got its own dedicated adset instead of competing inside a general campaign. Budget freed from cuts went there first.
03
Eliminate the duplicate lead contamination
At 23%, nearly one in four leads was a repeat submission — inflating volume metrics and masking real set rate performance. A full exclusion list rebuild using CRM data and phone-number deduplication removed the noise. Downstream numbers improved without touching a single campaign structure.
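A minimal sketch of the phone-number deduplication step. The normalization (strip to digits, drop a leading US country code) is an assumption about the approach; the actual rebuild ran against CRM exports:

```python
# Hedged sketch: phone-number dedup for an exclusion-list rebuild.
import re

def normalize_phone(raw: str) -> str:
    """Reduce a US phone number to ten digits so formatting
    differences don't hide repeat submissions."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # drop leading US country code
    return digits

def split_duplicates(leads):
    """Partition leads into first-time and repeat submissions
    keyed on normalized phone number."""
    seen, fresh, dupes = set(), [], []
    for lead in leads:
        key = normalize_phone(lead["phone"])
        (dupes if key in seen else fresh).append(lead)
        seen.add(key)
    return fresh, dupes
```

With this in place, "(913) 555-0101" and "+1 913-555-0101" collapse to the same key, so the repeat submission lands in the duplicate bucket instead of inflating lead volume.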
04
Switch to county-level CBOs with callouts
A zip-level analysis showed certain counties were dragging account CPL without contributing proportionate volume. County-level CBOs with location callouts in the creative improved relevance and reduced out-of-area lead contamination. Airport-proximity zips were excluded.
Creative

Two winners. Everything else came off.

The data had already identified what worked. The strategy was to stop running underperforming ads alongside the winners and use the freed budget to run a structured copy test on top of them.

Winner
Candid static — Brown Door
Homeowner photographed in front of their home. Authentic, unposed, property-focused. Outperformed studio and graphic formats consistently across Kansas and Missouri markets.
Best performance in Kansas broad 35+ adset
Winner
New Candid 2
Second candid variant proving itself across both Kansas and Missouri in broad adsets. Sufficient volume to trust the pattern across multiple markets before scaling.
Strong across Kansas + Missouri broad
Learning
Cost-angle copy test
Tested copy emphasizing actual project affordability instead of percentage-off incentives. Early results showed a lower duplicate rate and higher set rate, suggesting better lead quality at the top of funnel.
Tracked as a separate test track from incentive copy
Tested
Heavy incentive copy
50%-off hook in both the headline and primary text. It generated volume, but set rate suffered. When the ad's entire value proposition is the discount, the leads who respond are there for the discount, not the product.
Set rate below 12% on this variant
Winner
3-requirements timelapse video
State-specific hook qualifying leads by location and window age. Proven format across multiple accounts in the vertical. Used as primary creative in the state CBO rebuild alongside the candid door variant.
Deployed in State CBO in 1:1 and 9:16
Tested
Group staff collage
Team photography in collage format. Tested in Kansas-specific adset. Showed early promise in the 560k–650k audience bracket at 35+. Held for further evaluation before scaling.
Kansas audience 560k–650k, 35+

The numbers moved. More importantly, so did the sales team's experience.

CPL dropped from $170 to $127 in the first two months, then to $105 by month eight. Set rate recovered from 18% to around 28% as lead quality improved alongside targeting precision. For the client's sales team, the shift was felt before it was fully measured: fewer calls that went nowhere, more homeowners who were in-area and expecting a callback.

$105
Cost per lead
$170 → $127 in 2 months, then $105 by month 8
28%
Set rate
~18% → 28% as in-area lead quality improved
$735
Cost per demo
Down from ~$930 at baseline
Target: $750 — achieved month 8
21%
Cost of marketing %
~29% → 21%, within target range
What I Took From This

Three things that now inform every account I run.

1.
The performance filter only works if you apply it without exceptions. Every account has ads that feel like they're about to turn around. The ones that consistently failed two or more thresholds across multiple time windows never did. Cutting faster is always the right call.
2.
Geographic concentration beats geographic ambition at most budget levels. The Kansas campaign outperformed broad multi-state campaigns not because the market was inherently better, but because creative, copy, and budget were aligned to one geography. Alignment compounds.
3.
Duplicate lead rate is an underestimated CPL distorter. A 23% duplicate rate meant reported volume overstated real performance on every downstream metric. Fixing the exclusion lists improved those numbers without touching a single campaign.
Before the work
CPL at $170 with no systematic framework for deciding when to cut
Set rate at 18%, with strong lead volume masking poor lead quality
23% duplicate lead rate inflating volume metrics across the account
No exclusion logic in place — same leads re-entering the funnel repeatedly
Budget spread thinly across all-zips campaigns with no geo concentration
After the work
CPL at $105, driven by cutting underperformers and concentrating on validated markets
Set rate recovered to ~28% as in-area lead quality improved
Duplicate rate eliminated following CRM-backed exclusion list rebuild
Monthly spend scaled from $60k to $130k on the back of validated performance
County-level CBOs with geo callouts focused spend on the highest-converting markets
The first question before touching anything: creative problem or structure problem?
The Diagnosis playbook covers how to separate creative performance from media buying performance before drawing any conclusions. The thresholds and decision logic used in this account came from running that process first.
Read the Diagnosis playbook →
How the county CBO structure and geo callout creative work in practice
The Creative Coverage playbook covers the three-campaign model, the CBO versus ABO decision, and how to give testing and scaling separate infrastructure so they stop competing for the same budget. Read it →