
How I brief a new account before the first dollar moves.

Premier Basements had no pixel history, no proven creative, and no prior account to learn from. The question before launch was not what to test. It was what signal the account should be trained on first.

Vertical: Basement Renovation
Platform: Meta
Monthly Spend: $70k, scaled to $160k
Role: Lead Media Buyer
Pre-launch methodology, first 90 days

27% set rate on problem-aware concepts by month 2
$158 CPL at week 8 (category norm: $200-280)
3 buyer personas confirmed before any spend
14 concepts launched in first 60 days
Pre-launch research, not launch volume, determined what the account learned first.

This is a methodology case, not a results case.

There was no prior account to inherit and no before/after CPL to present. The value of the work is in what the pre-launch process prevented: the wrong signals training the algorithm, the cheapest leads defining the account's learning, weeks of correction that never had to happen.

Starting from zero only works if you control what Meta learns first.

The client had not run paid social before. No pixel data, no audience learnings, no proven creative, no creative coverage to inherit. The easy move would have been to launch broad, collect leads, and let Meta sort it out.

In a high-ticket, high-consideration category that is exactly how an account gets broken early. Cheap leads are not serious. They are not ready. If those are the first signals the algorithm receives, it learns from the wrong people and that learning compounds.

Basement renovation homeowners typically sit on the idea for months before they act. The creative challenge was not to explain the service. It was to identify which situation finally makes someone pick up the phone.

The proof that pre-launch research mattered

The financing angle had a competitive CPL and the lowest set rate of any concept in the first batch. Without a pre-launch framework for evaluating appointment quality, it would have been scaled.

Financing angle CPL: competitive with batch average
Financing angle set rate: lowest in the batch
Moisture urgency CPL: slightly above batch average
Moisture urgency set rate: highest in the batch

A CPL-only evaluation would have scaled the wrong concept. The research gave us a frame to evaluate against appointment quality instead.

Core constraint

"Without historical data, the first job is to decide which signals you want the account to learn from. The cheapest leads can pull the system in the wrong direction before there is enough data to notice."

High-ticket, high-consideration categories punish bad early signals more than any other. The homeowner who fills out a form because of a monthly payment headline is at a different point in their decision than the one responding to visible water damage in the hook. CPL cannot tell those two buyers apart. Set rate can.
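The arithmetic behind that distinction is worth making explicit. Dividing CPL by set rate gives cost per booked appointment, which is the number the dashboard hides. A minimal sketch, using hypothetical figures (the case reports only relative CPLs for these concepts, so the inputs below are illustrative, not account data):

```python
def cost_per_appointment(cpl: float, set_rate: float) -> float:
    """What a lead actually costs once show-up intent is priced in."""
    return cpl / set_rate

# Hypothetical concepts; numbers are illustrative, not from the account.
financing = cost_per_appointment(cpl=150, set_rate=0.10)  # cheap lead, low intent
moisture = cost_per_appointment(cpl=170, set_rate=0.27)   # pricier lead, high intent

print(round(financing))  # 1500
print(round(moisture))   # 630
```

Under a CPL-only read the financing concept looks cheaper; priced per booked appointment, the moisture concept wins by more than 2x.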
From noise to signal, how the brief shaped delivery

Four steps before the first campaign went live.

The first risk was not performance. It was starting with the wrong assumptions about the buyer. The account needed a way to ground creative decisions before anything went live.

Three buyer personas ranked by appointment urgency
Moisture / damage homeowner: HIGH
Space conversion buyer: MID
Resale value planner: LOW

Budget priority followed intent level. The highest-urgency persona led. The others were held as expansion territory once the account had signal worth building on.

01
Define the buyer situations before launch
Because the account had no live paid history, the risk was launching with generic basement messaging and learning slowly. The first step was to map the situations that created real urgency: water intrusion, unused square footage, trust concerns, financing hesitation, and project avoidance. That gave the first creative batch a reason to exist before budget was spent.
02
Map buyers by intent level, not demographics
Three buyer situations emerged: the homeowner dealing with moisture or visible damage, the one who wants more usable living space, and the one thinking about resale value. Same demographics. Different triggers. Different urgency levels. Different set rate outcomes. Only the research made those distinctions visible before a dollar moved.
03
Brief every concept against a specific buying moment
Each concept was tied to a named trigger, not a general pain point. Moisture concepts opened with tension and urgency. Space-conversion concepts opened with aspiration and utility. Resale concepts opened with investment logic. The brief answered one question for every ad: what is this person feeling the moment this creative stops them?
04
Evaluate early data against appointment quality, not volume
The first read was set rate and appointment show quality, not CPL. A lead that costs $20 less and never shows is not a better lead. This frame is what caught the financing angle early: competitive CPL, lowest set rate in the batch, would have been scaled under a volume-only read.
Creative

What the hypotheses predicted. What the early data confirmed.

Every concept was briefed against a specific buyer situation. The read after launch was not just which ad performed. It was which buyer type showed up, and how far along in their decision they were when they did.

Winner
Moisture urgency video
Opens on visible water damage and damp walls. Problem framed as something the homeowner had been putting off. No product mention in the first several seconds. The situation had to feel familiar before the solution appeared.
Strongest set rate of the batch. The highest-intent buyer identified in research showed up exactly as predicted.
Winner
Before and after transformation static
Split image: current basement state on the left, finished functional space on the right. No copy overload. The visual qualified the buyer. Reached homeowners earlier in the process but still qualified for a conversation.
Highest lead volume of the batch. Held as a secondary direction alongside the moisture urgency primary.
Learning
3-requirement qualifier video
Hook asked homeowners to self-qualify: own their home, basement older than ten years, experienced recurring moisture. Filtered hard. Volume was lower. The conversations that came through were more specific and more ready.
Best appointment quality despite lower volume. Held for markets where booked calls matter more than raw lead count.
Learning
Financing angle
Monthly payment framing in headline and primary text. Competitive CPL. The volume looked fine on the dashboard. Set rate was the lowest of any concept we ran. The buyers responding to a payment hook were researching, not deciding.
The key proof point. Would have been scaled under a CPL-only read. Research gave us the frame to catch it.
Tested
Space-conversion lifestyle video
Home gym, playroom, teen bedroom. Aspirational framing around what the space could become. Reached a real buyer type, the space-conversion persona from the research map. Lower urgency than the moisture angle, treated as expansion creative rather than a launch primary.
Persona confirmed real, but softer. Held for once the account had signal worth expanding from.
Tested
Authority and warranty static
Years in business, project count, warranty terms. The trust signals were real. The placement was wrong. As a first touch it asked for trust before it had earned attention. Performed better in retargeting where context already existed.
Held for retargeting. Right content, wrong funnel position as a cold opener.

The account skipped the usual first phase of expensive learning.

By month two the account had a clear creative hierarchy and better early lead quality than a typical cold launch. The real outcome was not the CPL. It was the weeks of correction that never happened because the brief had already done that work before spend began. The numbers below are week-8 snapshots on a brand-new account — they reflect the quality of the pre-launch research, not account maturity.

27% set rate: on problem-aware concepts by month two, up from 19% at account open. The moisture urgency and transformation concepts produced the cleanest appointment signal.
$158 cost per lead: at week 8, before full optimization. The category typically runs $200-280 at maturity. The research gave the first batch a structural advantage.
14 concepts launched: in the first 60 days, every one tied to a specific buyer intent level, not general creative volume. The brief determined which concepts existed, not instinct.
3 personas confirmed: all three pre-launch personas produced a distinct performance signal. The urgency ranking held: problem-aware buyers showed the most appointment intent.
Category CPL at maturity: $200-280
This account at week 8: $158
Context: before full optimization. Week 8 CPL in a new account reflects the research quality, not account maturity.

What I took from this one

01
The first signal matters more than the first volume spike. In high-ticket categories, cheap leads make the account look healthy while teaching it the wrong lesson. The financing angle tested that directly. Its CPL was competitive. Its set rate was the lowest of any concept. Without a pre-launch evaluation framework, it would have been scaled. The research gave us the frame to catch it before that happened.
02
Personas are only useful if they change budget decisions. The point was not to cover every possible homeowner. It was to understand which buyer should get budget priority first. Moisture-aware homeowners showed more urgency. Space-conversion buyers were real but softer. Knowing that prevented chasing the highest-volume angle when the better appointment signal was somewhere else.
03
The comment section surfaces what a client brief misses. The most useful creative inputs came from how homeowners described the problem in their own words before any brand had turned it into copy. Questions, hesitations, the specific language of delay and procrastination. That gave the first batch a hook register that felt like the buyer's voice, not the brand's voice.
Without the pre-launch phase
Generic first batch based on assumed pain points instead of real homeowner language
No intent ranking, so budget drifts toward volume instead of appointment quality
Financing angle scaled because CPL looked competitive, set rate problem discovered weeks later
Weeks of corrective testing that the research phase made unnecessary
No map for deciding what to keep, cut, or build next after the first read
With the pre-launch phase
First batch built from buyer language pulled from reviews and competitor comment sections
Three buyer situations with ranked intent levels and distinct creative briefs before launch
Financing angle identified as a volume play, not a quality play, before it could mislead scaling decisions
27% set rate on stronger concepts by month two, up from 19% at open
Every subsequent brief had a clearer reason to exist and a clearer buyer to speak to
How the pre-launch research phase works as a structured process
The AI Workflow playbook documents the four-stage process behind this work: review mining, competitor analysis, persona extraction, and angle ideation. The pre-launch phase in this case ran from that framework. Read it →
How comment sections generate creative briefs
The Comment Response Ads playbook covers how to mine comments for the inputs that map directly to brief structure: the objection, the question, the friction point. The problem-aware angle in this case came from exactly that process. Read it →
The persona and awareness mapping framework behind the brief
The Roofing framework shows how to build a creative strategy map from persona extraction through awareness-level thought mapping to brief. This case applied that same logic before a single ad went live. Read it →