
AI does the research.
You make the calls.

The question isn't whether to use AI in your creative workflow. It's knowing exactly where it helps and where it doesn't. Get that wrong and you end up with faster mediocrity instead of faster quality.

Channel
Meta (Facebook & Instagram)
Tools
Claude · ChatGPT
Focus
Research · Angle development · Audit
Where AI earns its place in the workflow
🔍
AI-led
Deep research — reviews, competitor ads, category patterns
💡
AI-led
Angle ideation — surface patterns, generate hypotheses
Human-led
Strategy — select angles, brief creative, decide what to test
✍️
Human-led
Production — scripts, briefing creators, editing
📊
AI-assisted
Audit — diagnose underperforming creatives against benchmarks
4 workflow stages · 2 where AI leads · 1 where humans decide

Faster research. Sharper briefs. The same human judgment.

The most common mistake I see with AI in creative workflows is using it as a content generator. You prompt it for ad copy, it gives you something that sounds plausible, you clean it up and send it to production. The problem is that the output is only as good as the input — and if you haven't done the research to know what angle to pursue, the AI is just generating plausible-sounding guesses faster than you could write them yourself.

The better use is as a research accelerator. AI can process a product's review landscape in minutes — categorising thousands of comments by persona, by buying reason, by objection type — work that would take a strategist hours to do manually. It can surface patterns in competitor ad copy that would take days to spot through manual review. It can take your documented research process and critique it against known direct response principles, flagging where you're missing angles or making assumptions without evidence.

What it can't do is decide what matters. It can surface that 40% of reviews mention durability concerns — but whether that's the dominant concern for your target buyer, or a secondary one that's being drowned out by vocal edge cases, requires judgment about who your buyer actually is. That judgment is yours. AI gets you to the decision point faster. It doesn't make the decision for you.

The workflow below reflects how I actually use these tools: front-loaded on research and angle ideation, where AI has a genuine edge, and human-led from strategy through production, where it doesn't. The goal is better briefs in less time — not faster creative that skips the thinking.

Where AI genuinely helps vs. where it doesn't
Helps — pattern recognition at scale
Processing hundreds of reviews, comments, or competitor ads to find recurring themes. Spotting what language buyers use to describe their problem. Surfacing patterns a human analyst would miss or take days to find manually.
Helps — structured angle generation
Given a rich research brief, AI can generate a wide range of angle hypotheses across hook types, personas, and awareness levels — more options than a solo strategist would write in the same time. The output needs editing and selection, but the raw material is useful.
Doesn't help — creative judgment
Knowing which angle fits which buyer at which moment. Understanding whether a hook is too broad or too niche for the account's goals. Reading whether copy sounds like a real person or like an ad. These require experience and market knowledge AI doesn't have.
Doesn't help — production
Scripts need human editing. Creator briefs need tone and context a prompt can't fully capture. Visual direction, casting, pacing decisions — all require a human in the loop who understands both the brand and the platform. AI-generated video scripts are usually detectable. That's a problem when native authenticity is the goal.

Four stages.
Two where AI leads.

This is the workflow I use when approaching a new product or a new account from scratch — no historical data, no existing winners. The AI-led stages compress the research phase dramatically. The human-led stages are where the strategic and creative quality actually gets made. Each stage feeds the next; shortcutting any of them produces weaker output at the next stage.

01
AI leads
Deep research
Feed the product's review landscape into a structured prompt — ask AI to categorise reviews by persona type, buying reason, and objection. Do the same with competitor ad copy from the Meta Ad Library. Ask it to identify what angles competitors are running and, more usefully, what angles are missing — the objections nobody is addressing, the buyer type nobody is speaking to.
What you add
Prioritisation
AI surfaces patterns. You decide which ones are strategic — which gaps are worth pursuing based on your understanding of the buyer, the competitive landscape, and the account's goals. The research doesn't tell you what to do with it. That's your call.
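
To make the step concrete, here is a minimal sketch of the categorisation pass, assuming the Anthropic Python SDK and an ANTHROPIC_API_KEY in the environment. The category schema, batch size, and model name are illustrative placeholders rather than a fixed part of the workflow; the same shape works with any LLM API.

```python
# A minimal sketch of the review-categorisation step, assuming the
# Anthropic Python SDK. The prompt schema, batch size, and model name
# are illustrative placeholders.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CATEGORISATION_PROMPT = """\
Here are {n} product reviews, one JSON object per line.

For each review, identify:
- the buyer persona implied by the language
- the primary reason they bought
- the main concern or objection before buying, if any

Then summarise: the top three personas, the most common buying reasons,
and the most common objections, with rough counts for each.

Reviews:
{reviews}
"""

def categorise_reviews(reviews: list[str], batch_size: int = 50) -> list[str]:
    """Run the categorisation prompt over the review set in batches."""
    summaries = []
    for i in range(0, len(reviews), batch_size):
        batch = reviews[i : i + batch_size]
        prompt = CATEGORISATION_PROMPT.format(
            n=len(batch),
            reviews="\n".join(json.dumps({"review": r}) for r in batch),
        )
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # assumed model name
            max_tokens=4096,
            messages=[{"role": "user", "content": prompt}],
        )
        summaries.append(message.content[0].text)
    return summaries
```

Each batch summary is raw material to read and challenge, not a deliverable: the prioritisation call described above happens after this script runs, not inside it.
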
02
AI leads
Angle and hook ideation
Given the research output, prompt AI to generate angle hypotheses across multiple hook types — emotional, curiosity, call-out, myth-busting, problem stack. Ask it to vary by awareness level: how would this product be positioned for someone who doesn't know they have a problem vs. someone actively comparing options? Generate broadly, then cut hard.
What you add
Selection and specificity
AI generates plausible hooks. You assess them against what you know about the buyer — whether the emotional register is right, whether the specificity level will find the right audience, whether the language sounds like someone who would actually buy this product. Most of what AI generates gets cut. That's expected.
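
A sketch of how the ideation pass can be structured so coverage across hook types and awareness levels is systematic rather than ad hoc. It assumes the same Anthropic SDK as above; the hook types and awareness levels come from the stage description, while the function name, model, and prompt wording are illustrative.

```python
# A sketch of the ideation pass: one generation call per hook type x
# awareness level pair. Hook types and awareness levels follow the stage
# description; function name, model, and prompt wording are illustrative.
import itertools
import anthropic

client = anthropic.Anthropic()

HOOK_TYPES = ["emotional", "curiosity", "call-out", "myth-busting", "problem stack"]
AWARENESS_LEVELS = [
    "unaware they have the problem",
    "problem-aware but not comparing solutions",
    "actively comparing options",
]

def generate_hooks(research_summary: str, per_combination: int = 2) -> dict:
    """Generate hook hypotheses for every hook type x awareness level pair."""
    hooks = {}
    for hook_type, awareness in itertools.product(HOOK_TYPES, AWARENESS_LEVELS):
        prompt = (
            f"Research summary:\n{research_summary}\n\n"
            f"Write {per_combination} {hook_type} hooks for a buyer who is "
            f"{awareness}. Use the buyer's own language from the research. "
            f"One line each, no explanations."
        )
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # assumed model name
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        hooks[(hook_type, awareness)] = message.content[0].text
    return hooks
```

The value here is breadth of coverage, not finished copy: everything that comes back still passes through the manual selection and rewrite step.
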
03
Human leads — no AI involvement
Strategy and briefing
You take the selected angles and build a creative brief that specifies hook type, awareness level, persona, pain point, proof format, and offer framing. The brief is the creative strategy made concrete — it's what gets handed to a creator or a designer. AI can draft copy from a brief, but the brief itself requires strategic judgment that AI can't supply. The quality of the brief determines the quality of everything downstream. A brief that specifies the emotional entry point, the buyer's state at the point of seeing the ad, and the specific proof format required is a creative brief. A brief that says "make it engaging and relatable" is not.
04
AI assists
Creative audit
Once ads are live and have accumulated data, AI can help diagnose underperformers systematically. Give it the creative details, the performance metrics, and the benchmark targets — ask it to work through the metric sequence (hook rate, hold rate, CTR, CVR) and identify where the creative is losing people. It's faster than doing this manually across a large creative slate.
What you add
Contextual interpretation
AI can flag that hook rate is below benchmark. It can't tell you whether that's because the hook is wrong for this buyer or because the audience has already seen this angle four times. The data diagnosis narrows the problem. The fix still requires your judgment about what's actually happening in the account.
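
The metric-sequence check itself needs no AI at all; it is a deterministic walk down the funnel. A minimal sketch in plain Python, with illustrative benchmark numbers and metric definitions (your account's targets will differ):

```python
# A sketch of the audit's metric-sequence check: walk hook rate, hold
# rate, CTR, and CVR in order and report the first stage below benchmark.
# Benchmark values and metric definitions are illustrative, not targets
# from the original workflow.
from dataclasses import dataclass

@dataclass
class CreativeMetrics:
    name: str
    hook_rate: float   # 3-second views / impressions
    hold_rate: float   # ThruPlays / 3-second views
    ctr: float         # link clicks / impressions
    cvr: float         # conversions / link clicks

BENCHMARKS = {"hook_rate": 0.30, "hold_rate": 0.10, "ctr": 0.01, "cvr": 0.03}

def diagnose(creative: CreativeMetrics) -> str:
    """Return the first metric in the sequence that misses its benchmark."""
    for metric, target in BENCHMARKS.items():
        value = getattr(creative, metric)
        if value < target:
            return (f"{creative.name}: losing people at {metric} "
                    f"({value:.2%} vs {target:.2%} benchmark)")
    return f"{creative.name}: all metrics at or above benchmark"

# Example: a creative whose hook clears benchmark but whose hold rate doesn't.
print(diagnose(CreativeMetrics("UGC-talking-head-v3", 0.34, 0.07, 0.012, 0.035)))
```

The script narrows the problem to a funnel stage; interpreting why that stage is leaking, as described above, stays with you.
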

What the research stage looks like in practice

This is a condensed version of the research stage as I ran it when approaching a home security lead gen account — an account where we had no historical data and needed to build a creative strategy from scratch. The prompts are paraphrased; the output types and the human decisions they required are accurate.

Stage 01
AI-led research
Here are 200 reviews from home security product listings on Amazon and Google. Categorise them by: the primary reason someone bought, the primary concern before buying, and any recurring language around fear, safety, or peace of mind. Identify the top three buyer personas implied by the language used.
AI output
Pattern summary
Three distinct buyer types emerged: recent incident buyers (triggered by a break-in nearby or personal experience), proactive protectors (planning ahead, often new homeowners), and remote monitors (primary concern was checking in on family or property while away). The language split clearly — incident buyers used fear-adjacent words, remote monitors used control and visibility language.
Human decision
Strategic selection
The incident buyer persona was the clearest brief — high urgency, specific emotional state, a hook almost wrote itself. But it also had the most competition: every home security brand was already running fear-based creative. The remote monitor angle was underserved in competitor ads and matched a segment that was growing. We prioritised that persona for the first test wave, specifically because the category wasn't there yet.
Stage 02
Angle ideation
Based on the remote monitor persona — someone who wants to check in on their home or family while away — generate 10 hook hypotheses across these types: emotional driver, curiosity-led, direct call-out, and problem stack. Vary awareness level: some for someone who hasn't considered a home camera, some for someone actively comparing options.
Human decision
Selection and edit
Of ten hooks, two were worth testing. The rest were either too generic, too long to work as a visual hook, or used language that didn't sound like the buyer's own words. The two I kept were rewritten significantly before going into the brief — the AI gave me the angle direction; the specific line required editing to not sound like an ad. That edit is not a small thing. It's where the actual creative quality gets made.

What the workflow doesn't change

Being clear about where AI doesn't help is as important as knowing where it does. These are the four things that remain entirely human in this workflow — not because the tools aren't capable enough yet, but because they require judgment that's contextual, strategic, and market-specific in ways that general-purpose AI can't replicate.

Limit 01
Knowing which angle fits this buyer at this moment
AI can generate a list of plausible angles. It can't tell you whether a fear-based hook is appropriate for a category that's already oversaturated with fear-based creative, or whether a curiosity hook would outperform a call-out hook for a buyer who's already actively looking. That requires knowledge of the competitive landscape and the buyer's real state — which comes from time spent in the category.
Limit 02
Recognising when copy sounds like an ad
AI-generated copy tends toward a particular cadence — declarative, slightly elevated, structurally predictable. On Meta, where native content outperforms polished advertising, this cadence is a liability. The edit that makes AI-generated copy sound like a real person talking about a real problem is not cosmetic. It's the difference between creative that converts and creative that gets scrolled past. This judgment is human.
Limit 03
Deciding what the test is actually testing
AI can help structure a testing plan, but the decision about which variable matters most — which gap in your knowledge would be most valuable to close right now — requires strategic prioritisation that's specific to the account and the moment. The same testing structure produces different value depending on what you already know and what you need to find out next.
Limit 04
Reading performance in context
AI can identify that CTR dropped 20% week over week. It can't tell you whether that's creative lifecycle, an audience saturation problem, a seasonal effect, or something that changed in the competitive auction. Reading performance in context — knowing what's normal for this account at this time — requires pattern recognition that's built over time in accounts, not synthesised from general knowledge.
AI as a content generator
Prompts written without research input — output is plausible-sounding guesses at scale
AI copy goes to production with light editing — the cadence and phrasing read as generated
Speed increases but brief quality doesn't — faster output of the same uncertain thinking
Research stage skipped or compressed — AI is asked to invent angles rather than surface real ones
Human judgment displaced rather than supported — the strategist becomes a prompt engineer
AI as a research accelerator
Research prompts are built with real data — reviews, competitor ads, comment mining — so output reflects actual buyer language
AI output is treated as raw material, not copy — significant human editing before anything goes to a creator
Research stage is faster, so more time is available for the brief and the strategy — where quality actually gets made
Angle ideation covers more ground than a solo strategist could in the same time — without skipping the human selection step
Creative audit is structured and consistent — AI applies the same diagnostic questions to every ad, flagging what a manual review might miss