A Field Guide to Bad AI Advice
Six archetypes whose AI takes you should stop forwarding around your office — what each one wants from you, and what they get wrong.
By Matt Gardner
A client forwarded me a LinkedIn post last week. Guy in a half-zip, standing in front of a whiteboard, claiming his agency saved $4 million using ChatGPT. No methodology. No headcount. No link to the agency. Forty-seven thousand likes.
“Should we be doing this?” the client asked.
No. What the client should do is learn to ignore that post — and the dozen others like it that will land in their inbox this week. Most of what gets published about AI right now isn’t wrong. That would be useful. It’s plausible. It sounds right enough that someone who only reads headlines will repeat it in a board meeting, and nobody in the room will know how to push back.
Here’s a field guide. Six archetypes you’ll recognize, what they actually want from you, and what they get wrong. Print it. Hand it to whoever forwards you the next AI screenshot.
1. The Thought-Leader-Influencer
You know the format. Screenshot of a ChatGPT output labeled “INSANE.” A hook line: Most people don’t know this prompt. A carousel that ends in follow for more.
What they want: engagement. The post is the product.
What they get wrong: every screenshot is one prompt, one model, one moment. Yours won’t be. The output that “changed everything” was cherry-picked from twelve attempts you didn’t see. Treat the whole genre the way you’d treat a magic show. Don’t run your finance team off it.
2. The Doomer
Bio says “AGI by 2027.” Feed is OpenAI org-chart speculation, screenshots of Anthropic policy papers, the occasional 4,000-word essay on substrate-independent consciousness.
What they want: to be taken seriously by other doomers. The alignment debate is real and serious. The version that reaches your LinkedIn is dread, packaged.
What they get wrong, for you: even if they’re right about 2027, you still have Q3 to plan for. Existential takes don’t help you decide whether to roll out Copilot to the finance team. They are answering a question you didn’t ask.
3. The Cheerleader
Every tool is “game-changing.” Every demo is “the future of work.” Their feed is a chain of breathless reposts. They have no scars because they’ve never deployed anything.
What they want: a consulting pipeline. The enthusiasm is the lead magnet.
What they get wrong: the demo always works. The deployment never works the same way. The ninety-second video and the ninety-day rollout share a name and nothing else.
4. The Big Four Consultant
Eighty-page deck. Three frameworks. A maturity model with five tiers and a self-assessment quiz. Zero shipped code.
What they want: billable hours.
What they get wrong is structural. Their incentive is to make AI sound complicated enough to require them. So it does. The deck is internally consistent. It is also useless. You will read it, feel sophisticated for an afternoon, and execute none of it. Six months later, someone will hand you a different deck that contradicts the first one. They will charge you again to read it.
5. The GPT-Wrapper Founder
“I built an AI for [thing].” Twitter thread with a Stripe link. Demo video has thirty thousand views.
What they want: their seed round.
What they get wrong: most of these are a system prompt, a thin React app, and twelve months of runway. Some are real businesses solving real problems. Telling the difference is harder than the demo suggests — and you should be doing that work before you sign the annual contract, not after the founder pivots to crypto and the tool stops getting updates.
6. The “I Taught Myself Prompting in a Weekend” Coach
Course launch. Screenshots of curated outputs. I’ll teach you the framework that 10x’d my output. Email capture above the fold.
What they want: your email address, then $297.
What they get wrong: prompting is a skill, but it’s a small one. Telling your team to “learn prompting” is like telling them to “learn Googling” in 2008. It’s table stakes. It’s not a strategy. The course is selling you the easy part — the part that gets cheaper every six months as the models get better at understanding what you actually meant.
How to read any AI advice
Two questions. Run anything through them.
- Could the person giving this advice have shipped what they’re describing?
- Does the advice survive contact with my actual constraints — my data, my team, my budget, my Q3?
If either answer is no, you’re reading entertainment. That’s fine. Entertainment is a legitimate use of an afternoon. Just don’t run your business off it.
The good news: once you can name the archetype, the post stops working on you. You’ll see the half-zip guy in front of the whiteboard and your eye will skip to the next thing. That’s the goal. Not cynicism. Just a working filter.