A Field Guide to Good AI Advice
Six archetypes whose AI commentary is worth your time — what each one offers, and where to find them.
By Matt Gardner
Two weeks ago I wrote about the half-zip guy in front of the whiteboard — the LinkedIn archetypes whose AI takes you should stop forwarding around your office. The mail I got back was almost all the same question.
Who should I read instead?
Fair question. Here’s what I actually look for.
The thing about good AI advice is that it doesn’t trend. It doesn’t have a hook line. It rarely gets screenshotted. It often arrives as a footnote or a four-paragraph aside in a newsletter that fewer than ten thousand people read. The signal is quiet. You have to know what you’re listening for.
Six archetypes. What each one offers, and where they tend to live.
1. The Practitioner With Scars
Built things. Broke things. Will tell you about both, often in the same paragraph.
What they offer: receipts. When they say “this works,” they mean it worked at our shop, on our data, with our constraints — and they will name the constraints. When they say “we rolled it back,” they will tell you why. They are valuable because most AI commentary is about what should happen. They report what did.
Where to find them: personal blogs that haven’t been redesigned to look like media properties. Newsletters with footnotes. Conference talks where the slides are ugly and the demo is live.
2. The Operator Who Actually Deployed
Runs an ops, finance, legal, or HR function inside a real org. Has gone through the boring middle of an AI rollout: governance, change management, the meeting where security said no, the rollback, the second attempt that almost worked.
What they offer: the parts of an AI deployment that nobody photographs. The procurement conversation. The Slack channel where the early users complained. The decision to not deploy something even though the demo was great. They make you smarter about what happens between “we bought it” and “it works.”
Where to find them: LinkedIn, but only the long-form posts — not the carousels. Internal-newsletter writers who occasionally publish externally. Small-conference speakers who show up on the agenda once and then vanish for a year.
3. The Researcher Who Admits Uncertainty
Academic, lab person, or independent evaluator. Will say “we don’t know” out loud and mean it. Talks about failure modes, eval methodology, the gap between benchmark performance and real-world performance.
What they offer: epistemic humility, applied. They are not telling you what to do. They are telling you what is and isn’t currently known. That is a different and more useful thing than most takes deliver. When the field is moving this fast, knowing the shape of your uncertainty is worth more than someone else’s confidence.
Where to find them: arXiv, occasionally. Substack pieces with embedded charts. Podcasts where the interviewer doesn’t interrupt.
4. The Builder Who Documents
Ships side projects. Writes up the lessons, including the failures, including the dead-ends. The blog post is usually titled something like How I built X or What I learned trying to Y. Demo videos include the moments where it didn’t work.
What they offer: a working mental model of what is currently buildable. They have actually done the thing. Their signal-to-gloss ratio is the highest of any genre. You will learn more from one of these write-ups than from a dozen “future of work” essays.
Where to find them: Hacker News, GitHub READMEs, personal sites that look like they were made in 2007 and never repainted.
5. The Translator
Bridges technical and business audiences without dumbing either side down. Explains the implication of a technical fact rather than parroting the fact. Will tell you why a benchmark score matters for your finance team specifically, not just that the score went up.
What they offer: the bridge. Most AI commentary is either too technical to act on or too business-y to be true. The translator is rare because the role rewards generalism in a market that pays for specialism. Find one and subscribe to everything they publish.
Where to find them: thoughtful corporate blogs (rare but real), longform newsletters that span more than one beat, podcast hosts who can interview both an ML researcher and a CFO and ask the same kind of question.
6. The Generalist Who Reads Widely
Connects AI to history, to other technologies, to durable human patterns. Doesn’t treat AI as the only thing happening. Will mention printing presses, electrification, spreadsheets, search — not because it’s clever but because the analogies actually load-bear.
What they offer: perspective. They keep you from the two failure modes of the moment: thinking AI is everything, and thinking AI is nothing. Both are wrong, and both are easy to fall into when your information diet is a single feed.
Where to find them: book reviewers who occasionally write about technology. Magazine essayists. Older bloggers who survived the last three hype cycles and didn’t get cynical.
How to read any AI advice
Two questions. Same format as last time, opposite direction.
- Has the person giving this advice sat with the problem long enough to be wrong about it once?
- Are they telling me what they did, or what I should do?
Advice from people who have been wrong is more useful than advice from people who haven’t. Advice that describes their experience is more useful than advice that prescribes yours. You can act on the former. The latter is just a slide.
The good news: once you can name the archetype, the signal gets easier to find. You’ll skim past the half-zip guy and your eye will catch the operator’s four-paragraph post about why their first rollout failed. That’s the goal. Not gatekeeping. Just a working filter on the way in.