This issue presents my viewpoint on an AI Concept, examining the core ideas, frameworks, and principles shaping how we design and use AI.
AI's unwavering confidence combined with our natural overconfidence creates a dangerous feedback loop where we skip verification and amplify mistakes.
Ever notice how AI makes everyone sound like an expert?
You ask ChatGPT about tax law (usually when you’re doing your taxes), and it delivers a crisp, confident answer.
No hedging. No "I'm not sure, but..." It just tells you what to do.
And unless you're already a tax attorney, you're probably going to trust it.
That's where things get dangerous.
When Overconfidence Meets Artificial Confidence
You may know the Dunning-Kruger effect: people who know very little about a subject tend to overestimate their expertise.
The less you know, the more likely you are to think you've got it figured out.
Meanwhile, actual experts often underestimate themselves because they're aware of all the nuances they still don't understand.
Humans are already terrible at self-assessment.
But now we've got generative AI in our pockets, and it's creating something I think of as a double Dunning-Kruger effect.
Here's how it works:
You're already a little out of your depth on some topic—programming, medical advice, financial planning, whatever.
Classic Dunning-Kruger kicks in, so you don't fully grasp how much you don't know.
Then you ask an AI for help, and it gives you an answer that sounds incredibly smart and confident.
The AI doesn't hedge. It doesn't say "I'm not really sure." It just predicts the most likely next words based on its training data—sometimes right, sometimes completely wrong, but always with the same authoritative tone.
So now you've got two layers of overconfidence stacking up: your own inflated sense of understanding, plus the AI's unwavering certainty.
Neither of you necessarily has the right answer, but you both sound like you do.
The Feedback Loop Problem
This is where individual mistakes become systemic issues.
When you're doubly confident—trusting both yourself and your AI assistant—you skip the verification steps.
You don't double-check. You don't ask follow-up questions.
You just act on the information, often spreading it to others with extra conviction because you've got "backup" from this supposedly super-smart system.
I see this constantly. Someone gets an AI to help debug code or explain a complex news story—and they take it at face value.
The answers feel so polished, so authoritative.
But the AI can be confidently wrong, and unless you already have enough expertise to challenge it, you might not realize there's a problem until it's too late.
Scale this up across thousands of people making decisions based on AI-generated information, and you start getting dangerous feedback loops.
Mistakes don't get caught—they get reinforced and amplified.
Building Better Guardrails
The solution isn't to stop using AI. It's to get more comfortable with intellectual humility, especially when answers come too easily.
Treat AI as a co-pilot, not a captain.
Use it to accelerate your thinking, but don't let it replace your judgment.
Get comfortable saying "I don't know" or "I should double-check this"—especially on anything important.
Cross-reference AI outputs against other sources.
Ask people who know more than you do.
Look for feedback mechanisms that catch mistakes instead of burying them. If something feels off, slow down and investigate.
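If you're wiring AI into an actual workflow, one cheap guardrail is to make the model surface its own assumptions before you act on the answer. Here's a minimal sketch of that idea using the OpenAI Python SDK; the model name and prompts are just illustrative placeholders, not a recommendation, and this assumes an API key in your environment.

```python
# A minimal sketch of a "what should I verify?" step in an AI workflow.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompt wording here are illustrative only.
from openai import OpenAI

client = OpenAI()

def ask_with_self_check(question: str, model: str = "gpt-4o-mini") -> dict:
    """Ask a question, then ask the model to list its assumptions and
    uncertainties so a human decides what to double-check before acting."""
    answer = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    critique = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
            {
                "role": "user",
                "content": (
                    "List the assumptions behind your answer, the parts you are "
                    "least certain about, and what an expert should double-check."
                ),
            },
        ],
    ).choices[0].message.content

    # Return both so the caller reviews the caveats, not just the polished answer.
    return {"answer": answer, "what_to_verify": critique}
```

It won't catch everything (the same model grading itself has obvious limits), but it turns "sounds right, ship it" into "here's a short list of things to go verify."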
Most importantly, stay curious about what you don't know.
The real danger isn't getting something wrong—it's being so confident that you don't even realize you could be wrong.
The Humility Advantage
As AI gets better at sounding authoritative, our ability to question that authority becomes a competitive advantage.
The people who learn to work with AI while maintaining healthy skepticism will make better decisions, build more reliable systems, and avoid the pitfalls that trap the overconfident.
So watch out for the double Dunning-Kruger effect in your own work.
When you and your AI assistant are both feeling certain about something, that might be exactly the moment to pause and ask: "What are we missing?"
Stay curious. Stay humble. And don't be afraid to say, "Let me check on that."
What's your experience been with AI overconfidence?
Have you caught yourself trusting an AI answer a little too quickly?
I'd love to hear how you're building verification into your own AI workflows—we're all figuring this out together.
Scraps Worth Saving
There were a number of topics in my voice memo that didn’t make the cut!
If you’re interested in reading a future issue about any of these topics, all I need is for you to leave me a comment with the text in bold.
Workflow Automation - Discussion of how Plumb helps consultants turn expertise into automated systems with no-code tools.
Technical Execution - References to building products and tackling technical challenges in startup development.
Insights Summary
Double Dunning-Kruger Effect - AI's confident responses combined with human overconfidence create a dangerous double layer of false certainty that leads people to skip verification steps.
AI Authority Illusion - AI systems deliver answers with unwavering confidence regardless of accuracy, making users trust information that may be completely wrong but sounds authoritative.
Systemic Feedback Loops - Individual mistakes become amplified across thousands of users who act on AI-generated information without verification, creating dangerous reinforcement cycles.
Co-pilot Not Captain - The solution is treating AI as a thinking accelerator rather than a decision maker, while maintaining intellectual humility and cross-referencing outputs.
Humility Competitive Advantage - As AI becomes more persuasive, the ability to question its authority and maintain healthy skepticism becomes a key differentiator for better decision-making.
Field Notes
This is the part of the newsletter where I share tools, updates, experiments, and anything else worth passing on.
AI Builders Club: Episode 0!
I’m excited to finally share something new we’ve been building behind the scenes: The AI Builders Club—not just a podcast, but a living workshop, a group chat, and a haven for everyone obsessed with building quality automations using AI.
Each week, my co-founder Aaron Dignan and I dive deep into real-world messes (think inbox chaos, scrambled meetings, or clunky client proposals) and walk through how we’d design an agentic, no-code AI workflow to actually fix it.
We’re sharing hands-on tools, our thought process, and the honest, sometimes messy breakdowns it takes to make AI useful in real life—not just on Twitter.
Our kickoff episode pulls back the curtain on:
Why the AI hype keeps repeating (and how to break out of it with real solutions)
What we’re building at Plumb, and why empowering creators to craft agentic, no-code AI workflows matters
The weekly format: practical use cases, raw build breakdowns, and candid debates on what’s worth building
Our philosophy: smaller bites, start small, reduce risk—always build for real people
The vision for our community: a club for builders, solopreneurs, founders, and internal champions who sweat the details and share the blueprints
If you’re someone who always tries to make things run smoother, smarter, or more human, you’ll feel right at home here.
Whether you’re freelancing, consulting, or just tinkering after hours—if you’re reading this, you’re already in the club.
Check it out on Spotify (and be sure to subscribe for future episodes!):
You can also find it on Apple Podcasts and anywhere you listen to podcasts.
Avoid Shiny Object Syndrome
In this short, Aaron and I talked about how to think about using new models, when to use pinned (stable) models, and when to experiment with frontier models.
That’s it for this week!
Stay curious, stay kind,
Chase ✨🤘