The Pink Elephant In The Room
AI models mirror our minds by focusing on what we tell them not to do, making positive prompting essential for better results.
I’d like you to try something for me: Don’t think of a pink elephant.
Chances are, you pictured one—bright, absurd, and impossible to shake.
That counterintuitive trick of the mind is the essence of the pink elephant problem, and it plays out every time we talk to AI.
I wanted to explore this idea by having AI models generate images, so I ran a quick experiment in MidJourney.
I prompted it with “Anything except a pink elephant,” then “give me everything but a pink elephant,” then a number of other permutations of the same idea.
Yet every single result was exactly what I said not to show me. No matter how I phrased the negation, it doubled down on the forbidden subject.
This wasn’t surprising to me, but here’s what I realized: Large language and diffusion models mirror our own ironic processes.
They’re built to predict the next token, the next pixel—so if “pink elephant” sits in the prompt, the model’s probability calculations anchor on it, regardless of “don’t” or “no.” It’s not defiance; it’s math.
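You can see the anchoring problem in miniature without a model at all: negation words are just more tokens, and they don’t remove the forbidden concept from the context the model conditions on. A toy sketch in plain Python (no AI required, purely illustrative):

```python
prompt = "Anything except a pink elephant"

# A model conditions on every token in the prompt. Negation words like
# "except" are just additional tokens; they don't delete the concept.
tokens = prompt.lower().split()
concept = [t for t in tokens if t in {"pink", "elephant"}]
negation = [t for t in tokens if t in {"except", "no", "not", "don't"}]

print(concept)   # → ['pink', 'elephant']  (the forbidden subject, still right there)
print(negation)  # → ['except']            (one lone negation token, easily under-weighted)
```

The forbidden subject contributes two strong tokens; the negation contributes one weak one. The probability mass follows the subject.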
This insight should change how you write prompts: Negations are traps.
When you tell AI what not to do, it hears what to do.
So when you catch yourself about to hand an AI model a negation, try these swaps instead:
Ask for what you want, explicitly. (“Show me a desert landscape with dunes and star trails.”)
Break big asks into smaller steps. (“First, list three Sahara features. Then generate the scene.”)
Use positive phrasing. Avoid “don’t,” “no,” or “except.”
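If you want to catch these traps before they ever reach the model, a tiny pre-flight lint makes a good habit-former. This is a hypothetical helper of my own, not anything built into MidJourney or any API:

```python
# Words that tend to anchor the model on exactly what you're excluding.
NEGATIONS = {"don't", "no", "not", "except", "but", "without", "avoid", "never"}

def flag_negations(prompt: str) -> list[str]:
    """Return any negation words found in a prompt, so you can rephrase positively."""
    words = prompt.lower().replace(",", " ").split()
    return [w for w in words if w in NEGATIONS]

flag_negations("Anything except a pink elephant")  # flags 'except'
flag_negations("A gray elephant in monochrome")    # flags nothing
```

If the list comes back non-empty, rewrite the prompt to say what you *do* want before hitting send.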
In practice, this means scoping your requests into narrow, concrete targets. If you need bullet points, ask for five bullets on topic X.
If you need an image without pink, say “a gray elephant” or “an elephant in monochrome.” Don’t rely on “anything but.”
Next time you’re wrestling with a wandering AI, remember the pink elephant paradox isn’t just a party trick—it’s a lens on how these models think. And if you can’t stop an idea by telling it to vanish, maybe it’s time to specify exactly what should appear instead.
Give it a try on your next prompt and let me know what changes.
Where else do you see these ironic loops in AI? Drop a comment and share your experiments.
Insights Summary
Pink Elephant Problem - AI models focus on forbidden concepts mentioned in prompts because they're built to predict based on all tokens present, regardless of negation words like 'don't' or 'no.'
Negations Are Traps - When you tell AI what not to do, it hears what to do because the model's probability calculations anchor on the mentioned concept regardless of negative framing.
Positive Prompt Phrasing - Instead of using negations, ask explicitly for what you want using concrete, positive language to guide AI toward desired outcomes.
Break Into Steps - Divide complex requests into smaller, sequential steps to maintain better control over AI output and avoid unwanted results.
Specify Exact Targets - Scope requests into narrow, concrete targets rather than relying on exclusionary language to get precise AI responses.
Scraps Worth Saving
There were a number of topics in my voice memo that didn’t make the cut!
If you’re interested in reading a future issue about any of these topics, all I need is for you to leave me a comment with the bolded text.
Model Comparison - A quick compare between ChatGPT’s and MidJourney’s handling of negations revealed similar failure modes despite different architectures.
LLM Models - What is a model?
What’s New
Last week, I wrote about clarity—the hard-won kind that only comes after wandering through the fog. This week, we moved like a team that knows exactly what it’s building and for whom.
New In Plumb
This week, I want to take a beat to celebrate what we shipped. Here’s your ✨sizzle reel✨ of highlights:
🚀 Smarter Publishing & Intelligent Release Notes
We launched a new publishing flow that separates version management from flow deployment. You can now create release notes with AI-generated summaries for clients and have fine-grained control over what gets updated—and when.
Demo here →
🧭 Consultant Dashboard
A major unlock for consultants: manage client workspaces, view and preview templates, and handle workspace-specific libraries without hacking permissions. It’s the last puzzle piece in making Plumb a true operating system for AI consultants.
Walkthrough here →
🧩 One Dashboard to Rule Them All
We unified the builder and runner dashboards into a single interface. Before, users got lost toggling between building and using workflows—now it’s one seamless experience. This small change reduces confusion and sets the stage for truly scalable consultant-client collaboration.
🧠 Schema Designer Evolution
Our schema system got a massive upgrade. From togglable output formats to smarter UI components and a cleaner step config layout, it’s never been easier—or more satisfying—to define inputs, metadata, and flow logic.
📊 Subscriber-Scoped Data Sources
This one’s huge: builders can now design flows that let subscribers bring their own data—Notion, Google Docs, whatever—without breaking the build (or having to copy the whole build 😉). It means you can design once and distribute widely, with each client mapping their own sources while still getting updates as your flow evolves.
🚀 Team Velocity
And that’s just the tip of the iceberg. We shipped fixes, tightened constraints, upgraded the UX for comment nodes, made file uploads smarter, and refreshed the icon/color system to align with our brand palette. The velocity is real.
This momentum feels different. Like we’re not just building quickly—we’re building from a deeper understanding of what matters.
AI Builders Club Podcast
Aaron Dignan (my co-founder) and I recorded a couple episodes this week for our new podcast, AI Builders Club—for people building with, around, and through AI.
Episode 0: Why we started the podcast, what we’re building at Plumb, and the principles that shape how we build agentic workflows.
Episode 1: A breakdown of external meeting prep flows, the current limitations of assistants, and how to build something better (and more human).
Be on the lookout for a link on Spotify and iTunes in the next few weeks!
Newsletter Architect
This is the second issue that started as “speaking thoughts into a voice memo” (this time on the treadmill!).
The raw audio? Rambling, messy, very “in the weeds.” The result? This newsletter.
Here’s how it works:
🎙 I record a voice memo and email it to a custom address.
🤖 Plumb grabs the audio, transcribes it, and runs it through an LLM.
✍️ The model returns:
A draft issue in my voice
A summary of insights and structure
A list of unused but interesting scraps for future issues
📥 All of it lands in Notion, ready for editing.
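For the curious, the shape of the flow is simple enough to sketch in a few lines. Everything below is a stand-in: the function names are hypothetical placeholders for the real transcription, LLM, and Notion steps that Plumb wires together.

```python
def transcribe(audio: bytes) -> str:
    # Stand-in for a speech-to-text service
    return "rambling treadmill thoughts about pink elephants..."

def draft_newsletter(transcript: str) -> dict:
    # Stand-in for an LLM call that returns the three artifacts
    return {
        "draft": f"Issue draft built from: {transcript}",
        "insights": ["key points, summarized"],
        "scraps": ["unused topics for future issues"],
    }

def save_to_notion(result: dict) -> None:
    # Stand-in for a Notion API write
    print("Saved to Notion:", sorted(result.keys()))

result = draft_newsletter(transcribe(b"voice-memo.m4a"))
save_to_notion(result)
```

The real version swaps each stub for a node in the flow; the structure (audio in, three artifacts out, everything landing in one place) is the whole trick.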
It took me ~10 minutes to build. Now it’s part of my weekly creative toolkit.
If you want to try this workflow yourself, I’m happy to share it. Just reply or drop a comment.
👉 Try the Newsletter Architect Flow
That’s A Wrap
Every week feels like it’s just full of goodies and this week was no different.
Let me know if you found any of this interesting, or any of it boring…any kind of signal helps me know how to write better for the people who get this far in my newsletter.
Until next time,
Chase ✨🤘
What if you use the negative prompt provided by diffusion models? In Stable Diffusion (maybe MidJourney doesn’t have this), it feels like it does avoid what I put in there… although I have not tried only a negative prompt.