One of the worst parts about being human is that it's in our nature to pick sides.
And nowhere have I seen it more clearly than when someone finds out I have a startup dedicated to helping people leverage AI.
The mood changes instantly depending on the "side" they've picked.
The doomers lean in, brows furrowed, faces scowling: "AI is going to take all the jobs" or "AI is going to kill us all." They're certain we're headed toward catastrophe.
The optimists light up, the evangelical energy building: "AI is going to solve all of humanity's problems!" This group is ready to merge with the machines.
And then we have the probers, the cautiously curious, asking "what do you think about AI and where it's going?" - trying to figure out which camp I belong to before revealing their own position, or admitting they don't have one at all.
What all three camps miss is that picking a side on AI is like picking a side on the future itself - it reduces something vast and complex into something small and manageable.
The real conversation isn't about choosing between artificial extremes. It's about mapping the vast territory that lies between them.
The reality is that most people working directly with AI technology don't fit neatly into any of these camps.
They see both extraordinary potential and legitimate concerns.
They recognize that different applications require different approaches.
They understand that the future isn't predetermined, but shaped by the choices we make today.
Beyond the Camps (A New Way to Think About AI)
This instinct to pick sides isn't just affecting casual conversations - it's actively shaping how AI is being developed, discussed, and deployed.
Watch any AI debate and you'll see it play out in real time.
One side argues we're racing toward digital superintelligence that will solve all of humanity's problems.
The other warns we're on the brink of losing control of increasingly powerful systems.
Meanwhile, businesses and developers caught in the middle are forced to choose between being labeled as reckless optimists or fearful Luddites.
But what if there were a better way to think about our relationship with AI?
A framework that captures the nuance that most practitioners actually experience in their daily work?
After many conversations with people building and implementing AI systems, I've seen patterns emerge.
Not three camps, but rather nine distinct positions - each with its own perspective on how AI should be developed and deployed.
This isn't just an academic exercise.
Understanding these positions - and more importantly, the underlying dimensions that shape them - gives us a much richer vocabulary for discussing AI's future.
It helps us move beyond simplistic debates about whether AI is "good" or "bad" and toward more meaningful conversations about how we want to shape its development.
In the next few newsletters, we'll explore each of these positions in detail.
But first, let's look at the two key dimensions that define this space: our outlook on AI's development and our approach to its challenges.
The Two Dimensions That Matter
When you strip away the rhetoric and emotion, two fundamental questions shape how people think about AI:
"How optimistic should we be about AI's development?"
"Should we focus more on technical or cultural solutions?"
The first question - about outlook - reveals three distinct positions:
The Optimists
The Moderates
The Cautious
The Optimists
The Optimists see AI as a powerful force for positive change.
They believe in humanity's ability to handle the challenges AI presents, whether through cultural adaptation, pragmatic innovation, or technical solutions.
In their view, the potential benefits of AI far outweigh the risks.
The Moderates
The Moderates take a balanced view, seeking evidence-based evaluation and measured progress.
They advocate for thoughtful development that considers multiple perspectives, whether through institutional reform, balanced assessment, or rigorous engineering approaches.
The Cautious
The Cautious voices emphasize the importance of getting things right over moving quickly. Whether focused on systemic risks, robust validation, or technical safety measures, they believe we need to deeply understand and address potential challenges before proceeding.
The second question - about approach - also reveals three distinct mindsets:
The Cultural/Systemic Thinkers
The Balanced Pragmatists
The Technical Focusers
The Cultural/Systemic Thinkers
The Cultural/Systemic Thinkers focus on broader societal implications and structures.
They emphasize how AI will interact with and reshape our social fabric, institutions, and ways of being.
They believe successful AI development requires addressing fundamental questions about society and human values.
The Balanced Pragmatists
The Balanced Pragmatists seek to bridge technical and cultural considerations.
They look for practical solutions that account for both human and technical factors, often serving as translators between different viewpoints.
The Technical Focusers
The Technical Focusers emphasize engineering solutions and concrete implementations.
They believe most challenges can be addressed through careful technical work, whether that means rapid innovation or rigorous safety measures.
When we combine these dimensions, we get nine distinct positions - each offering a different vision for how AI development should proceed.
Some see AI as a catalyst for human evolution, others as a technical challenge to be solved, and still others as a systemic risk requiring fundamental restructuring.
None of these positions is inherently right or wrong.
Each represents a thoughtful response to the unprecedented challenges and opportunities AI presents.
Understanding them helps us move beyond oversimplified debates and toward a more nuanced conversation about AI's future.
Where the Dimensions Meet
When these two dimensions intersect, they create nine distinct positions in the AI landscape.
Each captures a unique perspective on how we should approach AI development.
Let's explore each one.
The Cultural Optimists
They believe AI will fundamentally reshape human society - and that's a good thing.
"AI will help us understand ourselves better" isn't just a slogan for them; it's a deep conviction that our engagement with AI will lead to profound cultural and personal growth.
The Pragmatic Innovators
These thinkers balance enthusiasm with practical considerations.
Their motto - "Let's build thoughtfully but boldly" - captures their approach to keeping pace with AI advances while carefully considering their implications.
They're focused on real-world applications and evidence rather than pure speculation.
The Techno-Optimists
They maintain a strong belief that technical solutions exist for any challenges AI might present.
Their mantra is "Move fast, we can solve the problems," reflecting their confidence in humanity's ability to address issues as they arise.
The Reform Advocates
Their focus isn't just on the technology but on the systems that shape its development.
"Fix the systems, then advance" captures their belief that we need to address fundamental societal issues to develop AI responsibly.
The Balanced Moderates
"Let's consider all angles carefully" isn't just caution - it's a methodology.
These thinkers work to find consensus between different viewpoints, weighing evidence and considering multiple perspectives before drawing conclusions.
The Technical Pragmatists
This is an engineering-focused but measured approach.
Their motto - "Build carefully, test thoroughly" - reflects their commitment to rigorous testing and incremental progress.
They believe in the power of technical solutions but advocate for careful validation.
The Systems Risk Focus
These thinkers emphasize existential and systemic risks.
Their call to "rethink everything" stems from a deep concern about the fundamental structures shaping AI development.
They push us to question our basic assumptions about progress and development.
The Cautious Validators
Stuart Russell is a leading voice in this group, which focuses on validation and verification.
Their approach - "Verify first, advance second" - emphasizes the importance of robust safety measures.
They believe we can develop AI successfully, but only with proper precautions and thorough validation.
The Technical Safety Advocates
These thinkers are focused on technical safety measures and mathematical rigor.
Their emphasis on "Solve alignment before scaling" reflects a deep commitment to addressing fundamental technical challenges before pursuing more advanced AI capabilities.
These positions aren't static camps but rather perspectives that often overlap and evolve.
Many thoughtful people find themselves drawing from multiple positions depending on the specific issue at hand.
A researcher might align with Technical Pragmatists on development methodology but share concerns with Systems Risk Focus thinkers about long-term implications.
Understanding these positions helps us move beyond simple pro/anti AI stances.
It reveals the rich landscape of thought shaping AI's development and gives us a better vocabulary for discussing its future.
Most importantly, it shows us that the most interesting conversations happen not when we pick sides, but when we understand the full spectrum of perspectives informing this crucial field.
Finding Your Position
Now comes the fun part - figuring out where you stand in this landscape.
You might find yourself drawn to multiple positions at once. That's not just okay - it's actually a good sign. It means you're thinking beyond the artificial extremes I mentioned earlier.
Try asking yourself these questions:
How do you fundamentally feel about AI's impact on society? Are you excited about the possibilities, carefully weighing the evidence, or deeply concerned about the risks? This helps locate you on the optimistic-to-cautious dimension.
When you think about addressing AI's challenges, do you find yourself drawn to cultural and systemic solutions, technical approaches, or a balance of both? This places you on the cultural-to-technical axis.
Your position might shift depending on the context.
You might align with Pragmatic Innovators when it comes to AI in education (enthusiastically but thoughtfully pushing for new learning tools), but share the Cautious Validators' perspective when it comes to autonomous systems.
You might even find yourself drawing from different positions on different aspects of the same issue.
Perhaps you share the Technical Safety Advocates' concerns about AI alignment while appreciating the Cultural Optimists' vision of how AI could transform human understanding.
This framework isn't about picking a permanent camp - it's about understanding the landscape of possibilities and making conscious choices about where to stand on different issues.
Think of it as a map rather than a set of boxes.
Your position can move as you learn and as the technology evolves.
Most importantly, it gives us a better way to understand and discuss AI's future.
Instead of arguing about whether AI is "good" or "bad," we can have nuanced conversations about specific approaches and their implications.
We can better understand why someone might be a Systems Risk thinker on one issue but a Technical Pragmatist on another.
Next week, we'll dive deeper into each position, exploring real-world examples, the people most vocal in each one, and the rationale behind different approaches.
For now, try mapping your own thinking - and maybe ask a few friends where they stand.
You might be surprised at the conversations that emerge when you move beyond the usual camps.
AI-Generated Art
I've been experimenting with a style for AI-generated images for content like this.
I really like the style that I used for the share image. Here are a few more that I’ve generated for similar content.
I’d love to hear what you think about this style (as opposed to the miniature diorama style below).
[Images: examples in the new style]
For reference, here are some of the miniature diorama images I was generating and using.
[Images: examples in the miniature diorama style]