LLMs are powerful tools – but credulous users risk being stuck in a dangerous place: Mediocristan, the land of the average.
Mediocristan appears in Nassim Nicholas Taleb’s Incerto series. It’s a domain where outcomes are predictable and smooth, clustering tightly around the average of all inputs.
Sound familiar?
LLMs predict the most likely next token based on massive training data (yes, yes – I know about RLHF, etc.). They are statistical engines of mediocrity by design.
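To make that concrete, here is a deliberately toy sketch (plain Python, with made-up candidate words and scores rather than anything from a real model) of what greedy next-token selection looks like: at every step, the single most statistically likely continuation wins.

```python
# Toy illustration only (not a real LLM): greedy decoding always picks
# the highest-probability candidate, which is the "averaging" behavior
# described above taken to its logical extreme.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words.
candidates = ["the", "a", "resplendent", "quotidian"]
logits = [4.2, 3.9, 0.3, 0.1]

probs = softmax(logits)
# Greedy (temperature near zero) decoding: take the argmax every time.
best = max(range(len(candidates)), key=lambda i: probs[i])
print(candidates[best])  # the most "average" continuation wins
```

Sampling at higher temperatures loosens this a little, but the center of gravity is still the statistical middle of the training data.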
And like it or not, LLM use pushes us deeper into Mediocristan daily.
A recent viral piece in NY Magazine exposed just how heavily university students lean on ChatGPT. But the problem is hardly limited to academia: I’ve encountered memos, emails, and pitch decks bearing the unmistakable hallmarks of AI slop.
We’re outsourcing our thinking to Mediocristan with great enthusiasm.
On the other side lies Extremistan: the domain of consequential outliers, where a single event can outweigh everything that came before it. Mathematically, it’s the land of fat-tailed distributions where Black Swans lurk.
Extremistan is where interesting and unexpected things happen, where growth and destruction coexist. The release of ChatGPT in 2022 was itself an event straight out of Extremistan!
I’m as enthusiastic an LLM user as any, but comparing my writing from 2020 to today, I’m clearly on the express train to Mediocristan.
This realization is jarring. So what now? Should we embrace the slop and relocate to Mediocristan? Angrily denounce AI and revert to writing screeds on clay tablets?
The critical skill for navigating our new knowledge economy will be deciding where and how to use AI.
Meanwhile, Mediocristan steadily expands, assimilating new domains and making them ripe for disruption from—you guessed it—Extremistan.
AI tools are supercharging individual productivity—but are they also undermining team cohesion?
As a technology executive straddling engineering leadership and client advisory roles, I’ve been an early and enthusiastic adopter of generative AI. Tools like Claude and ChatGPT have transformed my workflow. I can go from idea to prototype in hours, not days. Strategy memos, design documents, and new product concepts come together faster than ever before.
This feels like progress—and in many ways, it is. But there’s a growing paradox I can’t ignore: the more productive I become with AI, the more I risk overwhelming the very teams I lead.
From Brainstorm to Broadcast
I’m all about writing things down. Multi-page emails, long JIRA comments, multi-message Slack threads: I am THAT guy. This was already a challenge. Now, with generative AI in the mix, it’s even easier for me to turn half-formed ideas into fully fledged messages or documents.
It feels productive. But I know that every new AI-assisted memo I send can also create confusion, or even dread, on the receiving end. And it’s not just messages: it’s also code, designs, and presentations.
What used to be a collaborative back-and-forth now feels like a broadcast. Instead of whiteboarding ideas together, I’m unintentionally showing up with something that already feels “decided.” Even when it’s not.
Fermenting Context Collapse
Teams don’t just need to know what to do—they need to understand why. That context often emerges organically: a passing comment, a shared concern raised in a meeting, a collective moment of clarity. But when AI tools let leaders bypass that messy, human process and jump straight to the output, something critical gets lost.
We’re seeing a form of context collapse: the shift from shared understanding to unilateral information delivery. It might be efficient, but it chips away at clarity, trust, and momentum.
Losing the Plot (Together)
Teams don’t just execute plans; they co-create the narrative that gives those plans meaning. That narrative helps people understand how their work fits into the bigger picture and why it matters, which reduces confusion and leads to clearer execution.
When leaders lean too heavily on AI to shortcut the narrative-building process, teams are left with tasks but no story. This can be especially damaging in cross-cultural or distributed environments, where communication already carries more friction. The result? Misalignment, low engagement, and missed opportunities for innovation.
The Risk to Innovation and Ownership
Harvard Business School’s Amy Edmondson talks about psychological safety as the bedrock of high-performing teams.
When people feel like decisions are made without them—or worse, that their input doesn’t matter—they stop contributing. They play it safe. They wait to be told what to do.
AI acceleration makes it dangerously easy for leaders to skip past the slow, participatory parts of leadership. But those are the very moments that create buy-in, spark creativity, and foster innovation.
Developing Restraint
Here’s the paradox: to lead effectively in an AI-accelerated world, we may need to slow down.
What I’ve come to see as an essential leadership skill is what I call AI restraint—knowing when not to use the tools at your disposal.
That means:
Creating space for co-creation: Holding regular “no-AI” brainstorms where ideas emerge collaboratively
Thinking out loud: Sharing early thoughts, not just polished AI-assisted conclusions
Rebuilding narrative: Giving teams time to shape the story around the work—not just deliver on tasks
Signaling your intent: When sharing early ideas, saying explicitly that you’re thinking out loud and that these aren’t directives but starting points. This invites dialogue instead of quiet compliance.
Winning Together By Slowing Down
It is easy to generate what looks like a polished strategy doc in five minutes. But in a world already overrun with AI slop, the real differentiator isn’t speed. It’s discernment.
It’s learning how to balance velocity with clarity, and productivity with participation.
The future of leadership isn’t about issuing more brilliant ideas.
It’s about knowing which ideas matter, and creating the space for teams to make them real – together.
It turns out that in this exponential age, judgment, self-discipline, and the wisdom to slow down may be our most valuable leadership capabilities.