LLMs are powerful tools – but credulous users risk being stuck in a dangerous place: Mediocristan, the land of the average.
Mediocristan appears in Nassim Nicholas Taleb’s Incerto series. It’s a domain where outcomes are predictable, smooth, and derived from averaging all inputs.
Sound familiar?
LLMs predict the most likely next token based on massive training data (yes, yes – I know about RLHF, etc.). They are statistical engines of mediocrity by design.
And like it or not, LLM use pushes us deeper into Mediocristan daily.
A recent viral piece in NY Magazine exposed how university students rely utterly on ChatGPT. But it’s hardly limited to academia—I’ve encountered memos, emails, and pitch decks that bear the unmistakable hallmarks of AI slop.
We’re outsourcing our thinking to Mediocristan with great enthusiasm.
On the other side lies Extremistan—the domain of consequential outliers, where a single event can dwarf everything that came before it. Mathematically, it’s the fat tails of distributions where Black Swans lurk.
Extremistan is where interesting and unexpected things happen—where growth and destruction co-exist. The very release of ChatGPT in 2022 was itself an event straight from Extremistan!
I’m as enthusiastic an LLM user as any, but comparing my writing from 2020 to today, I’m clearly on the express train to Mediocristan.
This realization is jarring. So what now? Should we embrace the slop and relocate to Mediocristan? Angrily denounce AI and revert to writing screeds on clay tablets?
The critical skill for navigating our new knowledge economy will be deciding where and how to use AI.
Meanwhile, Mediocristan steadily expands, assimilating new domains and making them ripe for disruption from—you guessed it—Extremistan.
On the 18th of May, I’ll be suiting up and riding my beloved Triumph Bonneville to Leesburg, VA, for the Distinguished Gentleman’s Ride (DGR). The DGR is a global movement to raise awareness and funds for men’s mental health and prostate cancer.
I’m in my mid-40s and, yes, fully leaning into the midlife-crisis stereotype of riding motorcycles. But the truth is, the last five years have been full of change — and extremely challenging at times. I moved countries, stepped into the most demanding role of my career, and became a father to two beautiful children. I also lost my father suddenly and faced both personal health challenges and the passing of other close family members in a short span of time.
Grief, stress, and the daily pressures of life and work have, at times, taken a real toll on my mental health. Through it all, rediscovering my love of motorcycles has been a gift.
For me, riding is more than just a love of beautiful machines and the joy of the open road. It’s therapy. It’s a sanctuary — a way to clear my head, feel grounded, and process everything life has thrown my way.
Mental health is a tough conversation for many men — especially those of my generation, and particularly those raised in cultures where talking about emotions just wasn’t something men were supposed to do. I never really had, and still don’t have, the tools to talk openly about what it means to struggle, to age, or to ask for help when things feel overwhelming.
That’s why this ride matters. The Distinguished Gentleman’s Ride has partnered with Movember, a charity committed to changing the face of men’s health by bringing people together and creating space for these important conversations.
Support the Cause
If this resonates with you, I’d be grateful for your support. You can donate to my DGR campaign here. Every little bit helps.
AI tools are supercharging individual productivity—but are they also undermining team cohesion?
As a technology executive straddling engineering leadership and client advisory roles, I’ve been an early and enthusiastic adopter of generative AI. Tools like Claude and ChatGPT have transformed my workflow. I can go from idea to prototype in hours, not days. Strategy memos, design documents, and new product concepts come together faster than ever before.
This feels like progress—and in many ways, it is. But there’s a growing paradox I can’t ignore: the more productive I become with AI, the more I risk overwhelming the very teams I lead.
From Brainstorm to Broadcast
I’m all about writing things down. Multi-page emails, long JIRA comments, multi-message Slack threads -> I am THAT guy. This was already a challenge. Now, with generative AI in the mix, it’s even easier for me to take ideas and turn them into fully fledged messages or documents.
It feels productive. But I know that every new AI-assisted memo I send can also create confusion—or even dread—on the receiving end. And it’s not just messages; it’s also code, designs, presentations, etc.
What used to be a collaborative back-and-forth now feels like a broadcast. Instead of whiteboarding ideas together, I’m unintentionally showing up with something that already feels “decided.” Even when it’s not.
Fomenting Context Collapse
Teams don’t just need to know what to do—they need to understand why. That context often emerges organically: a passing comment, a shared concern raised in a meeting, a collective moment of clarity. But when AI tools let leaders bypass that messy, human process and jump straight to the output, something critical gets lost.
We’re seeing a form of context collapse: the shift from shared understanding to unilateral information delivery. It might be efficient, but it chips away at clarity, trust, and momentum.
Losing the Plot (Together)
Teams don’t just execute plans—they co-create the narrative that gives those plans meaning. That narrative helps people understand how their work fits into a bigger picture, and why it matters. This helps reduce confusion and leads to clear execution.
When leaders lean too heavily on AI to shortcut the narrative-building process, teams are left with tasks but no story. This can be especially damaging in cross-cultural or distributed environments, where communication already carries more friction. The result? Misalignment, low engagement, and missed opportunities for innovation.
The Risk to Innovation and Ownership
Harvard Business School’s Amy Edmondson talks about psychological safety as the bedrock of high-performing teams.
When people feel like decisions are made without them—or worse, that their input doesn’t matter—they stop contributing. They play it safe. They wait to be told what to do.
AI acceleration makes it dangerously easy for leaders to skip past the slow, participatory parts of leadership. But those are the very moments that create buy-in, spark creativity, and foster innovation.
Developing Restraint
Here’s the paradox: to lead effectively in an AI-accelerated world, we may need to slow down.
What I’ve come to see as an essential leadership skill is what I call AI restraint—knowing when not to use the tools at your disposal.
That means:
Creating space for co-creation: Holding regular “no-AI” brainstorms where ideas emerge collaboratively
Thinking out loud: Sharing early thoughts, not just polished AI-assisted conclusions
Rebuilding narrative: Giving teams time to shape the story around the work—not just deliver on tasks
Signaling your intent: When sharing early ideas, explicitly say you’re thinking out loud. Make it clear that these aren’t directives; they’re starting points. This invites dialogue instead of quiet compliance.
Winning Together By Slowing Down
It is easy to generate what looks like a polished strategy doc in five minutes. But in a world already overrun with AI slop, the real differentiator isn’t speed. It’s discernment.
It’s learning how to balance velocity with clarity, and productivity with participation.
The future of leadership isn’t about issuing more brilliant ideas.
It’s about knowing which ideas matter, and creating the space for teams to make them real – together.
It turns out that in this exponential age, judgment, self-discipline, and the wisdom to slow down may be our most valuable leadership capabilities.
AI will save us. AI will doom us. AI will enslave us. AI will enlighten us.
Every week brings a new wave of hyperbolic AI headlines, each more dramatic than the last.
The discourse around AI is mostly ill-informed. Take a recent article from Fortune magazine (see comments). The headline goes: “AI doesn’t just require tons of electric power. It also guzzles enormous sums of water.”
In the article, there is this statement: “In order to shoot off one email per week for a year, ChatGPT would use up 27 liters of water, or about one-and-a-half jugs… that means if one in 10 U.S. residents—16 million people—asked ChatGPT to write an email a week, it’d cost more than 435 million liters of water.”
Predictably, the article (from September 2024) has lots of likes and replies on social media talking about how AI is going to doom us all and is a waste of precious energy.
The article takes the amount of power required to run inference (i.e., when ChatGPT helps compose an email) and maps it to the amount of water required to generate that power.
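As a quick sanity check, the article’s own numbers are easy to reproduce. Here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted above:

```python
# Back-of-the-envelope check of the article's figures (quoted above).
liters_per_person_per_year = 27   # one ChatGPT-drafted email per week, for a year
people = 16_000_000               # "one in 10 U.S. residents" per the article

total_liters = liters_per_person_per_year * people
print(f"{total_liters:,} liters per year")  # ~432 million liters, roughly the article's 435 million figure
```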
Interestingly, the cost of running inference has gone down substantially over the last year. Recent research by DeepSeek (see comments) also shows how it is possible to train a state-of-the-art model for a fraction of the cost of training foundation models.
Discourse about how AI is ruining the planet conveniently takes data from today and projects it infinitely into the future. Let me put it this way: in 1965, the average gas mileage for a small car was around 15-20 MPG. A modern car like a Toyota Prius or Honda Civic is roughly 3X more fuel efficient, and it is still fundamentally an internal combustion engine.
Software and AI move much, much faster than the automobile industry. So the next time you see a headline about AI’s apocalyptic resource consumption, remember – you’re probably reading tomorrow’s equivalent of “The Internet Will Crash Under Its Own Weight” articles from 1995.
Is it really doomsday for U.S. AI companies? The harbinger of the apocalypse appears to be a blue whale.
Nvidia’s stock is down 12.5%. There’s a broad tech sell-off, and Big Tech seems a little uneasy.
The reason? A Chinese hedge fund built and trained a state-of-the-art LLM to give their spare GPUs something to do.
DeepSeek’s R1 model reportedly performs on par with OpenAI’s cutting-edge o1 models. The twist? They claim to have trained it for a fraction of the cost of models like GPT-4 or Claude Sonnet—and did so using GPUs that are 3-4 years old. To top it off, the DeepSeek API is priced significantly lower than the OpenAI API.
Why did this trigger a sell-off of Nvidia (NVDA)?
It shows that building cutting-edge models doesn’t require tens of thousands of the latest Nvidia GPUs anymore.
DeepSeek’s models run at a fraction of the cost of large LLMs, which could shift demand away from Nvidia’s high-end hardware.
For U.S. companies, this is a wake-up call. The Biden-era export restrictions didn’t have the intended impact. But for anyone building on AI, there’s a silver lining:
Building LLMs and reasoning models is no longer limited to companies throwing billions at compute.
This will likely kick off an arms race as U.S. companies scramble to optimize costs and stay competitive with DeepSeek.
Data sovereignty will still matter—most companies won’t want their data processed by a Chinese-hosted model. If DeepSeek’s approach proves viable, expect U.S. providers to replicate it.
Recently, I put this into practice when prototyping a UI change for Adapt, Jeavio’s LLM-powered knowledge platform. Instead of writing requirements docs, I tried something different.
I uploaded a screenshot to Claude, described the changes I wanted, and got back a working React component – turning what typically takes days of back-and-forth into a clear demonstration in under an hour.
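I did this in the Claude app itself, but the same screenshot-plus-description workflow can also be expressed through the Anthropic API. Here is a rough, hypothetical sketch of what that might look like in code (the file name, model alias, and prompt are placeholders, not the exact ones I used):

```python
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Placeholder screenshot of the current UI.
with open("adapt_screenshot.png", "rb") as f:
    screenshot_b64 = base64.standard_b64encode(f.read()).decode()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model alias
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png", "data": screenshot_b64}},
            {"type": "text",
             "text": "Modify this UI so each answer includes an inline explanation of how "
                     "the query was interpreted. Return a single React component."},
        ],
    }],
)
print(message.content[0].text)  # the generated React component source
```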
Over the last few months, we’ve enhanced Adapt’s capabilities to handle complex queries like: “Summarize last week’s meeting notes and identify action items.” The platform breaks these down into:
☑ Actions (what to do)
🗄️ Resources (what to use)
🔎 Constraints (how to filter)
While this approach has made Adapt very powerful, it also makes it difficult to understand how a user prompt results in a set of outputs.
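To make that breakdown concrete, here is a minimal, hypothetical sketch of the kind of structure such a decomposition could produce. The field names and values are illustrative only, not Adapt’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class QueryPlan:
    """Hypothetical decomposition of a user prompt (illustrative, not Adapt's real schema)."""
    actions: list[str] = field(default_factory=list)      # what to do
    resources: list[str] = field(default_factory=list)    # what to use
    constraints: list[str] = field(default_factory=list)  # how to filter

# "Summarize last week's meeting notes and identify action items" might decompose to:
plan = QueryPlan(
    actions=["summarize", "extract_action_items"],
    resources=["meeting_notes"],
    constraints=["date_range: last 7 days"],
)
print(plan)
```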
Inspired by GitHub Copilot’s inline explanations, I wanted Adapt to provide similar transparency about its reasoning. Using Claude’s Artifacts feature, I quickly created and shared a high-fidelity prototype with my team, showing how this could work.
I read most of “Playground” in a rattly old plane as it shook and juddered over the Atlantic and then the vast emptiness of Russia before landing in New Delhi. I finished the book in a crowded airport, in tears and in awe of what Richard Powers has achieved.
Playground has a beautiful cover
The novel weaves together an exploration of friendship and the games people play with one another, a hypnotic love letter to the ocean, and a deep meditation on technology and meaning. Like memory itself, the story refuses to follow straight lines. Instead, it spirals and circles, guided by a narrator whose version of events becomes increasingly complex and layered as the story unfolds.
At its heart are four people – Todd, Rafi, Ina, and Evie. Todd and Rafi both call Chicago home, but they might as well be from different planets. Todd is wealthy, white, and obsessed with computers; Rafi is poor, African American, and a precocious reader. What bridges their worlds is a shared love of games – chess, Go, and eventually the intricate game of their own peculiar friendship. When they meet Ina in college, their duo becomes a trio, and their lives become permanently entangled in ways that echo across decades.
In contrast stands Evie – a scientist and pioneering diver whose sections contain the book’s most luminous writing. Through her eyes, we discover coral reefs, sunken ships, and manta rays in passages that evoke pure wonder about the ocean’s depths. While others build virtual worlds, Evie explores an actual one, until all four lives ultimately converge on the Pacific island of Makatea – a place strip-mined for phosphate in the 20th century and slowly being reclaimed by jungle. The island stands as a testament to both human intervention and nature’s resilience.
Threading through these human stories runs the history of modern technology and machine learning, embodied in Todd’s journey. He transforms his obsession with computers and gaming into a wildly successful social platform that crosses Reddit with Facebook. But as his success peaks, tragedy strikes – a debilitating neurological disease that leads him to narrate his story to an AI assistant before memory fails. This creates layers of uncertainty about perception and reality that build toward a wonderful (and slightly puzzling) final act that questions what it means to be alive and how technology might reshape our understanding of consciousness and truth.
As a technologist, I found “Playground” to be a powerful lens for examining both my relationship with technology and my feelings about the natural world as we venture deeper into the Anthropocene. The book doesn’t choose sides. Instead, it shows us how the awe inspired by a coral reef and the possibilities of artificial intelligence can coexist, each raising questions about consciousness and reality that the other helps us explore.
If only I could write like Mr Powers
There’s still so much to process in this book. Like the games its characters play, each move reveals new possibilities, new uncertainties to consider. And I’m nowhere near done processing.
An under-appreciated facet of LLMs is just how *weird* they are.
Claude, ChatGPT, and pretty much every other application built on top of an LLM have a system prompt. This is a set of instructions that drives the application’s behavior. The good folks at Anthropic recently released the system prompts used for the Claude application (see link below).
Anyone building applications on top of LLMs should examine Claude’s system prompts to understand how “prompt engineering” is done in production.
Take this example:
“Claude provides thorough responses to more complex and open-ended questions or to anything where a long response is requested, but concise responses to simpler questions and tasks. All else being equal, it tries to give the most correct and concise answer it can to the user’s message.”
This is how “programming” in an LLM-powered world works. As a recovering Java programmer, I find this mind-blowing 🤯.
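To make that concrete, here is a minimal sketch of how an application sets a system prompt via the Anthropic Python SDK. The model alias is a placeholder, and the instructions are abbreviated from Claude’s much longer published prompt:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The "program" is plain English: a system prompt that steers the model's behavior.
SYSTEM_PROMPT = (
    "Claude provides thorough responses to more complex and open-ended questions, "
    "but concise responses to simpler questions and tasks."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model alias
    max_tokens=1024,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Explain what a system prompt is."}],
)
print(response.content[0].text)
```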
Here is the thing—we are going to see wild new software experiences built on top of LLMs in the coming years.
But this will only happen once software engineers shed decades of imperative and declarative approaches to “programming” and learn how to work with LLMs.
A paradigm shift will be required to move us beyond the idea that LLMs are just another fancy API that we can integrate into existing applications.
We call working with LLMs “prompt engineering,” but there isn’t much engineering here. This art or skill should probably be called “LLM Whispering” or “LLM Negotiation,” because what we will be doing isn’t engineering so much as negotiating with, and working alongside, a very strange peer.
In her insightful piece, “The Turing Test and our shifting conceptions of intelligence,” Melanie Mitchell challenges the traditional view of the Turing Test as a valid measure of intelligence. She argues that while the test may indicate a machine’s ability to mimic human conversation, it fails to assess deeper cognitive abilities, as demonstrated by the limitations of large language models (LLMs) in reasoning tasks. This prompts us to reconsider what it truly means for a machine to think, moving beyond mere mimicry to a more nuanced understanding of intelligence.
Our understanding of intelligence may be shifting beyond what Turing initially imagined.
Turing’s point was that if a computer seems indistinguishable from a human (aside from its appearance and other physical characteristics), why shouldn’t we consider it to be a thinking entity? Why should we restrict “thinking” status only to humans (or more generally, entities made of biological cells)? As the computer scientist Scott Aaronson described it, Turing’s proposal is “a plea against meat chauvinism.”
A common criticism of the Turing Test as a measure of AI capability:
Because its focus is on fooling humans rather than on more directly testing intelligence, many AI researchers have long dismissed the Turing Test as a distraction, a test “not for AI to pass, but for humans to fail.”
In times of rapid change, fiction serves as a reflective lens, casting light on current anxieties and offering insights beyond simple commentary. “The Mountain in the Sea” by Ray Nayler navigates the complex relationship between humans and technology.
But Nayler’s work goes further. While the book revolves around first contact with a civilization of octopuses, it delves into the nature of consciousness. It critiques our relentless drive to build, optimize, and consume. Nayler raises pertinent questions about loneliness, isolation, and the role of technology in our lives.
In the pages of “The Mountain in the Sea,” these themes come alive through well-realized characters and intricate plotlines, providing a vital tool for understanding our relationship with the worlds we live in – social, internal, external, and digital.
There are three PoV characters. Ha Nguyen is a scientist who has spent years studying cephalopods, the group of animals that includes octopuses, squid, and cuttlefish. The second character is a hacker, Rustem, who specializes in breaking AIs. The third is a young Japanese man, Eiko, who, through a series of unfortunate events, ends up a slave aboard an AI-powered fishing vessel.
Each character in the book deals with loneliness and isolation and has a somewhat awkward, if dependent, relationship with technology.
In general, AI, or the nature of intelligence, is a key theme that runs through the various plot lines of the book. Ha Nguyen and her team try to make sense of the culture and symbolic language of the Octopus civilization. Eiko has to deal with a murderous and indifferent AI driven by optimization algorithms built to maximize the amount of protein the ship hauls from the depleted oceans.
I picked up the book because of the striking cover and because I love First Contact books, but I read it in a couple of sittings because its underlying themes resonated deeply with me: our relationship with and dependence on technology, and what it does to us and the world around us. As someone excited about technology’s promises and challenges, I found that this book prompted me to consider where our pursuit of innovation is taking us.
For example, people in “The Mountain in the Sea” have AI companions called point-fives. These companions form relationships but make no demands on their human owners. They give, but they do not take. Instead of two “people” in a relationship, there is only one and a half, hence the moniker.
The loneliness of people in this world is mollified by technology, but it is not solved. The only way is through genuine contact, through a process of both taking and giving.
I spend a lot of time working on and thinking about systems that would save time, optimize workflows, and make more money. Despite the potential for disruption and displacement, I welcome new technology like Generative AI.
But there are clear issues and risks in our somewhat reckless embrace of technology: threats not just to our environment but also to society and to ourselves.
“The Mountain in the Sea” is both a cautionary tale and a story of hope. Each character’s arc in the novel is one of discovery and possible redemption. This book had me thinking long and hard about where our obsession with optimization and technology is taking us.