
LLMs are powerful tools – but credulous users risk being stuck in a dangerous place: Mediocristan, the land of the average.
Mediocristan appears in Nassim Nicholas Taleb’s Incerto series. It’s a domain where outcomes are predictable, smooth, and shaped by averages: no single observation can meaningfully move the total.
Sound familiar?
LLMs predict the most likely next token based on massive training data (yes, yes – I know about RLHF, etc.). They are statistical engines of mediocrity by design.
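To make that concrete, here is a toy sketch of greedy next-token decoding. It is purely illustrative: the vocabulary and logits are invented, and this is not any particular model's API. The point is simply that the safe, high-probability word wins every time, while the rare, interesting words in the tail never get picked.

```python
import numpy as np

# Hypothetical scores a model might assign over a tiny vocabulary.
vocab = ["the", "a", "quixotic", "average", "transcendent"]
logits = np.array([2.1, 1.8, -1.5, 1.9, -2.0])  # invented for illustration

# Softmax turns the scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Greedy decoding: always take the single most probable token.
next_token = vocab[int(np.argmax(probs))]
print(dict(zip(vocab, probs.round(3))))
print("greedy choice:", next_token)  # the bland, common word wins
```

Sampling strategies and RLHF complicate the picture, but the pull toward the high-probability center of the distribution is the default behavior.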
And like it or not, LLM use pushes us deeper into Mediocristan daily.
A recent viral piece in New York Magazine detailed how heavily university students lean on ChatGPT. But the habit is hardly limited to academia: I’ve encountered memos, emails, and pitch decks that bear the unmistakable hallmarks of AI slop.
We’re outsourcing our thinking to Mediocristan with great enthusiasm.
On the other side lies Extremistan: the domain of consequential outliers, where a single event can dwarf everything that came before it. Mathematically, it lives in the fat tails of distributions, where Black Swans lurk.
Extremistan is where interesting and unexpected things happen, where growth and destruction coexist. The release of ChatGPT in 2022 was itself an event straight out of Extremistan!
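A rough illustration of the difference, using assumed distributions (a Gaussian for a Mediocristan quantity like height, a Pareto for an Extremistan quantity like wealth): in the thin-tailed world no single sample moves the total, while in the fat-tailed world one observation can dominate it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Mediocristan: a thin-tailed quantity, e.g. human height in cm.
heights = rng.normal(loc=170, scale=10, size=n)
# Extremistan: a fat-tailed quantity, e.g. wealth (Pareto-distributed).
wealth = rng.pareto(a=1.2, size=n) + 1

for name, x in [("height (thin tail)", heights), ("wealth (fat tail)", wealth)]:
    share_of_max = x.max() / x.sum()
    print(f"{name}: largest single observation = {share_of_max:.4%} of the total")

# The tallest person barely registers in the sum of heights;
# the richest person can account for a visible chunk of all wealth.
```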
I’m as enthusiastic an LLM user as anyone, but when I compare my writing from 2020 with what I produce today, I’m clearly on the express train to Mediocristan.
This realization is jarring. So what now?
Should we embrace the slop and relocate to Mediocristan?
Angrily denounce AI and revert to writing screeds on clay tablets?
The critical skill for navigating our new knowledge economy will be deciding where and how to use AI.
Meanwhile, Mediocristan steadily expands, assimilating new domains and making them ripe for disruption from—you guessed it—Extremistan.