On Mental Models

A poorly illustrated guide to mental models

I had come across the term “mental model” before, but I could never really articulate or clearly understand what that term meant. Today, while listening to an excellent podcast on Farnam Street with Venkatesh Rao (RibbonFarm, Breaking Smart), I came across this passage, which made a lot of sense:

I don’t think, really, mental models are so much about how the world works as much as they’re about internal consistency. Think of the world, the universe we live in as an extraordinarily confusing place that’s throwing huge amounts of information at you in an extremely high bandwidth way.

The Knowledge Project – Venkatesh Rao

So far so good: we are all familiar with information overload. But it’s not just the information in a particularly dense book or a John Oliver segment on chicken farming; even the very act of perceiving the world involves dealing with a huge amount of sensory input. Anyway, back to Venkatesh...

There isn’t enough processing power in the brain to handle that input raw. Our brain is basically layers and layers of processing that throw out most of it and map it to a sort of toy universe inside our head. It’s this toy universe that we actually play with. The only thing we ask of this toy universe inside our head is that it be much, much simpler than the world itself, and that it be internally consistent.

The Knowledge Project – Venkatesh Rao

So this “toy universe” is the mental model of the world, and the consistency of that mental model is what keeps us sane. When faced with external stimuli that don’t make sense, or that don’t map to the internal mental model, weird things happen. When faced with rapid societal change, some people turn to religion to try and make sense of the perceived chaos, or perhaps to make things simpler. Others choose nostalgia and the comforts of a simpler time, and we end up with religious extremism and nativist politics.

What I took away from the podcast was the importance of tending to your mental model of the world: to let enough stimuli in to refine the mapping from the external to the internal, and to be confident in getting rid of models that no longer map well to the external world. Easier said than done!