Playground by Richard Powers

I read most of “Playground” in a rattly old plane as it shook and juddered over the Atlantic and then the vast emptiness of Russia before landing in New Delhi. I finished the book in a crowded airport, in tears and in awe of what Richard Powers has achieved.

Playground has a beautiful cover

The novel weaves together an exploration of friendship and the games people play with one another, a hypnotic love letter to the ocean, and a deep meditation on technology and meaning. Like memory itself, the story refuses to follow straight lines. Instead, it spirals and circles, guided by a narrator whose version of events becomes increasingly complex and layered as the story unfolds.

At its heart are four people – Todd, Rafi, Ina, and Evie. Todd and Rafi both call Chicago home, but they might as well be from different planets. Todd is wealthy, white, and obsessed with computers; Rafi is poor, African American, and a precocious reader. What bridges their worlds is a shared love of games – chess, Go, and eventually the intricate game of their own peculiar friendship. When they meet Ina in college, their duo becomes a trio, and their lives become permanently entangled in ways that echo across decades.

In contrast stands Evie – a scientist and pioneering diver whose sections contain the book’s most luminous writing. Through her eyes, we discover coral reefs, sunken ships, and manta rays in passages that evoke pure wonder about the ocean’s depths. While others build virtual worlds, Evie explores an actual one, until all four lives ultimately converge on the Pacific island of Makatea – a place strip-mined for phosphate in the 20th century and slowly being reclaimed by jungle. The island stands as a testament to both human intervention and nature’s resilience.

Threading through these human stories runs the history of modern technology and machine learning, embodied in Todd’s journey. He transforms his obsession with computers and gaming into a wildly successful social platform that crosses Reddit with Facebook. But as his success peaks, tragedy strikes – a debilitating neurological disease that leads him to narrate his story to an AI assistant before memory fails. This creates layers of uncertainty about perception and reality that build toward a wonderful (and slightly puzzling) final act that questions what it means to be alive and how technology might reshape our understanding of consciousness and truth.

As a technologist, I found “Playground” to be a powerful lens for examining both my relationship with technology and my feelings about the natural world as we venture deeper into the Anthropocene. The book doesn’t choose sides. Instead, it shows us how the awe inspired by a coral reef and the possibilities of artificial intelligence can coexist, each raising questions about consciousness and reality that the other helps us explore.

If only I could write like Mr Powers

There’s still so much to process in this book. Like the games its characters play, each move reveals new possibilities, new uncertainties to consider. And I’m nowhere near done processing.

Negotiating with a strange peer

An under-appreciated facet of LLMs is just how *weird* they are.

Claude, ChatGPT, and pretty much every other application built on top of an LLM have a system prompt. This is a set of instructions that drives the application’s behavior. The good folks at Anthropic recently released the system prompts used for the Claude application (see link below).

Anyone building applications on top of LLMs should examine Claude’s system prompts to understand how “prompt engineering” is done in production.

Take this example:

“Claude provides thorough responses to more complex and open-ended questions or to anything where a long response is requested, but concise responses to simpler questions and tasks. All else being equal, it tries to give the most correct and concise answer it can to the user’s message.”

This is how “programming” in an LLM-powered world works. As a recovering Java programmer, this blows my mind 🤯.
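To make the mechanics concrete, here is a minimal sketch of how an instruction like the one quoted above gets wired into a chat-style API request. The payload shape loosely follows the pattern used by chat APIs such as Anthropic's Messages API, but the model name and field values here are illustrative assumptions, not a definitive integration:

```python
# Sketch: a system prompt is the "program", sent alongside the
# conversation in every request. Field names follow the common
# chat-API shape; the model name is a placeholder.

SYSTEM_PROMPT = (
    "Provide thorough responses to more complex and open-ended "
    "questions, but concise responses to simpler questions and tasks."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat request: the system prompt steers behavior,
    while the messages list carries the actual conversation."""
    return {
        "model": "claude-example",   # placeholder, not a real model id
        "system": SYSTEM_PROMPT,     # natural-language "code"
        "messages": [
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 1024,
    }

request = build_request("What is a system prompt?")
print(request["system"])
```

The striking part is that the behavior-defining layer is plain English in the `system` field: changing the application means rewriting prose, not logic.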

Here is the thing—we are going to see wild new software experiences built on top of LLMs in the coming years.

But this will only happen once software engineers shed decades of imperative and declarative approaches to “programming” and learn how to work with LLMs.

A paradigm shift will be required to move us beyond the idea that LLMs are just another fancy API that we can integrate into existing applications.

We call working with LLMs “prompt engineering,” but there isn’t much engineering here. This art or skill should probably be called “LLM Whispering” or “LLM Negotiation.” Because what we will be doing isn’t engineering so much as negotiating or working with a very strange peer.

Melanie Mitchell on the Turing Test

From “The Turing test and our shifting conceptions of intelligence” by Melanie Mitchell.

In her insightful piece, “The Turing Test and our shifting conceptions of intelligence,” Melanie Mitchell challenges the traditional view of the Turing Test as a valid measure of intelligence. She argues that while the test may indicate a machine’s ability to mimic human conversation, it fails to assess deeper cognitive abilities, as demonstrated by the limitations of large language models (LLMs) in reasoning tasks. This prompts us to reconsider what it truly means for a machine to think, moving beyond mere mimicry to a more nuanced understanding of intelligence.

Our understanding of intelligence may be shifting beyond what Turing initially imagined.

From the article:

On why Turing initially proposed the Turing Test

Turing’s point was that if a computer seems indistinguishable from a human (aside from its appearance and other physical characteristics), why shouldn’t we consider it to be a thinking entity? Why should we restrict “thinking” status only to humans (or more generally, entities made of biological cells)? As the computer scientist Scott Aaronson described it, Turing’s proposal is “a plea against meat chauvinism.”

A common criticism of the Turing Test as a measure of AI capability

Because its focus is on fooling humans rather than on more directly testing intelligence, many AI researchers have long dismissed the Turing Test as a distraction, a test “not for AI to pass, but for humans to fail.”